Hello,

Looking at the standard streams provided by the C++ standard library, it seems as though it is impossible to have one stream to use for both standard console I/O and file/string I/O. However, before I give up I would like to ask: is this possible:

generic_stream stream;
stream = cin;
// now I can use stream as cin
stream >> value;
stream = fstream("myFile.txt");
// now stream acts as a stream to myFile...
stream << value;
stream = sstream("123 test");

From what it looks like, since fstreams do both input and output, they derive from iostream, which in turn derives from istream, which in turn derives from ios. However, cout uses ostream instead of istream, and therefore its common ancestor with fstreams would be ios. But it doesn't look like ios has any actual I/O functionality. Does this mean I am SOL? If there is no way using the standard library, is it maybe possible using the old C libraries (can a FILE* point to standard console I/O)?

If output alone were your goal, I would say to use an ostream for the variable class and an ofstream for the output file. However, you are then using the file for input as well, which does indeed lead to the problem you describe.

Wait, what is the goal here? Why would you need a single stream to serve multiple purposes like that?

Edited 2 Years Ago by Schol-R-LEA

However cout uses ostream instead of istream and therefore its common ancestor with fstreams would be ios.

Well, naturally, cout uses ostream because cout is an output stream, not an input stream. If you want an input stream, you have to use cin. And fstream derives from both istream and ostream, btw.

Generally, if you want a stream object that is either a file stream, a standard (console) stream, or a string stream, for that matter, the general solution is to keep a pointer to an ostream or istream, depending on the direction you need.

You cannot, however, have a generic bi-directional stream (iostream), because only some streams allow that (e.g., files and strings), but not the standard streams (cin / cout). And this is the case whether you use C++ streams or C-style I/O functions, because the standard streams behave the same in both cases. The only difference is that in C++ you are forbidden at compile time from attempting input operations on cout, or vice versa. With the C functions, doing so is permitted by the compiler, but results in an error at run time.

So, if you need iostream, then you can use that too, but only for things that actually support this bi-directionality, which excludes the standard streams (and any other similar streams, like internet socket streams).

Here is how you would typically handle having a generic stream:

#include <fstream>
#include <ostream>
#include <sstream>
#include <string>

class any_ostream {
  private:
    std::ostream* out_str;
    bool should_delete;

    // non-copyable:
    any_ostream(const any_ostream&);
    any_ostream& operator=(const any_ostream&);
  public:

    // wraps an existing stream that the caller owns:
    any_ostream(std::ostream& aOutStr) :
      out_str(&aOutStr),
      should_delete(false) { }

    // opens a file stream that the wrapper owns and deletes:
    any_ostream(const std::string& aFileName) :
      out_str(new std::ofstream(aFileName.c_str())),
      should_delete(true) { }

    // owns a string stream (retrievable via get() and a downcast):
    any_ostream() :
      out_str(new std::ostringstream()),
      should_delete(true) { }

    ~any_ostream() {
      if( should_delete )
        delete out_str;
    }

    std::ostream& get() const {
      return *out_str;
    }

    operator std::ostream&() const {
      return *out_str;
    }
};

At least, that's the general idea. There are other variations: using smart pointers, or keeping track of the pointer some other way (within the class/function where it's used).

Why would you need a single stream to serve multiple purposes like that?

This is a very common thing to do. Streams are polymorphic for good reason: it is often useful to be able to output to, or input from, anywhere. One basic example is switching between piped "stdin" input to a program and input from a specified file. This is extremely common on Unix-like platforms, where, typically, if you launch a command-line program without specifying an input or output file name, the program uses the console streams instead (stdin for input, stdout for output), on the assumption that you are piping those streams in or out. And then there are a myriad of other use cases for this kind of "generic" stream, which is the whole reason why iostreams are the only dynamically polymorphic (OOP) classes in the whole C++ standard library (excl. C++11 libraries).

Edited 2 Years Ago by mike_2000_17: answered question

mike_2000_17: While that is indeed a solution to the problem as given, I'm not sure it is the answer Labdabeta needs. This has the smell of a shoe or bottle problem to me; I think Labdabeta should possibly reconsider the direction he's taking, if he finds himself needing such an unusual solution. I may be wrong; there certainly are places where a stream might have to serve multiple purposes like this. I am just not certain this is one of them, and would like a broader picture of the problem.

Thanks, a great reply as usual. I just don't understand why we can't both read and write to the standard console. Every OS seems to have some way to do it, but none of them seem to agree on how (e.g., I know Windows has Get/SetConsoleCursorPosition to 'go back', and I remember getting something similar working on Linux once).

For now, is it possible for me to write my own class deriving from iostream that would deal with cin and cout, but act as one stream? My guess would be something like:

class ioconstream : public std::iostream
{
    public:
    // (won't compile as written: std::iostream has no default constructor,
    // it needs a streambuf*; also, the operators must return *this to chain)
    template<typename T> ioconstream &operator<<(const T& o){ std::cout << o; return *this; }
    template<typename T> ioconstream &operator>>(T& o){ std::cin >> o; return *this; }
} cio;

As for why I want to be able to do this: I think it will keep my code cleaner, as I have a situation in which I will have two streams of any kind open which will be able to read/write to/from each other. It saves me a lot of work if I can use iostream a,b; instead of fstream fa,fb; sstream sa,sb; enum{FILE,STRING,STD}a,b;.

EDIT: Didn't see Schol-R-LEA's post, but I think I addressed his concerns in the last paragraph. However for more detail, I am implementing an emulator for an educational machine language. It allows opening of two streams which can point to either STDIO, the file space or a string manipulation space. I believe that finding a way to shoehorn all three streams into two stream variables would make my code much simpler. Am I wrong?

Edited 2 Years Ago by Labdabeta: didn't see post

Well, it's true that this may not be appropriate for whatever is Labdabeta's actual problem in its broader context. And, like you, I would invite him to give us more information on that.

However, this is far from being an unusual solution. The code I posted above, or a variation thereof, is extremely common. I have written such code in many places, and I have seen it in many other places too. And there is nothing unusual about the situation Labdabeta is in.

I would say that the only thing unusual here is that Labdabeta seems to require a bi-directional (I/O) stream. A true bi-directional stream, meaning one where you can go back and forth between reading and writing the same data, is very rarely needed, and rarely a good idea either (typically, you do most operations in memory and then dump the result to a stream, or, vice versa, take everything from the stream and then operate in memory).

Edited 2 Years Ago by mike_2000_17: note

is it possible for me to write my own class deriving from iostream that would deal with cin and cout, but act as one stream.

Yes and no. To do this, you would have to violate the interface of iostream. The problem is that cin and cout are not really connected to each other (except for synchronization); they are two distinct streams, like two distinct files. Whatever you write to cout, you will never be able to read back from cin. This means that if you created an iostream that took all input from cin and sent all output to cout, you would not get the same behavior as you get from an iostream like fstream or stringstream, which both operate on a single buffer. For example, with a stringstream you can write stuff to it and then read it back out afterwards, and the same goes for fstream. That is the behavior people expect from an iostream, and you will never get it out of the standard streams (stdin, stdout), because they are separate streams.

You could do some trickery to merge the two standard streams, which is possible through OS-specific functions. For example, you could pipe stdout into stdin, and then, you would get the correct iostream behavior. However, with that, you lose the actual console input completely (and probably the console output too), and at that point, you are just dealing with an fstream (because stdin and stdout are "fake" files).

What I think you should do is use two generic streams: a generic input stream and a generic output stream. With that, you can assume that they are not connected, which means it works for stdin/stdout. But they could be connected, if you use a single fstream or stringstream to fulfill both types of operations.

N.B.: Deriving from iostream classes is a very tricky thing to do, and I don't recommend that you try to. In fact, iostream classes don't have public virtual functions, and their polymorphism is done through the streambuf base-class, which is very tricky to derive from.

I just don't understand why we can't both read and write to the standard console.

Because having everything you write to cout echoed back via cin would be very odd behavior, and it would be extremely annoying if that were the default. Needing this kind of echoing might come up on extremely rare occasions (I can't think of any reason for it), and it can be done with OS-specific functions, and that's sufficient. The standard library cannot accommodate such extremely unusual circumstances, because there would be a cost to everyone.

Also, remember that C++ is not just meant to run on PCs with a very capable OS (Linux / Windows / Mac); it is also meant to run on tiny micro-controllers and other odd devices. Although most PC OSes do have functions for interactive console I/O (e.g., curses, conio, etc.), writing such highly interactive console programs is a rare thing these days (and has always been rare). And while different OSes may provide similar functionality for moving the cursor and reading/writing anywhere on the screen, they don't behave exactly the same, so it would be very difficult to define a "standard" way to tap into those capabilities. Remember, standardization is all about specifying exactly the expected behavior on all implementations, which is a tough thing to do across varying platforms and often forces a lowest-common-denominator solution.

Also note that the separation of input and output is a really good thing in general. It allows for really neat stuff to happen. For example, if you execute a program remotely (through ssh or telnet or whatever), the interaction can happen transparently while the input and output streams are piped through (secure-)sockets and routed through the web. If you had to implement a bi-directional capability on standard streams, this would be nearly impossible to do.

Also note that C++ is not alone in this convention; I don't know of any language that provides bi-directional standard streams, for all the reasons I just mentioned.

Edited 2 Years Ago by mike_2000_17: Added reply

I suppose you are right. Combined I/O is too difficult to standardize. I am going to have to change my definitions so that I don't require it.

This program is an emulator for a machine language for a tutorial I am working on. Unfortunately its design is shaped by my experience, and I have the most experience with MIX (from The Art of Computer Programming), which is so outdated that it isn't even a RISC computer. Of course I also know basic MIPS and ARM assembly, so I am familiar with RISC architecture. I believe this issue was caused by trying to shoehorn the two situations together.

The problem is that I never liked memory-mapped I/O, so I took the I/O systems from MIX and tried to fit them into a modern file system (forgetting that that functionality is provided by the OS). This led me to assume that I needed OPEN and CLOSE commands, and thus a finite count of streams that can be open at once, and a generic stream in the implementation.

Looking back at MIX however, I see that indeed Knuth does not use OPEN or CLOSE capability, but rather defines the IN and OUT commands so that they take as input which device they will be accessing, a much more elegant solution.

I remember that in my course on low-level design we looked at memory-mapped I/O and how it uses flags passed back and forth so that open/close commands aren't required either. I suppose I should have examined a larger solution space when designing the I/O features of my language.

Thank you for pointing that out.
