use nio
A wall of text follows this. If you do not want to read it all, then please do not reply unless you feel you have something genuinely worthy to contribute.
I am writing a server from scratch. I am designing it to be like winterLove. I'm sure a lot of you who are experienced with programming servers have read or looked at how winterLove works. Basically, there is a thread to accept connections, there are individual threads for each player (the infamous "thread per client"), and a thread that processes all of the updating (PlayerHandler). I have it designed exactly like that. I like the design -- it's simple.
Anyways, when I finished writing some content, I decided that I wanted to perform some kind of a stress test. So what I did was, I sent the All-Go-Free bot, the SYIpkpker bot, to a friend of mine, and I instructed him on how to use it. Rather than running it locally, I figured it would be ever so slightly more realistic if the connections were not local. What I discovered from the test was that the cycle times were pretty great, except that once about 200 players connected, the cycle times would spike up dramatically (we're talking from 15 milliseconds to like 5000). At first, I was thinking that it was somehow associated with the acceptor thread, because I noticed that when the cycles slowed down I was receiving "A player has connected from ..." messages less frequently. However, I found out that was not the cause after I decided to time how long individual steps of an update cycle took. The bottleneck was in player updating. After I scrolled to the very bottom of the update method, I instantly recognized the cause.
The problem is that at the end of each player update, the update packet is sent. The packet, sometimes multiple kilobytes in size, is sent through the socket's output stream once it is constructed. That write is a blocking operation, as many of you know, and that blocking is what was causing the seemingly random spikes in cycle times. Once a couple of hundred players would connect, the server would attempt to update all of those players. Sometimes, though, one of the connections would be a bit slow. The server would keep processing players until it attempted to process a player connected through a "slow" socket. Once it constructed the update packet, it would attempt to send it; the send would then block for a long time because of the slow socket. While the send is blocking, none of the other players are being updated. As a result, cycles would sometimes take an extremely long time.
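To make the failure mode concrete, here is a minimal, self-contained simulation (not the actual server code): a single-threaded update loop where one write stands in for `OutputStream.write()` on a slow socket. The player count and delays are made-up numbers purely for illustration.

```java
// Simulation of the single-threaded update loop: one slow "socket write"
// stalls the entire cycle, even though every other player is fast.
public class BlockingCycleDemo {

    // Stands in for the blocking OutputStream.write() of an update packet.
    static void sendUpdate(long blockMillis) throws InterruptedException {
        Thread.sleep(blockMillis);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Six "players": five fast connections, one slow one.
        for (int player = 0; player < 6; player++) {
            long cost = (player == 3) ? 300 : 1; // player 3 is on a slow socket
            sendUpdate(cost);
        }
        long cycle = System.currentTimeMillis() - start;
        System.out.println("cycle took at least " + cycle + " ms");
        // The single slow write dominates the whole cycle time.
        System.out.println("slow write dominated: " + (cycle >= 300));
    }
}
```

With hundreds of real players, any one slow client can push the cycle from a few milliseconds into the thousands, which matches the spikes described above.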
Now, I am kind of stuck on how to approach this problem. The ideal situation would be: having the server construct the packet and hand off the packet to the individual player thread and have it send it (and merrily block in its own little thread). This would be accomplished by creating a variable in the Player class of type RS2Buffer (the class that I use to construct my packets). When the update packet would be constructed, the server thread would set that variable to equal the newly-constructed packet and perhaps raise some flag that signals that a packet was constructed. This sounds fine, but the problem is that a lot of the time, the player thread is caught up in its own blocking operation. The player thread is blocking because it is waiting to receive a packet. So the update packet would be written once it finishes being blocked by the read, but I am afraid that the update packet would be sent way after it was actually constructed, which would make it seem that the player is lagging.
So here is where I ask you guys for suggestions. How would you approach this problem? Sorry for this rather lengthy post.
By the way, if you want to see the code, just click the link in my signature.
use nio
you realize there's a reason that design isn't used anymore
Reinventing the (bad) concept is quite dumb; there are plenty of options to pick from if you really want to go with it
Yes, your problem is what you suspect. If you are using java.io, then you should be doing the I/O in a different thread.
In BlakeScape it uses one thread per player for this; IIRC, after a read() happens it will also try to write anything pending in the output buffer. I don't think this is ideal, because if a client doesn't send you packets often, it also won't get packets sent to it often. Most people tend to use one thread dedicated to reading per player and one thread dedicated to writing per player. The best way for you to do this, I think, is some blocking queue. You already create new Rs2Buffer objects in your writePacket() function; add these to a blocking queue, have the writing thread take items from that queue (take() blocks if the queue is empty, so you won't end up spinning), and then write them to the socket.
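A minimal sketch of that blocking-queue handoff, assuming a plain `byte[]` stands in for the poster's RS2Buffer (class names here are illustrative, not from the actual server): the game thread enqueues packets and returns immediately, while a per-player writer thread drains the queue and performs the potentially blocking socket write.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One writer per player: the game thread never touches the socket directly.
public class PacketWriter implements Runnable {
    private final BlockingQueue<byte[]> outgoing = new LinkedBlockingQueue<>();
    private volatile int written = 0;

    // Called from the game thread; never blocks on the socket.
    public void queuePacket(byte[] packet) {
        outgoing.add(packet);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // take() blocks while the queue is empty, so no busy-spinning.
                byte[] packet = outgoing.take();
                // socketOut.write(packet) would go here; we just count for the demo.
                written++;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shut down cleanly
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PacketWriter writer = new PacketWriter();
        Thread t = new Thread(writer);
        t.start();
        for (int i = 0; i < 3; i++) {
            writer.queuePacket(new byte[] { (byte) i });
        }
        Thread.sleep(200); // give the writer thread time to drain the queue
        t.interrupt();
        t.join();
        System.out.println("packets written: " + writer.written);
    }
}
```

With this in place, a slow socket only delays its own writer thread; the update cycle never waits on the network.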
This problem doesn't exist in BlakeScape; it deals with it in exactly the same way that you are proposing. It only does I/O in separate threads; all the actual game logic processing (such as handling packets, updating players, updating npcs) is done in one thread. However, the fundamental problem was that synchronization became difficult. On top of general thread synchronization, synchronizing player updates became problematic. For example, some thread would finish reading packets, moving, updating, and resetting before another got the chance to update. This became especially obvious with movement: some threads would finish handling movement and updating before another got the chance to update. This resulted in inconsistencies; the position of a character on one player's screen would sometimes not match the position of that character on another player's screen. The simplistic winterLove design fixes this problem by processing all of the players in one thread.
Btw, a suggestion: split up the actual gameplay aspects of Player and the stuff that deals with network code, e.g. have Player & Session. This is because your Player objects will probably end up being quite big, and you create them straight away when a new connection arrives even though that connection may never lead to a successful login (this is one problem with winterLove). Also, if you ever end up adding e.g. an update server, you don't want a Player to be attached to an update connection. This makes more sense from a design point of view as well, imo.
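A rough sketch of that Player/Session split, under the assumption above (all names here are illustrative): a lightweight Session exists per connection from the moment it is accepted, and the heavyweight Player is only created and attached once login actually succeeds.

```java
// Illustrative Player/Session split: network state lives in Session,
// gameplay state lives in Player, attached only after a successful login.
public class SessionDemo {

    static class Session {
        final String remoteAddress; // network-level state only
        Player player;              // null until login succeeds

        Session(String remoteAddress) {
            this.remoteAddress = remoteAddress;
        }

        void loginSucceeded(String username) {
            // Heavyweight gameplay state is allocated only now.
            player = new Player(username);
        }
    }

    static class Player {
        final String username;
        // inventory, skills, position, etc. would live here
        Player(String username) {
            this.username = username;
        }
    }

    public static void main(String[] args) {
        // A connection arrives: only the cheap Session is created.
        Session session = new Session("127.0.0.1");
        System.out.println("player before login: " + session.player);
        session.loginSucceeded("Bob");
        System.out.println("player after login: " + session.player.username);
    }
}
```

Connections that never log in (bots, update-server requests, port scans) then only ever cost a Session, never a full Player.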