Hi,
I've been toying around with a join optimization, as my product, kogama.com, usually has to send a lot of data per joining peer. The basic idea is most easily understood from the following:
while (pendingResponses.Count > 0)
{
    ResponseBase responseBase = pendingResponses.Peek();
    if (!responseBase.IsReady)
    {
        // The response data has not been produced yet; try again later.
        return;
    }

    Peer.AllowSendBufferFull = true;
    while (!responseBase.IsDone)
    {
        SendResult result = responseBase.SendParameters(Peer);
        if (result == SendResult.SendBufferFull)
        {
            // The peer's send buffer is full; back off and resume on the next callback.
            Peer.AllowSendBufferFull = false;
            return;
        }
    }

    pendingResponses.Dequeue();
}
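For reference, this is a simplified sketch of the ResponseBase shape the loop relies on. The member names are the ones used above; the abstract class, the PeerBase parameter type and the namespaces are illustrative assumptions, not the exact code:

using Photon.SocketServer;          // PeerBase
using PhotonHostRuntimeInterfaces;  // SendResult

// Sketch only: each pending response knows how to stream its parameters to a peer
// in batches until either it is done or the peer's send buffer fills up.
public abstract class ResponseBase
{
    // True once the data for this response (snapshot, cached events, ...) is available.
    public abstract bool IsReady { get; }

    // True once every parameter batch has been handed to the peer.
    public abstract bool IsDone { get; }

    // Sends the next batch to the peer and reports SendBufferFull when the peer
    // cannot take more data, so the caller can stop and resume later.
    public abstract SendResult SendParameters(PeerBase peer);
}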
I simply push data onto the joining peer's send buffer until it is full. This technique has worked well for us for cached events, i.e. events that happened while the peer was receiving the game snapshot. I've now decided to use it for our entire join flow, which makes joining almost twice as fast. Sadly, though, the game server uses much more CPU than expected and eventually becomes unstable. I have the following questions:
1: Is sending events vs. sending operation responses handled differently within Photon? One change I made was to use operation responses for this; previously I had only used the above technique for events.
2: One of the symptoms of this elevated CPU usage is more threads. My PhotonServer.config uses the defaults for the various thread settings, yet when the issue occurs at least 50 unaccounted-for threads have been created. Does Photon allocate new threads on the fly, and if so, when does it do this?
3: I've created an async DB layer where I allocate 40 threads on startup. These are set to above-normal priority and are dormant when not in use (rough sketch of the setup below). Will this cause issues for Photon?
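For question 3, the DB worker pool is set up roughly like this. This is a simplified sketch only: the class and member names are made up for illustration, the queue type is an assumption, and the real layer also contains the actual DB calls and error handling:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Rough sketch: 40 dedicated threads at above-normal priority that block on a
// work queue, so they stay dormant while there is nothing to do.
public sealed class DbWorkerPool
{
    private readonly BlockingCollection<Action> workQueue = new BlockingCollection<Action>();

    public DbWorkerPool(int threadCount = 40)
    {
        for (int i = 0; i < threadCount; i++)
        {
            var worker = new Thread(this.Work)
            {
                IsBackground = true,
                Priority = ThreadPriority.AboveNormal,
                Name = "DbWorker-" + i
            };
            worker.Start();
        }
    }

    // Game logic enqueues DB work items here.
    public void Enqueue(Action dbWork)
    {
        this.workQueue.Add(dbWork);
    }

    private void Work()
    {
        // GetConsumingEnumerable blocks while the queue is empty, so idle workers sleep.
        foreach (Action dbWork in this.workQueue.GetConsumingEnumerable())
        {
            dbWork();
        }
    }
}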