Friday, September 4, 2015

"Snake the Net" is Now Available

After a demanding test period, we are proud to announce that "Snake the Net" is now on Google Play. We have fixed some major and minor issues but, above all, we have tried to bring you smooth and stable gameplay.

If you have any issues or suggestions, don't hesitate to tell us.

I hope you enjoy it.

Download at Snake The Net



Tuesday, April 7, 2015

Migrating from TCP to UDP to reduce latency



The TCP protocol guarantees a reliable connection between two ports on different machines. It is reliable because it guarantees that packets arrive at the other end, and in order. The implementation of the protocol relies on confirmation packets travelling from one end to the other: if a packet is not confirmed, the sending machine resends it. In scenarios where latency is not an issue, TCP is a safe option, but applications that need continuous, fluid interaction between remote clients and the server should bet on UDP.
For our part, we will reimplement inside our application the guarantees that TCP gives us, as additional functionality.
A good practice is to design the application assuming that we have a reliable connection. It is even recommended to make a first implementation using TCP. We will keep the communication logic inside dedicated classes, modules, libraries, etc. (depending on the language). The idea is to have an abstraction of the communication between clients and the server that guarantees there is real flow control of packets. What do we gain with this approach?

- We can test the functionality of the application under ideal conditions. We will not have the desired final speed, but it will help us detect functional errors.
- Some parts of the application can work correctly over TCP. For example, our multiplayer game establishes a dialogue between client and server before the start of the game. In this scenario, high latency is not a handicap.
- Similar to the previous point: once we are using UDP, if we find a functional error we can always switch back to TCP to rule out the communications layer as the origin of the error.
Therefore, our goal is to have a first version of our application using a communication library based on TCP. Once we see that the functional part of the application behaves as expected, we will replace it with a communications library built on UDP.
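As a sketch of this idea (all names here are illustrative, not taken from the game's actual code), the game logic can be written against a small interface, with a TCP implementation first and a UDP one swapped in later. Here an in-memory loopback channel stands in for either back end:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// The transport-agnostic contract the game logic codes against.
interface CommandChannel {
    void send(String command);
    String receive(); // null if nothing is pending
}

// In-memory stand-in used while developing; a TcpChannel and later a
// UdpChannel would implement the same interface and be swapped in.
class LoopbackChannel implements CommandChannel {
    private final Queue<String> queue = new ArrayDeque<>();
    public void send(String command) { queue.add(command); }
    public String receive() { return queue.poll(); }
}

public class ChannelDemo {
    // Game code only ever sees the interface, never the transport.
    static String roundTrip(CommandChannel ch, String cmd) {
        ch.send(cmd);
        return ch.receive();
    }

    public static void main(String[] args) {
        CommandChannel ch = new LoopbackChannel();
        System.out.println(roundTrip(ch, "MOVE_LEFT"));
    }
}
```

Because the rest of the application depends only on the interface, replacing the TCP-backed implementation with a UDP-backed one is a one-line change where the channel is constructed.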


 
Let's see an outline of how to implement this UDP communications library.
Before going into detail, some context about our application. Our server, for a multiplayer game (Snake the Net), creates a thread for each incoming connection. For each game, there is a single thread that manages the game while n other threads manage the relationship with each of the remote clients. It is an architecture that can be used in countless applications. It is important to detail the scenario because this architecture helps us make a modular and scalable deployment, although it has implications for communications.
- One of the advantages of TCP is that for each incoming connection we get a dedicated socket, so there is a dedicated channel between each client and the server.
Socket client = socket.accept();
In our case, each server thread that serves a remote client will have this dedicated socket, reading and writing on it when needed.
ois = new ObjectInputStream(client.getInputStream());
c = (Command) ois.readObject();

oos = new ObjectOutputStream(client.getOutputStream());
oos.writeObject(c);
In the case of UDP, we will not have this dedicated channel, so we have to simulate it. After creating the UDP socket, we will read from it directly.
DatagramSocket socket = new DatagramSocket(port);
This server socket will be shared by all the threads that serve remote clients, so we add a layer that fixes this shortcoming. The solution we have adopted is to create a server thread that is responsible for reading the socket, storing each packet, and delivering it to the right client when required. The skeleton is as follows:
       private static ArrayList<Client> clients;
       private static ArrayList<Command> commands;

       while (keepReading) {
              Command command = UDPCom.getCommand();
              synchronized (commands) {
                     commands.add(command);
                     ClientN cli = getClientWaittingCommand(command);
                     if (cli != null)
                            synchronized (cli) {
                                   cli.notify();
                            }
              }
       }

1.       After opening the socket, we create a thread that reads from the socket continuously. Once a packet arrives, the server registers it in a packet container.

              Command command = UDPCom.getCommand();
              synchronized (commands) {
                     commands.add(command);
              }
2.      Then we check whether there is a client waiting for this packet. If it has registered, it will be notified so it can pick up its packet. Here we can already deduce that the packet must carry a client identifier. The allocation of these ids must be agreed between the server and the remote client; in our case, we perform this dialogue via TCP (there is no need to use UDP for it).

       ClientN cli = getClientWaittingCommand(command);

       if (cli != null)
              synchronized (cli) {
                     cli.notify();
              }
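The shape of the packet itself can be inferred from the text: it needs at least a client identifier (negotiated over TCP) and, as discussed further below, a per-client sequence number. A hypothetical sketch of such a Command class (the field and method names are assumptions, not the game's actual code):

```java
import java.io.Serializable;

// Illustrative packet shape: a client id assigned during the initial TCP
// handshake, a per-client sequence number for ordering, and the payload
// (e.g. a movement such as "UP").
public class Command implements Serializable {
    private final int clientId;
    private final int sequence;
    private final String payload;

    public Command(int clientId, int sequence, String payload) {
        this.clientId = clientId;
        this.sequence = sequence;
        this.payload = payload;
    }

    public int getClientId() { return clientId; }
    public int getSequence() { return sequence; }
    public String getPayload() { return payload; }
}
```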

From the perspective of the thread that serves the client, as mentioned, we operate analogously regardless of the protocol. Reading is encapsulated in a method analogous to the one TCP offers.

public static Command readUDPCommand(ClientNibbles client, int timeOut) throws InterruptedException
{
       Command command = getReadedCommandByClient(client);
       if (command == null)
       {
              synchronized (clients)
              {
                     clients.add(client);
              }
              synchronized (client)
              {
                     client.wait(timeOut);
              }
              command = getReadedCommandByClient(client);

              synchronized (clients)
              {
                     clients.remove(client);
              }
       }
       if (command != null)
       {
              synchronized (commands)
              {
                     commands.remove(command);
              }
       }
       return command;
}

1.       Each thread serving a client first looks at whether the UDP server already has a pending packet for it.
2. If we already have it, we take it and delete it from the packet container.
3. If not, we register the client in the waiting-client list and wait until the UDP server notifies us that a new packet has arrived. Upon notification, we read the packet from the unread-packet container and delete it from the container.
With this approach we simply ensure that each thread serving a remote client receives the packets it needs. We also bring the working modes of the two protocols closer together, so that we can swap the TCP/UDP libraries at our convenience. However, it seems that we have not yet implemented two of the advantages that TCP delivers: ensuring the order of arrival of the packets and obtaining confirmation of their arrival.
Let's see... there is no magic solution. The key is to detect which type of traffic can be carried over UDP. Specifically, our application will only use UDP while the game is running, for collecting client movements on the server. Consider each of the two points:
- Ensure that the packets arrive.
The server, which is an authoritative server, has an internal clock that notifies it at fixed intervals that a turn has finished and that the summary of movements must be sent back to the players. If a turn ends with no news from a remote client, we assume that, for latency reasons, its movement is late, and it is discarded. In fact, this logic also solves the problem of lost packets: a packet that never arrives is equivalent to a movement that is late. The only difference is that the packet really never arrives.
In this case, the effect of using UDP is that lost packets will be seen by the client as corrections to its movements (only if it changed direction). True, but the alternative (using TCP) means raising the latency and ending up with a much higher percentage of late movements.
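The turn logic described above can be sketched as follows. This is a minimal illustration, not the game's actual code: moves arrive on a queue, the server drains whatever made it in before the turn deadline, and anything later is simply not part of this turn.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TurnLoop {
    // Collect the moves that arrive before the turn deadline. A move that
    // shows up after the deadline is discarded (or counted for a later turn),
    // which is exactly how a lost packet is treated: as a move that is late.
    static List<String> collectTurn(BlockingQueue<String> moves, long turnMillis)
            throws InterruptedException {
        List<String> collected = new ArrayList<>();
        long deadline = System.currentTimeMillis() + turnMillis;
        while (true) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) break; // turn is over
            String move = moves.poll(remaining, TimeUnit.MILLISECONDS);
            if (move != null) collected.add(move);
        }
        return collected;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> moves = new ArrayBlockingQueue<>(16);
        moves.add("P1:LEFT");
        moves.add("P2:UP");
        System.out.println(collectTurn(moves, 50)); // both moves made the turn
    }
}
```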

- Ensure the order of arrival of packets
Let's review what we have done in the UDP server. Put simply, there is a thread that reads the packets arriving at the socket and leaves them in a container for each client to pick up its own. Well, we include a sequence identifier in each packet so that the server knows which one it has to collect next. It is then enough to look up packets by this sequence number. If a packet with a sequence id greater than expected arrives early, it simply remains stored until the server needs it.
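The sequence-number handling can be sketched like this (an illustrative per-client inbox, with names invented for the example): packets are stored under their sequence id in whatever order they arrive, and the server only ever takes the one it expects next.

```java
import java.util.HashMap;
import java.util.Map;

// One inbox per client: arrival order does not matter, consumption order does.
public class SequencedInbox {
    private final Map<Integer, String> pending = new HashMap<>();
    private int expected = 0;

    // Store every packet under its sequence id, even if it arrived early.
    public void store(int sequence, String payload) {
        pending.put(sequence, payload);
    }

    // Hand back only the packet the server expects next; a packet that
    // arrived early stays stored until its turn. Returns null if the
    // expected packet has not arrived yet.
    public String takeNext() {
        String payload = pending.remove(expected);
        if (payload != null) expected++;
        return payload;
    }
}
```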
In summary, it appears that we have achieved our goal: we have created a UDP-based library with the same functionality as TCP. In addition, thanks to the modular design, we can exchange the two libraries when needed.

Wednesday, March 11, 2015

Accessing databases through multiple threads



Most applications need to store information. The options and scenarios are diverse. Let's focus on the scenario in which multiple remote clients need to store centralized information; it does not matter whether the remote client is a standalone application, a website, or a mobile app.

   In general, it is not advisable to let remote clients communicate with the database directly. It is always better to create a middleware layer to manage the persistence of the entire system, not only for safety reasons but because a single point of entry to the database lets us enhance and manage performance in a unified way.

So clients send and receive data from the server, and the server is responsible for managing the persistence of this information. In small databases with few connections there is not much to worry about: with some notions of SQL you can build a small server with good performance.

But when you face scenarios involving large numbers of users or large transactions, you must use more advanced techniques. There are two major problems to deal with:

  •      Large number of users:

      This is more a question of architecture, but it is always good practice to create a new thread for each new connection from a remote client. Of course, you have to limit the number of threads depending on your environment if you want to guarantee quality of service to everyone connected.
You cannot share the same connection among all the threads; or rather, you can, but since database providers implement sessions in a synchronized way, each command will not be executed until its turn comes. You will be in a FIFO (first in, first out) queue.

The idea is to create a new database session for each external connection (owned by the new thread). Each remote client will then fetch or store its data in parallel, as if it were connected directly to the database.

So we're done... aren't we?

Unfortunately, this is not possible, or at least not always. You can't open a new session for each incoming connection (and therefore one per thread) because database sessions are costly both in terms of resources and licenses (depending on how each provider licenses them).

 The solution to this problem is to create a pool of database connections. Each thread requests a free connection from the pool when needed and releases it when it finishes its work. Thus, no thread owns a connection; indeed, there is no sense in keeping a database connection open without using it. In short, each thread requests a connection when needed, does its job, and then releases the connection.
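A minimal sketch of such a pool (the class is generic and illustrative; a real one would hold JDBC connections and handle validation and timeouts): sessions are created up front, and a blocking queue hands them out and takes them back.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal connection-pool sketch. C is a placeholder for a session type;
// creating the sessions up front (e.g. at server start-up) keeps their
// high creation cost out of the request path.
public class ConnectionPool<C> {
    private final BlockingQueue<C> free = new LinkedBlockingQueue<>();

    public ConnectionPool(List<C> connections) {
        free.addAll(connections);
    }

    // Blocks until some other thread releases a session.
    public C acquire() throws InterruptedException {
        return free.take();
    }

    // Hand the session back so a waiting thread can use it.
    public void release(C connection) {
        free.add(connection);
    }

    public int available() {
        return free.size();
    }
}
```

Because acquire() blocks when the pool is empty, the delay a user experiences under load is exactly the wait for another thread's release(), which is the symptom to watch when deciding whether to grow the pool.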

…but it can still happen that, when needed, there are no available sessions in the pool. In this case, the thread waits until another thread releases a database session, and the user will likely experience some delay. If this happens frequently, you should ask whether you can increase the number of sessions in the pool or, if you have reached maximum capacity, purchase more resources so that your server can handle this load.

Regarding connection pools, some providers (including database vendors) offer implementations that can be used via an API. We have used some of them and they work quite well. You also have the option of making your own implementation; it will cost you some time, but on the other hand you will have more control over what you do and can particularize and optimize it as it suits you.

If you choose to do it yourself, a good idea is to open the database sessions when the server starts, because creating a database connection has a high cost. If the connections are opened at start-up, remote clients will not perceive the cost of establishing them.


  •   Long transactions or lots of queries/commands involved.

      There are other scenarios, or mixed scenarios, in which the issue is that you have to run a lot of queries for a single user request. The perception of the user is that the request takes longer than expected. How to deal with this?

It depends heavily on the scenario. In fact, solving this problem requires insight into the performance of your application. We can use a strategy similar to the one in the previous point.

We will create a specific thread for each group of linked commands. For example, if you are reading data about a client and its orders, you can load the client data and each order in parallel, because there is no apparent dependency between them. So, again, we create different threads, each requesting a connection from the pool, and run these commands in parallel. As a result, users wait only for the slowest group of commands executed in parallel.
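This fan-out/join pattern can be sketched with an ExecutorService (the query bodies here are stand-ins for real database reads, and the names are invented for the example): independent branches run in parallel, and the join point waits only for the slowest one.

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLoad {
    // Submit the independent "queries" (client data and its orders) to a
    // thread pool and join on both results; the caller waits only for the
    // slowest branch instead of the sum of both.
    static List<String> loadClientAndOrders(ExecutorService pool)
            throws InterruptedException, ExecutionException {
        Future<String> client = pool.submit(() -> "client-data");
        Future<String> orders = pool.submit(() -> "orders-data");
        return List.of(client.get(), orders.get()); // join point
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        System.out.println(loadClientAndOrders(pool));
        pool.shutdown();
    }
}
```

In a real server, each submitted task would acquire a session from the connection pool at its start and release it at its end, which is exactly the discipline the deadlock warning below is about.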

If one branch is heavier than the rest, we can choose to return part of the data while the heavier parts are still running. If you do this, the remote client needs to know, in order to inform the user. When the heavier parts finish, send the remaining information to the client so that it can complete the user request.

This approach has some risks that you need to know:

    1.   You will be dealing with several threads in parallel, so you are in charge of joining them when they finish their duties. Be careful: you have to ensure that the information you are accessing has already been loaded. Use semaphores or another mutual-exclusion mechanism to control access to these areas.
    2. Beware of the deadlocks that connection-pool usage can cause. Release a session as soon as you no longer need it. For example, if you create a new thread from a thread that already holds a session, the latter may end up locked (depending on the size of the pool and the context): it could be waiting for its child thread before releasing its session, while the child is waiting for an available session. If this behavior is widespread, we could end up with a global lock of the entire application. This scenario may seem unlikely, but object-oriented applications often suffer from this problem. To address it, as mentioned, release the database session as soon as you no longer need it. But be careful: with hierarchies of objects and multithreading it is not so straightforward to guarantee this behavior. A simple solution is to restrict parallelism to the objects that have no downward dependencies; in a tree graph, we would only parallelize the leaves, because they have no dependencies.
Combining these two techniques you can get good performance in your database-intensive applications.