
Multi-threading Interview Questions in Java - Part 2


Multi-threading


1. What are the minimum requirements for a Deadlock situation in a program?
                      For a deadlock to occur, the following four conditions must hold:
  1. Mutual exclusion: There has to be a resource that can be accessed by only one thread at any point of time.
  2. Resource holding: One thread locks one resource and holds it, and at the same time tries to acquire a lock on another mutually exclusive resource.
  3. No preemption: There is no preemption mechanism by which a resource held by a thread can be freed after a specific period of time.
  4. Circular wait: Two or more threads each lock one resource and wait for each other's resource to get free. This causes a circular wait among threads for the same set of resources.
2. How can we prevent a Deadlock?
                      To prevent a deadlock from occurring, at least one of the four requirements has to be removed:
  1. Mutual exclusion: We can use optimistic locking to prevent mutual exclusion among resources.
  2. Resource holding: A thread has to release all its exclusive locks if it does not succeed in acquiring all the exclusive locks it requires.
  3. No preemption: We can use a timeout period after which an exclusive lock is freed.
  4. Circular wait: We can ensure that a circular wait does not occur by making all threads acquire their exclusive locks in the same sequence.
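The "resource holding" point above can be sketched with ReentrantLock.tryLock(): a thread that cannot get its second lock backs off and releases the first one instead of holding it forever. This is a minimal illustrative sketch; the class and method names are mine, not from the text.

```java
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockAvoidance {

    // Try to take both locks; release everything on failure
    // (this removes the "resource holding" deadlock condition).
    static boolean acquireBoth(ReentrantLock a, ReentrantLock b) {
        if (!a.tryLock()) {
            return false;            // could not even get the first lock
        }
        if (!b.tryLock()) {
            a.unlock();              // back off instead of holding lock a
            return false;
        }
        return true;                 // caller must unlock b, then a, when done
    }

    public static void main(String[] args) {
        ReentrantLock lockA = new ReentrantLock();
        ReentrantLock lockB = new ReentrantLock();
        if (acquireBoth(lockA, lockB)) {
            try {
                System.out.println("Both locks held, doing work");
            } finally {
                lockB.unlock();
                lockA.unlock();
            }
        }
    }
}
```

A timed variant, tryLock(long, TimeUnit), additionally covers the "no preemption" point by giving up after a deadline.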
3. How can we detect a Deadlock situation?
                        We can use the ThreadMXBean.findDeadlockedThreads() method to detect deadlocks in a Java program. This bean comes with the JDK.
        Sample code is as follows:
                         ThreadMXBean bean = ManagementFactory.getThreadMXBean();
                         long[] threadIds = bean.findDeadlockedThreads(); // returns null if there is no deadlock

                         if (threadIds != null) {
                                 ThreadInfo[] infos = bean.getThreadInfo(threadIds);
                                 for (ThreadInfo info : infos) {
                                         StackTraceElement[] stack = info.getStackTrace();
                                         // Log or store stack trace information.
                                 }
                         }


4. What is a Livelock?
                             Livelock is a scenario in which two or more threads block each other by continually responding to an action caused by the other thread. In a deadlock, the threads wait in one specific state; in a livelock, the threads keep changing their state in a way that prevents progress on their regular work. E.g. consider two threads that each try to acquire two locks, and release the lock they hold whenever they cannot acquire the second lock. If both threads start concurrently, each may acquire one lock, fail to acquire the other, release its own lock and try again from the beginning. This situation can keep repeating indefinitely.


5. What is Thread starvation?
                         In priority-based scheduling, threads with lower priority get less execution time than threads with higher priority. If a lower-priority thread performs a long-running computation, it may not get enough processor time to finish the computation in time. In such a scenario the lower-priority thread starves: it is kept away from the processor by the higher-priority threads.

6. How can a synchronized block cause Thread starvation in Java?
                      Synchronization does not define which thread will enter a synchronized block next. If many threads are waiting to enter a synchronized block, some of them may wait much longer than others and may not get enough time to finish their work in time. Such threads starve on the synchronized block.

7. What is a Race condition?
                          A race condition is an unwanted situation in which a program attempts to perform two or more operations at the same time, even though the logic of the program requires the operations to be performed in a proper sequence to run correctly. Since it is undesirable behavior, it is considered a bug.
                             Most of the time a race condition occurs in a "check then act" scenario: two threads check and act on the same value, but one thread modifies the value between the other thread's check and act. See this example:
                                        if (x == 3) // Check
                                        {
                                                    y = x * 5; // Act
                                                               // If another thread changes x
                                                               // between "if (x == 3)" and "y = x * 5",
                                                               // then y will not be equal to 15.
                                        }
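The check-then-act bug above disappears when the check and the act happen atomically under one lock. A minimal sketch of the fix; the CheckThenAct class name and fields are illustrative, not from the text.

```java
public class CheckThenAct {
    private int x = 3;
    private int y;

    // The check and the act happen atomically under this object's monitor,
    // so no other thread can change x between them.
    public synchronized void actIfThree() {
        if (x == 3) {       // check
            y = x * 5;      // act - still sees x == 3, so y is always 15 here
        }
    }

    public synchronized void setX(int newX) { x = newX; }

    public synchronized int getY() { return y; }

    public static void main(String[] args) {
        CheckThenAct c = new CheckThenAct();
        c.actIfThree();
        System.out.println(c.getY()); // 15
    }
}
```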


8. What is a Fair lock in multithreading?
                       In Java the ReentrantLock class is used to implement a Fair lock. Its constructor accepts an optional fairness parameter. When fairness is set to true, the ReentrantLock gives access to the longest-waiting thread. The most popular use of a Fair lock is in avoiding thread starvation: since the longest-waiting threads are always given priority in case of contention, no thread can starve. The downside of a Fair lock is lower program throughput, since slow or low-priority threads acquire the lock multiple times, leading to slower overall execution. The one exception is the tryLock() method of ReentrantLock, which does not honor the fairness setting.

9. Which two methods of Object class can be used to implement a Producer Consumer scenario?
                          In a Producer Consumer scenario, one thread is a Producer and another thread is a Consumer. For this scenario to work, the Consumer has to know when the Producer has produced. The Consumer calls the wait() method of Object class to wait on the Producer, and the Producer calls the notify() method of Object class to inform the Consumer that it has produced. In this way the processor time between the produce and consume operations is freed by the use of wait() and notify().
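A minimal wait()/notify() sketch of this scenario; the class and buffer are illustrative. Note the while loop around wait(), which guards against spurious wakeups.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ProducerConsumer {
    private final Queue<Integer> buffer = new ArrayDeque<>();

    public synchronized void produce(int item) {
        buffer.add(item);
        notify();                   // wake a consumer waiting on this monitor
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) {  // loop guards against spurious wakeups
            wait();                 // releases the monitor until notify()
        }
        return buffer.remove();
    }

    public static void main(String[] args) throws InterruptedException {
        ProducerConsumer pc = new ProducerConsumer();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("consumed " + pc.consume());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();   // blocks in wait() until something is produced
        pc.produce(42);
        consumer.join();
    }
}
```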

10. How does the JVM determine which thread should wake up on notify()?
                     If multiple threads are waiting on an object's monitor, the JVM awakens one of them. As per the Java specification the choice of thread is arbitrary and at the discretion of the implementation, so there is no guarantee that a specific thread will be awakened on a notify() call.

11. Check if the following code is thread-safe for retrieving an integer value from a Queue.
                           public class QueueCheck {
                                    Queue queue;

                                    public Integer getNextInt() {
                                             Integer retVal = null;
                                             synchronized (queue) {
                                                     try {
                                                             while (queue.isEmpty()) {
                                                                      queue.wait();
                                                             }
                                                     } catch (InterruptedException e) {
                                                             e.printStackTrace();
                                                     }
                                             }

                                             synchronized (queue) {
                                                     retVal = queue.poll();
                                                     if (retVal == null) {
                                                              System.err.println("retVal is null");
                                                              throw new IllegalStateException();
                                                     }
                                             }
                                             return retVal;
                                    }
                           }

               In the above code the Queue is used as the object monitor to handle concurrency issues. But the code may not behave correctly in a multithreading scenario, because there are two separate synchronized blocks. If two waiting threads are woken up simultaneously by another thread, both threads enter the second synchronized block one after the other. Only the first of them gets a value from the queue, making it empty again; the second thread then polls an empty queue, gets null, and throws an IllegalStateException. The emptiness check and the poll have to happen in one synchronized block for the code to be thread-safe.

12.How can we check if a thread has a monitor lock on a given object?
                     In Java, Thread class has a static method holdsLock(Object obj) to check whether the current thread holds the monitor lock on the given object. This method returns true if, and only if, the current thread holds the lock on the object passed as an argument.
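A small sketch of holdsLock(); the class and method names are illustrative.

```java
public class HoldsLockDemo {
    static final Object monitor = new Object();

    public static boolean checkInside() {
        synchronized (monitor) {
            return Thread.holdsLock(monitor); // true: we are inside the block
        }
    }

    public static boolean checkOutside() {
        return Thread.holdsLock(monitor);     // false: no lock held here
    }

    public static void main(String[] args) {
        System.out.println(checkInside());    // true
        System.out.println(checkOutside());   // false
    }
}
```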

13.What is the use of yield() method in Thread class?

                   The yield() method of Thread class gives a hint to the scheduler that the current thread is willing to yield the processor. The scheduler can use this hint or simply ignore it; since scheduler behavior is not guaranteed, the current thread may get processor time again immediately. yield() can be useful for debugging or testing purposes, but there is rarely a concrete use for it.


14. What is an important point to consider while passing an object from one thread to another thread?
           This is a multi-threading scenario, and the most important point to consider is whether two threads can update the same object at the same time. If they can, it may cause issues like race conditions. So it is recommended to make the object immutable, which avoids concurrency issues on that object.

15.What are the rules for creating Immutable Objects?
             As per Java specification, following are the rules for creating an Immutable object:
  1. Do not provide "setter" methods that modify fields or objects referred to by fields.
  2. Make all fields final and private.
  3. Do not allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods.
  4. If the instance fields include references to mutable objects, do not allow those objects to be changed:
     - Do not provide methods that modify the mutable objects.
     - Do not share references to the mutable objects. Never store references to external, mutable objects passed to the constructor; if necessary, create copies and store references to the copies. Similarly, create copies of your internal mutable objects when necessary to avoid returning the originals from your methods.


16.What is the use of ThreadLocal class?
             ThreadLocal class provides thread-local variables: each thread has its own copy of the variable and accesses only that copy. If thread X stores a value x in a ThreadLocal and thread Y stores a value y in the same ThreadLocal, then X gets back x from the ThreadLocal instance and Y gets back y. Typically, ThreadLocal instances are private static fields that are associated with the state of a thread.
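A minimal sketch of per-thread copies; the class name is illustrative.

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy; the initial value is 0
    static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        local.set(1);                                        // main thread's copy
        Thread other = new Thread(() -> {
            local.set(2);                                    // other thread's copy
            System.out.println("other sees " + local.get()); // 2
        });
        other.start();
        other.join();
        System.out.println("main sees " + local.get());      // still 1
    }
}
```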

17.What are the scenarios suitable for using ThreadLocal class?
          We can use instance of ThreadLocal class to transport information within an application. One use case is to transport security or login information within an instance of ThreadLocal so that every method can access it. Another use case is to transport transaction information across an application, without using the method-to-method communication.

18.How will you improve the performance of an application by multi-threading?
              In an environment with more than one CPU, we can parallelize computation tasks across multiple CPUs, so a bigger task takes less time because multiple threads divide the work among themselves. For example, if we have to process 1000 data files and calculate the sum of numbers in each file, and each file takes 5 minutes, then processing 1000 files sequentially takes 5000 minutes. By using multi-threading we can process these files in 10 parallel threads, 100 files per thread. Since the work now happens in 10 parallel threads, the time taken will be around 500 minutes.
  
19.What is scalability in a Software program?
                 Scalability is the capability of a program to handle a growing amount of work, or its potential to be enlarged to accommodate growth. A program is considered scalable if it can handle a large amount of input data, a large number of users, or a large number of nodes. When we say a program does not scale, it means the program fails when the size of the task increases.


20. How will you calculate the maximum speed up of an application by using multiple processors?
                     Amdahl’s law gives the theoretical speedup in latency of the  execution of a task at fixed workload. It gives the formula to compute the theoretical maximum speed up that can be achieved by providing multiple processors to an application.
        If S is the theoretical speedup then the formula is:
                    S(n) = 1 / (B + (1-B)/n)
              where n is the number of processors and B is the fraction of the program that cannot be executed in parallel. As n approaches infinity, the term (1-B)/n converges to zero, and the formula reduces to 1/B. So the theoretical maximum speedup is inversely proportional to the fraction that has to be executed serially: the lower this fraction, the more speedup can be achieved.
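The formula above is easy to check numerically. This sketch (class and method names are mine) shows that with B = 0.1, four processors give only about a 3x speedup, and the speedup approaches the 1/B = 10x ceiling as n grows:

```java
public class Amdahl {
    // Amdahl's law: S(n) = 1 / (B + (1 - B) / n)
    // b = serial fraction of the program, n = number of processors
    static double speedup(double b, int n) {
        return 1.0 / (b + (1.0 - b) / n);
    }

    public static void main(String[] args) {
        // With 10% serial code, even infinite processors cap the speedup at 1/0.1 = 10x
        System.out.println(speedup(0.1, 4));    // ~3.08
        System.out.println(speedup(0.1, 1000)); // ~9.91, approaching 10
    }
}
```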


21.What is Lock contention in multithreading?
                Lock contention is the situation in which one thread is waiting for a lock on an object that is being held by another thread. The waiting thread cannot use this object until the other thread releases the lock. It is also known as Thread contention. Ideally, locks reduce thread contention: without locks, multiple threads could operate on the same object and cause undesirable behavior. If locking is implemented correctly, it reduces the occurrence of contention between threads.

22. What are the techniques to reduce Lock contention?
             There are following main techniques to reduce Lock contention:
  1. Reduce the scope of lock.
  2. Reduce object pooling.
  3. Reduce the number of times a certain lock can be acquired.
  4. Avoid synchronization at unnecessary places.
  5. Implement hardware supported Optimistic locking in place of synchronization.
23. What technique can be used in the following code to reduce Lock contention?
                                  synchronized (map) {
                                           Random r = new Random();
                                           Integer value = Integer.valueOf(42);
                                           String key = Integer.toString(r.nextInt());
                                           map.put(key, value);
                                  }

          The code uses Random to generate a random key and Integer.valueOf() to box 42. (Note: java.util.Random has no nextString() method, so the key is generated with Integer.toString(r.nextInt()) here.) Since these lines of code are local to the current thread, they can be moved out of the synchronized block, shrinking the lock's scope:
                                Random r = new Random();
                                Integer value = Integer.valueOf(42);
                                String key = Integer.toString(r.nextInt());
                                synchronized (map) {
                                           map.put(key, value);
                                }


24. What is Lock splitting technique?
              Lock splitting is a technique to reduce Lock contention in multithreading. It is applicable when one lock is used to synchronize access to different aspects of the same application. For example, sometimes we use one lock to protect a whole array; multiple threads contending for that single lock cause Lock contention. To resolve this, we can give one lock to each element of the array, or use a modulus function to assign locks to small groups of array elements. In this way we reduce the chance of Lock contention. This is the Lock splitting technique.

25. Which technique is used in ReadWriteLock class for reducing Lock contention?
                     ReadWriteLock uses two locks: one lock for read-only operations and another lock for write operations. Its implementation is based on the premise that concurrent threads do not need a lock to read a value while no other thread is writing. The read lock can be held by multiple threads at the same time, and the implementation guarantees that all read operations see the latest updated value as soon as the write lock is released.
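A minimal sketch of a read-mostly cache guarded by ReentrantReadWriteLock (the cache class is my example, not from the text):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();   // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();  // exclusive: blocks both readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadWriteCache cache = new ReadWriteCache();
        cache.put("answer", "42");
        System.out.println(cache.get("answer")); // 42
    }
}
```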

26. What is Lock striping?
             In Lock splitting we use different locks for different parts of the application. In Lock striping we use multiple locks to protect different parts of the same data structure. The ConcurrentHashMap class in Java internally uses different buckets to store its values; the bucket is chosen based on the key. ConcurrentHashMap uses different locks to guard different buckets. When one thread accesses a hash bucket, it acquires the lock for that bucket, while another thread can simultaneously acquire the lock for another bucket and access it. In a synchronized version of HashMap, the whole map has one lock. Lock striping gives better performance than synchronizing the whole data structure.

27. What is a CAS operation?
                  CAS is also known as a Compare-And-Swap operation. In a CAS operation, the processor provides an instruction that updates the value of a memory location only if the provided expected value is equal to the current value. CAS can be used as an alternative to synchronization. Say thread T1 updates a value by passing its current value and the new value to the CAS operation. If another thread T2 has updated the value in the meantime, T1's expected value no longer equals the current value, so T1's update fails. In this case, thread T1 reads the current value again and retries. This is an example of optimistic locking.

28. Which Java classes use CAS operation?
                       Java classes like AtomicInteger or AtomicBoolean internally use CAS operations to support multi-threading. These classes are in package java.util.concurrent.atomic.
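A sketch of CAS via AtomicInteger.compareAndSet(), including the optimistic retry loop described above (variable names are mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(5);

        // CAS succeeds only when the current value matches the expected one
        boolean updated = counter.compareAndSet(5, 6);     // expected 5 -> succeeds
        System.out.println(updated + " " + counter.get()); // true 6

        boolean stale = counter.compareAndSet(5, 7);       // expected value is stale
        System.out.println(stale + " " + counter.get());   // false 6

        // The typical optimistic retry loop (what incrementAndGet() does internally):
        int current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current + 1));
        System.out.println(counter.get());                 // 7
    }
}
```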

29. Is it always possible to improve performance by object pooling in a multi-threading application?
                By using Object pools in an application we limit the number of new objects to be created for a class. In a single thread operation, it can improve the performance by reusing an already created object from a pool. In a multi-threading application an object pool has to provide synchronized access to multiple threads. Due to this only one thread can access the pool at a time. Also there is additional cost due to Lock contention on pool. These additional costs can outweigh the cost saved by reuse of an object from the pool. Therefore using an Object pool may not always improve the performance in a multi-threading application.

30. How can techniques used for performance improvement in a single-threaded application degrade performance in a multi-threaded application?
                     In a single-threaded application we can use an Object pool for performance optimization, whereas in a multi-threaded environment it may not be a good idea: the increased overhead of synchronization and lock contention can degrade the performance gained by the pool.
                    Another example is a List implementation that keeps a separate variable holding the number of elements. This technique is useful in a single-threaded application, where the size() method can return the value of this variable without counting all the elements of the list.
                   But in a multi-threaded application this separate variable can degrade performance. Access to the variable has to be guarded by a lock, since multiple concurrent threads can insert elements into the list, and the additional cost of that lock can outweigh the benefit gained from the variable.

31.What is the relation between Executor and ExecutorService interface?
                 The Executor interface has only the execute(Runnable) method. The implementing class has to execute the given Runnable instance at some time in the future. ExecutorService extends Executor and provides additional methods like invokeAny(), invokeAll(), shutdown() and awaitTermination(). These methods provide the ability to shut down the thread pool so that further requests are rejected, and the ability to invoke a collection of Callable tasks.

32. What will happen on calling submit() method of an ExecutorService instance whose queue is already full?
            The implementation of ExecutorService will throw RejectedExecutionException, when its queue is already full and a new task is submitted by calling submit() method.

33. What is a ScheduledExecutorService?
             ScheduledExecutorService interface extends the ExecutorService interface. It provides various schedule() methods that can be used to submit tasks for execution at a given point of time. One schedule() method runs a one-shot task after a given delay. Another version accepts a Callable and returns a ScheduledFuture through which the result can be retrieved after the delay. In addition, the scheduleAtFixedRate() and scheduleWithFixedDelay() methods can execute an action at periodic intervals.

34. How will you create a Thread pool in Java?
              In Java, Executors framework provides a method newFixedThreadPool(int nThreads) that can be used to create a Thread pool with a fixed number of threads.
          Sample code is as follows:
                   public static void main(String[] args) throws InterruptedException, ExecutionException
                  {
                         ExecutorService myService = Executors.newFixedThreadPool(5);
                         Future<Integer>[] futureList = new Future[5];
                         for (int i = 0; i < futureList.length; i++) {
                                 futureList[i] = myService.submit(new MyCallable());
                         }

                         for (int i = 0; i < futureList.length; i++) {
                                 Integer retVal = futureList[i].get();
                                 System.out.println(retVal);
                         }

                         myService.shutdown();
                  }


35. What is the main difference between Runnable and Callable interface?
                The Runnable interface defines a run() method that does not return any value. The Callable interface defines a call() method that returns a value to its caller and can also throw a checked exception in case of an error. Callable is the newer addition, available since Java version 1.5.

36. What are the uses of Future interface in Java?
                 We can use Future interface to represent the result of an asynchronous computation. These are the operations whose result is not immediately available. Therefore Future interface provides isDone() method to check if the asynchronous computation has finished or not. We can also check if the task was cancelled by calling isCancelled() method. Future also provides cancel() method to attempt the cancellation of a task.

37. What is the difference in concurrency in HashMap and in Hashtable?
             In the Hashtable class all methods are synchronized, so Hashtable is a thread-safe collection. In a HashMap implementation the methods are not synchronized, so HashMap is not thread-safe. In a multi-threaded environment it is not advisable to use a regular HashMap; we can use the ConcurrentHashMap class instead.

38. How will you create synchronized instance of List or Map Collection?
             In Java, the Collections class provides the synchronizedList(List) and synchronizedMap(Map) methods that wrap a List or Map in a synchronized (thread-safe) view. Note that iterating over such a wrapper still requires manual synchronization on the wrapper object.

39. What is a Semaphore in Java?
                 Semaphore class in Java is used to implement a counting semaphore. It restricts the number of threads that can access a physical or logical resource. A Semaphore maintains a set of permits that competing threads must acquire; this controls how many threads can access the critical section of a program or a resource concurrently. The first argument of the Semaphore constructor is the total number of permits available. Each invocation of acquire() blocks until a permit is available and then takes one; if we pass a number of permits to acquire(int), it blocks the thread until that many permits are available. Once a thread has finished its work, we call release() to return the permits.
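A minimal sketch limiting concurrent access to a resource with two permits (class and method names are illustrative):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // At most 2 threads may hold a permit at the same time
    static final Semaphore permits = new Semaphore(2);

    static void useResource(String name) throws InterruptedException {
        permits.acquire();          // blocks if no permit is free
        try {
            System.out.println(name + " using resource");
        } finally {
            permits.release();      // always give the permit back
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    useResource("thread-" + id);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```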

40. What is a CountDownLatch in Java?
                 CountDownLatch class helps in implementing synchronization in Java. It is used in scenarios in which one or more threads have to wait until other threads have reached the same state, so that all threads can proceed.
                 There is a synchronized counter that is decremented until it reaches the value zero. Once it reaches zero, all waiting threads can proceed.
                 It is a versatile tool that can be used for other synchronization scenarios as well. It can also work as an on/off latch or gate: all threads invoking await() wait at the gate until it is opened by a thread invoking countDown().
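A minimal sketch of the main thread waiting for several workers to finish (names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();          // decrement the shared counter
            }).start();
        }

        done.await();                      // blocks until the count reaches zero
        System.out.println("all workers done");
    }
}
```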


41. What is the difference between CountDownLatch and CyclicBarrier?
                CountDownLatch is used in simple use cases where a one-time wait is required; once its count reaches zero, it cannot be reused. A CyclicBarrier is useful in complex scenarios where more coordination is required, e.g. a MapReduce algorithm implementation. CyclicBarrier takes an optional Runnable task that is run once the common barrier condition is achieved, and it resets its internal value to the initial value once the barrier is tripped. Therefore CyclicBarrier can be used in scenarios in which threads have to wait for each other multiple times.

42. What are the scenarios suitable for using Fork/Join framework?
                ForkJoinPool class is in the center of Fork/Join framework. It is a thread pool that can execute instances of ForkJoinTask. ForkJoinTask class provides the fork() and join() methods. The fork() method is used to start the asynchronous execution of a task. The join() method is used to await the result of the computation. Therefore, divide-and-conquer algorithms can be easily implemented with Fork/Join framework.
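A divide-and-conquer sketch along these lines, summing a range of numbers with fork() and join() (the RangeSum class and its threshold are my example choices):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum of a numeric range using the Fork/Join framework
public class RangeSum extends RecursiveTask<Long> {
    private final long from, to;

    public RangeSum(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1000) {                  // small enough: sum directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        RangeSum left = new RangeSum(from, mid);
        RangeSum right = new RangeSum(mid + 1, to);
        left.fork();                              // run the left half asynchronously
        return right.compute() + left.join();     // compute right, then await left
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new RangeSum(1, 100_000));
        System.out.println(sum); // 5000050000
    }
}
```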

43. What is the difference between RecursiveTask and RecursiveAction class?
               RecursiveAction class has compute() method that does not have to return a value. RecursiveAction can be used when the action has to directly operate on a Data structure. It does not need to return any computed value. In RecursiveTask class has compute() method that always returns a value. Both RecursiveTask and RecursiveAction classes are used in ForkJoinTask implementations.

44. In Java 8, can we process stream operations with a Thread pool?
              In Java 8, Collections provide the parallelStream() method to create a stream that is processed by a thread pool. We can also call the intermediate method parallel() on a sequential stream to convert it into a parallel stream.
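A minimal parallelStream() sketch (the class and computation are illustrative); parallel streams are processed by the common ForkJoinPool:

```java
import java.util.List;

public class ParallelStreamDemo {
    static long sumOfSquares(List<Integer> numbers) {
        return numbers.parallelStream()        // processed by the common ForkJoinPool
                      .mapToLong(n -> (long) n * n)
                      .sum();
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        System.out.println(sumOfSquares(numbers)); // 55
    }
}
```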

45. What are the scenarios to use parallel stream in Java 8?
               A parallel stream in Java 8 has a much higher overhead compared to a sequential one. It takes a significant amount of time to coordinate the threads. We can use parallel stream in following scenarios:
  1. When there are a large number of items to process and the processing of each item takes time and is parallelizable.
  2. When there is a performance problem in the sequential processing.
  3. When the current implementation is not already running in a multi-threaded environment. If there is already a multi-threading environment, adding parallel streams can degrade the performance.


46. How do Stack and Heap work in a Java multi-threading environment?
               In Java, Stack and Heap are memory areas available to an application. Every thread has its own stack, which stores local variables, method parameters and the call stack. Local variables stored on one thread's stack are not visible to another thread. The Heap, on the other hand, is a common memory area in the JVM that is shared by all threads; all objects are created on the heap. To improve performance, threads can cache heap values in registers or CPU caches. This can create problems when the same variable is modified by more than one thread. In such a scenario we should declare the variable volatile; for a volatile variable the thread always reads the value from main memory.

47. How can we take Thread dump in Java?
            The steps to take a Thread dump of a Java process depend on the operating system. On taking a Thread dump, Java writes the state of all threads to log files or to the standard error console. We can press Ctrl + Break together to take a thread dump on Windows. We can execute the kill -3 command to take a Thread dump on Linux. Another option is the jstack tool: we pass the process id of the Java process to this tool to take a Thread dump.

48. Which parameter can be used to control stack size of a thread in Java?
           We use the -Xss parameter to control the stack size of a thread in Java. If we set it to 1 MB, then every thread gets 1 MB of stack size.

49. There are two threads T1 and T2. How will you ensure that these threads run in the sequence T1, T2 in Java?
                In Java there are multiple ways to execute threads in a sequence. One of the simplest is the join() method of Thread class: join() makes the calling thread wait until the joined thread has finished. To run T1 before T2, we start T1 first and call T1.join() before starting T2. Once T1 completes execution, T2 starts executing. Alternatively, T2 can call T1.join() at the beginning of its run() method.
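A minimal sketch of this sequencing (class name is illustrative):

```java
public class SequenceDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> System.out.println("T1 runs first"));
        Thread t2 = new Thread(() -> System.out.println("T2 runs second"));

        t1.start();
        t1.join();   // main waits here until T1 has finished
        t2.start();  // T2 only starts after T1 is done
        t2.join();
    }
}
```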









