
Important 30+ Infosys Interview Questions (Technical & HR) - 2024

Revise the most commonly asked questions and nail your Infosys technical interview. Also, take a look at some important HR questions that you must prepare at all costs!
Shivangi Vatsal

Table of contents:

  • Infosys Interview Questions: Key Points
  • Infosys Interview Questions: Technical Round
  • Infosys Interview Questions: HR Round
  • Recruitment Profiles for Infosys Interview
  • Eligibility Criteria for Infosys Technical Interview 
  • Tips to Answer Infosys Interview Questions

Infosys is a dream company for every IT enthusiast. The company helps its clients improve end-user productivity and is therefore always on the lookout for talented individuals who can make a difference in the organization with their technical knowledge, communication skills, and leadership abilities. The mission of this India-based company is to be the best place to learn, grow, and lead.

Infosys is a well-reputed Indian multinational IT company that provides business consulting, information technology, and outsourcing services. In 1981, seven engineers started Infosys Limited. The company was founded in Pune and is headquartered in Bangalore. 

Infosys Interview Questions: Key Points

The Infosys recruitment process has multiple steps, including an online assessment test and two interview rounds. The interview questions fall into a few broad categories, so before going through the list below, note that, like most interviews, an Infosys interview typically includes the following:

  1. General questions about your personal and educational background, preferences, a basic understanding of the job, etc.
  2. Technical questions that focus on understanding the potential candidate’s complete knowledge about the company, technical knowledge around important terminologies and concepts, etc. This, at times, can also be divided separately for freshers and experienced candidates.
  3. Behavioral questions that put candidates in situations where their leadership and communication skills are put to the test.

Infosys Interview Questions: Technical Round

Following are some frequently asked Infosys interview questions with detailed explanations for your revision.

1. What are method overriding and method overloading, and are they part of Polymorphism?

Method overriding and method overloading are concepts in object-oriented programming (OOP) and are both part of polymorphism, which is one of the four fundamental principles of OOP. Here's a brief overview of each concept:

  1. Method Overriding: Method overriding occurs when a subclass provides a new implementation for a method that is already defined in its superclass. In other words, a subclass provides a new implementation of a method that has the same name, return type, and parameter list as a method in its superclass. When an object of the subclass calls that overridden method, the new implementation in the subclass is executed instead of the implementation in the superclass. Method overriding allows a subclass to provide a specialized implementation of a method inherited from its superclass.

  2. Method Overloading: Method overloading occurs when a class has multiple methods with the same name, but different parameter lists. In other words, multiple methods in the same class have the same name but different numbers or types of parameters. Method overloading allows a class to have multiple methods with the same name, but they are differentiated based on the number or types of parameters they accept. When a method is called, the appropriate overloaded method is automatically selected based on the number or types of arguments passed to the method.
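
A minimal C++ sketch of both ideas side by side (the Shape, Circle, and Printer classes are illustrative, not from any particular codebase): area() is overridden, so the call is resolved at runtime, while print() is overloaded, so the right version is chosen at compile time from the argument types.

#include <iostream>

// Base class with a virtual method that subclasses may override.
class Shape {
public:
    virtual double area() const { return 0.0; }   // overridable
    virtual ~Shape() = default;
};

// Method overriding: Circle supplies its own area() with the same signature.
class Circle : public Shape {
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
private:
    double radius;
};

// Method overloading: same name, different parameter lists, resolved at compile time.
class Printer {
public:
    void print(int value)         { std::cout << "int: " << value << '\n'; }
    void print(const char* value) { std::cout << "text: " << value << '\n'; }
};

int main() {
    Circle c(2.0);
    Shape* s = &c;                      // base-class pointer to a derived object
    std::cout << s->area() << '\n';     // runtime polymorphism: Circle::area runs

    Printer p;
    p.print(42);                        // compile-time polymorphism: print(int)
    p.print("hello");                   // print(const char*)
    return 0;
}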

2. Define right outer join and left outer join in SQL.

In SQL, a right outer join and a left outer join are types of join operations that are used to combine data from two or more tables based on specified conditions. Here are their definitions:

  1. Right Outer Join: A right outer join, also known as a right join, is a type of join operation that returns all the rows from the table on the right side of the join, and the matching rows from the table on the left side of the join based on the specified join condition. If there are no matching rows in the left table, NULL values are returned for the columns of the left table. In other words, a right outer join ensures that all the rows from the right table are included in the result set, regardless of whether they have matching rows in the left table.

The syntax for a right outer join in SQL typically looks like this:

SELECT column1, column2, ...
FROM table1
RIGHT JOIN table2 ON table1.column_name = table2.column_name;

  2. Left Outer Join: A left outer join, also known as a left join, is a type of join operation that returns all the rows from the table on the left side of the join, and the matching rows from the table on the right side of the join based on the specified join condition. If there are no matching rows in the right table, NULL values are returned for the columns of the right table. In other words, a left outer join ensures that all the rows from the left table are included in the result set, regardless of whether they have matching rows in the right table.

The syntax for a left outer join in SQL typically looks like this:

SELECT column1, column2, ...
FROM table1
LEFT JOIN table2 ON table1.column_name = table2.column_name;

3. What are the major concepts in Java relating to OOP?

Java is an object-oriented programming (OOP) language that follows the principles of OOP. There are several major concepts in Java that are fundamental to understanding OOP. Here are some of the key concepts:

  1. Classes and Objects: A class is a blueprint or a template that defines the structure and behavior of objects. An object is an instance of a class, which represents a specific occurrence of that class. In Java, objects are created from classes using the new keyword, and they can be used to represent and manipulate data, as well as perform operations. (Read: Difference between Classes and Objects)

  2. Encapsulation: Encapsulation is the concept of hiding the internal details of an object and exposing only the necessary information through well-defined interfaces. In Java, this is achieved using access modifiers (such as private, public, and protected) to specify the visibility and accessibility of class members (i.e., fields, methods, and inner classes).

  3. Inheritance: Inheritance is the concept of creating a new class (subclass) that inherits the properties and methods of an existing class (superclass). The subclass can then override or extend the inherited behavior. In Java, inheritance allows for code reuse and promotes the creation of hierarchical relationships between classes.

  4. Polymorphism: Polymorphism is the ability to treat objects of different classes as if they were of the same type, based on a common interface or inheritance hierarchy. In Java, polymorphism allows for writing more flexible and extensible code, as objects of different classes can be used interchangeably.

  5. Abstraction: Abstraction is the concept of creating abstract classes or interfaces that define common characteristics and behaviors for a group of related classes. Abstract classes cannot be instantiated and serve as a blueprint for concrete classes. Interfaces define a contract that classes can implement, specifying a set of methods that must be implemented by any class that implements the interface.

  6. Method Overloading and Method Overriding: Method overloading is the ability to define multiple methods in the same class with the same name but different parameter lists, allowing for multiple ways of invoking the same method. Method overriding is the ability to provide a new implementation for a method in a subclass that is already defined in its superclass.

  7. Exception Handling: Exception handling is the concept of handling runtime errors, called exceptions, that may occur during the execution of a Java program. Java provides a robust exception-handling mechanism with try-catch blocks, which allows for the graceful handling of exceptions and recovery from unexpected situations.

4. Name some of the types of advanced-level programming languages.

There are several types of advanced-level programming languages that are designed to provide high-level abstractions and advanced features for specific purposes. Some of these types of programming languages are:

  1. Domain-specific languages (DSLs): These are programming languages that are designed for specific domains or industries, with specialized syntax, features, and libraries tailored to solve specific problems. Examples of DSLs include SQL for database querying, HTML/CSS for web development, MATLAB for scientific computing, and R for data analysis.

  2. Scripting languages: These are programming languages that are designed for automating tasks, performing rapid prototyping, and writing scripts for various purposes. Scripting languages are typically interpreted and have dynamic typing and high-level abstractions for string manipulation, regular expressions, file I/O, and other common tasks. Examples of scripting languages include Python, Ruby, Perl, and Shell scripting languages (such as Bash).

  3. Functional programming languages: These are programming languages that treat computation as the evaluation of mathematical functions and emphasize immutability and the use of higher-order functions. Functional programming languages typically support advanced features such as closures, currying, and pattern matching, and are known for their concise and expressive syntax. Examples of functional programming languages include Haskell, Lisp, ML, and Scala.

  4. Concurrent and parallel programming languages: These are programming languages that are designed for writing concurrent and parallel programs, which can execute tasks concurrently or in parallel to achieve higher performance and scalability. These languages typically provide built-in abstractions for managing threads, processes, and synchronization primitives. Examples of concurrent and parallel programming languages include Go, Erlang, and CUDA for GPU programming.

  5. Logic programming languages: These are programming languages that are designed for representing and manipulating symbolic logic and reasoning about relationships and constraints. Logic programming languages typically use formal logic as their foundation and provide constructs for expressing rules, queries, and constraints. Examples of logic programming languages include Prolog, Mercury, and Alloy.

  6. Markup languages: These are languages that are designed for defining and describing the structure and presentation of documents or data, typically used in web development and data interchange. Markup languages use tags or annotations to define the structure and properties of elements, and are typically interpreted or parsed by other programs. Examples of markup languages include HTML for web pages, XML for data interchange, and CSS for styling web documents.

5. List down the C++ tokens.

In C++, tokens are the smallest individual units of a program that the compiler reads and interprets. They are the basic building blocks of C++ programs, and they are used to form expressions, statements, and other syntactic constructs. Here is a list of C++ tokens:

  1. Keywords: Keywords are reserved words that have special meanings in the C++ language and cannot be used as identifiers (variable names, function names, etc.). Examples of keywords in C++ include int, float, if, for, while, class, public, private, protected, const, return, and many others.

  2. Identifiers: Identifiers are used to name variables, functions, classes, and other program elements. Identifiers in C++ must follow certain rules, such as starting with a letter or an underscore, and can consist of letters, digits, and underscores. Examples of identifiers in C++ include count, sum, myVariable, calculateArea(), Person, MyClass, and so on.

  3. Constants: Constants are used to represent fixed values that do not change during the execution of a program. C++ supports various types of constants, including integer constants, floating-point constants, character constants, and string literals. Examples of constants in C++ include 42, 3.14, 'A', "Hello, World!", and so on.

  4. Operators: Operators are used to perform operations on operands, such as arithmetic operations, comparison operations, logical operations, and assignment operations. Examples of operators in C++ include +, -, *, /, %, ==, !=, <, >, &&, ||, =, +=, -=, ++, --, and many others.

  5. Delimiters: Delimiters are used to separate and structure the different elements of a C++ program. Examples of delimiters in C++ include ; (semicolon) to terminate statements, {} (curly braces) to define blocks of code, () (parentheses) to group expressions and function arguments, , (comma) to separate function arguments or variable declarations, and . (dot) to access members of a class or object.

  6. Special Characters: Special characters are used in C++ to represent special meanings or escape sequences. Examples of special characters in C++ include # (hash) used for preprocessor directives, :: (scope resolution operator) used to define or access members of a class, -> (arrow operator) used to access members of a pointer to an object, * (asterisk) used for pointer declarations and dereferencing, and & (ampersand) used for reference declarations and address-of operations.

  7. Comments: Comments are used to add explanations or documentation to the code, and they are ignored by the compiler. C++ supports single-line comments starting with // (double slash) and multi-line comments starting with /* (slash-asterisk) and ending with */ (asterisk-slash).
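
As a quick illustration (the variable names are made up), the short program below labels several of these token categories in comments:

#include <iostream>   // '#' and 'include' form a preprocessor directive

int main() {                        // 'int' and 'return' below are keywords
    const double pi = 3.14;         // 'pi' is an identifier, 3.14 is a constant
    int count = 0;                  // '=' is an operator, ';' is a delimiter
    count += 2;                     // '+=' is an operator
    std::cout << pi * count << std::endl;   // '::' is the scope resolution operator
    return 0;                       /* multi-line comment style also exists */
}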


6. List down the different types of inheritances.

In object-oriented programming (OOP), inheritance is a mechanism that allows a class to inherit properties and behavior from another class. There are several types of inheritance that can be used in OOP languages like C++, Java, and Python. Here are some commonly used types of inheritance:

  1. Single inheritance: In single inheritance, a class inherits properties and behavior from only one parent class. It forms a simple hierarchy where one class is derived from another class.

  2. Multiple inheritance: In multiple inheritance, a class can inherit properties and behavior from more than one parent class. This allows a class to have multiple parent classes, and it can inherit and use their properties and behavior.

  3. Multi-level inheritance: In multi-level inheritance, a class can inherit properties and behavior from a parent class, which in turn may inherit from another parent class. This forms a chain of inheritance, where a class is derived from a parent class, which is derived from another parent class.

  4. Hierarchical inheritance: In hierarchical inheritance, multiple classes inherit properties and behavior from a single parent class. This forms a tree-like structure where one parent class is inherited by multiple child classes.

  5. Hybrid inheritance: Hybrid inheritance is a combination of multiple inheritance and single inheritance. It involves inheriting properties and behavior from multiple parent classes, and it can result in complex class hierarchies.
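
As a quick illustration, here is a minimal C++ sketch of hierarchical inheritance (the Vehicle, Car, and Bike class names are hypothetical); multilevel and multiple inheritance are illustrated in the next question.

// Hierarchical inheritance: two classes derive from the same parent.
class Vehicle {                // base class
public:
    void start() {}            // behaviour defined once, reused by all children
};

class Car : public Vehicle {   // child 1 inherits start()
};

class Bike : public Vehicle {  // child 2 inherits start()
};

int main() {
    Car c;
    Bike b;
    c.start();   // both children reuse the behaviour defined in Vehicle
    b.start();
    return 0;
}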

7. Differentiate between multilevel and multiple types of inheritance in OOP language.

Multilevel inheritance and multiple inheritance are two different types of inheritance in object-oriented programming (OOP) languages. Here are the differences between them:

Multilevel Inheritance:

  • In multilevel inheritance, a class inherits from a parent class, which in turn may inherit from another parent class.
  • It forms a chain of inheritance where a class is derived from a parent class, which is derived from another parent class.
  • The derived class in multilevel inheritance has a direct relationship with only one parent class and inherits its properties and behavior.
  • It results in a hierarchical structure, where each class adds or modifies the properties and behavior of its parent class.
  • Multilevel inheritance provides a way to reuse code and establish a relationship between classes with different levels of abstraction.

Example:

class Animal {
    // Properties and behavior of Animal class
};

class Mammal : public Animal {
    // Properties and behavior of Mammal class
};

class Dog : public Mammal {
    // Properties and behavior of Dog class
};

Multiple Inheritance:

  • In multiple inheritance, a class can inherit properties and behavior from more than one parent class.
  • It allows a class to have multiple parent classes, and it can inherit and use their properties and behavior.
  • The derived class in multiple inheritance has a direct relationship with multiple parent classes, and it inherits properties and behavior from all of them.
  • Multiple inheritance can result in complex class hierarchies and can lead to issues like ambiguity and conflicts when two or more parent classes define properties or behavior with the same name.
  • Multiple inheritance can be useful in certain scenarios where a class needs to inherit properties and behavior from multiple sources, but it requires careful design to avoid conflicts.

Example:

class Animal {
    // Properties and behavior of Animal class
};

class Bird {
    // Properties and behavior of Bird class
};

class FlyingBird : public Animal, public Bird {
    // Properties and behavior of FlyingBird class
};
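
To illustrate the ambiguity mentioned above, here is a small sketch (with hypothetical member names) where both parent classes define a member function with the same name; calls from the derived class must then be qualified with the scope resolution operator.

class Animal {
public:
    void eat() {}           // Animal's version
};

class Bird {
public:
    void eat() {}           // Bird's version with the same name
};

class FlyingBird : public Animal, public Bird {
};

int main() {
    FlyingBird f;
    // f.eat();             // error: ambiguous -- both parents provide eat()
    f.Animal::eat();        // the scope resolution operator picks one explicitly
    f.Bird::eat();
    return 0;
}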

8. What is the meaning of an array in data testing?

In the context of data testing, an array typically refers to a data structure that stores multiple elements of the same data type in a contiguous memory location. It is a collection of homogeneous data elements, where each element is identified by an index or a subscript. Arrays are commonly used in programming languages for storing and manipulating collections of data, such as numbers, characters, or objects.

In data testing, an array can be used to represent a set of values or data points that need to be tested against predefined criteria or conditions. For example, in data validation testing, an array can be used to represent a set of input data values that need to be tested for correctness or conformity with expected values or data formats. The array can be iterated through, and each element can be tested against predefined rules or criteria to determine if the data meets the required quality standards or business rules.

An array is one of the most important building blocks among data structures: a collection of elements of the same data type stored in contiguous memory locations. It is supported by almost all high-level programming languages, and because elements can be accessed directly by index, retrieval of elements is fast and efficient.

9. What is the name of the pre-processor available in the C program?

In the C programming language, the preprocessor (commonly called the C preprocessor, or cpp) is a text-processing tool that manipulates the source code before it is compiled. Preprocessor directives are commands interpreted by the preprocessor, which then performs text substitution or other actions on the source code before compilation.

The preprocessor in C provides various features, including macro expansion, conditional compilation, file inclusion, and compiler control. Some commonly used preprocessor directives in C are:

  1. #define: Used to define macros, which are symbolic names or constants that are replaced with their values during compilation.
  2. #include: Used to include files in the source code, which can be used for modularization and reusability of code.
  3. #ifdef, #ifndef, #else, #endif: Used for conditional compilation, allowing portions of code to be compiled or excluded based on predefined conditions.
  4. #error: Used to generate an error message during compilation if a certain condition is not met.
  5. #pragma: Used to provide additional instructions or directives to the compiler.

These preprocessor directives are used in C programs to modify the source code before it is compiled, allowing for conditional compilation, code reuse, and other source code manipulation techniques.
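
A minimal illustration of the most common directives (the macro names PI and DEBUG are made up); the directives themselves work identically in C, though this sketch uses C++ I/O:

#include <iostream>      // file inclusion

#define PI 3.14159       // macro definition: PI is replaced with its value before compilation
#define DEBUG            // an "empty" macro used only as a flag

int main() {
#ifdef DEBUG             // conditional compilation: this block is kept only if DEBUG is defined
    std::cout << "Debug build\n";
#else
    std::cout << "Release build\n";
#endif
    std::cout << "Area of unit circle: " << PI * 1.0 * 1.0 << '\n';
    return 0;
}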

10. What is the role of virtual function and pure virtual function?

In C++, virtual functions and pure virtual functions are used in the context of polymorphism, which is one of the fundamental concepts of object-oriented programming (OOP). Virtual functions allow derived classes to provide their own implementation of a function in the base class, while pure virtual functions are abstract functions that do not have any implementation in the base class and must be overridden by derived classes.

The role of virtual functions in C++:

  1. Overriding: Virtual functions provide a mechanism for derived classes to override the implementation of a function in the base class. When a virtual function is called on an object of a derived class, the overridden version of the function in the derived class is executed instead of the version in the base class. This allows for polymorphism, where objects of different derived classes can be treated as objects of the base class and their overridden functions can be invoked based on the actual type of the object at runtime.

  2. Dynamic Binding: Virtual functions enable dynamic binding or late binding, which means that the appropriate version of the function to be executed is determined at runtime based on the type of object on which the function is called. This allows for flexibility and extensibility in object-oriented design, as new derived classes can be added without changing the interface or implementation of the base class.

The role of pure virtual functions in C++:

  1. Abstract Classes: Pure virtual functions are used to define abstract classes in C++, which are classes that cannot be instantiated directly and must be subclassed to provide implementation for the pure virtual functions. An abstract class is a class that contains at least one pure virtual function, and it cannot be instantiated. It can only be used as a base class for deriving other classes.

  2. Interface Definition: Pure virtual functions are used to define interfaces, which are sets of methods or functions that define a common contract or behavior that must be implemented by derived classes. Derived classes that inherit from an interface or abstract class with pure virtual functions must provide their own implementation for the pure virtual functions, which enforces a specific behavior or contract.
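
A minimal C++ sketch (the Shape and Square classes are illustrative): area() is pure virtual, so Shape is abstract and Square must override it, while describe() is an ordinary virtual function that Square may optionally override.

#include <iostream>

// Abstract base class: area() is pure virtual, so Shape cannot be instantiated.
class Shape {
public:
    virtual double area() const = 0;          // pure virtual: derived classes must override
    virtual void describe() const {           // ordinary virtual: overriding is optional
        std::cout << "Some shape\n";
    }
    virtual ~Shape() = default;
};

class Square : public Shape {
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }          // mandatory override
    void describe() const override { std::cout << "A square\n"; } // optional override
private:
    double side;
};

int main() {
    Square sq(3.0);
    Shape* s = &sq;                    // base-class pointer
    s->describe();                     // dynamic binding: Square::describe() runs
    std::cout << s->area() << '\n';    // prints 9
    return 0;
}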

11. Define cross-platform.

Cross-platform refers to the ability of software or technology to run on different operating systems or platforms without requiring modification to the source code or significant changes to the functionality. In other words, cross-platform software or technology can be used on multiple operating systems or platforms with minimal or no changes, allowing for interoperability and compatibility across different environments.

Cross-platform software or technology is designed to be platform-independent, meaning it can run on different operating systems, such as Windows, macOS, Linux, or mobile operating systems like iOS and Android, without having to be redeveloped or recompiled for each platform separately. This allows developers to write code once and deploy it on multiple platforms, reducing development time, effort, and cost.

Cross-platform development is often achieved through the use of programming languages, frameworks, libraries, or tools that are designed to be platform-independent and can generate executable code or applications for different platforms. Examples of cross-platform technologies include web-based applications that can run on different web browsers, mobile apps that can be deployed on multiple mobile platforms, and software development frameworks like Java, .NET, or Qt that provide cross-platform support.

12. What is the meaning of heap binary tree?

A heap binary tree, also known as a binary heap, is a complete binary tree data structure that satisfies the heap property. The heap property is a specific condition that must be satisfied by the elements in the binary tree in order for it to be considered a heap.

In a heap binary tree:

  • The tree is a complete binary tree, which means that all levels of the tree are fully filled, except possibly the last level, which is filled from left to right.
  • The heap property is satisfied, which depends on whether it is a min-heap or a max-heap. In a min-heap, the value of the parent node must be less than or equal to the values of its children nodes, whereas, in a max-heap, the value of the parent node must be greater than or equal to the values of its children nodes.

A heap binary tree is often used to implement priority queues, where the elements are stored in a way that allows for efficient retrieval of the highest (or lowest) priority element, depending on whether it is a min-heap or a max-heap. Heaps are commonly used in various algorithms and data structures, such as heap sort, priority queue implementations, and some graph algorithms like Dijkstra's algorithm. They are also used in memory management systems, such as the heap memory area used in programming languages like C and C++ to dynamically allocate and deallocate memory during program execution.
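
As a quick illustration of the min-heap/max-heap distinction, std::priority_queue in C++ is typically built on top of a binary heap (this is a sketch of the standard container, not a hand-rolled heap implementation):

#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::vector<int> values{5, 1, 9, 3};

    // std::priority_queue defaults to a binary max-heap.
    std::priority_queue<int> maxHeap(values.begin(), values.end());
    std::cout << maxHeap.top() << '\n';   // 9 -- the largest element sits at the root

    // Supplying std::greater as the comparator turns it into a min-heap.
    std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap(values.begin(), values.end());
    std::cout << minHeap.top() << '\n';   // 1 -- the smallest element sits at the root
    return 0;
}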

13. What do you understand by DSN for development?

DSN stands for Data Source Name in the context of software development. It is a configuration that defines the connection parameters for a database or other data source and is used by applications to establish a connection to the data source in order to interact with it.

In software development, a DSN is typically used as a reference to establish a connection to a database or other data source. It contains information such as the name or location of the data source, the type of data source (e.g., relational database, file system, etc.), the authentication credentials (if required), and any other connection parameters needed to establish the connection.

A DSN can be created and managed at the system level or at the application level, depending on the programming language, framework, or database management system (DBMS) being used. For example, in some programming languages like Java or Python, a DSN can be created and managed using external configuration files or programmatically within the code. In other cases, a DBMS may provide its own tools or utilities for creating and managing DSNs, which can then be used by applications to establish connections to the data source.

DSNs are commonly used in database-driven applications to abstract the details of the actual data source connection, allowing for flexibility and ease of configuration. They provide a layer of abstraction between the application code and the specific data source, making it easier to switch between different data sources or change connection parameters without having to modify the application code.

14. What is the significance of EXE file extension and DLL, as per your technical knowledge?

The significance of each is as follows:

  1. EXE (Executable) File Extension: The EXE file extension stands for "executable" and refers to a type of file that contains machine code instructions that can be executed directly by a computer's operating system. An EXE file is typically used to package and distribute compiled software applications, allowing users to run the software on their computers by executing the EXE file.

The significance of the EXE file extension lies in its ability to encapsulate a complete software application, including its code, data, and resources, into a single file. When a user executes an EXE file, the operating system loads the instructions contained in the file into memory and executes them, launching the associated software application. EXE files are commonly used in Windows operating systems for standalone software applications, installers, and other executable programs.

  2. DLL (Dynamic Link Library) File Extension: The DLL file extension stands for "dynamic link library" and refers to a type of file that contains reusable code and resources that can be shared across multiple software applications. A DLL file is typically used to package and distribute code and resources that are used by multiple software programs, allowing them to dynamically link and load the code into their memory during runtime.

The significance of the DLL file extension lies in its ability to provide code and resources that can be shared among multiple software applications, reducing code duplication and promoting code reusability. DLLs are loaded into memory during runtime by software applications that need to use the code and resources contained in the DLL, allowing for efficient memory usage and easier maintenance of shared code. DLLs are commonly used in Windows operating systems for libraries, plugins, device drivers, and other shared components used by multiple software applications.

15. What is the difference between white box and black box testing?

Here's a table that outlines the differences between white box testing and black box testing:

Feature | White Box Testing | Black Box Testing
Definition | Testing based on internal knowledge of the code and internal structure of the software. | Testing based on external knowledge of the software without knowing the internal code and structure.
Perspective | Internal perspective; focuses on code structure, logic, and implementation. | External perspective; focuses on software behavior, inputs, and outputs.
Testing Approach | Tests individual functions, statements, and branches of the code. | Tests the software as a whole, end-to-end, without knowledge of the internal implementation.
Test Design | Test cases are derived from knowledge of the internal code, structure, and implementation. | Test cases are derived from requirements, specifications, or user perspectives.
Test Coverage | Achieves high code coverage as it tests internal code and logic. | Achieves broader coverage as it tests the overall software behavior.
Testing Level | Primarily used in unit testing and integration testing. | Can be used at various testing levels such as functional, integration, system, and acceptance testing.
Test Objective | Identifies defects in code logic, implementation, and internal structures. | Identifies defects in software functionality, usability, and external behavior.
Test Skills | Requires knowledge of programming languages, code structures, and implementation details. | Requires understanding of software requirements, specifications, and user perspectives.
Advantages | Can provide in-depth insight into code quality, uncovering subtle defects. | Tests software from a user's perspective, focusing on actual usage scenarios.
Disadvantages | May not uncover defects related to software behavior, usability, or requirements. | May not provide detailed insight into code quality and internal logic.

16. What are the different levels or types of programming languages?

In an Infosys technical interview, knowledge of different programming languages is a distinctive quality that helps potential candidates stand out. So, to answer this question, focus on a clear explanation of the types of programming languages and their levels, as below:

Assembly-level language: Assembly language uses mnemonics in place of raw machine code, which reduces the complexity of a program. It maps closely to what the computer can execute directly, but it uses words instead of numbers. No abstract methods are included.

Middle-level language: This level bridges the gap between high-level and machine-level languages, letting programmers interact with the machine layer while still offering some abstraction. The programming languages at the middle level are C and C++.

Low-level language: This is the language that is most easily understood by the machine, but it is difficult for human beings to read or write directly.

High-level language: As the highest of the programming language levels, this language offers strong abstraction from the hardware details of a computer. In simple words, this level of programming language does not require any significant computer hardware knowledge to use. In addition to that, it is easy to learn and reads close to human language. Top examples are Java, Python, PHP, etc.

17. What do you understand about SDLC in a database management system?

SDLC stands for Software Development Life Cycle, which is a structured approach or methodology followed during the development of software systems, including database management systems (DBMS). It encompasses a series of phases or stages that are typically followed in a sequential or iterative manner to ensure the successful development and deployment of software.

In the context of a database management system, SDLC includes the following stages:

  1. Requirements Analysis: In this stage, the requirements for the database management system are gathered and analyzed. This involves understanding the needs and expectations of the users, identifying data entities and relationships, defining data attributes and constraints, and documenting the requirements.

  2. Database Design: In this stage, the database is designed based on the requirements gathered in the previous stage. This involves creating a conceptual data model, logical data model, and physical data model, and defining the database schema, tables, views, indexes, and other database objects.

  3. Implementation: In this stage, the database is implemented based on the database design. This involves creating the actual database and populating it with data, creating database objects such as tables, views, indexes, and triggers, and implementing security measures such as user authentication and authorization.

  4. Testing: In this stage, the database is thoroughly tested to ensure its functionality, performance, and reliability. This involves creating test cases and test data, executing tests, and identifying and fixing any defects or issues discovered during testing.

  5. Deployment: In this stage, the database is deployed or released into the production environment. This involves transferring the database from the development environment to the production environment, setting up the necessary infrastructure, and ensuring the database is ready for production use.

  6. Maintenance: In this stage, the database is monitored, maintained, and updated as needed to ensure its continued performance, security, and reliability. This involves performing routine maintenance tasks, such as backups and updates, and addressing any issues or enhancements that arise during production use.

  7. Retirement: In this stage, the database is retired or decommissioned when it is no longer needed or relevant. This involves archiving or purging data, shutting down the database, and taking appropriate measures to securely handle any sensitive data.

18. What do you mean by the waterfall model?

The waterfall model is a linear and sequential software development life cycle (SDLC) model where each phase of the development process is completed before moving on to the next phase. It is called the "waterfall" model because the progress flows downwards in a cascading manner, with each phase building upon the results of the previous phase, and no phase being revisited once completed. The waterfall model follows a top-down approach and is typically represented as a linear flowchart with distinct phases or stages.

The typical phases in the waterfall model are:

  1. Requirements Analysis: In this phase, the requirements for the software or system are gathered, analyzed, and documented. This includes understanding the needs and expectations of users, identifying system requirements, and documenting them in a formal requirements document.

  2. Design: In this phase, the software or system is designed based on the requirements gathered in the previous phase. This includes creating a detailed design that defines the system architecture, software components, data structures, algorithms, and interfaces.

  3. Implementation: In this phase, the software or system is developed based on the design created in the previous phase. This involves coding, compiling, and testing the software or system components according to the design specifications.

  4. Testing: In this phase, the software or system is thoroughly tested to identify and fix any defects or issues before deployment. This includes various types of testing, such as unit testing, integration testing, system testing, and acceptance testing, to ensure the quality and functionality of the software or system.

  5. Deployment: In this phase, the software or system is deployed or released into the production environment. This includes installing the software or system, configuring it for production use, and setting up the necessary infrastructure for its operation.

  6. Maintenance: In this phase, the software or system is monitored, maintained, and updated as needed to ensure its continued performance, reliability, and security. This includes performing routine maintenance tasks, such as bug fixes, updates, and enhancements, and addressing any issues or incidents that arise during production use.

19. Key differences between C++ and C programming languages.

Here's a table summarizing key differences between C++ and C programming languages:

Feature | C++ | C
Object-Oriented | Yes | No
Supports Classes | Yes | No
Supports Inheritance | Yes | No
Supports Polymorphism | Yes | No
Supports Operator Overloading | Yes | No
Supports Templates | Yes | No
Supports Exception Handling | Yes | No
Supports Namespaces | Yes | No
Supports References | Yes | No
Supports Constructors and Destructors | Yes | No
Supports Function Overloading | Yes | No
Supports Pointers | Yes | Yes
Supports Macros | Yes | Yes
Supports Preprocessor Directives | Yes | Yes
Supports Standard Template Library (STL) | Yes | No

Note: C++ is a superset of C, which means that C++ includes all features of C programming language and adds additional features on top of it to support object-oriented programming (OOP) concepts. However, C++ also retains the ability to write procedural code similar to C programming language.

20. What do you understand by the frame feature in HTML?

In HTML (Hypertext Markup Language), a "frame" is a mechanism that allows a web page to be divided into multiple, independent sections or panes, each displaying separate content. Frames were introduced in early versions of HTML as a way to create complex page layouts with multiple sections that could be updated independently. However, the use of frames has been largely deprecated in modern web development due to various issues, such as lack of accessibility, search engine optimization (SEO) challenges, and difficulties in handling bookmarking and sharing of specific frame content.

Frames are typically defined using the <frame> and <frameset> elements in HTML. The <frameset> element is used to define the overall structure of the frames on a web page, specifying how the frames should be arranged and sized. The <frame> element is used to define individual frames within the frameset, specifying the source (i.e., the content) of each frame.

Frames can have different attributes, such as src to specify the source URL of the content to be displayed in the frame, name to assign a name to the frame for targeting and referencing, and border to specify the border width around the frame.

21. What is the difference between an object-relational database and an object-oriented database model?

The main difference between an object-relational database and an object-oriented database model lies in their fundamental approach to data storage and retrieval.

  1. Object-Relational Database (ORDBMS): An object-relational database (ORDBMS) is a type of database management system (DBMS) that combines features of both relational and object-oriented databases. It extends the traditional relational database model by incorporating object-oriented concepts, such as data types, methods, inheritance, and encapsulation. ORDBMS allows for the storage of complex data types, such as arrays, structures, and multimedia data, as well as the ability to define custom data types and operators. Examples of ORDBMS include PostgreSQL, Oracle Database, and Microsoft SQL Server.

  2. Object-Oriented Database Model (OODBMS): An object-oriented database model (OODBMS) is a type of database model that is based on the principles of object-oriented programming (OOP). It treats data as objects, which are instances of classes or templates defined in a programming language, and stores them in the database along with their attributes and methods. OODBMS allows for complex data modeling, inheritance, encapsulation, and polymorphism. OODBMS typically use object-oriented query languages, such as OQL (Object Query Language) defined by the ODMG (Object Data Management Group) standard, for retrieving and manipulating data. Examples of OODBMS include ObjectDB, db4o, and Versant; document stores such as MongoDB and Couchbase borrow some of these ideas but are not strictly object-oriented databases.

22. What are memory blocks?

In computer programming, memory blocks refer to contiguous regions of memory that are allocated for storing data. Memory blocks are used to represent and manipulate data in computer systems, and they can be allocated and deallocated dynamically during program execution.

Memory blocks are typically divided into two main categories:

  1. Stack Memory Blocks: Stack memory blocks are allocated on the stack, which is a region of memory used for temporary storage of data during the execution of a program. Stack memory blocks are managed by the compiler and are automatically allocated and deallocated as local variables and function call frames are created and destroyed. Stack memory blocks have a limited size and lifetime, and their contents are typically automatically cleared when they go out of scope.

  2. Heap Memory Blocks: Heap memory blocks are allocated on the heap, which is a region of memory used for dynamic memory allocation during runtime. Heap memory blocks are managed by the programmer and must be explicitly allocated and deallocated using memory allocation functions, such as malloc() and free() in C/C++, or new and delete operators in C++. Heap memory blocks have a larger size and longer lifetime compared to stack memory blocks, and their contents persist until explicitly deallocated by the programmer.

23. What is object-relational DBMS?

Object-relational database management system (ORDBMS) is a type of database management system (DBMS) that combines features of both object-oriented databases (OODBMS) and relational databases (RDBMS). It extends the traditional relational database model by incorporating object-oriented concepts, such as objects, classes, inheritance, and polymorphism, into the relational data model.

In an ORDBMS, data is organized into tables with rows and columns, similar to a relational database. However, unlike a traditional RDBMS, an ORDBMS allows for the storage of complex data types, such as objects, arrays, and multimedia data, within the database. It also supports object-oriented modeling techniques, such as encapsulation, abstraction, inheritance, and polymorphism.

24.  What is gray box testing? Is it common?

Gray box testing is a type of software testing that combines elements of both black box testing and white box testing. In gray box testing, the tester has partial knowledge of the internal workings of the system being tested, while also maintaining some level of independence from the internal code and design.

In gray box testing, the tester has access to some information about the internal structure, code, or design of the software being tested, such as system architecture, design documents, or limited source code. This allows the tester to have a better understanding of the system being tested compared to black box testing, where the tester has no knowledge of the internal workings of the system. However, the tester does not have full access to the source code or complete knowledge of the internal implementation details, as in white box testing.

Gray box testing is often used to test software systems where some knowledge of the internal structure is required to design effective test cases, but where full access to the source code or complete knowledge of the internal implementation is not available or not practical. It can be used to verify the functionality, performance, security, and other aspects of the software from a partially informed perspective, while also considering the system as a whole.

Gray box testing is relatively less common compared to black box and white box testing, as it requires a specific level of knowledge and access to the system being tested. However, it can be a useful approach in certain situations where a combination of black-box and white-box testing techniques is desired to effectively identify defects and ensure the quality of the software.

Also Read: Top Software Testing Interview Questions And Answers 

25. What are the differences between a pointer variable and a reference variable?

The differences between a pointer variable and a reference variable in programming are as follows:

Aspect | Pointer Variable | Reference Variable
Syntax | Declared with an asterisk (*) before the variable name. | Declared with an ampersand (&) before the variable name.
Value | Contains the memory address of another variable. | Refers directly to another variable.
Dereferencing | Requires the dereferencing operator (*) to access the value of the variable it points to. | Automatically refers to the value of the variable it is referencing.
Null value | Can be assigned a null value (nullptr or NULL) to indicate it does not point to any memory location. | Cannot be assigned a null value.
Assignment | Can be assigned to point to a different memory location. | Cannot be assigned to reference another variable.
Initialization | Can be uninitialized and may contain a garbage value until assigned a valid memory address. | Must be initialized at the time of declaration.
Reassignment | Can be reassigned to point to a different memory location at any time. | Cannot be reassigned to reference another variable after initialization.
Memory management | Requires manual memory allocation and deallocation using the new and delete operators for dynamic memory. | Does not require manual memory allocation or deallocation.
Usage | Often used for dynamic memory allocation, complex data structures, and low-level programming tasks. | Often used for passing function arguments, returning values from functions, and as aliases to existing variables.
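
A minimal C++ sketch summarising several rows of the table above (the variable names are illustrative):

#include <iostream>

int main() {
    int a = 10;
    int b = 20;

    int* ptr = &a;     // pointer: stores the address of a, needs * to dereference
    int& ref = a;      // reference: an alias for a, must be initialised here

    *ptr = 15;         // changes a through the pointer
    ref  = 25;         // changes a through the reference (no dereference operator needed)

    ptr = &b;          // a pointer can be reseated to another variable...
    ptr = nullptr;     // ...or set to null
    // ref cannot be reseated to b and cannot be null: it refers to a for its whole lifetime

    std::cout << a << ' ' << b << '\n';   // prints 25 20
    return 0;
}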

26.  What does database schema mean?

In the context of databases, a database schema refers to the overall structure or blueprint of a database. It defines how data is organized, stored, and accessed in a database system. A database schema typically includes the following elements:

  1. Tables: Tables are used to store data in rows and columns. A table represents a collection of related data entities and defines the structure of the data, including the field names, data types, and relationships between tables.

  2. Fields or columns: Fields, also known as columns, represent individual data attributes within a table. Each field has a name and a data type that specifies the type of data that can be stored in that field, such as numbers, text, dates, or binary data.

  3. Relationships: Relationships define how tables are related to each other, such as one-to-one, one-to-many, or many-to-many relationships. Relationships are typically represented through keys, such as primary keys and foreign keys, which establish links between tables.

  4. Constraints: Constraints are rules that are enforced on the data in the database to ensure data integrity and consistency. Examples of constraints include primary key constraints, foreign key constraints, unique constraints, and check constraints.

  5. Views: Views are virtual tables that are created based on the data stored in one or more tables. Views allow users to retrieve specific subsets of data from the database without accessing the underlying tables directly.

  6. Security permissions: Database schema also includes security permissions that define who has access to the database and what operations they can perform on the data, such as read, write, update, or delete operations.

27. What are the benefits of the agile model?

The Agile model is a software development approach that emphasizes iterative and incremental development, collaboration, flexibility, and customer feedback. Some of the key benefits of the Agile model include:

  1. Flexibility and adaptability: Agile allows for changes in requirements and priorities during the development process, making it well-suited for projects with evolving or dynamic requirements. Agile teams can quickly respond to changing customer needs, business priorities, and market conditions.

  2. Faster time-to-market: Agile promotes incremental and iterative development, enabling teams to deliver working software in shorter timeframes. This allows for quicker feedback, validation of assumptions, and faster time-to-market, which can be advantageous in competitive industries or for products with time-sensitive deadlines.

  3. Customer collaboration: Agile emphasizes continuous customer involvement throughout the development process, promoting regular feedback, and active participation in the development process. This helps ensure that the delivered software meets the needs of the customer, resulting in higher customer satisfaction and improved product quality.

  4. Enhanced team collaboration: Agile encourages cross-functional teams to work collaboratively, with frequent communication, transparency, and shared ownership. This fosters a collaborative and empowered team culture, resulting in better communication, teamwork, and overall project success.

  5. Quality and accountability: Agile emphasizes a focus on quality, with regular testing, inspection, and adaptation. Agile teams are accountable for the quality of the software they deliver, leading to better quality software with fewer defects and higher customer satisfaction.

  6. Transparency and visibility: Agile promotes transparency and visibility into the development process, with frequent progress updates, regular demonstrations, and open communication. This helps stakeholders, including customers, team members, and management, to have a clear understanding of the project status, progress, and potential risks.

  7. Continuous improvement: Agile encourages a culture of continuous improvement, with regular retrospectives and opportunities for learning and adaptation. This helps teams to identify areas for improvement, make necessary adjustments, and continuously optimize the development process for better outcomes.

28. Explain the meaning of NULL pointer

A NULL pointer in computer programming refers to a pointer that does not point to any valid memory location or object. In other words, it is a pointer that has not been assigned any memory address or object reference. In C and C++, a null pointer is typically written as 0, NULL, or (since C++11) nullptr; languages such as Java have the analogous concept of a null reference.

When a pointer is initialized to NULL, it does not point to any valid memory location or object, and any attempt to access the memory location through that pointer can result in undefined behavior or program crashes. Therefore, it is important to properly initialize pointers before using them in a program and to check for NULL pointers before dereferencing them to avoid potential memory-related errors.

NULL pointers are commonly used to indicate the absence of a valid value or object, and they are often used in error handling, memory allocation, and data structure operations, among others. For example, a function may return a NULL pointer to indicate that a requested resource was not found, or a pointer to a dynamically allocated memory block may be set to NULL after the memory has been deallocated to prevent accessing invalid memory.
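
A small C++ sketch (illustrative only) showing why a pointer should be checked against null before it is dereferenced:

#include <iostream>

int main() {
    int* p = nullptr;          // C++11 null pointer constant; C code typically uses NULL or 0

    if (p != nullptr) {        // always check before dereferencing
        std::cout << *p << '\n';
    } else {
        std::cout << "p does not point to a valid object yet\n";
    }

    int value = 42;
    p = &value;                // now p points to valid memory
    if (p) {                   // a non-null pointer converts to true
        std::cout << *p << '\n';   // safe to dereference: prints 42
    }
    return 0;
}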

29. What is the purpose of indexing in SQL?

Indexing in SQL is a technique used to improve the performance and efficiency of database queries by providing a faster way to locate data records. Indexes are data structures that are associated with database tables and store a sorted or hashed representation of the values in one or more columns of the table. The purpose of indexing in SQL is to speed up the data retrieval process by allowing the database system to quickly locate records based on the indexed values, instead of scanning the entire table.

Here are some of the main purposes of indexing in SQL:

  1. Improved query performance: Indexing allows the database system to quickly locate data records that match a specific condition in a query, without having to scan the entire table. This can significantly speed up query execution times, especially for large tables with millions or billions of records.

  2. Reduced I/O operations: Indexing can reduce the number of I/O (input/output) operations needed to locate and retrieve data records, which can improve overall database performance. By providing a faster path to the desired data, indexes can minimize disk I/O and reduce the time required to fetch data from the storage subsystem.

  3. Enhanced data retrieval efficiency: Indexing can help optimize data retrieval operations, such as sorting and filtering, by providing a pre-sorted or pre-hashed representation of the data. This can speed up operations that involve searching, sorting, or filtering data based on indexed columns.

  4. Efficient join operations: Indexing can improve the performance of join operations in SQL queries, which involve combining data from multiple tables. By providing an indexed representation of the join columns, indexes can help the database system quickly locate and combine matching records from different tables, which can result in faster query execution times.

  5. Faster data modification operations: While indexing primarily benefits data retrieval operations, it can also improve the performance of data modification operations, such as INSERT, UPDATE, and DELETE statements. By providing a faster way to locate data records, indexes can help speed up these operations, especially when they involve conditions that match the indexed columns.

30. Explain what you understand by a stored procedure.

A stored procedure is a type of database object that contains a collection of SQL statements or other programming logic, which is stored in the database and can be executed as a single unit. Stored procedures are typically written in a procedural programming language, such as PL/SQL for Oracle Database, T-SQL for Microsoft SQL Server, or PL/pgSQL for PostgreSQL.

The main purpose of a stored procedure is to encapsulate a series of SQL statements or other programming logic into a single, pre-compiled and reusable database object. Stored procedures can be invoked by name, and their execution can be controlled by input parameters passed to them. Stored procedures can also return output parameters or result sets, making them powerful tools for complex data processing and business logic implementation in a database system.

 31. Mention key differences between heap and stack memory.

Heap and stack memory are two different areas of memory used in computer systems for different purposes. Here are some key differences between heap and stack memory:

Heap Memory:

  1. Dynamic memory allocation: Heap memory is used for dynamic memory allocation, where memory is allocated and deallocated manually by the programmer using functions like malloc() and free() in C/C++ or new and delete operators in C++.

  2. Lifetime: Memory allocated on the heap has a longer lifetime compared to stack memory. Heap memory persists until explicitly deallocated by the programmer using appropriate memory deallocation functions.

  3. Size: Heap memory is typically larger in size compared to stack memory. Heap memory allows for allocation of larger memory blocks, whereas stack memory is limited in size and usually smaller.

  4. Memory management: Heap memory requires manual memory management, and the programmer needs to ensure proper allocation and deallocation of memory to avoid memory leaks or dangling pointers.

  5. Access: Heap memory can be accessed from any part of the program as long as a valid pointer to that memory block exists.

Stack Memory:

  1. Automatic memory allocation: Stack memory is used for automatic memory allocation, where memory is allocated and deallocated automatically by the system as part of the function call stack. Local variables declared inside a function are typically allocated on the stack.

  2. Lifetime: Stack memory has a shorter lifetime compared to heap memory. Stack memory is automatically deallocated when the function call stack unwinds, and local variables go out of scope.

  3. Size: Stack memory is typically limited in size and usually smaller compared to heap memory. Stack memory is limited by the system's stack size and can be exhausted if the stack size is exceeded.

  4. Memory management: Stack memory does not require manual memory management, as memory allocation and deallocation are handled automatically by the system as part of the function call stack.

  5. Access: Stack memory is accessible only within the scope of the function or block where it is allocated. Once the function or block completes, the stack memory is automatically deallocated and cannot be accessed.
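
A minimal C++ sketch contrasting the two (the variable names are illustrative):

#include <iostream>

int main() {
    // Stack allocation: created automatically, destroyed when it goes out of scope.
    int stackValue = 10;

    // Heap allocation: requested explicitly with new, lives until delete is called.
    int* heapValue = new int(20);

    std::cout << stackValue << ' ' << *heapValue << '\n';

    delete heapValue;          // manual deallocation; forgetting this leaks memory
    heapValue = nullptr;       // avoid a dangling pointer after deletion

    return 0;                  // stackValue is released automatically here
}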

Infosys Interview Questions: HR Round

The list of Infosys interview questions for the HR round can include the following:

  1. What do you know about the recent news updates regarding Infosys?
  2. Which department at this renowned company would you prefer: software testing or software development?
  3. What makes you a perfect fit for the job at this renowned company?
  4. Do you have a role model? Who is your role model and why?
  5. What is the one strategy that helps you work best under pressure?
  6. Describe a situation when you were under intense stress and had to manage a team alone?
  7. What is that one quality that makes you better than the other potential candidates sitting for the job interview?
  8. What is your goal in life, both short-term and long-term?
  9. Have you heard about the InfyTQ Certification Program? Is it a pathway to your dream career here?
  10. How would you handle a mistake at work due to the negligence of another team member?
  11. What were your key learnings from your past job experience?
  12. What are the top qualities that make a good leader?

Recruitment Profiles for Infosys Interview

Infosys recruits for three different profiles every year, particularly for freshers:

Role | Package
System engineer (SE) | 3.6 LPA
System engineer specialist (SES) or Digital specialist engineer (DSE) | 6.2 LPA
Power programmer (PP) | 8 LPA

Eligibility Criteria for Infosys Technical Interview 

Applicants to the above-mentioned roles at Infosys must meet the following eligibility criteria:

  1. Must be a regular student pursuing (or holding) a relevant graduate or postgraduate degree such as BE/BTech/ME/MTech (from all disciplines), MCA, or MSc (Computer Science/Electronics/Mathematics/Physics/Statistics).
  2. Must have a minimum 60% aggregate or equivalent in 10th and 12th.
  3. Must have a minimum of 68% or 6 CPI (on 10) in BTech/BE.
  4. The candidate should not have any active backlog throughout the completion of the course.
  5. Candidates should be willing to relocate as required by Infosys.
  6. Candidates should be willing to work on different technologies as required by Infosys.

Tips to Answer Infosys Interview Questions

  1. The most important factors for performing well in an Infosys interview and cracking the job are self-confidence and good communication skills.
  2. Never try to fluff your answers or be fake with the interviewer. Be candid.
  3. As part of the Infosys interview process, your understanding of core concepts such as databases and high-level programming languages like C++ can also be tested.
  4. Always show a positive attitude and an adaptable nature to the interviewer, as these leadership qualities matter alongside the technical skills you bring to the table. Keep your body language comfortable yet confident.
  5. Always try to answer to the point and not confuse the interviewers.
  6. Don't hesitate to ask the interviewer to repeat the question if you have not heard it properly, as a wrong answer can cost you the job opportunity.
  7. Do not indulge in any cheating activities, as they can get you disqualified from any of the rounds.
  8. Divide your time during the coding and MCQ rounds of Infosys: work on the questions you know first, then move to the tougher ones.
  9. Prepare well for the online assessment, as it is the first necessary step in recruitment before the technical Infosys interview round. In the online assessment, your practical knowledge and application of basic concepts will be tested. A good performance here will take you to the subsequent interview rounds.


