
10 full-stack developer interview questions and answers

What are the most common full-stack developer interview questions? See examples and commentary from Rufat Khaslarov, Chief Software Engineer I.



We asked Rufat Khaslarov, Chief Software Engineer I at EPAM, and a practicing technical interviewer, to share his take on the most common full-stack developer interview questions.

In my opinion, modern full-stack development is primarily about delivery. That’s why a full-stack developer should be able to deliver every single feature from end to end, including the frontend, backend, infrastructure, automation testing, and more.

Considering this, those looking for full-stack developer jobs should be ready to respond to the following interview questions.


1. What happens when you type google.com into your browser address bar?

When you type google.com into your browser's address bar, a series of events occurs behind the scenes. First, the browser checks its own cache for a DNS record with the corresponding IP address of google.com. If it's not found there, the request goes to the operating system's DNS cache. If the system cache doesn't have this information either, the ISP's DNS server is queried.

Once the IP address is found, the browser initiates a TCP connection with the server. The browser then sends an HTTP request to the web server, which responds with an HTTP response. This response usually contains the HTML content of the webpage.

The browser then begins rendering the HTML. It parses the HTML, CSS, and JavaScript and constructs the DOM (Document Object Model) tree. Finally, the browser renders the page on the screen, starting from the top and working down.
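The layered DNS lookup described above can be sketched as a chain of caches. The cache names, the `ispDns` stand-in, and the hardcoded IP address are all illustrative, not a real browser API:

```javascript
// Illustrative sketch of the layered DNS lookup: browser cache, then the
// OS cache, then the ISP's recursive resolver. All names are stand-ins.
function makeResolver({ browserCache, osCache, ispDns }) {
  return function resolve(hostname) {
    // 1. Browser's own DNS cache
    if (browserCache.has(hostname)) return browserCache.get(hostname);
    // 2. Operating system's DNS cache
    if (osCache.has(hostname)) return osCache.get(hostname);
    // 3. Fall back to the ISP's (recursive) DNS server
    const ip = ispDns(hostname);
    // Populate the caches so the next lookup is fast
    osCache.set(hostname, ip);
    browserCache.set(hostname, ip);
    return ip;
  };
}

// Example wiring with in-memory Maps standing in for the real caches;
// here the record is already present in the OS cache.
const resolve = makeResolver({
  browserCache: new Map(),
  osCache: new Map([['google.com', '142.250.80.46']]),
  ispDns: () => { throw new Error('network lookup not needed here'); },
});

console.log(resolve('google.com')); // served from the OS cache
```

In a real browser each layer also respects the record's TTL; the sketch omits expiry for brevity.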

2. What are the latest updates of your programming language specification and frameworks?

The latest updates to a programming language or framework vary greatly depending on the specific language or framework in question. For example, if we're talking about JavaScript, the latest specification is ECMAScript 2021 (or ES12). This update includes features like the String.prototype.replaceAll method, Promise.any, WeakRefs, and logical assignment operators.
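A quick sketch of some of those ES2021 additions in action:

```javascript
// String.prototype.replaceAll: replaces every occurrence, not just the first
const slug = 'full stack developer'.replaceAll(' ', '-');
console.log(slug); // 'full-stack-developer'

// Logical assignment operators: ??= assigns only when the current value is nullish
const config = { retries: 0 };
config.retries ??= 3;    // 0 is falsy but not nullish, so retries stays 0
config.timeout ??= 5000; // timeout was undefined, so it becomes 5000
console.log(config); // { retries: 0, timeout: 5000 }

// Promise.any: resolves with the first fulfilled promise, ignoring rejections
Promise.any([
  Promise.reject(new Error('mirror A down')),
  Promise.resolve('mirror B responded'),
]).then((winner) => console.log(winner)); // 'mirror B responded'
```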

In terms of frameworks, let's take React as an example. The latest release at the time of writing this article is React 17.0. This update primarily focuses on making it easier to upgrade React itself. It introduces improvements like gradual updates, the new JSX transform, and better error handling.

A tip from the interviewer:

In addition to your primary programming language, it’s highly recommended that you master an additional language. The more languages you know, the more approaches and best practices you have access to.

For example, even if your potential employer is hiring Python developers, knowing JavaScript will be a bonus for you. If you've mastered JavaScript and Python, learn Go and Rust, then try some frontend/backend framework combinations. For instance, it might be React/Spring, Angular/Django, etc.

3. How do JavaScript engines work under the hood?

JavaScript engines are complex pieces of software that interpret and execute JavaScript code. The most well-known JavaScript engine is V8, used in Chrome and Node.js.

When JavaScript code runs, the engine first parses it into a data structure called an Abstract Syntax Tree (AST), which represents the syntactic structure of the code. The engine then compiles the AST into bytecode, a lower-level representation of the code.

The JavaScript engine also includes an interpreter, which can execute the bytecode. However, for performance-critical code, the engine uses a Just-In-Time (JIT) compiler to compile the bytecode into machine code, which can later be executed directly by the computer's processor.

The engine also includes a garbage collector, which automatically frees up memory no longer in use. This is crucial for managing resources and preventing memory leaks in JavaScript applications.

In addition, modern JavaScript engines like V8 use inline caching and hidden classes to optimize object property access and improve execution speed.
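Hidden classes and inline caches can't be observed directly from JavaScript, but the practical rule they imply can: create objects with the same properties in the same order, so that property-access sites see one object shape. The `makePoint` example below is illustrative:

```javascript
// V8 assigns objects created with the same properties in the same order
// the same hidden class, so the property accesses inside length() see a
// single shape and can be served from the inline cache (monomorphic access).
function makePoint(x, y) {
  return { x, y }; // always the same shape: { x, y }
}

function length(p) {
  return Math.hypot(p.x, p.y); // one hidden class seen at this call site
}

const a = makePoint(3, 4);
const b = makePoint(6, 8);
console.log(length(a), length(b)); // 5 10

// Anti-pattern: adding properties in a different order (or after creation)
// produces a different hidden-class history, so call sites that see both
// kinds of object become polymorphic and lose the inline-cache fast path.
const c = { y: 8 };
c.x = 6; // same data, different shape history than makePoint's objects
console.log(length(c)); // still 10, just potentially slower to access
```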

4. TDD vs BDD: what's the difference?

Test-Driven Development (TDD) and Behavior-Driven Development (BDD) are software development methodologies that involve writing tests before the actual code. However, they differ in their focus and approach.

TDD is a development technique where developers first write a test for a specific functionality and then write the minimum code to pass that test. The process is often described as "red, green, refactor" — write a failing (red) test, make it pass (green), and then refactor the code for optimization and readability.

On the other hand, BDD extends TDD by writing tests in a more natural language that non-programmers and business stakeholders can understand. BDD focuses on an application's behavior for a user and is more concerned with the business outcome than the technical details. It encourages collaboration between developers, QA and non-technical or business participants in a software project.

A tip from the interviewer:

It's essential to know how to test your code, from unit to end-to-end testing, and to have a general understanding of the testing pyramid.

Climb up the testing pyramid, explore the automation testing tools, set up an environment, and test your code using different approaches, such as TDD/BDD, AAA, FIRST for unit testing, integration testing, contracts testing, UI testing, and performance testing.

You also want to refresh your skills with tools related to different technologies. For instance, if you work with JavaScript, be ready to use Jest, Mocha, and Chai libraries; for BDD you might need Cucumber; for e2e testing – Cypress, Webdriver, and Protractor.

5. What are the best practices for writing unit tests?

Writing effective unit tests is a crucial aspect of software development. Here are some best practices to follow:

  1. Write small, focused tests: A unit test should test a single "unit" of code, such as a function or method. It should be small and focused, testing only one aspect of the function at a time.
  2. Use descriptive test names: The name of your test should describe what the test does. This makes it easier to understand what is being tested and why a test might be failing.
  3. Isolate your tests: Each test should be independent of others. This means not relying on the state from other tests or external state. This can be achieved by setting up and tearing down for each test.
  4. Test for positive and negative: Don't just test the "happy path". Make sure to test for expected failures and edge cases as well.
  5. Don't test implementation details: Your tests should focus on the behavior of your code, not its implementation. If you change the internal implementation of a function, but the output remains the same, your tests should still pass.
  6. Keep tests fast: Slow tests can become a bottleneck in the development process. Keep your tests small and avoid unnecessary complexity to keep them fast.
  7. Use a consistent structure: Consistency makes your tests easier to read and understand. Use a consistent style and structure for all your tests.

6. Explain the differences between Rest API and GraphQL. What are the pros and cons of each?

REST (Representational State Transfer) and GraphQL are two different approaches to building APIs.

REST is an architectural style used for networked applications. Using standard HTTP methods, it treats API endpoints like resources that can be created, read, updated, and deleted. A specific URL identifies each resource, and the HTTP method determines the action to be performed on it.

Pros of REST:

  1. Simplicity: REST is straightforward to use and understand because it uses standard HTTP methods.
  2. Scalability: REST is stateless, meaning each request from a client to a server can be treated independently. This makes REST APIs highly scalable.
  3. Wide support: REST has been around for a long time and is supported on virtually all platforms and languages.

Cons of REST:

  1. Over-fetching and under-fetching: Since the server defines what data is returned, clients may receive more data than they need (over-fetching) or need to make additional requests to get all the data they need (under-fetching).
  2. Versioning: Changes to the API often require versioning, which can lead to complexity.

GraphQL is a query language designed specifically for APIs. It lets clients define the structure of the responses they need, which means they can request exactly what they require. This significantly reduces the amount of data that needs to be transferred over the network.

In GraphQL, a single endpoint is responsible for accepting complex queries from clients and returning the data in a specified format.
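The core idea — the client names the fields, the server returns exactly those — can be illustrated with a toy field-selection function. This is not the real graphql library, just a sketch of the behavior:

```javascript
// Toy illustration of GraphQL-style field selection. A REST endpoint
// would typically return the whole record (over-fetching); here the
// client asks for just the fields it needs.
const user = {
  id: '42',
  name: 'Ada',
  email: 'ada@example.com',
  address: '1 Example St', // never sent unless explicitly requested
};

function select(record, fields) {
  const result = {};
  for (const field of fields) {
    if (field in record) result[field] = record[field];
  }
  return result;
}

// The equivalent of the GraphQL query `{ user { name email } }`:
const response = select(user, ['name', 'email']);
console.log(response); // { name: 'Ada', email: 'ada@example.com' }
```

A real GraphQL server does the same thing recursively against a typed schema, resolving nested selections field by field.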

Pros of GraphQL:

  1. Efficiency: Clients can request exactly what they need, reducing the amount of data transferred.
  2. No versioning: The client can handle changes to the data structure, eliminating the need for versioning.
  3. Powerful querying capabilities: GraphQL supports complex queries, including nested queries and aggregation.

Cons of GraphQL:

  1. Complexity: GraphQL has a steeper learning curve than REST and can be overkill for simple APIs.
  2. Performance issues: Complex queries can lead to performance issues if not properly optimized.
  3. Less support: GraphQL is newer and not as widely supported or understood as REST.

GraphQL can offer several benefits over RESTful APIs, including faster development and reduced network overhead. Because clients retrieve only the data they need, response times and user experience often improve.

A tip from the interviewer:

Full-stack engineer interview questions may also cover service development and communication protocols. You'll need to dive into HTTP and real-time communication (WebSockets, polling).

Then move on to the architectural styles of web services (REST, RPC/gRPC, GraphQL) and be ready to cover the pros and cons of each. Also, be able to define SOA, monolithic, and microservices architectures.

On top of that, make sure you know how to document (for instance, with Swagger-like tools), debug, monitor, and deploy your services (containerization, orchestration).

Additional topics might be:

  • Authentication/authorization (OpenID, JWT, OAuth, and so on)
  • Caching strategies
  • Availability, reliability, and fault tolerance techniques

7. How can we achieve data normalization? What are the types of normalizations (1NF, 2NF, and so on)?

Data normalization is an essential process in database design that helps to minimize data redundancy and dependency. In this process, a table is decomposed into smaller and less redundant tables without losing any critical information. The primary objective is to isolate data in a way that enables easy updates, deletions, and modifications in just one table, which can then be propagated to the rest of the database via the defined relationships. This process ensures that the database remains consistent and flexible, making it easier to update based on the changing requirements of the organization.

There are several stages of normalization, each referred to as a "normal form." Here are the first three, which are the most commonly used:

  1. First Normal Form (1NF): In this stage, the data is organized in tables where each column contains atomic (indivisible) values and no repeating groups. Each table has a primary key uniquely identifying each record.
  2. Second Normal Form (2NF): A table is in 2NF if it satisfies the conditions of 1NF and every non-key attribute is fully functionally dependent on the primary key. In other words, the value of any non-key attribute must depend on the whole primary key, not just a part of it.
  3. Third Normal Form (3NF): A table is in 3NF if it is in 2NF and there are no transitive dependencies of non-key attributes on the primary key. This means that non-key attributes do not depend on other non-key attributes.
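Normalization is a schema-design exercise, but the decomposition step itself can be sketched in plain JavaScript: splitting a denormalized orders table, where customer data repeats on every row, into separate customers and orders tables. The data is hypothetical:

```javascript
// Hypothetical denormalized rows: the customer's name repeats on every
// order, so renaming a customer would require touching many rows.
const flat = [
  { orderId: 1, customerId: 'c1', customerName: 'Ada',   item: 'keyboard' },
  { orderId: 2, customerId: 'c1', customerName: 'Ada',   item: 'mouse' },
  { orderId: 3, customerId: 'c2', customerName: 'Grace', item: 'monitor' },
];

// Decompose into two "tables": customers keyed by customerId (removing
// the dependency customerName -> customerId), and orders that reference
// customers by foreign key only.
const customers = new Map();
const orders = [];
for (const row of flat) {
  customers.set(row.customerId, { customerId: row.customerId, name: row.customerName });
  orders.push({ orderId: row.orderId, customerId: row.customerId, item: row.item });
}

// After normalization, a name change happens in exactly one place:
customers.get('c1').name = 'Ada L.';
console.log(orders.length, customers.size); // 3 2
```

The same reasoning, applied to functional dependencies in a relational schema, is what the normal forms above formalize.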

There are more advanced normal forms like Boyce-Codd Normal Form (BCNF), Fourth Normal Form (4NF), and Fifth Normal Form (5NF), but they are less commonly used. Each normalization stage helps reduce data redundancy and improve data integrity but also comes with a trade-off in complexity and performance. Therefore, carefully considering your application's specific needs is important when deciding how much to normalize your data.

A tip from the interviewer:

You also need to understand and be ready to explain the following topics:

  • Data abstraction layers (ORM, ODM, query builders)
  • Differences between relational and NoSQL databases
  • Theoretical knowledge, including the CAP theorem
  • Transaction models (ACID/BASE)
  • Data modeling and query optimization (indexes, aggregation, projections, functions, normalization)
  • Replication and sharding
  • Basics of SQL

8. Provide an example of the pillars of the AWS Well-Architected Framework

The AWS Well-Architected Framework is a set of guiding principles that can help users take advantage of the benefits of the cloud. It provides a unified approach for customers and partners to evaluate architectures and implement scalable designs. The framework is based on five pillars:

  1. Operational excellence is the first pillar. It focuses on running and monitoring systems to deliver business value and on continually improving supporting processes and procedures. Key topics include managing and automating changes and responding to operational events.
  2. The Security pillar focuses on protecting information and systems. It covers safeguarding the confidentiality and integrity of data, identifying and managing who can do what with privilege management, and establishing controls to detect security issues.
  3. Reliability, the third pillar, ensures a workload performs its intended function correctly and consistently. Key topics include recovering from infrastructure or service failures and dynamically acquiring computing resources to meet demand. This pillar is crucial to maintaining the organization's credibility and reputation.
  4. Performance efficiency focuses on using IT and computing resources efficiently: selecting the right resource types and sizes for the workload requirements and making informed decisions to stay efficient as business needs evolve.
  5. Cost optimization, the last pillar, revolves around avoiding unnecessary expenses: understanding and controlling where money is being spent, selecting the appropriate number and type of resources, analyzing spend over time, and scaling to meet business needs without exceeding the budget.

Each pillar is associated with a set of best practices and design principles to guide you in architecting your systems. Following the AWS Well-Architected Framework, you can build and deploy faster, lower or mitigate risks, make informed decisions, and learn AWS best practices.

A tip from the interviewer:

Clouds have taken over the infrastructure domain over the last few years and have become almost the default choice for all cases and software solutions. So, prepare to cover at least the primary services of cloud providers (AWS, GCP, and Azure are the most in-demand ones). Also, try to use them in your pet projects since they all provide free-tier accounts.

Ensure you understand cloud distribution models (IaaS, PaaS, SaaS) and the pros/cons of using cloud providers. Explore the serverless architecture and the components you need to use (functions, message brokers, databases, storage).

Other major topics that may be a part of the interview are access management, deployment strategies (Blue-green, canary, etc.), alerting, and monitoring services.

9. Your customer’s website takes 10 seconds to load. What are you going to do to solve this?

A website taking 10 seconds to load is a significant performance issue, as slow load times result in a negative user experience and potentially lost business. Here are the steps I would take to address this:

  1. Identify the problem: The first step is to identify what's causing the slow load time. Tools like Google's PageSpeed Insights, Lighthouse, or WebPageTest can provide insights into what might be slowing down the site. These could be large images, unoptimized CSS or JavaScript, slow server response times, or several other issues. Consider tracking the Core Web Vitals metrics if you need to perform search engine optimization for the website.
  2. Optimize images: Large, unoptimized images commonly cause slow websites. Images should be compressed and served in a format that provides the best compression and quality for the web, such as WebP.
  3. Minify and bundle assets: CSS and JavaScript files should be minified and bundled together to reduce the number of HTTP requests the browser processes.
  4. Implement lazy loading: Lazy loading is a technique where you defer loading non-critical or below-the-fold content until needed. This can significantly speed up the initial load time.
  5. Utilize a content delivery network (CDN): A CDN can serve static assets from a location closer to the user, reducing the download time.
  6. Improve server response time: If the server is slow to respond, upgrading the server or optimizing the backend code or database queries might be necessary.
  7. Enable caching: Implementing caching strategies can significantly improve load time for returning visitors. This can be done at various levels — browser caching, server-side caching, or using a caching proxy like Varnish.
  8. Remove unnecessary plugins: If the site is built with a CMS like WordPress, unnecessary plugins can slow down the site. Deactivate and delete any plugins that aren't needed.
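The server-side caching idea from step 7 can be sketched as a minimal in-memory cache with a time-to-live; the function and its parameters are hypothetical, and real deployments would typically use HTTP Cache-Control headers, a CDN, or a store like Redis or Varnish instead:

```javascript
// Minimal in-memory TTL cache — a sketch of the server-side caching idea.
// `now` is injectable so expiry can be demonstrated without waiting.
function createTtlCache(ttlMs, now = Date.now) {
  const entries = new Map();
  return {
    get(key) {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (now() - entry.storedAt > ttlMs) { // stale: evict and report a miss
        entries.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value) {
      entries.set(key, { value, storedAt: now() });
    },
  };
}

// Usage: cache an expensive page fragment for 60 seconds
let clock = 0;
const cache = createTtlCache(60_000, () => clock);
cache.set('/home', '<html>home page</html>');
console.log(cache.get('/home') !== undefined); // true (fresh)
clock += 61_000;
console.log(cache.get('/home')); // undefined (expired)
```

On a cache miss the server regenerates the fragment and re-stores it; repeat visitors within the TTL window are served without hitting the backend at all.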

After implementing these changes, it's important to continue monitoring the site's performance and make further adjustments as necessary. Performance optimization is an ongoing process, not a one-time task.

A tip from the interviewer:

The frontend as a term includes web and mobile development. Whichever you’ve chosen, dig deeper into it.

For instance, if it’s web development, learn one of the most popular frontend frameworks (React, Angular, or Vue), and know all related topics well (state management, client-side/server-side rendering). Learn how browsers work, how they render pages, and loading and rendering optimization techniques and principles (such as CRP and RAIL models).

You should be able to write HTML/CSS code as well. Progressive web apps (app shell, PRPL, service workers) and static site generators (like GatsbyJS) are trending now.

10. Could you please describe the entire feature life cycle on your previous project? Do you think that the CI pipeline was flawless? Is there something that you'd change on your project?

A sample answer might go as follows:

“The feature life cycle on my previous project typically followed these steps:

  1. Requirement gathering: The product owner or business analyst would gather stakeholder requirements and create user stories.
  2. Planning and estimation: The team would then discuss the user stories in a planning meeting, break them down into tasks, and estimate the effort required.
  3. Development: Developers would then pick up tasks, write code to implement the feature, and write unit tests to verify the functionality.
  4. Code review: Once the feature was implemented, a pull request was created, and other team members reviewed the code.
  5. Testing: After the code was merged, it was tested by QA engineers. They would write and execute test cases to ensure the feature worked as expected and didn’t break existing functionality.
  6. Deployment: If the feature passed all tests, it was deployed to the staging environment and then to the production environment.
  7. Monitoring and maintenance: The feature was monitored for any issues after deployment. Any bugs found would be fixed, and the cycle would start again.

The CI (continuous integration) pipeline was quite robust but not flawless. It included automated building, testing, and deployment, which helped catch issues early and improved the speed and reliability of releases. However, there were occasional false negatives due to flaky tests, which could slow development.

If I could change something, I would invest more time in making the tests more reliable and introduce a process for regularly reviewing and improving the CI pipeline. Additionally, I recommend implementing continuous deployment for certain low-risk parts of the application to speed up the delivery of new features further.”

A tip from the interviewer:

You might be asked about working with requirements (DoR, DoD), documentation, code review, CI/CD, and release strategies on your current and past projects from your portfolio.

In addition, it's important to read about software development methodologies (Agile, Waterfall, and their combinations) and estimations (by analogy, by experts, planning poker, decomposition, bucket, t-shirt, story points).

In conclusion

I want to point out one more critical thing: try to break down each topic to its core and explain it in connection to your own experience. Of course, it's impossible to know everything, but you should explore and read at least key points related to the above topics.

Good luck with your interview!
