
    Introduction to Microservices

    Microservices have gained significant popularity in recent years as an architectural style for building large-scale applications. They represent a shift from the traditional monolithic approach to a more modular and distributed system.

    With microservices, an application is divided into small, independent services that can be developed, deployed, and scaled independently. Each service focuses on a specific business capability and communicates with other services through well-defined APIs.

    The key benefits of microservices are:

    • Independent Deployment: Microservices allow individual services to be deployed and updated independently without affecting other parts of the system. This provides flexibility and agility in delivering new features and bug fixes.

    • Scalability: Microservices enable horizontal scaling, where each service can be scaled independently based on its specific workload. This allows for better utilization of resources and improved performance under high loads.

    • Fault Tolerance: By decoupling services and making them self-contained, microservices promote fault tolerance. If one service fails, it doesn't affect the entire application. The system can continue to function with degraded performance, and the failed service can be easily replaced or fixed.

    Microservices also support other important characteristics such as modularity, resilience, and ease of testing. However, it's important to note that adopting microservices comes with challenges, including increased complexity in managing distributed systems and the need for effective service orchestration.

    In the next sections, we will dive deeper into the building blocks of microservices, the methods of communication between microservices, scaling strategies, fault tolerance and resilience techniques, and other important aspects of this architectural style.


    Build your intuition. Is this statement true or false?

    Microservices are an architectural style where an application is divided into large, monolithic components.

    Press true if you believe the statement is correct, or false otherwise.

    Building Blocks of Microservices

    In order to understand microservices, it is important to grasp the key components and concepts that form the building blocks of this architectural style. By breaking down a monolithic application into smaller, independent services, microservices provide a more modular and scalable approach to software development.

    Here are the main building blocks of microservices:

    1. Service: A microservice is the fundamental unit of functionality in a microservices architecture, representing a small and independently deployable service that focuses on a specific business capability. Each microservice performs a specific task and communicates with other microservices through well-defined APIs.

    2. API Gateway: The API gateway acts as a single entry point for clients to interact with the microservices. It provides a unified interface and handles requests by routing them to the appropriate microservice. Additionally, it can handle security, authentication, and other cross-cutting concerns.

    3. Service Discovery: In a microservices environment, services need a way to discover and communicate with each other. Service discovery is the mechanism that allows services to locate and connect to other services without hard-coding their addresses. By using service discovery, services can dynamically discover and adapt to changes in the system.

    4. Load Balancing: As the number of instances of a microservice increases, load balancing becomes important to distribute the incoming requests across these instances. Load balancing ensures that no single instance is overwhelmed with traffic and helps to improve the overall performance and availability of the system.

    5. Containerization: Microservices are often packaged and deployed using containerization technologies like Docker. Containers provide a lightweight and isolated environment for running microservices, allowing for easy deployment and scalability.

    These building blocks form the foundation of microservices and enable the benefits of modularity, scalability, and independent deployment. Understanding these concepts is crucial for designing and implementing microservices effectively.

    Let's take a look at a simple C# code example:

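    The snippet below is a minimal sketch: a self-contained console application whose single responsibility is to print "Hello, World!".

    TEXT/X-CSHARP
    using System;

    class Program
    {
        static void Main()
        {
            // A tiny, self-contained unit of functionality.
            Console.WriteLine("Hello, World!");
        }
    }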

    In the above code, we have a basic C# program that prints "Hello, World!" to the console. Although this is a simple example, it demonstrates the concept of a small, independently deployable service that performs a specific task.

    By leveraging the building blocks of microservices, developers can design and build robust, scalable, and modular applications.


    Let's test your knowledge. Fill in the missing part by typing it in.

    To facilitate communication between microservices, a common architectural pattern is to use a ___ Bus. This bus acts as a central messaging system through which microservices can send and receive messages. By decoupling the sender and receiver, this pattern enables asynchronous communication and helps maintain loose coupling between microservices. The use of a message bus also allows for the implementation of features like event-driven architecture and publish/subscribe messaging.

    The Message Bus provides the following benefits:

    1. Reliability: The message bus ensures the reliable delivery of messages between microservices, even in the event of failures or temporary unavailability of individual microservices.

    2. Scalability: The use of a message bus enables the horizontal scaling of microservices. New instances of microservices can be added to handle increased message load without affecting the existing services.

    3. Flexibility: The message bus can support different messaging patterns, such as point-to-point or publish/subscribe, allowing for flexible communication between microservices depending on the requirements.

    4. Interoperability: The message bus facilitates communication between microservices developed in different programming languages or frameworks by providing a common messaging protocol.

    Fill in the blank with the appropriate term.

    Write the missing line below.

    Communicating between Microservices

    When building microservices, one of the key challenges is enabling communication between them. In order to achieve a cohesive and scalable system, microservices need to communicate with each other effectively and reliably.

    There are different methods of communication between microservices, each with its own advantages and considerations. Let's explore some of these methods:

    1. RESTful APIs: One common approach is to use RESTful APIs for communication between microservices. REST (Representational State Transfer) is an architectural style that uses standard HTTP methods to enable communication between systems. It offers a simple and flexible way to interact with microservices, allowing them to exchange data and perform operations.

    Here's an example of a RESTful API call in C#:

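    The snippet below is a minimal sketch using HttpClient; the order service URL and endpoint are illustrative placeholders.

    SNIPPET
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class OrderClient
    {
        private static readonly HttpClient client = new HttpClient();

        public static async Task Main()
        {
            // Call a (hypothetical) order microservice over HTTP.
            HttpResponseMessage response = await client.GetAsync("https://orders.example.com/api/orders/42");
            response.EnsureSuccessStatusCode();

            // Read the JSON payload returned by the order service.
            string json = await response.Content.ReadAsStringAsync();
            Console.WriteLine(json);
        }
    }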

    2. Message Queue: Another method is to use a message queue for asynchronous communication. In this approach, microservices can send messages to a message queue, and other microservices can consume these messages asynchronously. This decouples the sender and receiver, allowing them to operate independently and handle messages at their own pace.

    Here's an example of sending a message to a queue using Azure Service Bus in C#:

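    The snippet below is a minimal sketch using the Azure.Messaging.ServiceBus SDK; the connection string, queue name, and message payload are placeholders.

    SNIPPET
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    class OrderPublisher
    {
        public static async Task Main()
        {
            // Connection string and queue name are placeholders.
            string connectionString = "<your-service-bus-connection-string>";
            string queueName = "orders";

            await using var client = new ServiceBusClient(connectionString);
            ServiceBusSender sender = client.CreateSender(queueName);

            // Send a message describing a new order; consumers pick it up asynchronously.
            var message = new ServiceBusMessage("{\"orderId\": 42, \"status\": \"Created\"}");
            await sender.SendMessageAsync(message);
        }
    }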

    3. Event-Driven Architecture: Event-driven architecture is a communication pattern where microservices produce and consume events. Events represent significant changes or actions in the system, and microservices can subscribe to specific events they are interested in. This allows for loose coupling between microservices and enables them to react to changes in the system in a decoupled manner.

    Here's an example of publishing an event using Azure Event Grid in C#:

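    The snippet below is a minimal sketch using the Azure.Messaging.EventGrid SDK; the topic endpoint, access key, and event names are placeholders.

    SNIPPET
    using System;
    using System.Threading.Tasks;
    using Azure;
    using Azure.Messaging.EventGrid;

    class OrderEventPublisher
    {
        public static async Task Main()
        {
            // Topic endpoint and access key are placeholders.
            var client = new EventGridPublisherClient(
                new Uri("https://<your-topic>.<region>-1.eventgrid.azure.net/api/events"),
                new AzureKeyCredential("<your-topic-key>"));

            // Arguments: subject, event type, data version, payload.
            var orderCreated = new EventGridEvent(
                "orders/42",
                "OrderService.OrderCreated",
                "1.0",
                new { OrderId = 42, Status = "Created" });

            // Publish the event so other microservices can react to it.
            await client.SendEventAsync(orderCreated);
        }
    }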

    4. gRPC: gRPC is a high-performance, open-source framework developed by Google for remote procedure call (RPC) communication. It allows microservices to define the structure of their API using Protocol Buffers, a language-agnostic binary serialization format. gRPC provides efficient bi-directional streaming and supports multiple programming languages, making it a popular choice for inter-service communication.

    Here's an example of making a gRPC call in C#:

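    The snippet below is a minimal sketch using Grpc.Net.Client; Greeter.GreeterClient, HelloRequest, and HelloReply are types generated by the gRPC tooling from a hypothetical greet.proto file.

    SNIPPET
    using System;
    using System.Threading.Tasks;
    using Grpc.Net.Client;

    class GrpcCaller
    {
        public static async Task Main()
        {
            // The address points at a (hypothetical) gRPC service.
            using var channel = GrpcChannel.ForAddress("https://localhost:5001");

            // GreeterClient and HelloRequest are generated from greet.proto.
            var client = new Greeter.GreeterClient(channel);
            var reply = await client.SayHelloAsync(new HelloRequest { Name = "OrderService" });

            Console.WriteLine(reply.Message);
        }
    }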

    When choosing a communication method for microservices, it's important to consider factors such as performance, scalability, reliability, and ease of integration. Each method has its own trade-offs, and the choice depends on the specific requirements and constraints of the system.

    By implementing effective communication between microservices, you can ensure a cohesive and interconnected system that can handle complex workflows and scale to meet the demands of your application.

    Let's test your knowledge. Click the correct answer from the options.

    Which method of communication between microservices allows for loose coupling and enables them to react to changes in the system in a decoupled manner?

    Click the option that best answers the question.

    • RESTful APIs
    • Message Queue
    • Event-Driven Architecture
    • gRPC

    Scaling Microservices

    Scaling microservices is a critical aspect of building a robust and high-performing system. To handle high loads, it's important to consider horizontal and vertical scaling.

    Let's explore some strategies for scaling microservices:

    1. Horizontal Scaling:

      • Horizontal scaling involves adding more instances of a microservice to distribute the load across multiple servers.
      • This can be achieved by deploying multiple instances of the microservice behind a load balancer.
      • The load balancer distributes incoming requests evenly across the instances, improving performance and availability.
    2. Vertical Scaling:

      • Vertical scaling involves increasing the resources (CPU, memory) of a single instance of a microservice.
      • This can be done by upgrading the hardware or allocating more resources to the instance.
      • Vertical scaling can help a single microservice instance handle higher loads, but it is limited by the capacity of a single machine.
    3. Caching:

      • Caching can significantly improve the performance of microservices.
      • By caching frequently accessed data or results, microservices can reduce the load on backend systems and improve response times.
      • Popular caching solutions like Redis or Memcached can be used to implement caching in microservices (a cache-aside sketch using Redis follows this list).
    4. Asynchronous Processing:

      • Asynchronous processing allows microservices to handle high loads by offloading time-consuming tasks to background workers.
      • By using messaging systems like RabbitMQ or Apache Kafka, microservices can communicate asynchronously and process tasks in parallel.
    5. Auto Scaling:

      • Auto scaling enables microservices to automatically adjust the number of instances based on the incoming traffic.
      • By monitoring system metrics like CPU usage or request latency, auto scaling can dynamically scale up or down to meet the demand.
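
    As a minimal sketch of the cache-aside pattern mentioned above, here is an example using the StackExchange.Redis client; the connection string, key format, and data-loading method are illustrative.

    SNIPPET
    using System;
    using StackExchange.Redis;

    class ProductCache
    {
        // Placeholder connection string for a local Redis instance.
        private static readonly ConnectionMultiplexer redis =
            ConnectionMultiplexer.Connect("localhost:6379");

        public static string GetProduct(int productId)
        {
            IDatabase cache = redis.GetDatabase();
            string key = $"product:{productId}";

            // Cache-aside: try the cache first...
            RedisValue cached = cache.StringGet(key);
            if (!cached.IsNullOrEmpty)
            {
                return cached;
            }

            // ...and on a miss, load from the backing store and cache the result with a TTL.
            string product = LoadProductFromDatabase(productId);
            cache.StringSet(key, product, TimeSpan.FromMinutes(5));
            return product;
        }

        // Stand-in for a real database or downstream-service call.
        private static string LoadProductFromDatabase(int productId) =>
            $"{{\"id\": {productId}, \"name\": \"Sample product\"}}";
    }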

    Scaling microservices requires careful planning and implementation. It's important to monitor the performance of the system and continuously optimize for efficient scaling.


    Build your intuition. Fill in the missing part by typing it in.

    Scaling microservices requires careful planning and implementation. It's important to monitor the performance of the system and continuously optimize for efficient ___.

    Write the missing line below.

    Fault Tolerance and Resilience

    In the world of microservices, fault tolerance and resilience are crucial concepts to ensure the stability and availability of your system. With microservices being distributed and independently deployable, it's important to design your architecture in such a way that it can withstand failures and recover gracefully.

    In traditional monolithic applications, a single failure can bring down the entire system. However, in microservices architecture, failures can be isolated to individual services without impacting the entire system.

    Let's explore some techniques for ensuring fault tolerance and resilience in microservices:

    1. Circuit Breaker Pattern:

      • The circuit breaker pattern is a design pattern that allows services to handle failures and prevent cascading failures.
      • It involves wrapping requests to other services in a circuit breaker, which monitors the response and opens the circuit if the service fails.
      • When the circuit is open, subsequent requests are not sent to the failing service, preventing overload and allowing the system to recover (a sketch combining a circuit breaker with retries appears after this list).
    2. Retry and Timeout Mechanisms:

      • Implementing retry and timeout mechanisms can help deal with transient failures.
      • When a service encounters a failure, it can automatically retry the request a certain number of times, giving the failing service a chance to recover.
      • Additionally, setting a timeout for requests can prevent long delays and allow the system to handle failures more efficiently.
    3. Bulkhead Pattern:

      • The bulkhead pattern involves isolating services into separate resource pools or pools of threads.
      • This ensures that failures in one service do not impact the resources allocated to other services, improving fault tolerance.
      • By limiting the number of concurrent requests or dedicating specific resources to each service, you can prevent failures from propagating across the system.
    4. Eventual Consistency:

      • Achieving strong consistency across microservices can be challenging due to the distributed nature of the architecture.
      • Instead, microservices often rely on eventual consistency, where the system may temporarily be inconsistent but eventually converges on a consistent state.
      • Eventual consistency allows for better fault tolerance as services can continue to operate even when certain components are temporarily unavailable.
    5. Monitoring and Alerting:

      • Implementing robust monitoring and alerting systems is essential for detecting and responding to failures in microservices.
      • By monitoring key metrics such as response times, error rates, and resource utilization, you can identify potential issues and take proactive measures.
      • Alerting systems can notify teams or trigger automated processes to address failures and ensure timely resolution.
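
    As a minimal sketch of the circuit breaker and retry patterns above, here is an example using the Polly resilience library (one common choice in .NET; any equivalent library or hand-rolled logic works). The inventory service URL is illustrative.

    SNIPPET
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Polly;

    class ResilientInventoryClient
    {
        private static readonly HttpClient httpClient = new HttpClient();

        public static async Task<string> GetStockAsync(int productId)
        {
            // Retry transient failures up to three times with exponential backoff.
            var retry = Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

            // Open the circuit after five consecutive failures and stop calling for 30 seconds.
            var circuitBreaker = Policy
                .Handle<HttpRequestException>()
                .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

            // Retry wraps the circuit breaker, so each attempt passes through the breaker.
            var policy = Policy.WrapAsync(retry, circuitBreaker);

            HttpResponseMessage response = await policy.ExecuteAsync(
                () => httpClient.GetAsync($"https://inventory.example.com/api/stock/{productId}"));

            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }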

    It's important to design your microservices architecture with fault tolerance and resilience in mind right from the start. By adopting these techniques, you can ensure the availability and reliability of your system, even in the face of failures and challenges.

    Build your intuition. Click the correct answer from the options.

    Which design pattern helps prevent cascading failures in microservices architecture?

    Click the option that best answers the question.

    • Circuit Breaker Pattern
    • Retry and Timeout Mechanisms
    • Bulkhead Pattern
    • Eventual Consistency

    Service Discovery and Load Balancing

    In a microservices architecture, the dynamic nature of the system necessitates a mechanism for service discovery and load balancing. Service discovery allows microservices to locate and communicate with each other, while load balancing ensures that requests are distributed evenly across multiple instances of a service.

    When it comes to service discovery, there are a few different approaches that can be employed:

    1. Client-side Service Discovery: In this approach, the responsibility of discovering services lies with the client. Each client is responsible for locating the necessary services by querying a service registry. The client can then directly communicate with the discovered service instances.

    2. Server-side Service Discovery: In this approach, the responsibility of service discovery is delegated to a centralized service registry or service discovery server. Clients can query this registry to get the location information of the required services. This approach removes the burden of service discovery from the clients and centralizes it for easier management.

    3. Service Mesh: A service mesh is an infrastructure layer that facilitates communication between services by providing service discovery, load balancing, encryption, observability, and other functionalities. It typically consists of a sidecar proxy deployed alongside each service, which handles the service-to-service communication.

    Now let's talk about load balancing in a microservices architecture. Load balancing ensures that the incoming requests are distributed across multiple instances of a service to prevent any single instance from becoming a bottleneck and to improve the overall performance and reliability of the system.

    There are various algorithms and techniques for load balancing, including:

    • Round Robin: Requests are distributed to each instance in a circular order (a minimal round-robin selector is sketched after this list).

    • Weighted Round Robin: Instances are assigned different weights, and requests are distributed based on these weights. This allows for more fine-grained control over the traffic distribution.

    • Least Connection: Requests are sent to the instance with the least number of active connections, aiming to balance the load more evenly.

    • Random: Requests are randomly assigned to instances.

    • Consistent Hashing: This technique maps requests to instances based on a consistent hashing algorithm, ensuring that requests with the same key are consistently routed to the same instance.
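
    As a minimal sketch of the round-robin strategy above, here is a simple in-process instance selector; real deployments usually rely on a load balancer, reverse proxy, or service mesh rather than hand-rolled code, and the instance URLs are illustrative.

    SNIPPET
    using System;
    using System.Collections.Generic;
    using System.Threading;

    public class RoundRobinBalancer
    {
        private readonly List<string> instances;
        private int counter = -1;

        public RoundRobinBalancer(IEnumerable<string> instances)
        {
            this.instances = new List<string>(instances);
        }

        // Returns the next instance in circular order; Interlocked keeps it safe under concurrent callers.
        public string NextInstance()
        {
            int index = Interlocked.Increment(ref counter);
            return instances[(index & int.MaxValue) % instances.Count];
        }
    }

    class Demo
    {
        static void Main()
        {
            var balancer = new RoundRobinBalancer(new[]
            {
                "https://orders-1.internal",
                "https://orders-2.internal",
                "https://orders-3.internal"
            });

            // Requests rotate through the three instances in order.
            for (int i = 0; i < 6; i++)
            {
                Console.WriteLine(balancer.NextInstance());
            }
        }
    }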

    Implementing service discovery and load balancing in your microservices architecture is crucial for maintaining the availability, scalability, and resilience of your system. These mechanisms enable services to locate each other dynamically and distribute the request load effectively.

    Build your intuition. Fill in the missing part by typing it in.

    Implementing service discovery and load balancing in your microservices architecture is crucial for maintaining the availability, scalability, and resilience of your system. These mechanisms enable services to locate each other ____ and distribute the request load effectively.

    Write the missing line below.

    Securing Microservices

    When it comes to microservices architecture, security is of utmost importance. Because your application is divided into multiple smaller services, each deployed independently and often backed by its own data store, the attack surface grows and it becomes essential to protect each microservice from security threats.

    One important aspect of securing microservices is authentication. It is crucial to ensure that only authorized users can access the microservices. In the code below, we have a simple example of an authentication service and a microservice that processes requests:

    SNIPPET
    using System;

    public class EncryptionService
    {
        public string Encrypt(string data)
        {
            // Placeholder for real encryption logic (for example, AES via System.Security.Cryptography).
            return "encryptedData";
        }
    }

    public class AuthenticationService
    {
        public bool Authenticate(string username, string password)
        {
            // Placeholder for real authentication logic (for example, validating credentials or a token).
            return true;
        }
    }

    public class Microservice
    {
        private readonly EncryptionService encryptionService;
        private readonly AuthenticationService authenticationService;

        public Microservice()
        {
            encryptionService = new EncryptionService();
            authenticationService = new AuthenticationService();
        }

        public void ProcessRequest(string username, string password)
        {
            // Only authenticated callers are allowed to proceed.
            bool isAuthenticated = authenticationService.Authenticate(username, password);

            if (isAuthenticated)
            {
                // Process the request
                Console.WriteLine("Request processed successfully.");
            }
            else
            {
                Console.WriteLine("Authentication failed. Access denied.");
            }
        }
    }

    public class Program
    {
        public static void Main()
        {
            Microservice microservice = new Microservice();
            microservice.ProcessRequest("john.doe", "password123");
        }
    }

    In this example, we have an EncryptionService and an AuthenticationService. The AuthenticationService validates the username and password provided by the client, while the Microservice processes the request only if authentication succeeds. This is a simplified illustration; in practice, you would use more robust mechanisms such as token-based authentication (for example, OAuth 2.0 and OpenID Connect with JSON Web Tokens), often enforced at the API gateway as well as in each service.

    Apart from authentication, other security measures such as authorization, encryption, and secure communication protocols should be implemented to ensure the overall security of microservices. It is important to follow security best practices and stay updated with the latest security vulnerabilities and techniques to protect your microservices from potential attacks.


    Let's test your knowledge. Fill in the missing part by typing it in.

    In microservices architecture, it is essential to protect the microservices from various security threats. One important aspect of securing microservices is ___.

    Write the missing line below.

    Monitoring and Logging

    In the world of microservices, monitoring and logging play a crucial role in ensuring the smooth operation and performance of your application. With the distributed nature of microservices, it becomes essential to have effective mechanisms in place to monitor the health of individual services and identify any potential issues.

    Importance of Monitoring

    Monitoring your microservices allows you to gain insights into their behavior and performance. It helps you identify any bottlenecks, resource limitations, or errors that may be impacting the overall system. With proper monitoring, you can proactively identify and resolve issues before they become critical.

    Monitoring can be done at different levels in a microservices architecture. You can monitor the health and availability of individual services, track the response times and throughput of API calls, and collect metrics on resource utilization such as CPU and memory.

    Logging and its Role

    Logging is an essential component of microservices architecture. It helps you track and analyze the flow of information within your application, making it easier to debug and troubleshoot issues. By logging relevant events and data, you can gain visibility into the execution path of requests and identify the root cause of any errors or unexpected behavior.

    Logging should be done consistently across all services in your microservices architecture. Each service should log relevant information about its actions, such as incoming requests, outgoing API calls, and any errors encountered. The logs should capture essential details such as timestamps, request IDs, and relevant metadata.
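
    Here is a minimal sketch of structured logging using Microsoft.Extensions.Logging; the OrderService class and the request-ID value are illustrative.

    SNIPPET
    using System;
    using Microsoft.Extensions.Logging;

    public class OrderService
    {
        private readonly ILogger<OrderService> logger;

        public OrderService(ILogger<OrderService> logger)
        {
            this.logger = logger;
        }

        public void PlaceOrder(string requestId, int orderId)
        {
            // Structured logging: named placeholders become searchable fields, and the provider adds timestamps.
            logger.LogInformation("Processing order {OrderId} for request {RequestId}", orderId, requestId);

            try
            {
                // ... call other services, update the database, and so on ...
            }
            catch (Exception ex)
            {
                // Log the failure with the same correlation data so it can be traced end to end.
                logger.LogError(ex, "Failed to process order {OrderId} for request {RequestId}", orderId, requestId);
                throw;
            }
        }
    }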

    Tools for Monitoring and Logging

    There are various tools and technologies available for monitoring and logging in microservices architecture. Some popular options include:

    • Prometheus

      A monitoring and alerting toolkit that provides a time-series database and a query language for analyzing metrics.

    • Grafana

      A visualization and analytics tool that integrates with Prometheus and other data sources to create custom dashboards and monitor system performance.

    • ELK Stack (Elasticsearch, Logstash, Kibana)

      A popular logging solution that enables you to collect, process, and visualize logs from different services.

    • Azure Application Insights

      A cloud-based application performance monitoring tool that provides insights into your application's availability, performance, and usage.

    These tools can help you monitor and analyze the performance and behavior of your microservices, identify potential issues, and make data-driven decisions for optimization and improvement.

    Summary

    Monitoring and logging are critical aspects of microservices architecture. They provide visibility into the health and performance of your services and help you identify and resolve issues proactively. By using appropriate monitoring and logging tools, you can ensure that your microservices are running smoothly and delivering the desired performance and reliability.

    Build your intuition. Click the correct answer from the options.

    Which of the following is NOT a best practice for monitoring and logging in microservices architecture?

    Click the option that best answers the question.

    • Tracking response times and throughput of API calls
    • Using consistent logging across all services
    • Logging only critical errors and exceptions
    • Collecting metrics on resource utilization

    Testing Microservices

    When it comes to microservices, testing plays a crucial role in ensuring the functionality and reliability of your services. Testing microservices effectively requires a well-defined strategy that covers different aspects of the testing process.

    Unit Testing

    Unit testing is an essential part of testing microservices. It involves testing individual units of code, such as methods or functions, in isolation to ensure they function correctly.

    In the context of microservices, unit testing can be applied to test the functionality of individual services. For example, let's consider a CalculatorService that has an Add method to perform addition. We can write a unit test to validate the correctness of this method.

    TEXT/X-CSHARP
    using System;

    namespace Microservices
    {
        public class CalculatorService
        {
            public int Add(int a, int b)
            {
                // Perform addition
                int sum = a + b;

                // Log the result
                Console.WriteLine($"The sum of {a} and {b} is {sum}");

                // Return the sum
                return sum;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                CalculatorService calculator = new CalculatorService();

                // Test the Add method
                int result = calculator.Add(5, 3);

                // Output the result
                Console.WriteLine($"The result is: {result}");
            }
        }
    }

    In the above example, we have a CalculatorService class with an Add method that performs addition. The Program class creates an instance of CalculatorService and exercises the Add method with two numbers (5 and 3); the expected result is 8, which we can confirm from the console output. Note that this is a plain console program rather than a true unit test, which would assert on the return value using a test framework.
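
    Here is a minimal sketch of such a test using xUnit (the choice of test framework is an assumption; NUnit or MSTest would look similar):

    SNIPPET
    using Microservices;
    using Xunit;

    public class CalculatorServiceTests
    {
        [Fact]
        public void Add_ReturnsSumOfOperands()
        {
            var calculator = new CalculatorService();

            int result = calculator.Add(5, 3);

            // The test fails automatically if the result is not 8.
            Assert.Equal(8, result);
        }
    }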

    Integration Testing

    Integration testing focuses on testing the interaction between different components or services in a microservices architecture. It ensures that the services are working together correctly and that data is being exchanged correctly between them.

    For example, in a microservices architecture, we may have multiple services that interact with each other to fulfill a specific functionality. Integration testing can be used to test the end-to-end flow of this functionality by simulating requests and verifying the responses.

    Performance Testing

    Performance testing is essential for microservices to ensure that the system can handle high loads and respond quickly. It involves testing the performance characteristics of individual services as well as the system as a whole.

    When performing performance testing, it is important to consider factors such as response time, throughput, and resource utilization. By testing the performance of microservices, you can identify potential bottlenecks and optimize the system for better scalability and reliability.

    Load Testing

    Load testing is a type of performance testing that involves testing the system under high loads to determine its capacity and limitations. It helps identify the maximum number of concurrent users or requests that the system can handle without degradation in performance.

    Load testing can be done by simulating a realistic workload on the system and measuring its response in terms of latency and throughput. By conducting load testing on microservices, you can ensure that the system can handle the expected load and scale appropriately when required.

    Summary

    Testing microservices effectively is crucial for ensuring the functionality and reliability of your microservices architecture. Unit testing, integration testing, performance testing, and load testing are important strategies to employ when testing microservices. By implementing a comprehensive testing strategy, you can identify and address issues early on and ensure the overall quality of your microservices.


    Are you sure you're getting this? Click the correct answer from the options.

    What is one important strategy for testing microservices?

    Click the option that best answers the question.

    • Unit testing
    • Integration testing
    • Performance testing
    • All of the above

    Deployment and Continuous Integration

    In the world of microservices, effective deployment and continuous integration are crucial for maintaining a smooth development process and ensuring the reliability of your services.

    Deployment Techniques

    Deploying microservices to the cloud provides numerous benefits such as scalability, flexibility, and reduced infrastructure management. Cloud platforms like Azure offer a variety of tools and services for deploying microservices.

    Azure Kubernetes Service (AKS) is a popular choice for deploying microservices using containers and orchestrating them with Kubernetes. With AKS, you can easily scale your microservices based on demand and ensure high availability.

    Another deployment option is to use Azure Service Fabric, which provides a distributed systems platform for deploying and managing microservices. Service Fabric simplifies the process of creating and scaling microservices while offering features like automatic failover and rolling upgrades.

    Continuous Integration

    Continuous Integration (CI) is a development practice that involves frequently merging code changes into a central repository. In the microservices world, CI plays a crucial role in maintaining code quality and preventing integration issues.

    Azure DevOps is a powerful tool for implementing CI in your microservices projects. It provides features like source control, build pipelines, and release pipelines to automate the integration and deployment process.

    By setting up a CI pipeline, you can automate the building, testing, and deployment of your microservices. This ensures that any changes made to the codebase are quickly validated and integrated into the main repository, reducing the risk of conflicts and regression issues.

    Sample Code

    Here's a simple C# code snippet to demonstrate a basic deployment and continuous integration scenario:

    TEXT/X-CSHARP
    using System;

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Deploying microservices to the cloud.");
            Console.WriteLine("Continuous integration with Azure DevOps.");
        }
    }

    In the above code, we have a simple console application that outputs the messages "Deploying microservices to the cloud" and "Continuous integration with Azure DevOps". This serves as a basic example, but in a real-world scenario, you would have more complex deployment and integration processes tailored to your microservices architecture.


    Let's test your knowledge. Is this statement true or false?

    Continuous Integration (CI) is mainly focused on monitoring service availability and responding to issues quickly.

    Press true if you believe the statement is correct, or false otherwise.

    Conclusion

    In this tutorial, we explored the fundamentals of microservices architecture and its benefits. We discussed how microservices offer a distributed architecture that allows for independent deployment of smaller, purpose-driven services.

    If you already work with C#, SQL, React, and Azure, you can apply those skills directly to developing and maintaining microservices-based applications. By adopting microservices, you can achieve operational efficiency, scalability, and fault tolerance.

    Some of the key takeaways from this tutorial include:

    1. Microservices architecture allows for independent deployment and scaling of services, reducing dependencies and improving flexibility.

    2. Microservices promote loose coupling and parallel development, enabling teams to work autonomously and manage their own services.

    3. Giving each microservice its own deployment and, where appropriate, its own data store enhances the scalability and fault tolerance of the application.

    4. Continuous integration plays a crucial role in maintaining code quality and preventing integration issues in microservices projects.

    5. Cloud platforms like Azure provide tools and services for deploying and managing microservices, such as Azure Kubernetes Service and Azure Service Fabric.

    In summary, adopting a microservices architecture offers numerous benefits for building scalable and resilient applications, and skills in C#, SQL, React, and Azure map well onto designing, developing, and maintaining microservices-based solutions.

    Let's test your knowledge. Is this statement true or false?

    Microservices architecture allows for independent deployment and scaling of services, reducing dependencies and improving flexibility.

    Press true if you believe the statement is correct, or false otherwise.
