Azure Cost Optimization – Techniques and Tools


Key Takeaways

  1. Cost Management: Taking control of your Azure expenses while increasing the efficiency of cloud resources.
  2. Savings Techniques: Azure cost optimization techniques involve turning off unused resources, using flexible storage tiers and implementing Azure right-sizing.
  3. Eliminate Hidden Costs: Identifying unattached disks, incorrectly stopped VMs, unused network interfaces and overprovisioning helps reduce unnecessary costs.
  4. Leverage Azure Tools: For effective cost optimization, you need right Azure tools such as Azure Cost Management, Advisor, and Pricing Calculator.
  5. Ongoing Review: Review Azure subscriptions regularly to monitor and adjust cloud resource usages and costs.

Azure Cloud deployment makes it easy to develop and scale applications. Businesses no longer need to worry about setting up and maintaining infrastructure. It is a convenient and cost-effective option that provides a wide range of benefits. Downsides appear only when resources are not managed efficiently and Azure costs are not optimized. With the right strategies, you can avoid them.

Most businesses would hire a top software development company that can cater to their IT requirements. These companies understand how to use resources efficiently and can help optimize spending. You must be aware of various resource types, pricing models, and optimization techniques. You must ensure their effective implementation to obtain maximum return on investment. 

1. What is Azure Cost Optimization?

Azure cost optimization is an approach for managing costs on the Azure platform. The aim is to save money while getting the most out of its cloud capabilities. Azure users practice different combinations of cost optimization techniques. Don’t just adopt the most widely used ones or generic best practices; adopt the ones that truly fulfill your project requirements.

Cost optimization also demands a complete system analysis to report the usage patterns, trends, and inefficiencies of the system. After all, Azure cost optimization is also about checking whether all resources are performing optimally or not. With this data, you can easily eliminate all the unnecessary expenses on your Azure platform. 

2. Importance of Optimizing Azure Costs

Cost optimization gives you the upper hand when you’re looking to cut costs so that capital can be moved to other areas. The following section provides more insight into why it is important to optimize cloud costs.

2.1 Improved Cost-Effectiveness

Businesses are slowly migrating all of their workloads to the Azure cloud to reap a range of benefits. However, poor management of Azure resources can lead to a significant increase in cloud expenses. That’s where Azure cost optimization strategies come in: they help you gain control over your Azure spending with increased resource efficiency and profitability.

2.2 Efficient Resource Utilization

Maximizing returns from your investments is essential. Cost optimization helps you get the most value from your resources. 

2.3 Compliance and Governance

Optimizing Azure costs would also allow you to check whether all your resources work in compliance with the relevant policies and regulations. 

3. Azure Cost Optimization Techniques

There are numerous ways to reduce expenses on the Azure platform. Let’s explore them one by one.

3.1 Azure Right-Sizing

In some cases, you might be paying for more resources than you actually use. Such an imbalance leads to unnecessary costs.

You can strike the right balance between cost and performance with Azure right-sizing. Here, you use cost optimization and Azure sizing tools to adjust resource allocation to your workload requirements for better performance and cost efficiency.

Understandably, your needs might change with time. To cope with that, you have to monitor and optimize your resources continuously. 

For example, suppose your average memory usage is 50 GB but it can spike to 300 GB during peak times. You could rent 300 GB of memory in the cloud to cover those spikes, but that option is costly: most of that capacity would sit idle while still incurring expenses for your business.

Instead, purchase capacity for your average utilization and use the pay-as-you-go model, which assigns the extra memory you need during peak times. That memory is deallocated once usage subsides, so you only pay for the resources you actually use.
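As an illustration of right-sizing in practice, the Azure CLI can list the sizes a VM supports and resize it. This is a minimal sketch; the resource group, VM name, and target size are placeholders:

# List the sizes this VM can be resized to
az vm list-vm-resize-options --resource-group my-rg --name my-vm --output table

# Resize an over-provisioned VM to a smaller SKU (placeholder size)
az vm resize --resource-group my-rg --name my-vm --size Standard_B2s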


3.2 Azure Storage Access Tiers

Do you know Azure offers flexible storage access tiers for cost-effective data management? They are divided based on usage patterns and access frequency. You have to understand how each tier works to pick a suitable option that reduces your storage costs. 

  • Hot access tier: For data that is accessed or modified frequently. Data is available immediately, with the lowest access costs and the highest storage costs. 
  • Cool access tier: For data that is accessed infrequently but must still be available quickly. With a minimum storage duration of 30 days, this tier has higher access costs and lower storage costs than the hot tier. 
  • Archive access tier: For data that is rarely accessed and not needed quickly. As the name suggests, this tier is built for long-term retention, with a minimum storage duration of 180 days. Its storage costs are the lowest, but its retrieval time is the longest of all. 

Azure determines the limits of storage capacities at the account level rather than at the tier level. So, if the need arises, users can easily increase the storage capacity within one tier or distribute its usage across multiple tiers. 
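As a quick sketch, moving an infrequently used blob to a cheaper tier can be done with the Azure CLI; the account, container, and blob names below are placeholders:

# Move a blob from the Hot tier to the Cool tier (use Archive for long-term retention)
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name backups \
  --name monthly-report-2024.pdf \
  --tier Cool \
  --auth-mode login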

3.3 Leveraging Azure Hybrid Benefit

Do you want to save up to 85% compared to standard Azure pay-as-you-go prices? Azure Hybrid Benefit is a licensing program that helps organizations migrate to Azure and save costs. To qualify, you must either have an active Windows Server or SQL Server license with Software Assurance or an eligible Linux subscription in Azure.

It reduces operational expenditure through seamless integration between cloud and on-premise environments. Microsoft offers 180 days of dual-use rights, allowing you to keep your on-premise solutions after migrating to Azure.

This cost optimization program provides many more options to save costs across various Azure resources. Feel free to explore them with the Azure Hybrid Benefit Savings Calculator.

3.4 Identify Hidden Azure Cost

In software projects, especially large and complex ones, many costs go unnoticed. You can put a stop to them only if you first identify them.

Unattached Disks and Snapshots

A virtual machine’s data is stored on disks. When VMs are no longer useful, they are deleted. However, developers often forget to delete the now-unattached disks, which continue to consume storage space and incur charges. The same goes for snapshots.

So, whenever you create a disk or a snapshot, keep track of the VM it belongs to. By regularly monitoring your Azure subscriptions, you can identify unattached disks and orphaned snapshots and delete the ones you no longer need, as shown below.
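A minimal Azure CLI sketch for finding and removing such disks: the managedBy property of a managed disk is empty when no VM uses it, and the resource names are placeholders.

# List managed disks that are not attached to any VM
az disk list --query "[?managedBy==null].{name:name, resourceGroup:resourceGroup, sizeGb:diskSizeGb}" --output table

# Delete a disk once you have confirmed it is no longer needed
az disk delete --resource-group my-rg --name old-unattached-disk --yes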

Incorrectly Stopped Azure VMs

You can stop unused or underused VMs to eliminate unnecessary costs. But if you don’t stop them correctly, Azure may still consider them active. The correct method is to stop and deallocate the VM: a VM that is merely stopped from within the guest OS remains allocated, shows as active on the Azure platform, and continues to incur compute charges.

A thorough analysis can help identify and resolve this problem. You can either check manually or perform automated checks which are quite fast and more accurate. 
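For example, the Azure CLI can show each VM’s power state and deallocate a VM properly; the names below are placeholders:

# "VM stopped" still accrues compute charges; "VM deallocated" does not
az vm list -d --query "[].{name:name, powerState:powerState}" --output table

# Stop AND deallocate so the compute resources are released
az vm deallocate --resource-group my-rg --name my-vm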

Load Balancer Data Transfer

A load balancer has a pay-per-use pricing model, meaning users pay hourly rates for using it. However, some unexpected incoming or outgoing data transfers can incur significant costs. These additional costs are so high that sometimes they even surpass the cost of the load balancer itself. 

Staying vigilant and monitoring necessary load balancer metrics is the key to avoiding this hidden cost. This includes setting proactive alerts for data transfer when it passes pre-determined thresholds.

Unused Network Interfaces

Delete unused network interfaces to avoid unnecessary charges; the query below shows one way to find them.
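A short Azure CLI sketch with placeholder names; an unattached network interface is assumed to have an empty virtualMachine property:

# Network interfaces not attached to any VM
az network nic list --query "[?virtualMachine==null].{name:name, resourceGroup:resourceGroup}" --output table

# Delete one after confirming it is unused
az network nic delete --resource-group my-rg --name unused-nic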

Overprovisioning 

Never provide more resources like storage, bandwidth, and capacity to the virtual machines than they need. Overprovisioning inflicts an unnecessary burden on your IT budget. Therefore, ensure that your team is strictly adhering to the predetermined budget and set alerts if the Azure costs are nearing the spending limits. 

3.5 Switch To Azure Elastic Databases

When a system faces unpredictable usage demands, needs to scale, or simply manages many databases, consider using Azure SQL elastic pools. An elastic pool sits on a single server and can hold all the databases you place in it.

The more databases with complementary usage patterns you add to the pool, the more you save: the pool shares a fixed set of resources across all its databases, so you provision for the pool’s combined load instead of for each database’s individual peak.

3.6 Implement Tagging and Resource Organization

Tags are used to identify, classify, and organize things. Azure Resource Tagging is no different. Here, you will be assigning customized labels to each Azure resource including VMs, databases, and more. 

Every metadata tag you assign consists of a key-value pair that helps organize resources according to criteria that meet your company’s requirements. You can also add descriptions or specific details in tags to better organize your Azure resources. The tagging process can be easily automated using tools such as the Azure Portal, the Azure CLI, and Azure Resource Manager templates; a sample command follows.
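As a simple sketch, tags can be applied from the Azure CLI; the resource ID and tag values below are placeholders:

# Apply cost-allocation tags to an existing resource
# (note: by default this replaces any tags already on the resource)
az resource tag \
  --ids /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm \
  --tags environment=production costCenter=finance owner=platform-team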

3.7 Turn Off Unused or Idle Resources

To reduce cost, shut down the resources you are not using anymore or are using very little. These unused resources are the biggest source of your hidden costs. You can identify such redundant resources by leveraging services like Azure Advisor and Azure Cost Management. They also provide estimates on cost savings and assist in budgetary decisions. 

3.8 Configure Autoscaling For VMs

VMs are important and resource-intensive components of an Azure cloud environment, so configuring them properly can lead to significant cost savings. Autoscaling lets VM capacity respond dynamically to varying demand: you keep operations to a minimum during off-peak hours and scale out as demand increases. A CLI sketch follows.
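Autoscaling applies to VM scale sets rather than individual VMs. A hedged Azure CLI sketch, with placeholder names and illustrative thresholds:

# Create an autoscale setting for a VM scale set (2-10 instances, default 2)
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-scale-set \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name cpu-autoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by one instance when average CPU exceeds 70% over 5 minutes
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name cpu-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 1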

3.9 Utilize Serverless Technologies

To reduce your cloud costs, consider serverless technologies. Cloud providers broadly offer two options: you rent dedicated infrastructure, or you pay only for the compute you actually consume on their platform.

In the first option, you rent the entire infrastructure. In the second, you pay only for the resources your code uses while it runs. Azure serverless offerings are a cost-effective option and eliminate the need to hire personnel to set up and maintain infrastructure.

3.10 Azure Dev/Test Pricing

Cost optimization also means opting for the right pricing model. For dev/test workloads, Azure offers three plans:

  • Pay-per-use: You only pay for what you use. No more, no less. There is no need to rent or buy unnecessary cloud resources. This plan allows you to maintain multiple accounts in a single isolated environment, and you get a separate bill for each account for the Azure resources it has subscribed to. 
  • Enterprise: Sign an Enterprise Agreement and get to use dev/test workloads at lower rates. You wouldn’t have to pay its bill separately as it is paid from the funds in your Enterprise Agreement. 
  • Azure Plan: This plan from Azure gives you the flexibility to stretch your Microsoft Customer Agreement and get discounts when signing up for dev/test workloads.

3.11 Review and Optimize Azure Subscriptions

A user should review and optimize their Azure subscriptions regularly to monitor resource usage and costs. That makes it straightforward to find inefficiencies and implement solutions. Tools like Azure Cost Management and Power BI provide detailed reports on Azure costs, and you can leverage them to gain valuable insights into your Azure expenses.

4. Azure Cost Optimization Tools

The effective implementation of the techniques discussed in the above section is not possible without using the right tools. They are helpful at every step of the cost optimization process.

Here are the top tools from Azure:

4.1 Azure Cost Management and Billing


Azure Cost Management and Billing tool allows you to track your cloud costs by resources, services, and locations. It generates reports that provide details on your spending patterns and cost drivers. Its alerting features help keep the expenses within the limits of your budget. More interestingly, it can forecast your cloud expenses, which helps you make important budgetary decisions. 

4.2 Azure Advisor


Using Azure Advisor feels like having a personal advisor who helps handle all your Azure finances. This cost optimization tool conducts a thorough analysis of your usage and spending patterns as well as resource performance. Based on that data, Azure Advisor, true to its name, offers personalized recommendations for cloud optimization, including rightsizing or shutting down underused and idle resources.

4.3 Azure Pricing Calculator


Do you wish to get an estimate for your Azure services beforehand? Use the Azure pricing calculator. You only need to provide information about expected resource usage and you can see the estimated cost per month. The calculator is useful in planning Azure migrations, managing cloud budgets, and comparing pricing options and configurations. 

4.4 Azure Monitor


Azure Monitor is one of the best cost optimization tools focusing on performance. You can leverage it to monitor the performance of your application, network, and infrastructure. Azure Monitor identifies performance issues in your cloud environment by collecting and analyzing telemetry data. Moreover, it provides detailed insights that support cost optimization and efficient resource utilization.

4.5 Azure Resource Graph


You can monitor your Azure resources using Azure Resource Graph. It gives a unified view of every resource you have subscribed to across all accounts. Managing and optimizing resources becomes easy with this tool. 

Azure Resource Graph reports on your resource usage and helps you understand the dependencies and relationships between resources. Large enterprises benefit the most, improving their cost management strategies by adopting this service.

5. Factors Affecting Azure Cost Optimization

Understanding and managing the factors that contribute to expenses is the key to significant cost savings. 

5.1 Pricing Models

Like every other service, Azure also offers various pricing models. Each of these pricing models caters to different sets of workload requirements. So, it becomes necessary to understand your requirements and pick a suitable Azure pricing model. 

  • Pay-per-use 
  • Spot instances 
  • Reserved instances 
  • Hybrid benefits.

5.2 Resource Type

Even within a specific pricing model, costs vary depending on the resource type you choose to use. Azure meters track your resource usage and charge you in billable units for each billing period.

5.3 Service Type

Similar to the resource type, you are also charged according to the service type you subscribe to. Azure offers several service types such as Enterprise, Cloud Solution Provider, and Web Direct. You are also given a usage allowance for every service.

5.4 Location

The cost of Azure services depends not only on what you use but also on where you use it. Location matters because not all resources and services are available in every region. Several factors affect service or resource availability in a specific location, which in turn affects both accessibility and pricing at the time of subscription.

5.5 Billing Zones

Your costs are also influenced greatly by the billing zone you are operating in. All inbound data transfers are free in Azure. However, it charges for outbound data transfers, and those charges vary in different billing zones. 

Azure offers 5 GB of outbound data transfer for free every month. After that, your data transfers are charged on a per-GB basis.

6. Conclusion

A service provider always offers more than what you need, and the same holds true for Microsoft Azure. Stick to your requirements and don’t get tempted to buy unnecessary resources, no matter how advanced they seem.

Why would you want to keep paying for that additional functionality that you won’t use regularly? 

If you already have an Azure environment, continuously monitor its resources and costs. This helps you understand your resource usage and spending. Once you have a clear picture, getting the most out of your Azure investment becomes much easier.

FAQs

What are the 3 pricing models of Azure?

If you aren’t sure whether Azure services are helpful for you, take them for a test drive: Azure offers a 30-day trial period. If they live up to expectations, you can subscribe with a suitable pricing model. There are three options to choose from: Pay-as-you-go, Reserved VM Instances, and Spot Virtual Machines.

What is the Cost Management strategy in Azure?

A cost management strategy is about controlling your spending. In Azure, it means setting proper budgets and cost allocations that put a limit on your expenditures.

REST API Best Practices


Key Takeaways

  1. Use the Appropriate HTTP Method: Identify the nature of operation and implement a suitable HTTP method such as GET, POST, PUT, PATCH and DELETE.
  2. Use Query Parameters for Filtering, Sorting, and Searching: Using query parameters like filter, sort and search in an HTTP request URL helps obtain precise information.
  3. Use HTTP Response Status Codes: Providing proper HTTP Response Status Codes is necessary to explain the outcome of a specific HTTP request.
  4. Create API Documentation and Version Your REST APIs: Maintaining a detailed documentation along with API versioning makes it easy to adopt and update APIs.
  5. Cross-Origin Resource Sharing (CORS): Mitigate the risks of cross-origin HTTP requests by leveraging CORS in APIs.

REST is an architectural approach to API development. Its ease of use has made it popular among the community of developers. Even top software development companies prefer REST over other protocols like SOAP for building modern applications. 

Nowadays, RESTful APIs and web services play an essential role in communication between the client and server sides of applications. Therefore, it is important to design robust and efficient APIs. To achieve that, API developers must follow some standard practices as listed in the blog below. 

1. What is a REST API?

REST, or Representational State Transfer, is an architectural style for designing web services; a REST API is an Application Programming Interface that follows its constraints.

A REST API provides a simple way for two systems to communicate over HTTP or HTTPS, following the familiar client-server request/response model, which makes it straightforward and efficient.


2. Rest API Best Practices

Implement the following best practices to ensure you get the most out of REST APIs. 

2.1 Use JSON to Send and Receive Data 

Many think REST should exclusively use hypertext for data exchange. However, using JSON for request payloads and responses is very effective. JSON is a standard format for data interchange, and most modern networking technologies support it.

JavaScript provides built-in methods (JSON.parse and JSON.stringify) to encode and decode JSON, whether you use an HTTP client library or the Fetch API. Server-side technologies also provide libraries that can decode JSON easily.

Not every framework supports XML for data transfer, and parsing and manipulating XML in the browser is comparatively complex. Form data is ideal for sending files, but for text and numbers it’s simpler to transfer JSON directly to the client side.

After receiving a request, set the Content-Type response header to application/json to ensure that your REST API responds with JSON so clients can interpret the data accordingly. Many server-side frameworks do this by default when endpoints return JSON. HTTP clients then parse the data into a suitable format after inspecting the Content-Type response header.
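A minimal Express sketch of this behavior (the express package is assumed to be installed):

const express = require('express');
const app = express();

// Parse incoming JSON request bodies
app.use(express.json());

app.get('/products', (req, res) => {
  // res.json() serializes the object and sets Content-Type: application/json
  res.json([{ id: 1, name: 'Keyboard', price: 49.99 }]);
});

app.listen(3000);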

2.2 Use the Appropriate HTTP Method

After naming an endpoint, choose an HTTP method based on the nature of the operation you are performing. The most common methods are listed below, followed by a short routing sketch:

  • Employ GET requests to retrieve resources.
  • Utilize POST requests to create resources when the server assigns the unique identifier.
  • Use PUT requests to create or replace resources. 
  • Implement PATCH requests for updating your resources partially. 
  • Execute DELETE requests for deleting any specified resources. 
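A short Express routing sketch illustrating these methods; the /articles resource and responses are hypothetical:

const express = require('express');
const app = express();
app.use(express.json());

app.get('/articles', (req, res) => res.json([]));                        // retrieve a list
app.get('/articles/:id', (req, res) => res.json({ id: req.params.id })); // retrieve one resource
app.post('/articles', (req, res) => res.status(201).json(req.body));     // create; server assigns the id
app.put('/articles/:id', (req, res) => res.json(req.body));              // create or fully replace
app.patch('/articles/:id', (req, res) => res.json(req.body));            // partial update
app.delete('/articles/:id', (req, res) => res.status(204).end());        // delete

app.listen(3000);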


2.3 Use Query Parameters For Filtering, Sorting, and Pagination 

To get clearer and more precise information, you can use query parameters such as filter, sort, and search in the URL of your HTTP request. They also let you control the server’s response. There is no single standard for filtering, sorting, and pagination, but these parameters are usually appended after the endpoint, as shown below (a handler sketch follows the list):

  • Filter – To retrieve products by category
    /products?category={category}
  • Sort – To sort products by price in ascending order
    /products?sort_by=price
  • Paginate – To get results for a specific page with a set number of items per page
    /products?page={page_number}&per_page={results_per_page}
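A hedged Express handler sketch for these parameters; it assumes an in-memory products array of { name, category, price } objects and an existing app instance:

app.get('/products', (req, res) => {
  const { category, sort_by, page = 1, per_page = 10 } = req.query;

  let results = products;
  if (category) results = results.filter(p => p.category === category);              // filtering
  if (sort_by === 'price') results = [...results].sort((a, b) => a.price - b.price); // sorting

  const start = (page - 1) * per_page;                                               // pagination
  res.json(results.slice(start, start + Number(per_page)));
});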

2.4 Use Nouns Instead of Verbs in Endpoints

It is important to use nouns instead of verbs in your endpoint paths while designing REST APIs. Some of the most common HTTP methods are named according to their verb or basic operations, such as GET, POST, DELETE, and more. 

For example, instead of using URLs like https://myexample.com/getPosts or https://myexample.com/createPost, which incorporate verbs, you should use a more straightforward noun-based approach. A cleaner endpoint would look like: https://myexample.com/posts which indicates the GET action for retrieving posts.


2.5 Use HTTP Response Status Codes

The HTTP response status code indicates the outcome of the specific HTTP request and is divided into five categories:

  1. Informational responses (100–199): The request was received and processing continues.
  2. Successful responses (200–299): The request was received, understood, and processed successfully.
  3. Redirection messages (300–399): Further action is needed, typically following a different URL.
  4. Client error responses (400–499): The request contains an error that must be addressed on the client side.
  5. Server error responses (500–599): The server failed to fulfill a valid request; the client may retry later. Note that 511 (Network Authentication Required) indicates the client must authenticate to gain network access rather than a server fault.

2.6 Handle Errors Gracefully and Return Standard Error Codes

When an error occurs, it must be managed gracefully by returning a standard HTTP response code that specifies what kind of error occurred. This helps the people who maintain the API understand the problem, prevents unhandled errors from bringing the system down, and gives the API consumer enough information to handle the error on their side.

The following example demonstrates user registration management while validating input data and returning HTTP status codes for error conditions.

const express = require('express');
const bodyParser = require('body-parser');

const app = express();

// Simulated database of existing users
const users = [
  { email: 'abc@foo.com' },
  { email: 'xyz@foo.com' }
];

app.use(bodyParser.json());

app.post('/register', (req, res) => {
  const { email } = req.body;

  // Check if the email is provided
  if (!email) {
    return res.status(400).json({ error: 'Email is required' });
  }

  // Check if the email is already registered
  const userExists = users.find(u => u.email === email);
  if (userExists) {
    return res.status(400).json({ error: 'User already exists' });
  }

  // Simulate successful registration
  users.push({ email });
  return res.status(201).json({ message: 'User registered successfully', user: { email } });
});

app.listen(3000, () => console.log('Server started on port 3000'));

Input Validation: 

The request body requires specific user details such as an email address. When a user sends the request without these mandatory details, a 400 Bad Request response is returned with a message asking the user to fill in the missing fields.

User Existence Check: 

During a new registration, the API checks whether the given details are already registered in the database. If the email matches an existing user, a 400 Bad Request response is returned with the message “User already exists”, informing the user that the email is already taken and that another email address is needed for registration.

Successful Registration: 

The new user is added to the users’ array once they pass all the checks and validations. In return, the API sends a message indicating that registration was successful for the given email ID.

Error responses should come with clear messages that inform the maintainers and help them troubleshoot the issue. But they shouldn’t contain so much information that attackers can leverage it to steal data or bring down the system.

In short, whenever the APIs fail, we must gracefully send the error code with the necessary information that enables the users to take corrective measures. 

2.7 Create API Documentation and Version Your REST APIs

Creating and maintaining detailed API documentation helps improve API adoption and ease of use. The documentation should offer comprehensive information on authentication patterns, request and response formats, available endpoints, and more. One of the most popular tools for documenting REST APIs is OpenAPI. 

Additionally, API versioning allows you to handle API changes and updates while maintaining compatibility with the client applications. You can use unique identifiers or labels for API versioning. Let’s take a look at some common approaches: 

  1. URL versioning: This approach includes the API version directly in the URL. For instance, the URL /api/v1/product shows version 1 of the API. 
  2. Query parameter versioning: In an API request, this approach mentions the version number as a query parameter. For example, /api/product?version=1.
  3. Header versioning: This approach uses the custom header to indicate the version number in the API request. For example, Accept-Version: 1.
  4. Content negotiation versioning: This approach negotiates the version based on media type or the Accept header of the request payload. 

With the help of versioning, you can ensure stability through different API versions, allow developers to gradually adopt changes, and facilitate backward compatibility for clients. 
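A minimal sketch of URL versioning with Express routers; the routes and payloads are illustrative:

const express = require('express');
const app = express();

// Each major version gets its own router
const v1 = express.Router();
v1.get('/products', (req, res) => res.json({ version: 1, products: [] }));

const v2 = express.Router();
v2.get('/products', (req, res) => res.json({ version: 2, products: [], metadata: {} }));

app.use('/api/v1', v1);
app.use('/api/v2', v2);

app.listen(3000);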

2.8 Using Path Parameter vs. Query Parameter

When using web APIs, information is passed from the client to the endpoint, and it is important to know when to use path parameters versus query parameters. Path parameters are ideal for identifying and retrieving specific resources, while query parameters are better suited to filtering and sorting the requested data, as well as pagination.

Path Parameters Example:
https://api.example.com/orders/789/products/456/reviews

Query Parameters Example:
https://api.example.com/search?query=books&sort=price&limit=5

2.9 HATEOAS (Hypermedia as the Engine of Application State)

Add metadata and links in your API responses with the help of Hypermedia controls or HATEOAS. The links you add will act as a guide for clients to help them navigate easily to the related actions or resources. It also helps make your API self-descriptive. 

The presence of hypermedia links in API responses improves navigability and discoverability, allowing consumers to access documentation, perform actions, and find resources more efficiently. Unfortunately, not many developers take advantage of this capability.
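A hedged sketch of a HATEOAS-style response in an Express handler, using the common _links convention; the resource and link names are illustrative and an existing app instance is assumed:

app.get('/orders/:id', (req, res) => {
  const order = { id: req.params.id, status: 'processing', total: 59.0 };
  res.json({
    ...order,
    // Links guide the client to related actions and resources
    _links: {
      self:    { href: `/orders/${order.id}` },
      cancel:  { href: `/orders/${order.id}/cancel`, method: 'POST' },
      invoice: { href: `/orders/${order.id}/invoice` }
    }
  });
});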

2.10 Cross-Origin Resource Sharing (CORS)

CORS is an HTTP-header-based mechanism. It lets a server specify which origins a browser should permit to load its resources. Browsers may also send a preflight request to the server hosting the cross-origin resource to check whether the server will approve the actual request.

Browsers restrict cross-origin HTTP requests initiated from scripts for security reasons. A web app can therefore only request resources from the same origin the app was loaded from, unless the response from the other origin contains the right CORS headers.

The CORS mechanism supports secure cross-origin requests and data transfers between browsers and servers. Browsers use CORS in APIs such as XMLHttpRequest and fetch() to mitigate the risks of cross-origin HTTP requests.
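A minimal Express sketch using the cors middleware package; the allowed origin is a placeholder and both packages are assumed to be installed:

const express = require('express');
const cors = require('cors');

const app = express();

// Allow cross-origin requests only from a trusted origin
app.use(cors({
  origin: 'https://app.example.com',
  methods: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE']
}));

app.get('/products', (req, res) => res.json([]));
app.listen(3000);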

2.11 Enhance API Responses with Error Details

To indicate the success or failure of a request, the appropriate HTTP status code must be returned. However, returning only the status code is not enough; you should also include some information about what went wrong.

Create a well-structured JSON message to help your API consumers. The list below shows what information to include in your responses, followed by an example error payload.

  • Data – Add the requested data in this section if the API request is successful. 
  • Error – Add error information in this section if the API request fails. 
    • Error code – It’s a machine-readable error code that can identify the specific error condition. 
    • Error message – A human-readable message explaining the details of the error. 
    • Error context – It offers essential information about the error, such as request ID, request parameters that caused the error, or the field in which the error was made. 
    • Error links – Add URLs to resources that offer additional information about the errors or help how to solve them. 
  • Timestamp – States the time of the error’s occurrence.
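An illustrative error payload following this structure might look like the following; the codes, IDs, and URLs are placeholders:

{
  "error": {
    "code": "ORDER_NOT_FOUND",
    "message": "No order exists with id 789.",
    "context": { "requestId": "a1b2c3", "orderId": "789" },
    "links": ["https://api.example.com/docs/errors#ORDER_NOT_FOUND"]
  },
  "timestamp": "2024-12-17T10:24:23Z"
}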

2.12 Monitor and Log API Activity

Whether it is security auditing, performance optimization, or simple troubleshooting, monitoring and logging every API activity is crucial. Implement a robust logging mechanism that gathers relevant data such as error details, execution time, and request and response payloads. Moreover, integrate these logs with monitoring tools to track key metrics such as resource utilization, error rates, and response times. A minimal middleware sketch follows.
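A minimal Express logging middleware sketch; it assumes an existing app instance, should be registered before the route handlers, and a real system would ship these records to a log aggregator or monitoring tool rather than the console:

app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    // One structured log record per request
    console.log(JSON.stringify({
      method: req.method,
      path: req.originalUrl,
      status: res.statusCode,
      durationMs: Date.now() - start
    }));
  });
  next();
});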

3. Common API Design Mistakes 

Even if you are implementing the best practices for RESTful APIs, it is essential to avoid making any REST API design mistakes. Let’s discuss the most common ones: 

3.1 Using Too Much Abstraction

API developers sometimes use too much abstraction in their applications. That doesn’t mean you should avoid abstractions entirely; poorly structured code may even require extra abstractions to stay maintainable. Use the amount of abstraction that suits the needs of your project.

3.2 Adding Numerous Endpoints

Instead of utilizing dozens of endpoints dedicated to different purposes and methods, developers must leverage flexible APIs with simple endpoints. It helps minimize the number of API calls, which enhances app performance.

3.3 Poor Documentation

Even a good REST API is only useful if people know how to use it. For a successful API launch, you must prepare extensive documentation that covers everything from request and response formats to authentication and authorization requirements. Lack of documentation can create confusion while proper documentation prevents you from making mistakes. 

3.4 Irregular Performance Optimization

Every software product needs regular performance optimization to maintain its speed and ensure a positive user experience. When it comes to API performance, the number and size of requests and responses matter.

Sending too much data in a single request or response can slow down response time. Additionally, the way you store and fetch data from the database significantly impacts API performance. You can implement techniques like indexing, denormalization, pagination, and caching to resolve these issues. 

3.5 Using Undefined HTTP Methods

Make sure you are following the appropriate HTTP methods defined in the RESTful architecture when designing endpoints. Although you can technically use any method, using specific methods for their intended purposes is more beneficial. 

3.6 Not Implementing a Versioning System

Implementing a versioning system is crucial for keeping your software from growing stale; it allows your application to adapt and evolve with changing requirements and new features. A versioning system lets clients target a specific version through a unique identifier (for example in the URL or a request header) for every update, which helps ensure compatibility, while older versions keep working until they are updated or retired.

3.7 Absence of Error Management 

You can face significant issues if you don’t have a proper error-handling system in place. Beyond the handling logic in your codebase, you must also return a proper response with adequate information about the problem to display to end users. This approach helps users understand what went wrong.

3.8 Sharing Too Much Information 

Be cautious about sharing too much information with the end users, as it might make your system vulnerable to cyberattacks. Moreover, every user has a unique set of permissions, so it is illogical to share all resources with everyone. It can put an unnecessary load on backend systems and cause a surge in processing time, resulting in increased costs and latency issues.

3.9 Leaving Scalability Out of REST API Design 

If you ignore scalability during API development, your API can’t perform properly when it is overwhelmed by a sudden rise in requests. It’s necessary to implement proper load-balancing techniques and use an API management platform to scale and monitor APIs.

3.10 Inconsistent Programming 

A consistent coding style and design make things easy for everyone. Developers can avoid digging through documentation or other reference resources when a consistent programming style has been established. This saves a lot of time and prevents confusion.

3.11 Bloated Responses

Many developers find it convenient to return a whole object in the API call instead of only requested properties. However, the potential downside to this approach is the rise in bandwidth usage and latency for both providers and end-users. 

It would be better to give users the choice to either receive requested properties or the entire object. Putting a stop to bloated responses leads to increased performance and reduced data transfer. 

API developers should be mindful of the consumer requirements when designing APIs.

4. Conclusion

This article covered the REST API best practices. Developers must implement them to create robust APIs. It also discussed the common mistakes to avoid during the API development process. Both avoiding mistakes and implementing best practices are the foundation on which an API designer can build secure and high-performing applications. 

FAQs 

What are the 4 main benefits of using REST APIs?

Using REST APIs gives you the benefits of independence, scalability, security, easy integration, and flexibility.

What are the 6 constraints of REST API?

A REST API consists of six constraints namely; Uniform interface, Client-server, Stateless, Cacheable, Layered system, and Code-on-demand. 

What are the key components of a RESTful API?

Resources, HTTP methods, representations, hypermedia links, and source code (code on demand) are the key components of a RESTful API.

Cloud Cost Optimization - Best Practices and Tools


Key Takeaways

  1. Understanding Cloud Cost Optimization: Know the best practices to reduce cloud costs while maintaining the app security and performance.
  2. Implementing Best Practices: Adhering to the methods for efficient resource utilization and cost optimization.
  3. Budget Planning: Track your expenses and formulate a plan that pays bills on time and has enough funds for emergencies to ensure financial stability.
  4. Utilizing Cost Management Tools: Use cost management tools such as Azure Cost Management to track expenses and identify resource inefficiencies.
  5. Improving Performance and Efficiency: Know how to use fewer resources for the same amount of work or get more things done within a given timeline.

Are you looking to reduce your cloud costs? 
Do you want to know where exactly your cloud expenses are going? 
Do you want to make the most of your cloud resources? 

If any of the above objectives apply to you, then you need to leverage cloud cost optimization techniques. They provide the solution to all your problems related to cloud costs and resource usage. You can either implement the best practices mentioned in this article to make your existing system cost-efficient or collaborate with a top software development company to design a new one from scratch. They can help you keep the costs of your cloud resources within budget while maintaining performance. So let’s get started.

1. What is Cloud Cost Optimization?

Cloud cost optimization includes cutting down inefficiencies in the cloud environment. It includes eliminating unused instances, cutting down over-provisioned resources, and ensuring that all costs related to the cloud environment are appropriate. In short, it balances the cost with the requirements such as security and performance.

To optimize cloud costs, it’s essential to have domain knowledge and identify the operational metrics, along with the performance thresholds for each workload. Remember, this process is dynamic as the app or cloud requirements can change over time. 

Therefore, your cloud cost optimization must also adapt and evolve with these changing circumstances. You can use automated tools to keep an eye on the important metrics, provide you with regular reports, and offer smart suggestions. 

2. Best Practices For Cloud Cost Optimization

To ensure that cloud costs don’t become a burden for your organization, ensure that you have proper purchasing and implementation policies. It is best to adhere to best practices when you embed cloud resources into your workflow to maximize their benefits.

2.1 Budget Planning

Cloud expenditures are easier to plan for than on-premise IT costs: they follow pay-per-use monthly billing rather than large, uncertain up-front investments. Once you understand cloud billing and your cloud usage patterns, you can plan a monthly budget for your cloud computing strategy.

Remember, the budget may need adjustments with the varying organizational needs and monthly usage. Still, it is beneficial to have a planned budget for overall cloud and optimized costs.

2.2 Proactively Rightsize the Cloud Computing Resources

You have to use cloud cost optimization tools to analyze the performance metrics and usage patterns of each workload and application. This allows you to identify the underutilized cloud resources. For cost efficiency, usage, or rightsizing, you can then modify the workloads appropriately. 

Use the rightsizing tool to notify you whenever costs exceed a predefined percentage within a predetermined period. These tools can be easily configured to automatically terminate unutilized assets after that period to optimize the cloud costs. If you have the right tool by your side, you can also automate the entire rightsizing process.

2.3 Identify and Eliminate Idle Resources

Sometimes you forget to turn off unused resources, leading to unnecessary costs. On top of that, if you don’t remove the storage attached to resources that are turned off or no longer used, it hinders the performance of your system and incurs extra storage charges.

To ensure that you aren’t paying for unused resources, monitor your cloud for resource utilization and performance bottlenecks. Additionally, you can scan your cloud service bills to check for the charges related to services you no longer use. Identifying and eliminating idle resources not only helps optimize cost but also improves overall performance.

2.4 Optimizing Software License Costs

You must consider software licenses and subscription costs when planning your budget. After all, they represent a significant part of your cloud system expenses. There are two ways to manage them: through a service marketplace or manually.

Manual handling can be overwhelming and carries the risk of paying for unused software licenses. Instead, you can use commercial or public service marketplaces to manage them for you. Trimming unnecessary software license costs helps you optimize your total cloud costs.

2.5 Release Idle Elastic IP Addresses

Many cloud service providers allot multiple IP addresses to users by default as a part of their package. However, this package includes a charge for each of them. For example, every AWS account provides five elastic IP addresses per region.

The reason behind allotting them is to keep software and instances available to users even during failures. However, cloud providers charge for elastic IP addresses that are allocated but not in use. Therefore, if you want to reduce your cloud expenses, identify unused IP addresses and release them, as shown below.
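As a hedged AWS CLI sketch, unassociated Elastic IPs can be listed and released like this; the allocation ID is a placeholder:

# Elastic IPs with no AssociationId are allocated but unused (and billed)
aws ec2 describe-addresses --query "Addresses[?AssociationId==null].[PublicIp,AllocationId]" --output table

# Release one after confirming it is no longer needed
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0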

2.6 Set Automated Alerts

You can use cost management tools to keep an eye on resource usage and costs. This helps you identify the overspending or cost anomalies. You can set alerts to get notifications about such instances. 

These tools allow you to easily identify and address cost drivers, anomalies, and trends before the expenditures get out of hand. They help analyze the root cause and address it to prevent unexpected costs. This enables you to keep your project expenses within the budget. 

Moreover, cloud cost management tools leverage heat maps to visualize demand fluctuations, helping identify and shut down unnecessary services to save costs. Empowered by ML algorithms, these tools can easily detect unusual patterns and notify users about overuse or unexpected expenses.

2.7 Evaluate Different Compute Instance Types

Cloud providers offer different types of compute instances like dedicated, spot, reserved, and on-demand instances. Selecting the right instance type can accommodate your cost and performance requirements. To make that decision, you must understand these options and evaluate them against your needs to ensure long-term benefits. 

  • Dedicated Instances: These run on your hardware and are dedicated to a specific user or single customer. 
  • Spot Instances: You can access unused capacity but its price varies based on demand. 
  • Reserved Instances: Allows you to reserve an instance for a fixed period or long-term deployment that needs consistent performance. 
  • On-demand Instance: You pay an hourly rate when your instance is running. 

2.8 Implement a Cloud-Native Design

Another cloud cost optimization best practice is to replace your current cloud system with a more cost-efficient one. For instance, you can implement a cloud-native design with auto-scaling capabilities. This helps ensure that you pay only for the servers you use. 

You can also leverage the documentation and expert guidance from your cloud service providers to design a cost-effective cloud native system. 

However, it is important to note that developing cloud-native design needs a specific skill set. Many businesses tend to modify their existing systems rather than design a new one from scratch. Whatever option you choose, ensure it’s cost-optimized and aligns well with the performance and other requirements.

3. Why Do You Need Cloud Cost Optimization?

The objective of cost control indeed is to optimize the costs of cloud resources. However, it also allows you to address the cloud performance and security challenges. Some of the reasons why you should consider cloud cost optimization are:

3.1 Higher Cost Savings

The primary reason why you should implement cloud cost optimization is to save money. Developing and maintaining cloud resources can be expensive. 

By implementing cloud cost optimization best practices and policies, you can help your teams get the most out of available resources, foster a culture of cost awareness, and get better value from cloud spending. With this, your company can base cloud purchasing decisions on solid data instead of guesswork.

3.2 Better Value For Money

Cloud cost optimization ensures that you are paying only for the resources you are using. It involves analyzing the usage and then managing them based on the results to avoid underutilization and over-provisioning.

3.3 Improved Business Agility

A cost-efficient system allows you to scale operations without incurring any hefty expenses. A cost-efficient cloud system doesn’t weigh you down, rather it allows you to explore new arenas that can help your business grow or deliver a better customer experience.

3.4 Improved Efficiency

Under-used, idle, mismanaged, or poorly optimized cloud resources can increase the cost of your cloud operations significantly. With proper auto-scaling and rightsizing tools, you can identify these under-used or over-provisioned resources to save costs. Optimizing these resources also helps enhance the overall efficiency of your cloud system. 

3.5 Enhanced Performance

With cloud cost optimization, you not only understand your requirements but also the various types of resources available to fulfill them. This allows you to select the most suitable option, guaranteeing better performance. 

In an existing system, cloud cost optimization helps you understand each workload and its distinct requirements. Moreover, the tools help you monitor the operational metrics to determine the performance thresholds for every resource, resulting in enhanced performance, faster processing time, and an improved user experience. 

3.6 Reduced Security Risks

Cloud cost optimization enables you to monitor resource usage and track anomalies within the system. This capability helps identify potential security threats. Cloud Cost optimization tools are often empowered with machine learning algorithms that are specially created to detect unusual patterns in the environment. 

They also allow you to automate cloud provisioning, which helps enforce security controls and reduce the risks of misconfiguration. 

4. Most Popular Cloud Cost Management Tools

This section discusses the most widely used cloud cost management tools in the market. Many developers and businesses prefer them to understand, reduce, and optimize cloud costs.

4.1 Amazon CloudWatch


Amazon CloudWatch is an AWS native tool used to handle cloud expenses effectively. It offers in-depth AWS cost reporting by pulling logs and metrics from more than 70 AWS apps, services, and resources in real time. CloudWatch also provides a dashboard for computations that displays all the relevant data and calculations. 

With CloudWatch, you can plan a budget, collect custom metrics, set cost alerts, and automate actions on Kubernetes, EKS, and ECS clusters to quickly respond to the changes in cloud costs. You can easily integrate Amazon CloudWatch with AWS Budgets, AWS Cost and Usage Report, AWS Cloud Explorer, and other AWS cost management tools.

4.2 Azure Cost Management + Billing


Similar to CloudWatch, Azure Cost Management and Billing is a native cloud cost management tool from Microsoft’s Azure cloud service. It offers all the necessary services, such as budgeting, monitoring cloud expenses, cost analysis, exporting cost management data, and providing cost optimization recommendations based on best practices. This tool also helps you handle billing data for both AWS and Azure.

4.3 Densify


Densify is a cloud resource optimization tool designed to help you reduce your cloud costs and computing charges. It also allows you to set alerts for over-allocated resources and inefficient instances. 

4.4 AWS Cost Explorer


AWS Cost Explorer is a built-in tool that helps you track, analyze, and manage the costs of cloud services in the AWS ecosystem. It can track your AWS usage and all the associated costs. AWS Cost Explorer then presents this data alongside AWS Costs and usage reports. 

It also leverages AWS monitoring tools, such as AWS CloudTrail, for gathering and reporting the data with resource-level granularity at every hour. Cost Explorer can provide cost-saving plans as well. 

4.5 Datadog


Datadog is an observability and monitoring tool for both on-premise and cloud applications. It helps manage cloud costs by tracking resource usage and associated costs on a large scale, providing quantifiable metrics. 

It shows you the cost of every resource usage across Azure and AWS. You can use tags to specify the “who” and “what” of cloud spending when allocating by team, service, or product. 

Datadog uses Kubernetes-native concepts like pods, nodes, and clusters to track costs in Kubernetes. Additionally, you can build more granular views using app-level cost data and custom metrics.

4.6 Harness Cost Management


Harness is especially useful for tracking usage data, including idle, unallocated, and under-utilized resources. While it doesn’t exactly estimate feature or project-specific costs, it offers the necessary context for the cost reports. Harness can also detect cost anomalies in your cloud system to help you manage expensive activities. 

4.7 Flexera Cloud Cost Management


The Flexera Cloud cost management tool is more useful for teams that require visibility into multi-cloud environments. In addition to all the standard features like cost forecasting, reporting, and analysis, Flexera allows users to set automatic budget alerts and allocate costs by department and team. It is largely used to obtain decent visibility into both private and public cloud expenditures. 

5. Conclusion

Cost optimization is a critical aspect of any cloud project. Whether you use a private or public cloud, a small-scale or large-scale environment, you are bound to amass misconfigured, oversized, and redundant resources over time. These inflate your bill without offering any real value to your project or business.

The best practices for cloud cost optimization discussed in this article can be used to implement necessary changes to understand and optimize your cloud costs. 

However, the first step to cost management is to gain accurate visibility into where your costs are coming from. Use suitable tools to obtain such visibility and monitor your expenditures. That will help you adhere to your organization’s budget and objectives.

FAQs 

What is cloud cost optimization?

Cloud Cost Optimization is a collective process of devising a suitable cloud cost optimization strategy and implementing best practices and tools to minimize your cloud costs as well as maximize the return on your investment. 

What is cloud cost efficiency?

Cloud cost efficiency means the ability to save money while effectively handling the costs of cloud services. A well-handled cloud cost allows you to manage your budget and allocate resources effectively. It’s about saving money while maintaining the operational efficiency of the cloud system.

Docker Best Practices


Key Takeaways

  1. Docker is used for fast and consistent application delivery. It also provides a cost-effective alternative to VMs so that we can use more of our server capacity to achieve our goals.
  2. Implementing Docker development best practices helps build secure containers and deliver reliable container-based applications.
  3. The initial stage is to select the right base image. It should be from a trusted source and keep it small.
  4. Docker images are immutable. To keep your images up-to-date and secure, rebuild your images often with updated dependencies.
  5. Secure your applications from external attacks by running programs within the container as a specified user.

Several container tools and platforms have evolved to facilitate the development and operation of containers, even though Docker has become almost synonymous with containers and is used by many software development companies during development. Protecting container-based apps built with other technologies follows security principles similar to those for Docker. To help you create more secure containers, we have assembled some of the most important Docker best practices into one post, making it a thorough piece of practical guidance. Shall we begin?

1. Docker Development Best Practices

In software development, adhering to best practices while working with Docker can improve the efficiency and reliability of your software projects. The best practices mentioned below help you optimize images, strengthen the security of the Docker container runtime and host OS, and ensure smooth deployment processes and maintainable Docker environments.

1.1 How to Keep Your Images Small

Small images are faster to pull over the network and load into memory when starting containers or services. There are a few rules of thumb to keep the image size small:

  • Begin with a suitable basic image: If you require a JDK, for example, you might want to consider using an existing Docker Image like eclipse-temurin instead of creating a new image from the ground up.
  • Implement multistage builds: For example, you may develop your Java program using the maven image, then switch to the tomcat image, and finally, deploy it by copying the Java assets to the right place, all within the same Dockerfile. This implies that the built-in artifacts and environment are the only things included in the final image, rather than all of the libraries and dependencies.
  • If you must use a version of Docker without multistage builds, keep your image smaller by using fewer layers, which means fewer RUN lines in your Dockerfile. Use your shell's built-in operators to merge commands into a single RUN line. Consider the two snippets below: the first produces two image layers, while the second produces only one.
    RUN apt-get -y update
    RUN apt-get install -y python
    

    or

    RUN apt-get -y update && apt-get install -y python
    
  • If you have multiple images with a lot in common, consider creating your base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they are cached. This means that your derivative images use memory on the Docker host more efficiently and load faster.
  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tools can be added on top of the production image.
  • Whenever deploying the application in different environments and building images, always tag images with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information. Don’t rely on the automatically created latest tag.
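As an illustration of such tagging, the commands below build and push an image tagged with a version and intended environment instead of relying on the latest tag (the image name and version are placeholders, not from the original article):

docker build -t myshop/web:1.4.2 .
docker tag myshop/web:1.4.2 myshop/web:1.4.2-prod
docker push myshop/web:1.4.2-prod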

1.2 Where and How to Persist Application Data

  • Keep application data out of the container’s writable layer and away from storage drivers. Compared to utilizing volumes or bind mounts, storing data in the container’s writable layer makes it larger and less efficient from an I/O standpoint.
  • Alternatively, use volumes to store data.
  • When working in a development environment, bind mounts are useful for mounting directories such as source code or freshly built binaries into containers. For production, mount a volume in the same location where you used a bind mount during development.
  • During production, it is recommended to utilize secrets for storing sensitive application data that services consume, and configs for storing non-sensitive data like configuration files. You may make use of these capabilities of services by transforming from standalone containers to single-replica services.

1.3 Use CI/CD for Testing and Deployment

Use Docker Hub or another continuous integration/continuous delivery pipeline to automatically build, tag, and test Docker images whenever you make a pull request or check in changes to source control.

Make it even more secure by having the teams responsible for development, testing, and security sign images before they are sent into production. The development, quality, and security teams, among others, can test and approve an image before releasing it to production.

2. Docker Best Practices for Securing Docker Images

Let’s have a look at the best practices for Docker image security.

2.1 Use Minimal Base Images

When creating a secure image, selecting an appropriate base image is the initial step. Select a small, reputable image and make sure it’s constructed well.

Over 8.3 million repositories are available on Docker Hub. Among them are Official Images, a curated collection of open-source and drop-in solution repositories published by Docker. Images from Verified Publishers are also available on Docker Hub.

Organizations that work with Docker produce and maintain these high-quality images, with Docker ensuring the legitimacy of their repository content. Keep an eye out for the Verified Publisher and Official Image badges when you choose your base image.

Pick a simple base image that fits your requirements when creating your image using a Dockerfile. A smaller base image not only makes your image smaller and faster to download, but also reduces the number of vulnerabilities introduced by dependencies, making your image more portable.

As an additional precaution, you might want to think about creating two separate base images: one for use during development and unit testing, and another for production and beyond. Compilers, build systems, and debugging tools are build tools that may not be necessary for your image as it progresses through development. One way to reduce the attack surface is to use a minimal image with few dependencies.

2.2 Use Fixed Tags for Immutability

Versions of Docker images are often managed using tags. As an example, the most recent version of a Docker image may be identified by the “latest” tag. But since tags are mutable, the same tag can point to different images over time, which can lead to automated builds behaving inconsistently and confusingly.

To make sure tags can’t be changed or altered by later image edits, you can choose from three primary approaches:

  • If an image has many tags, the build process should use the most specific one, carrying crucial information such as the version and operating system.
  • A local copy of the images should be kept, maybe in a private repository, and the tags should match those in the local copy.
  • Using a private key for cryptographic image signing is now possible with Docker’s Content Trust mechanism. This ensures that both the image and its tags remain unaltered.
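As a related illustration, a base image can also be pinned to an immutable digest rather than a mutable tag, so later edits to the tag cannot change what you build on. The digest below is a placeholder, not a real value:

# Pin the base image to the exact digest you verified, instead of a mutable tag
FROM alpine:3.12@sha256:<digest-of-the-image-you-verified>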

2.3 Use of Non-Root User Accounts

Recent research has shown that the majority of images, 58% to be exact, are using the root user ID (UID 0) to run the container entry point, which goes against Dockerfile recommended practices. Be sure to include the USER command to alter the default effective UID to a non-root user, because very few use cases require the container to run as root.

In addition, OpenShift requires extra SecurityContextConstraints for containers running as root, and your execution environment may automatically prohibit such containers.

To run without root privileges, you might need to add a few lines to your Dockerfile, such as:

  • Verify that the user listed in the USER instruction is indeed present within the container.
  • Make sure that the process has the necessary permissions to read and write to the specified locations in the file system.
# Base image
FROM alpine:3.12

# Create a user 'app', create the data directory, and assign ownership and permissions
RUN adduser -D app && mkdir -p /myapp-data && chown -R app /myapp-data

# ... copy application files

# Switch to the 'app' user
USER app

# Set the default command to run the application
ENTRYPOINT ["/myapp"]

It is possible to encounter containers that begin as root and then switch to a regular user using the gosu or su-exec commands.

Another reason containers might use sudo is to execute certain commands as root.

Although these two options are preferable to operating as root, they might not be compatible with restricted settings like Openshift.

3. Best Practices for Local Docker

Let’s discuss Local Docker best practices in detail.

3.1 Cache Dependencies in Named Volumes

Install code dependencies when the container starts up, rather than baking them into the image. Using Docker’s named volumes to store a cache significantly speeds things up compared to reinstalling every gem, pip, and yarn library from scratch each time the services are restarted (hello NOKOGIRI). The configuration mentioned above might evolve into:

services:
  rails_app:
    image: custom_app_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems_data:/usr/local/bundle
      - yarn_data:/app/node_modules

  node_app:
    image: custom_app_node
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
      - yarn_data:/app/node_modules
      
volumes:
  gems_data:
  yarn_data:

To significantly reduce startup time, put the built dependencies in named volumes. The exact locations to mount the volumes will differ for each stack, but the general idea remains the same.

3.2 Don’t Put Code or App-Level Dependencies Into the Image

When you start a docker-compose run, the application code will be mounted into the container and synchronized between the container and the local system. The main Dockerfile, where the app runs, should only contain the software that is needed to execute the app.

You should only include system-level dependencies in your Dockerfile, such as ImageMagick, and not application-level dependencies such as Rubygems and NPM packages. When dependencies are baked into the image at the application level, it becomes tedious and error-prone to rebuild the image every time new ones are added. Instead, incorporate the installation of such dependencies into a startup (entrypoint) routine.

3.3 Start Entrypoint Scripts with set -e and End with exec “$@”

Using entrypoint scripts to install dependencies and handle additional setups is crucial to the configuration we’ve shown here. At the beginning and end of each of these scripts, you must incorporate the following elements:

  • Right after #!/bin/bash (or similar) at the top of the script, add set -e. If any line returns an error, the script will terminate immediately.
  • Put exec “$@” at the end of the script. Without it, the command directive you pass to the container will not be executed.
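A minimal entrypoint script following this pattern might look like the sketch below; the bundle and yarn install steps are illustrative assumptions for the Rails/Node setup described above:

#!/bin/bash
# Exit immediately if any command fails
set -e

# Install application-level dependencies into the cached named volumes
bundle check || bundle install
yarn install --check-files

# Hand control over to the command passed by docker-compose (e.g. the rails server)
exec "$@"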

4. Best Practices for Working with Dockerfiles

Here are detailed best practices for working with Dockerfiles.

4.1 Use Multi-Stage Dockerfiles

Now imagine that you have some project contents (such as development and testing tools and libraries) that are important for the build process but aren’t necessary for the application to execute in the final image.

Again, the image size and attack surface will rise if you include these artifacts in the final product even though they are unnecessary for the program to execute.

The question then becomes how to separate the build phase from the runtime phase. Specifically, how can the build dependencies be removed from the final image while remaining available during the image construction process? In that case, multi-stage builds are a good option. With multi-stage builds, you can use several temporary images during the building process, with only the last stage becoming the final artifact:

Example:-

# Stage 1: Build the React app
FROM node:latest as react_builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Create the production image
FROM nginx:stable
COPY --from=react_builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

4.2 Use the Least Privileged User

Now, which OS user will be utilized to launch the program within this container when we build it and run it?

Docker will use the root user by default if no user is specified in the Dockerfile, which may pose security risks. On the other hand, running containers with root rights is usually unnecessary. This creates a security risk as containers may get root access to the Docker host when initiated.

Therefore, if the application inside the container is launched with root capabilities and turns out to be vulnerable, an attacker may more easily gain control of the host and its processes, not just the container.

The easiest way to avoid this is to execute the program within the container as the specified user, as well as to establish a special group in the Docker image to run the application.

Using the username and the USER directive, you can easily launch the program.

Tip: You can utilize the generic user that ships with some images and avoid creating a new one. For instance, the Node.js image already includes a generic user named node that you can use to run the application within the container.
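A minimal sketch of this tip, assuming a simple Node.js application whose entry point is server.js (the file name and image tag are assumptions):

FROM node:20-alpine
WORKDIR /usr/src/app
# Copy application files owned by the built-in non-root 'node' user
COPY --chown=node:node . .
RUN npm install --omit=dev
# Switch to the non-root user before running the app
USER node
CMD ["node", "server.js"]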

4.3 Organize Your Docker Commands

  • Combine commands into a single RUN instruction whenever possible; for instance, you can run many instructions using the && operator.
  • To minimize the amount of file system modifications, arrange the instructions in a logical sequence. For instance, group operations that modify the same files or directories together.
  • If you find it hard to reduce the number of commands, reevaluate how the Dockerfile is structured.
  • Reducing the number of COPY commands, as illustrated by the Apache web server example with unnecessary COPY commands, is one way to achieve this.
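As a small illustration of combining commands and ordering instructions logically (the package list is an assumption, not from the original article):

FROM ubuntu:22.04
# One RUN line: update, install, and clean up in a single layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# Copy application files after the rarely-changing system setup, so the build cache is reused
COPY ./app /opt/app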

5. Best Practices for Securing the Host OS

Below are a few best practices for securing the host OS with Docker.

5.1 OS Vulnerabilities and Updates

It is critical to establish consistent procedures and tools for validating the versioning of packages and components within the base OS after selecting an operating system. Take into consideration that a container-specific operating system may have components that could be vulnerable and necessitate fixing. Regularly scan and check for component updates using tools offered by the operating system vendor or other reputable organizations.

To be safe, always upgrade components when the vendor suggests it, even if the OS package does not contain any known security flaws. If it is more convenient, you can also reinstall a freshly updated operating system. Just like containers, the host running containerized apps should be treated as immutable, and data should not be persisted within the operating system itself. Following this practice prevents drift and drastically lowers the attack surface. Finally, container runtime engines like Docker frequently ship updates with new features and bug fixes; applying the latest patches helps reduce vulnerabilities.

5.2 Audit Considerations for Docker Runtime Environments

Let’s examine the following:

  • Container daemon activities
  • These files and directories:
    • /var/lib/docker
    • /etc/docker
    • docker.service
    • docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc
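On a Linux host with auditd, watch rules along the following lines can cover these locations; this is a sketch, and the exact file paths should be adjusted to what actually exists on your host:

# /etc/audit/rules.d/docker.rules — watch Docker daemon activities and key files/directories
-w /usr/bin/dockerd -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
-w /etc/docker/daemon.json -k docker
-w /usr/lib/systemd/system/docker.service -k docker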

5.3 Host File System

Ensure that containers operate with the minimum required file system permissions. Containers should not be able to mount sensitive directories on the host’s file system, particularly those containing OS configuration data. This matters because the Docker service runs as root, so an attacker who compromises it could gain control of the host system.

6. Best Practices for Securing Docker Container Runtime

Let’s follow these practices for the security of your Docker container runtime.

6.1 Do Not Start Containers in Privileged Mode

Unless necessary, you should avoid using privileged mode (--privileged) due to the security risks it poses. Running in privileged mode gives containers access to all Linux kernel capabilities and devices and removes the restrictions that control groups (cgroups) normally enforce. Because of this, such containers can access many features of the host system.

Using privileged mode is rarely necessary for containerized programs. Applications that require full host access or the ability to control other Docker containers are the ones that use privileged mode.
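If a container needs just one extra privilege, a narrower alternative is to grant specific Linux capabilities instead of full privileged mode; for example (the image name is a placeholder):

# Drop all capabilities, then add back only the one the application needs
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE myshop/web:1.4.2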

6.2 Vulnerabilities in Running Containers

You may add items to your container using the COPY and ADD commands in your Dockerfile. The key distinction is ADD’s suite of added functions, which includes the ability to automatically extract compressed files and download files from URLs, among other things.

There may be security holes due to these supplementary features of the ADD command. A Docker container might be infiltrated with malware, for instance, if you use ADD to download a file from an insecure URL. Thus, using COPY in your Dockerfiles is a safer option.

6.3 Use Read-Only Filesystem Mode

Run containers with their root filesystems in read-only mode. This restricts writes to explicitly designated folders, which are then easy to monitor. Using read-only filesystems is a simple way to make containers more secure. Furthermore, avoid writing data inside containers, which should be treated as immutable; instead, designate a specific volume for writes.
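A minimal sketch of running a container this way (the names are placeholders): the root filesystem is read-only, a tmpfs is provided for scratch files, and a named volume is the only persistent writable location:

docker run --read-only --tmpfs /tmp -v app_data:/data myshop/web:1.4.2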

7. Conclusion

With its large user base and many useful features, Docker is a good fit for the cloud-native ecosystem and will likely remain a dominant player in the industry. In addition, Docker offers significant benefits for programmers, and many companies aspire to adopt DevOps principles. Many developers and organizations continue to rely on Docker for developing and releasing software. For this reason, familiarity with the Dockerfile creation process is essential. Hopefully, this post has given you enough knowledge to create a Dockerfile that follows best practices.

The post Docker Best Practices appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/docker-best-practices/feed/ 0
Kubernetes Deployment Strategies- A Detailed Guide https://www.tatvasoft.com/blog/kubernetes-deployment-strategies/ https://www.tatvasoft.com/blog/kubernetes-deployment-strategies/#respond Tue, 09 Apr 2024 12:04:42 +0000 https://www.tatvasoft.com/blog/?p=12877 Kubernetes is a modern-age platform that enables business firms to deploy and manage applications. This container orchestration technology enables the developers to streamline infrastructure for micro-service-based applications that eventually help in managing the workload. Kubernetes empowers different types of deployment resources like updating, constructing, & versioning of CD/CI pipelines.

The post Kubernetes Deployment Strategies- A Detailed Guide appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. To deliver resilient apps and infrastructure, shorten time to market, create deployments without downtime, release features & apps faster and operate them with greater flexibility, choosing the right Kubernetes deployment strategies is important.
  2. All the K8s deployment strategies use one or more of these use cases:
    • Create: Roll out new K8s pods and ReplicaSets.
    • Update: Declare new desired state and roll out new pods and ReplicaSets in a controlled manner.
    • Rollback: Revert the K8s deployment to its previous state.
    • Scale: Increase the number of pods and Replica Sets in the K8s deployment without changing them.
  3. Following factors one should consider while selecting any K8s deployment strategy:
    • Deployment Environment
    • How much downtime you can spare
    • Stability of the new version of your app
    • Resources availability and their cost
    • Project goals
  4. Rolling (or Ramped) deployment is Kubernetes’ default rollout method. It scales down old pods only after the new pods become ready, and you can pause or cancel the deployment without taking the whole cluster offline.
  5. Recreate Deployment, Blue/Green Deployment, Canary Deployment, Shadow Deployment, and A/B Deployment are other strategies one can use as per requirements.

Kubernetes is a modern-age platform that enables business firms to deploy and manage applications. This container orchestration technology enables the developers to streamline infrastructure for micro-service-based applications that eventually help in managing the workload. Kubernetes empowers different types of deployment resources like updating, constructing, & versioning of CD/CI pipelines. Here, it becomes essential for the Kubernetes deployment team to use innovative approaches for delivering the service because of frequent updates in Kubernetes.

For this, software development companies have to choose the right deployment strategy as it is important for deploying production-ready containerized applications into Kubernetes infrastructure. For this, there are three different options available in the market and they are canary releases, rolling, and blue/green deployments. Kubernetes helps in deploying and autoscaling the latest apps by implementing new code modifications in production environments. To know more about this Kubernetes deployment and its strategies, let’s go through this blog.

1. What is Kubernetes Deployment?

A Deployment allows you to describe an application’s life cycle: which images to use, how many pods are required, and how they should be updated. In other words, a Deployment in Kubernetes is a resource object that specifies the desired state of the application. Deployments are declarative, meaning developers do not dictate the steps taken to reach that state; instead, the Kubernetes deployment controller works to reach the target state efficiently.

2. Key Kubernetes Deployment Strategies

The Kubernetes deployment process is declarative: development teams configure it in a YAML file that specifies the deployment strategy, the life of the application, and how it will be updated over time. While deploying applications to a K8s cluster, the selected deployment strategy determines how applications are updated from an older version to a newer version. Some Kubernetes deployment strategies involve downtime, while others introduce testing concepts and enable user analysis.

2.1 Rolling Deployment Strategy

Rolling Deployment Strategy
  • Readiness probes

Readiness probes help Kubernetes know when a new pod is ready to accept traffic; if the probe fails, no traffic is sent to the pod. This approach is mostly used when an application needs specific initialization steps before it goes live. An application may also become overloaded with traffic and cause the probe to fail, in which case the probe protects it from receiving more traffic. A readiness probe snippet is shown after the rolling update example below.

Once the readiness probe detects that the new version of the application is available, the older version is removed. If there are any problems, the rollout can be paused and the previous version rolled back to avoid downtime across the Kubernetes cluster. Because each pod is replaced one by one, the deployment can take some time on larger clusters. If a new deployment is triggered before the previous one finishes, the previous rollout is superseded and the new version is rolled out according to the new deployment.

When there is something specific to the pod and it gets changed, the rolling deployment gets triggered. The change here can be anything from the environment to the image to the label of the pod. 

  • MaxSurge

It specifies the maximum number of pods that can be created above the desired number of replicas during the rollout.

  • MaxUnavailable

It defines the maximum number of pods that are allowed to be unavailable while the rollout is in progress.

In this example:

  • replicas: 3 indicates that there are initially three replicas of your application running.
  • rollingUpdate is the strategy type specified for the deployment.
  • maxUnavailable: 1 ensures that during the rolling update, at most one replica is unavailable at a time.
  • maxSurge: 1 allows one additional replica to be created before the old one is terminated, ensuring the desired number of replicas is maintained.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
          image: tatvasoft/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
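The readiness probe discussed above is declared on the container itself; a minimal sketch follows (the /healthz path and timing values are assumptions, not from the original example):

      containers:
        - name: e-commerce-container
          image: tatvasoft/e-commerce:latest
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 5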

2.2 Recreate Deployment

Recreate Deployment

Here is a visual representation of the Recreate deployment. It is a two-step process: all old pods are deleted, and once that is done, new pods are created. This may lead to downtime, as users have to wait until the old pods are deleted and the new ones are created. Nevertheless, this strategy is still supported by Kubernetes for performing deployments.

Recreate deployment strategy helps eliminate all the pods and enables the development team to replace them with the new version. This recreate deployment strategy is used by the developers when a new and old version of the application isn’t able to run at the same time. Here, in this case, the downtime amount taken by the system depends on the time the application takes to shut down its processing and start back up. Once the pods are completely replaced, the application state is entirely renewed. 

In this example, the strategy section specifies the type as Recreate, indicating that the old pods will be terminated before new ones are created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
          image: tatvasoft/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: Recreate

2.3 Blue-Green Deployment

In the Blue-Green Kubernetes deployment strategy, you can release new versions of an app to decrease the risk of downtime. It has two identical environments, one serves as the active production environment, that is, blue, and the other serves as a new release environment, that is, green.

Blue-Green Deployment

Blue-Green deployment is one of the most popular Kubernetes deployment strategies. It enables developers to deploy the new application version (the green deployment) alongside the old one (the blue deployment). When developers want to direct traffic from the old application to the new one, they use a load balancer, in this case in the form of the Service selector object. Blue/green Kubernetes deployments are costly as they require double the resources of a normal deployment process.

Define Blue Deployment (blue-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue-e-commerce
  template:
    metadata:
      labels:
        app: blue-e-commerce
    spec:
      containers:
        - name: blue-e-commerce-container
          image: tatvasoft/blue-e-commerce:latest
          ports:
            - containerPort: 80

Define Green Deployment (green-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-e-commerce
  template:
    metadata:
      labels:
        app: green-e-commerce
    spec:
      containers:
        - name: green-app-container
          image: tatvasoft/green-e-commerce:latest
          ports:
            - containerPort: 80

Define a Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: blue-e-commerce  # or green-e-commerce, depending on which environment you want to expose
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Define an Ingress (ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: e-commerce-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: e-commerce-service
                port:
                  number: 80

Initially, the blue environment serves all traffic. When you are ready to switch to the green environment, update the Service selector to match the green deployment’s labels. Once the update is applied, Kubernetes begins routing traffic to the green environment.
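The switch can be performed, for example, by patching the Service selector with kubectl (a sketch using the names above):

kubectl patch service e-commerce-service \
  -p '{"spec":{"selector":{"app":"green-e-commerce"}}}'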

2.4 Canary Deployment

In this strategy, you can route a small group of users to the latest version of an app, running in a smaller set of pods. It tests functions on a small group of users and avoids impacting the whole user base. Here’s the visual representation of the canary deployment strategy.

Canary Deployment

A Canary deployment is a strategy that Kubernetes app developers can use when they are not fully confident about the functionality of a new version of the application. The canary approach manages the deployment of the new application version alongside the old one. The previous version serves the majority of users, while the newer version serves a small number of test users. If the canary proves successful, the new version is rolled out to the remaining users.

For instance, in the Kubernetes cluster with 100 running pods, 95 could be for v1.0.0 while 5 could be for v2.0.0 of the application. This means that around 95% of the users will be directed to the app’s old version while 5% of them will be directed to the new one. 

Version 1.0 Deployment (v1-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
          image: tatvasoft/e-commerce-v1:latest
          ports:
            - containerPort: 80

Version 2.0 Deployment (v2-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v2-deployment
spec:
  replicas: 1  # A smaller number for canary release
  selector:
    matchLabels:
      app: e-commerce-v2
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: tatvasoft/e-commerce-v2:latest
          ports:
            - containerPort: 80

Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Gradually Shift Traffic to Version 2.0 (canary-rollout.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-rollout-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
          image: tatvasoft/e-commerce-v1:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-rollout-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: e-commerce-v2  # Gradually shifting to the version 2.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: tatvasoft/e-commerce-v2:latest
          ports:
            - containerPort: 80

This example gradually shifts traffic from version 1.0 to version 2.0 by updating the number of replicas in the Deployment. Adjust the parameters based on your needs, and monitor the behavior of your application during the canary release.
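The shift can also be performed incrementally with kubectl; the sketch below scales the v1 and v2 deployments defined earlier, and it assumes the Service selector matches a label shared by both versions so that traffic follows the replica counts:

# Send more traffic to v2 by growing its replica count and shrinking v1
kubectl scale deployment/e-commerce-v2-deployment --replicas=2
kubectl scale deployment/e-commerce-v1-deployment --replicas=2
# Once v2 looks healthy, retire v1 completely
kubectl scale deployment/e-commerce-v1-deployment --replicas=0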

2.5 Shadow Deployment

Shadow deployment is a strategy where a new version of an app is deployed alongside the current production version, primarily for monitoring and testing purposes.

Shadow deployment is a variation of canary deployment that allows you to test the latest release of the workload. This deployment strategy mirrors production traffic to the new version alongside the current one, without users even noticing it.

When the performance and stability of the new version meet in-built requirements, operators will trigger a full rollout of the same.

One of the primary benefits of shadow deployment is that it can help you test the new version’s non-functional aspects like stability, performance, and much more.

On the other hand, it has a downside as well. This type of deployment strategy is complex to manage and needs two times more resources to run than a standard deployment strategy.

2.6 A/B Deployment

Just like Canary deployment, the A/B deployment strategy helps you to target a desired subsection of users based on some target parameters like HTTP headers or cookies.

It can distribute traffic amongst different versions. It is widely used to test the conversion rate of a given feature, after which the best-converting version is rolled out to everyone.

In this strategy, data is usually collected based on the user behavior and is used to make better decisions. Here users are left uninformed that testing is being done and a new version will be made available soon.

This deployment can be automated using tools like Flagger, Istio, etc.
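As an illustration, with Istio an A/B split can be expressed declaratively. The sketch below routes users carrying a hypothetical x-ab-group: beta header to version 2 and everyone else to version 1; it assumes a DestinationRule defining the v1 and v2 subsets already exists:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: e-commerce-ab-test
spec:
  hosts:
    - e-commerce-service
  http:
    - match:
        - headers:
            x-ab-group:
              exact: "beta"
      route:
        - destination:
            host: e-commerce-service
            subset: v2
    - route:
        - destination:
            host: e-commerce-service
            subset: v1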

3. Resource Utilization Strategies:

Here are a few resource utilization strategies to follow:

3.1 Resource Limits and Requests:

Each container in a Kubernetes pod can define resource requests and limits for both memory and CPU. These settings are crucial for resource allocation and isolation.

Resource Requests:

  • The amount of resources that Kubernetes guarantees to the container; the scheduler uses requests to decide which node can run the pod.
  • A container may use more than it requests when spare capacity is available, but only the requested amount is guaranteed.

Resource Limits:

  • Sets an upper bound on the amount of resources a container can utilize.
  • If the memory limit is exceeded, the container may be terminated; CPU usage above the limit is throttled.

So, it is necessary to set these values appropriately to ensure fair resource allocation among various containers on the same node.

Ex. In the following example, the pod specifies resource requests of 64MiB memory and 250 milliCPU (0.25 CPU cores). It also sets limits to 128MiB memory and 500 milliCPU. These settings ensure that the container gets at least the requested resources and cannot exceed the specified limits.

apiVersion: v1
kind: Pod
metadata:
  name: e-commerce
spec:
  containers:
  - name: e-commerce-container
    image: e-commerce:v1
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

3.2 Horizontal Pod Autoscaling (HPA)

In this technique, you get a feature of automatic adjustment of the number of replicas of a running pod that is based on particular metrics.

Target Metrics:

  • HPA can scale depending on different metrics like memory usage, CPU utilization, and custom metrics.
  • The averageUtilization target determines the required average utilization for CPU or memory.

Scaling Policies:

  • Define the maximum and minimum number of pod replicas for the deployment.
  • Scaling decisions are made based on whether the metrics cross the configured thresholds.

HPA is useful for handling varying loads and ensuring efficient resource utilization by adjusting the number of pod instances in real time.

Ex. This HPA example targets a Deployment named e-commerce-deployment and scales based on CPU utilization. It is configured to maintain a target average CPU utilization of 80%, scaling between 2 and 10 replicas.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: e-commerce-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

3.3 Vertical Pod Autoscaling

While HPA adjusts the number of replicas, VPA will focus on dynamically adjusting the resource requests of each pod.

Observation and Adaptation:

  • VPA observes the real resource usage of pods and adjusts resource requests based on that.
  • It optimizes both memory and CPU requests based on historical information.

Update Policies:

  • The updateMode field determines how aggressively VPA should apply updated resource requests.
  • Modes such as Auto, Off, and Recreate let users control when recommendations are applied.
  • This helps fine-tune resource allocation so it adapts to the application’s actual runtime behavior.

Ex. This VPA example targets a Deployment named e-commerce-deployment and is configured to automatically adjust the resource requests of the containers within the deployment based on observed usage.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: e-commerce-vpa
spec:
  targetRef:
    apiVersion: "e-commerce/v1"
    kind: "Deployment"
    name: "e-commerce-deployment"
  updatePolicy:
    updateMode: "Auto"

3.4 Cluster Autoscaler

The cluster autoscaler is responsible for dynamically adjusting the node pool size in response to the resource requirements of your workloads.

Node Scaling:

  • When a node lacks resources and cannot accommodate new pods, the Cluster Autoscaler adds more nodes to the cluster.
  • Conversely, when nodes are not fully utilized, the Cluster Autoscaler scales down the cluster by removing unnecessary nodes.

Configuration:

  • Configuration parameters such as minimum and maximum node counts vary depending on the cloud provider or underlying infrastructure.

The Cluster Autoscaler plays a crucial role in ensuring an optimal balance between resource availability and cost-effectiveness within a Kubernetes cluster.

Ex. This example includes a simple Deployment and Service. Cluster Autoscaler would dynamically adjust the number of nodes in the cluster based on the resource requirements of the pods managed by the Deployment.

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
      - name: e-commerce-container
        image: e-commerce:v1
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

4. How to Deploy Kubernetes?

Most Kubernetes deployments and related objects are specified in YAML (or JSON) files and applied using ‘kubectl apply’.

For example, for an Nginx deployment, the YAML defines a Deployment called ‘web-deployment’ with four replicas. It looks like the code given below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.0
          ports:
            - containerPort: 80

In the above example, the metadata shows that the ‘web-deployment’ was developed with four copies of pods which are replicas of each other (replicas: 4), and the selector defines how the deployment process will find the pods using the label (app: nginx). Besides this, here the container (nginx) runs its image at version 1.17.0, and the deployment opens port 80 for the pod’s usage.

In addition, environment variables for the containers can be declared using the ‘env’ or ‘envFrom’ fields in the configuration file. After the deployment is specified, it is created from the YAML file with: kubectl apply -f https://[location/web-deployment.yaml]

5. Update Kubernetes Deployment

When it comes to Kubernetes deployment, the developers can use the set command to make changes to the image, configuration fields, or resources of an object. 

For instance, to update the deployment’s nginx image to version 1.22.1, the following command can be used.

$ kubectl set image deployment/web-deployment nginx=nginx:1.22.1
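After triggering the update, the rollout can be observed and, if needed, reverted; these are standard kubectl commands shown here as an illustration:

# Watch the rollout progress
kubectl rollout status deployment/web-deployment
# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/web-deployment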

6. Final Words

In this blog, we saw that there are multiple ways a developer can deploy an application. When publishing the application to development or staging environments, a recreate or ramped deployment is a good choice, whereas for production, a blue/green or ramped deployment is usually the better option. When choosing the right Kubernetes deployment strategy, if the developer isn’t sure about the stability of the new version, a proper survey of all the different options is required. Each deployment strategy comes with its pros and cons; which to choose depends on the type of project and the resources available.

FAQs

What is the Best Deployment Strategy in Kubernetes?

There are mainly 8 different Kubernetes deployment strategies:
Rolling deployment, Ramped slow rollout, Best-effort controlled rollout, Recreate deployment, Blue/Green deployment, Canary deployment, A/B testing, and Shadow deployment.
You can choose the one that’s most suitable to your business requirements.

What Tool to Deploy k8s?

Here is a list of tools Kubernetes professionals commonly use for deployment:

  • Kubectl
  • Kubens
  • Helm
  • Kubectx
  • Grafana
  • Prometheus
  • Istio
  • Vault
  • Kubeflow
  • Kustomize, and many more

What is the Difference Between Pod and Deployment in Kubernetes?

A Kubernetes pod is the smallest deployable unit in Kubernetes. It is a group of one or more containers that share storage and network resources.

On the other hand, Kubernetes deployment is the app’s life cycle that includes the pods of that app. It’s a way to communicate your desired state of Kubernetes deployments.

What is the Life Cycle of Kubernetes Deployment?

The major steps in a Kubernetes deployment life cycle are:

  • Containerization
  • Container Registry
  • YAML or JSON writing
  • Kubernetes deployment
  • Rollbacks and Rolling Updates
  • Scaling of the app
  • Logging and Monitoring
  • CI/CD

The post Kubernetes Deployment Strategies- A Detailed Guide appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/kubernetes-deployment-strategies/feed/ 0
Kubernetes Best Practices to Follow https://www.tatvasoft.com/blog/kubernetes-best-practices/ https://www.tatvasoft.com/blog/kubernetes-best-practices/#respond Tue, 06 Feb 2024 11:53:40 +0000 https://www.tatvasoft.com/blog/?p=12583 Kubernetes is one of the most widely used and popular container orchestration systems available in the market. It helps software development companies to create, maintain, and deploy an application with the latest features as this platform is the de-facto standard for the modern cloud engineer.

The post Kubernetes Best Practices to Follow appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  • When it comes to working with Kubernetes and following its best practices, developers often face complications in deciding which best practice helps in which circumstances. To help with this confusion, in this blog we go through some of the top Kubernetes practices; here is what a Kubernetes developer will take away from this blog:
    1. Developers will learn that security isn’t the afterthought for any Kubernetes app development process as DevSecOps can be used to emphasize the importance of integrating security at every phase of the process.
    2. These Kubernetes practices combine authorization controls that help developers create modern applications securely.
    3. Though many security professionals are available in the app development field, knowing how to secure software is the key factor behind good development, automation, and DevSecOps practices.
    4. These best practices can help supply chain software to address emerging security issues.

Kubernetes is one of the most widely used and popular container orchestration systems available in the market. It helps software development companies to create, maintain, and deploy an application with the latest features as this platform is the de-facto standard for the modern cloud engineer.

This is how Kubernetes master, Google staff developer advocate, and co-author of Kubernetes Up & Running (O’Reilly) Kelsey Hightower acknowledges it:

“Kubernetes does the things that the very best system administrator would do: automation, failover, centralized logging, monitoring. It takes what we’ve learned in the DevOps community and makes it the default, out of the box.”Kelsey Hightower

In a cloud-native environment, many of the more common sysadmin duties, such as server upgrades, patch installations, network configuration, and backups, are less important. You can let your staff focus on what they do best by automating these tasks with Kubernetes. The Kubernetes core already has some of these functionalities, such as auto scaling and load balancing, while other functions are added via extensions, add-ons, and third-party applications that utilize the Kubernetes API. There is a huge and constantly expanding Kubernetes ecosystem.

Though Kubernetes is a complex system to work with, there are practices you can follow to get a solid start with the app development process. These recommendations cover app governance, development, and cluster configuration.

1. Kubernetes Best Practices

Here are some of the Kubernetes best practices developers can follow:

1.1 Kubernetes Configuration Tips

Here are the tips to configure Kubernetes:

  • The very first thing to do while defining Kubernetes configurations is to specify the latest version of the stable API.
  • Configuration files should be stored in version control before being pushed to the Kubernetes cluster. This lets the development team roll back configuration changes quickly and also helps with cluster restoration and re-creation.
  • Group related objects into a single file whenever possible; this makes files easier to manage.
  • The developer must write the application configuration files by using YAML technology rather than JSON. These formats can be interchanged and used in the majority of situations, but YAML is more user-friendly.
  • Place related configuration files in a directory so that a single kubectl command can be called on the whole directory (see the example below).
  • Put object descriptions in annotations to allow better introspection.
  • Don’t specify default values unnecessarily; simple, minimal configurations are less error-prone.
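For example, the grouped files can then be applied with one command (the directory name is a placeholder):

kubectl apply -f ./manifests/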

1.2 Use the Latest K8s Version

Another Kubernetes best practice is to use the latest version. Developers often hesitate to upgrade because the new features are unfamiliar, have limited support, or may be incompatible with the current application setup.

Still, the most important thing is to update Kubernetes to the latest stable version, which brings performance improvements and security fixes. If any issues are encountered with the latest version, developers can turn to community-based support.

1.3 Use Namespaces

Using namespaces in Kubernetes is also a practice that every Kubernetes app development company must follow. Developers should use namespaces to organize the application’s objects and create logical partitions within the Kubernetes cluster for better security. Kubernetes ships with three initial namespaces: kube-public, kube-system, and default. RBAC can be used to control access to specific namespaces, limiting a group’s access and reducing the blast radius.

Besides this, LimitRange objects can be configured per namespace to specify the standard container sizes that may be deployed in that namespace, and ResourceQuotas can be used to limit the namespace’s total resource consumption (see the ResourceQuota sketch after the YAML example below).

YAML Example:
# this yaml is for creating the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-demo-namespace
  labels:
    name: my-demo-namespace
---
# this yaml is for creating a pod in the namespace created above
apiVersion: v1
kind: Pod
metadata:
  name: my-demo-app
  namespace: my-demo-namespace
  labels:
    app: my-demo-app
spec:
  containers:
    - name: my-demo-app
      image: nginx
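As mentioned above, a ResourceQuota can cap the namespace’s total consumption; a minimal sketch with illustrative limit values:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-demo-quota
  namespace: my-demo-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "10"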

1.4 Avoid Using HostPort and HostNetwork

Avoiding the use of hostPort and hostNetwork is another Kubernetes best practice. Here is what can be done: first, create a Service before the Deployments or ReplicaSets, and before any workloads that need to access it. When Kubernetes starts a container, it provides environment variables pointing to the Services that were running when the container started. For instance, if a Service called “foo” exists, all containers will get the variables below in their environment:

FOO_SERVICE_HOST=
FOO_SERVICE_PORT=
  • This does imply an ordering requirement: any Service that a Pod wants to access must be created before the Pod itself, or else the environment variables will not be populated. DNS does not have this restriction.
  • Besides this, an optional cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each one.
  • Another best practice is to avoid using hostNetwork, as well as hostPort unless it is absolutely necessary. Binding a Pod to a hostPort limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique.
  • Consider using headless Services for service discovery when kube-proxy load balancing is not needed, as sketched below.
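A minimal headless Service sketch (the names are illustrative); setting clusterIP to None makes DNS return the pod IPs directly instead of a load-balanced virtual IP:

apiVersion: v1
kind: Service
metadata:
  name: my-demo-headless
  namespace: my-demo-namespace
spec:
  clusterIP: None
  selector:
    app: my-demo-app
  ports:
    - port: 80
      targetPort: 80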

1.5 Using kubectl

Using kubectl is also a practice that can be considered by Kubernetes developers. Here, the development team can make use of the following things:

  • First, consider using kubectl apply -f <directory>. This applies the configuration from all .yml, .yaml, and .json files in <directory>.
  • Then, use kubectl create deployment and kubectl expose to quickly create single-container Deployments and Services.
  • Use label selectors for get and delete operations instead of specific object names.

1.6 Use Role-based Access Control (RBAC)

Using RBAC is another best practice that helps in developing Kubernetes applications securely. The general approach while working with Kubernetes is to assign minimal RBAC rights to service accounts and users: only permissions explicitly required for an operation should be granted. As each cluster is different, some general rules apply to all:

  • Whenever possible, avoid granting wildcard permissions to all resources. Because Kubernetes is an extensible system, rights granted over all object types of the current version will also apply to object types added in future versions.
  • Assign permissions at the namespace level where possible, using RoleBindings instead of ClusterRoleBindings to scope rights to a namespace.
  • Avoid adding users to the system:masters group, as any member of this group can bypass all RBAC rights.
  • Unless strictly required, administrators should not use cluster-admin accounts. Giving a lower-privileged account impersonation rights instead can help avoid accidental modification of cluster resources.
YAML Example:
# this yaml is for creating role named “pod-reading-role”
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reading-role
rules:
- apiGroups: [""]      # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# This yaml is for creating a role binding that allows user "demo" to read pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-read-pods
  namespace: default
subjects:
- kind: User
  name: demo    #name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reading-role   # this must match the name of the Role
  apiGroup: rbac.authorization.k8s.io

1.7 Follow GitOps Workflow

Another Kubernetes best practice is to follow a GitOps workflow. For a successful Kubernetes deployment, developers must give thought to the application’s workflow. A git-based workflow is an ideal choice as it enables automation through CI/CD (Continuous Integration / Continuous Delivery) pipelines, making the deployment process faster and more efficient. In addition, CI/CD provides an audit trail for the software deployment process.

1.8 Don’t Use “Naked” Pods

Not using naked pods is another best practice that must be considered. Here, there are a few points to look out for and they are as below:

  • Naked pods should be avoided where possible, because they will not be rescheduled if the node they run on fails.
  • A Deployment creates a ReplicaSet to ensure that the desired number of Pods is always available and specifies a strategy for replacing Pods; naked pods offer none of this and can therefore create issues.
YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
  namespace: my-demo-namespace
spec:
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      labels:
        app: my-demo-app
    spec:
      containers:
        - name: my-demo-app
          image: nginx

1.9 Configure Least-Privilege Access to Secrets

Configuring least-privilege access to secrets is also a best practice as it helps developers plan the access control mechanism like Kubernetes RBAC (Role-based Access Control). In this case, the developers must follow the below-given guidelines to access Secret objects.

  • Humans: In this case, the software development teams must restrict watch, get, or list access to Secrets. Cluster administrators are the only ones that should be allowed access.
  • Components: Here, the list or watch access must be restricted access to only the most privileged components of the system.

Basically, in Kubernetes, a user who is allowed to create a Pod that uses a Secret can see the value of that Secret. Even if the cluster’s default policies do not allow the user to read the Secret directly, the same user could gain access to it by running such a Pod. To limit the impact of Secret data exposure, consider the following recommendations:

  • Implementation of audit rules that alert the admin on some specific events.
  • Secrets that are used must be short-lived.

1.10 Use Readiness and Liveness Probes

Readiness and Liveness probes are known as the most important parts of the health checks in Kubernetes. They help the developers to check the health of the application.

A readiness probe enables the development team to make sure that requests are only directed to a pod when it is ready to serve them; if the pod is not ready, requests are directed elsewhere. The liveness probe, on the other hand, tests whether the application is still running as expected according to its health check.

YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
  namespace: my-demo-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      labels:
        app: my-demo-app
    spec:
      containers:
      - name: my-demo-app
        image: nginx:1.14.2
        readinessProbe:
          httpGet:
            path: /ready
            port: 9090
          initialDelaySeconds: 30
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 9090
          initialDelaySeconds: 30
          periodSeconds: 5
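
For completeness, the application side of this contract is just two lightweight HTTP endpoints. The following is a minimal, framework-free sketch in Java using the JDK's built-in com.sun.net.httpserver package; the /ready and /health paths and port 9090 mirror the probe configuration above, while the readiness flag and the "startup work" are only illustrative placeholders:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;

public class ProbeEndpoints {
    // Flipped to true once startup work (cache warm-up, connections, etc.) is done - illustrative.
    private static final AtomicBoolean ready = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);

        // Readiness: only report 200 once the service can actually take traffic.
        server.createContext("/ready", exchange ->
                respond(exchange, ready.get() ? 200 : 503, ready.get() ? "READY" : "NOT READY"));

        // Liveness: as long as this handler answers, the process is alive.
        server.createContext("/health", exchange -> respond(exchange, 200, "OK"));

        server.start();
        ready.set(true); // pretend startup work has completed
    }

    private static void respond(HttpExchange exchange, int status, String body) throws IOException {
        byte[] bytes = body.getBytes();
        exchange.sendResponseHeaders(status, bytes.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(bytes);
        }
    }
}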

1.11 Use a Cloud Service to Host K8s

Hosting a Kubernetes cluster on your own hardware can be a bit complex, but cloud providers offer it as a managed platform service, such as EKS (Amazon Elastic Kubernetes Service) on Amazon Web Services and AKS (Azure Kubernetes Service) on Azure. This means the application's infrastructure can be handled by the cloud provider, and tasks like adding and removing nodes from the cluster can also be taken care of by the cloud service.

1.12 Monitor the Cluster Resources

Another Kubernetes best practice is to monitor the cluster's control plane components to keep resource usage under control. As the control plane is the core of Kubernetes, these components keep the system up and running. The control plane is made up of the Kubernetes API server, etcd, the controller manager, and the scheduler, while components such as the kubelet, kube-proxy, and the cluster DNS run alongside it on the worker nodes.

1.13 Secure Network Using Kubernetes Firewall

The last item in our list of Kubernetes best practices is securing the network using a Kubernetes firewall and using network policies to restrict internal traffic. When a firewall is put in front of the Kubernetes cluster, it helps restrict the resource requests that are sent to the API server, while network policies control which Pods are allowed to communicate with each other inside the cluster.

2. Conclusion

As seen in this blog, many different best practices can be followed to design, run, and maintain a Kubernetes cluster. These practices help developers put modern applications into the world. Which practices to put into action, and which will help the application become a success, must be decided by the Kubernetes app developers, which is why the engineers involved need to be experts in Kubernetes.

FAQs

What is the main benefit of Kubernetes?

Some of the main benefits of Kubernetes are efficient use of namespaces, robust security through firewalls and RBAC, and easier monitoring of control plane components.

How do I improve Kubernetes?

To improve Kubernetes performance, the developer needs to focus on using optimized container images, defining resource limits, and more. 

What is a cluster in Kubernetes?

In Kubernetes, a cluster is a set of worker machines, called nodes, that run containerized applications, together with a control plane that manages them.

What is the biggest problem with Kubernetes?

The biggest problems with Kubernetes are its complexity and the security vulnerabilities that can arise from misconfiguration.

The post Kubernetes Best Practices to Follow appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/kubernetes-best-practices/feed/ 0
Microservices Testing Strategies: An Ultimate Guide https://www.tatvasoft.com/blog/microservices-testing-strategies/ https://www.tatvasoft.com/blog/microservices-testing-strategies/#respond Tue, 23 Jan 2024 05:41:36 +0000 https://www.tatvasoft.com/blog/?p=12295 In today's time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture and divide functional units of the application that can work as individual processes. These singular functions can easily address user traffic and still remain lightweight.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. For microservices, formulating an effective testing strategy is a challenging task. The key is a combination of testing strategies with the right tools that provide support at each layer of testing.
  2. After services are integrated, the risk of failures and the cost of correction are high, so a good testing strategy is required.
  3. Tools like WireMock, Goreplay, Hikaku, VCR, Mountebank, and many others are used for microservices testing purposes.
  4. For an effective approach, there should be a clear consensus on the test strategy, with the required amount of testing focused at the correct time using suitable tools.
  5. The microservices architecture offers scope for unit testing, integration testing, component testing, contract testing, and end-to-end testing, so the team must utilize these phases properly as per the requirements.

In today's time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture to divide the application into functional units that work as individual processes. These singular functions can easily handle user traffic while remaining lightweight, but they need to be updated frequently to scale the application. Besides this, developers also have to carry out microservices testing strategies to make sure the application performs the way it is expected to.

Let’s first understand what types of challenges developers are facing while using a microservices architecture.

1. Challenges in Microservices Testing

Microservices and monolithic architectures differ in many ways, and microservices come with some challenges that every developer should know about before testing them.

Challenge Description
Complexity
  • Though a single service is quite simple, the microservices system as a whole is complex, which means developers need to be careful in choosing and configuring the databases and services in the system.
  • Even testing and deploying each service can be challenging because of the distributed nature of the system.
Data Integrity
  • Microservices typically use distributed databases, which is problematic for data integrity: business applications require updates over time, and database upgrades become compulsory.
  • When there is no guaranteed data consistency, testing becomes more difficult.
Distributed Networks
  • Microservices can be deployed on various servers in different geographical locations, which adds latency and exposes the application to network disruptions. When tests rely on the network, they will fail if there is any fault in the connection, and this will interrupt the CI/CD pipeline.
Test Area
  • Every microservice usually exposes many API endpoints, which means the testable surface grows and developers have more areas to cover, which is time-consuming.
Multiple frameworks used for development
  • Though developers choose the best-suited microservices frameworks and programming languages for each microservice, when the system is big it becomes difficult to find a single test framework that works for all the components.
Autonomous
  • The app development team can deploy microservices at any time; the only thing they need to take care of is that API compatibility doesn't break.
Development
  • Microservices are independently deployable, so extra checks are required to ensure they function well. Boundaries also need to be set correctly for each microservice to run perfectly.

2. Microservices Testing Strategy: For Individual Testing Phases

Now let us understand the testing pyramid of microservices architecture. This testing pyramid is developed for automated microservices testing. It includes five components. 

Microservices Testing Strategies

The main purpose of using these five stages in microservices testing is: 

Testing Type Key Purpose
Unit Testing
  • To test the various parts (classes, methods, etc.) of the microservice. 
Contract Testing
  • To test API compatibility. 
Integration Testing
  • To test the communication between microservices, third-party services, and databases. 
Component Testing
  • To test the subsystem’s behavior. 
End-to-End Testing
  • To test the entire system. 

2.1 Unit Testing

The very first stage of testing is unit testing. It is mainly used to verify a function's correctness against its specification. It checks a single class or a set of closely coupled classes in the system. A unit test runs either with the actual objects that interact with the unit or with test doubles or mocks.

Basically, in unit tests even the smallest pieces of the software are tested to check whether they behave as expected. These tests run at the class level. A further distinction in unit testing is whether the test is performed on an isolated unit or not. The tests in this method are written by developers with regular coding tools; the only difference lies in the two types shown below.

Solitary Unit Testing: 

  • Solitary unit tests ensure that the methods of a class are tested.
  • It mainly focuses on keeping the test result deterministic.
  • In this type of unit testing, collaborations and interactions between an object of the application and its dependencies are also checked.
  • For external dependencies, mocking or stubbing to isolate the code is used.
Solitary Unit Testing

Sociable Unit Testing: 

  • These tests are allowed to call other services. 
  • These tests are not always deterministic, but when they pass they give good confidence that the services work together.
Sociable Unit Testing

As we saw here, unit tests used alone do not guarantee the system's behavior: unit testing covers the core of each module but not the modules working in collaboration. Therefore, to keep unit tests isolated and reliable, developers make use of test doubles and ensure that each module works correctly on its own.
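
As a concrete illustration, here is a minimal sketch of a solitary unit test written with JUnit 5 and Mockito. The OrderService and InventoryClient names are hypothetical and exist only to show the pattern: the collaborator is replaced with a mock so the test stays deterministic and isolated.

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical collaborator that would normally call another microservice.
    interface InventoryClient {
        boolean isInStock(String itemId);
    }

    // Hypothetical unit under test.
    static class OrderService {
        private final InventoryClient inventory;
        OrderService(InventoryClient inventory) { this.inventory = inventory; }
        boolean placeOrder(String itemId) { return inventory.isInStock(itemId); }
    }

    @Test
    void placesOrderWhenItemIsInStock() {
        InventoryClient inventory = mock(InventoryClient.class);
        when(inventory.isInStock("item-42")).thenReturn(true);   // stub the dependency

        OrderService service = new OrderService(inventory);

        assertTrue(service.placeOrder("item-42"));                // behavior of the unit
        verify(inventory).isInStock("item-42");                   // interaction with the test double
    }
}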

2.2 Integration Testing

The second stage of microservices testing is Integration tests. This type of testing is used when the developer needs to check the communication between two or more services. Integration tests are specially designed to check error paths and the basic success of the services over a network boundary. 

In any software solution, there are different components that interact with one another as they may functionally depend on each other. Here, the integration test will be used to verify the communication paths between those components and find out any interface defects. All the test modules are integrated with each other and they are tested as a subsystem to check the communication paths in this testing method.

There can be three types of communications that happen in the microservices architecture: 

  1. Between two different microservices
  2. Between Microservice and third-party application
  3. Between Microservice and Database
Integration Testing

The aim of integration testing is to check the modules and verify that their interaction with external components is successful and safe. While carrying out such tests, it is sometimes difficult to trigger an external component's abnormal behavior, such as a slow response or a timeout. In such cases, developers write special tests to make sure the service responds to those failures as expected.

Integration tests against a data store, for example, aim to provide assurance that the schema the code expects matches the data that is actually stored.
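
To make the idea concrete, here is a small sketch of an integration test in Java that exercises an HTTP boundary against a WireMock-simulated dependency, including the slow-response case mentioned above. The endpoint, port, and payload are illustrative assumptions rather than part of any real service:

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.github.tomakehurst.wiremock.WireMockServer;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class InventoryIntegrationTest {

    private WireMockServer wireMock;

    @BeforeEach
    void start() {
        wireMock = new WireMockServer(8089);   // stands in for the real inventory service
        wireMock.start();
    }

    @AfterEach
    void stop() {
        wireMock.stop();
    }

    @Test
    void readsStockLevelOverTheNetworkBoundary() throws Exception {
        wireMock.stubFor(get(urlEqualTo("/items/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"inStock\":true}")
                        .withFixedDelay(200)));   // simulate a slow dependency

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8089/items/42")).build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertEquals("{\"inStock\":true}", response.body());
    }
}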

2.3 Component Testing

Component tests are popular when it comes to checking the full functionality of a single microservice. When this type of testing is carried out, any calls the code makes to external services are mocked. 

Basically, a component is a coherent, well-encapsulated, and independently replaceable part of any software solution. In a microservices architecture, the component under test is typically a whole service. This means developers perform component tests to exercise a service's complete behavior in isolation.

Besides this, component tests are more thorough than integration tests, as they can travel all the paths through the component; for instance, we can see how the component responds to malformed requests from the network. This process can be divided into two parts.

In-process Component Testing

  • Test runners exist in the same process or thread as microservices.
  • Microservices can be started in an offline test mode.
  • This testing works only with single-component microservices.
In-process Component Testing

Out-of-Process Component Testing

  • Appropriate for any size of components.
  • Components here are deployed unaltered in a test environment.
  • All dependencies in microservices are stubbed or mocked out.
Out-of-Process Component Test

2.4 Contract Testing

This type of testing is carried out when two microservices communicate via an interface and need a contract that specifies all the possible transactions and their data structures. Here, even the possible side effects of the inputs and outputs are analyzed to make sure there is no security breach in the future. Contract tests can be run by the consumer, the producer, or both; a sketch of a consumer-side contract test follows this section.

Contract Testing

Consumer-side 

  • The downstream (consumer) team writes and executes the tests.
  • The tests connect the consumer microservice to a mocked version of the producer service.
  • The consumer is checked to see whether it can consume the producer's API as expected. 

Producer-side 

  • Producer-side contract tests run in the upstream (producer) service.
  • The clients' API requests are replayed against the producer and checked against the contract details.
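
As a rough sketch, the consumer-driven variant can look like the following test, written against Pact's JUnit 5 consumer support (annotation and package names follow recent Pact JVM releases and may differ between versions); the provider, consumer, and endpoint names are illustrative:

import static org.junit.jupiter.api.Assertions.assertEquals;

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "inventory-service")
class InventoryContractTest {

    // The consumer states what it expects from the provider; Pact records this as a contract.
    @Pact(provider = "inventory-service", consumer = "order-service")
    RequestResponsePact itemAvailable(PactDslWithProvider builder) {
        return builder
                .given("item 42 exists")
                .uponReceiving("a request for item 42")
                    .path("/items/42")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(new PactDslJsonBody().booleanType("inStock", true))
                .toPact();
    }

    // The test runs against a Pact mock server that honours the contract above.
    @Test
    void consumesTheItemEndpoint(MockServer mockServer) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(mockServer.getUrl() + "/items/42")).build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}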

2.5 End-to-End Testing

The last type of testing on our list is end-to-end testing. This approach is used for testing the microservices application completely: it checks whether the entire system meets the client's requirements and helps achieve the business goal. When developers carry out this test, they are not concerned with the internal architecture of the application; they simply verify that the system delivers the intended business outcome. In this case, the deployed software is treated as a black box while being tested.

End-to-End Testing

Besides this, as end-to-end testing is more about business logic, it also checks the application's proxies, firewall, and load balancer, because these are generally exercised by public traffic through APIs and GUIs. In addition, end-to-end testing helps developers check all the interactions and gaps present in a microservice-based application. This means that testing a microservices application completely is possible with end-to-end testing.

Now, let’s look at various scenarios and how these phases can apply. 

3. Microservices Testing Strategies for Various Scenarios

Here we will go through various microservices testing strategies for different scenarios to understand the process in a better way. 

Scenario 1: Testing Internal Microservices Present in the Application

This is the most commonly used strategy to test microservices. Let’s understand this with an example. 

For instance, there is a social media application that has two services like

  1. Selecting Photos and Videos
  2. Posting Photos and Videos 

Both services are interconnected with each other as there is close interaction between them in order to complete an event. 

Testing Internal Microservices Present in the Application
Testing Scopes  Description
Unit Testing
  • For each individual microservice, there is a scope of unit testing.
  • We can use frameworks like JUnit or NUnit for testing purposes.
  • First, one needs to test the functional logic.
  • Apart from that, internal data changes need to be verified.
  • For Example: If Selecting Photos and Videos Service returns a selectionID then the same needs to be verified within the service.
Integration Testing
  • Both the microservices are internally connected in our case.
  • In order to complete an event, both need to be executed in a perfect manner.
  • So, there is a scope for Integration testing.
Contract Testing
  • It is recommended to use testing tools that enable consumer-driven contract testing, such as Pacto, Pact, and Janus. In this testing, the data passed between services needs to be validated and verified, for which tools like SoapUI can be used.
End-to-End Testing
  • End to End Testing, commonly referred to as E2E testing, ensures that the dependency between microservices is tested at least in one flow.
  • For example, an event like making a post on the app should trigger both the services i.e. Selecting Photos and Videos and Posting Photos and Videos.

Scenario 2: Testing Internal Microservices and Third-party Services

Let’s look at the scenario where third-party APIs are integrated with Microservices. 

For example, in a registration service, direct registration through a Gmail account is integrated. Here, registration is modeled as a microservice that interacts with the Gmail API exposed for authenticating the user. 

Testing Internal Microservices and Third-party Services
Testing Scopes Descriptions 
Unit Testing
  • The developers can perform unit tests to check the changes that happened internally.
  • Frameworks like xUnit are used to check the functional logic of the application after the change.
  • The TDD approach can also be considered whenever possible.
Contract Testing
  • The expectations of the consumer microservice are checked, which decouples it from the external API.
  • Test doubles can be created here using Mountebank or Mockito to define Gmail API.
Integration Testing
  • Integration tests are carried out if the third party offers a sandbox API. This type of testing checks whether data is passed correctly from one service to another and whether the services are integrated as required.
End-to-End Testing
  • With end-to-end testing, the development team ensures that there are no failures in the workflow of the system.
  • One checks the dependencies between the microservices and ensures that all the functions of the application are working correctly.

Scenario 3: Testing Microservices that are Available in Public Domain

Let’s consider an e-commerce application example where users can check the items’ availability by calling a web API.

Testing Microservices that are Available in Public Domain
Testing Scopes Descriptions 
Unit Testing
  • Here, the development team can carry out unit testing to check all functions of the application that the services have defined.
  • This testing helps to check that all the functions of the services work perfectly fine as per the user’s requirements.
  • It also ensures that the data persistence is taken care of.
Contract Testing
  • This testing is essential in such cases.
  • It makes sure that the clients are aware of the contracts and have agreed upon them before availing of the facilities provided by the application.
  • Here, the owner’s contracts are validated, and later consumer-driven contracts are tested.
End-to-end Testing
  • Here we can test the workflow using End-to-end Testing. It enables software testing teams to make sure that the developed application offers facilities as per the requirement. End-to-end testing also ensures that the integration of services with external dependencies is secured.

4. Microservices Testing Tools

Here are some of the most popular Microservices testing tools available in the market.

  • WireMock: It is a very popular simulator used by developers for integration tests. Unlike general-purpose mocking tools, WireMock works by running an actual HTTP server that the code under test can connect to as if it were a real web service.
  • Goreplay: It is an open-source tool for network monitoring. It records live application traffic, which is why developers use it to capture and replay live HTTP traffic.
  • Mountebank: It is a widely used open-source tool that enables software development companies to run cross-platform test doubles over the wire. With the help of Mountebank, developers can simply replace the actual dependencies of the application and test it in the traditional manner.
  • Hikaku: It is a popular test library for microservices architectures. It helps developers ensure that the REST API implementation in the application actually meets its specification. 
  • VCR: Developers use the VCR tool to record their test suite's HTTP interactions. The recordings can be replayed in future test runs to get accurate, fast, and reliable results.

5. Conclusion

Microservices testing plays a very important role in modern software development, enabling teams to deliver applications with greater flexibility, agility, and speed. There are some essential strategies that development teams need to carry out when testing microservices applications in order to deploy a secure application, and some of those microservices testing strategies are discussed in this blog. These automated tests enable developers to meet customer requirements by delivering a top-notch application.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/microservices-testing-strategies/feed/ 0
AWS Cost Optimization Best Practices https://www.tatvasoft.com/blog/aws-cost-optimization/ https://www.tatvasoft.com/blog/aws-cost-optimization/#respond Wed, 20 Dec 2023 09:39:17 +0000 https://www.tatvasoft.com/blog/?p=12373 In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.

The post AWS Cost Optimization Best Practices appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. AWS Cloud is a widely used platform and offers more than 200 services. These cloud resources are dynamic in nature and their cost is difficult to manage.
  2. There are various tools available like AWS Billing Console, AWS Trusted Advisor, Amazon CloudWatch, Amazon S3 Analytics, AWS Cost Explorer, etc. that can help in cost optimization.
  3. AWS also offers flexible purchase options for each workload. So that one can improve resource utilization.
  4. With the help of Instance Scheduler, you can stop paying for the resources during non operating hours.
  5. Modernize your cloud architecture by scaling microservices architectures with serverless products such as AWS Lambda.

In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.

In such cases, if one wants to stay ahead of the competition and offer services to deliver efficient business values at lower rates, cost optimization is required. Here, we will understand what cost optimization is, why it is essential in AWS and what are the best practices for AWS cost optimization that organizations need to consider.

1. Why is AWS More Expensive?

The AWS cloud is a widely used platform among software development companies and offers more than 200 services to their clients. These cloud resources are dynamic in nature, and because of this their cost is unpredictable and difficult to manage. Besides this, here are some of the main reasons that make AWS a more expensive platform for an organization to use. 

  • When a business organization does not use its Elastic Block Store (EBS) volumes, load balancers, snapshots, or other resources, it still has to pay for them, as these resources keep incurring costs whether they are used or not.
  • Some businesses pay for compute instance services like Amazon EC2 but do not utilize them properly.
  • Reserved or Spot Instances, which generally offer discounts of 50-90%, are not used where they would be appropriate. 
  • Sometimes the auto-scaling feature is not implemented properly or is not optimal for the business. For instance, scaling up to meet rising demand can overshoot and leave many redundant resources running, which also costs a lot.
  • The Savings Plans that come with AWS are not used properly, so the total spend on AWS is not minimized. 

2. AWS Cost Optimization Best Practices

Here are some of the best practices of AWS cost optimization that can be followed by all the organizations opting for AWS.

2.1 Perform Cost Modeling

One of the top practices that must be followed for AWS cost optimization is performing cost modeling for your workload. Each component of the workload must be clearly understood, and cost modeling must then be performed on it to balance the resources and find the correct size for each resource in the workload for the required level of performance.

Besides this, a proper understanding of cost considerations can enable the companies to know more about their organizational business case and make decisions after evaluating the value realization outcomes.

There are multiple AWS services one can use with custom logs as data sources for efficient operations for other services and workload components. For example:

  1. AWS Trusted Advisor 
  2. Amazon CloudWatch 

This is how AWS Trusted Advisor works: 

Now, let’s look at how Amazon CloudWatch Works:

How Amazon CloudWatch Works
Source: Amazon

These are some of the recommended practices one can follow (a small cleanup sketch in Java follows the list):

  1. The total number of metrics associated with CloudWatch alarms can incur cost, so remove unnecessary alarms. 
  2. Delete the dashboards that are not necessary; ideally, keep three dashboards or fewer.
  3. Also check your Contributor Insights reports and remove any non-mandatory rules.
  4. Evaluating logging levels and eliminating unnecessary logs can also help reduce ingestion costs. 
  5. Keep custom metrics off when they are not needed; this also reduces unnecessary charges. 
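
As a small illustration of the first two practices, the sketch below uses the AWS SDK for Java v2 to review existing alarms and then delete specifically named alarms and dashboards that are no longer needed. The alarm and dashboard names are placeholders, so treat this as a starting point rather than a drop-in script:

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.DeleteAlarmsRequest;
import software.amazon.awssdk.services.cloudwatch.model.DeleteDashboardsRequest;
import software.amazon.awssdk.services.cloudwatch.model.DescribeAlarmsRequest;

public class CloudWatchCleanup {
    public static void main(String[] args) {
        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {

            // Review existing alarms before deciding what is unnecessary.
            cloudWatch.describeAlarms(DescribeAlarmsRequest.builder().build())
                    .metricAlarms()
                    .forEach(alarm -> System.out.println(
                            alarm.alarmName() + " -> " + alarm.stateValueAsString()));

            // Placeholder names: delete alarms and dashboards identified as unnecessary.
            cloudWatch.deleteAlarms(DeleteAlarmsRequest.builder()
                    .alarmNames("old-dev-cpu-alarm")
                    .build());

            cloudWatch.deleteDashboards(DeleteDashboardsRequest.builder()
                    .dashboardNames("unused-team-dashboard")
                    .build());
        }
    }
}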

2.2 Monitor Billing Console Dashboard

AWS billing dashboard enables organizations to check the status of their month-to-date AWS expenditure, pinpoint the services that cost the highest, and understand the level of cost for the business. Users can get a precise idea about the cost and usage easily with the AWS billing console dashboard. The Dashboard page consists of sections like –

  • AWS Summary: Here one can find an overview of the AWS costs across all the accounts, services, and AWS regions.
  • Cost Trend by Top Five Services: This section shows the cost trend of the top five services over the most recent billing periods.
  • Highest Cost and Usage Details: Here you can find out the details about top services, AWS region, and accounts that cost the most and are used more. 
  • Account Cost Trend: This section shows the trend of account cost with the most recent closed billing periods. 

In the billing console, one of the most commonly viewed pages is the billing page where the user can view month-to-date costs and a detailed services breakdown list that are most used in specific regions. From this page, the user can also get details about the history of costs and usage including AWS invoices.

In addition to this, organizations can also access other payment-related information and also configure the billing preferences. So, based on the dashboard statistics, one can easily monitor and take actions for the various services to optimize the cost. 

2.3 Create Schedules to Turn Off Unused Instances

Another AWS cost optimization best practice is to create schedules that turn off instances that are not used on a regular basis. For this, here are some of the things that can be taken into consideration.

  • At the end of every working day or weekend or during vacations, unused AWS instances must be shut down. 
  • The usage metrics of the instances must be evaluated to determine when they are frequently used, which enables the creation of an accurate schedule so that instances are always stopped when not in use.
  • Optimizing the non-production instances is very essential and when it is done, one should prepare the on and off hours of the system in advance.
  • Companies need to decide if they are paying for EBS quantities and other relevant elements while not using the instances and find a solution for it. 

Now let's analyze the different scenarios for AWS CloudWatch alarms; a code sketch for the stop-action alarm follows the table.

Scenario Description
Add Stop Actions to AWS CloudWatch Alarms
  • We can create an alarm to stop the EC2 instance when a threshold is met.
  • Example:
    • Suppose you forget to shut down a few development or test instances.
    • You can create an alarm here that triggers when CPU utilization percentage has been lower than 10 percent for 24 hours indicating that instances are no longer in use.
Add Terminate Actions to AWS CloudWatch Alarms 
  • We can create an alarm to terminate the EC2 instance when a certain threshold is met.
  • Example:
    • Suppose any instance has completed its work and you don’t require it again. In this case, the alarm will terminate the instance.
    • If you want to use that instance later, then you should create an alarm to stop the instance instead of terminating it.
Add Reboot Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically reboots the instance.
  • In case of instance health check failure, this alarm is recommended.
Add Recover Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically recovers the instance if it becomes nonfunctional due to hardware failure or any other cause.
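
The first scenario in the table can be scripted roughly as follows with the AWS SDK for Java v2: an alarm that stops an instance once its average CPU utilization stays below 10 percent for 24 hours. The instance ID and the region in the built-in stop-action ARN are placeholders you would replace with your own values:

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class StopIdleInstanceAlarm {
    public static void main(String[] args) {
        String instanceId = "i-0123456789abcdef0";   // placeholder instance ID

        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {
            cloudWatch.putMetricAlarm(PutMetricAlarmRequest.builder()
                    .alarmName("stop-idle-" + instanceId)
                    .namespace("AWS/EC2")
                    .metricName("CPUUtilization")
                    .dimensions(Dimension.builder()
                            .name("InstanceId").value(instanceId).build())
                    .statistic(Statistic.AVERAGE)
                    .period(3600)                      // one-hour evaluation periods...
                    .evaluationPeriods(24)             // ...checked over 24 hours
                    .threshold(10.0)
                    .comparisonOperator(ComparisonOperator.LESS_THAN_THRESHOLD)
                    // Built-in EC2 stop action; replace the region with your own.
                    .alarmActions("arn:aws:automate:us-east-1:ec2:stop")
                    .build());
        }
    }
}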

2.4 Supply Resources Dynamically

When an organization moves to the cloud, it pays for what it uses, so it has to supply resources that match the workload demand at the time they are required. This helps reduce the cost that goes into overprovisioning. To achieve it, an organization can modify demand by using a buffer, throttle, or queue to smooth the demand of its processes on AWS and serve it with fewer resources. 

This favors just-in-time supply, balanced against the need for high availability, tolerance of resource failures, and provisioning time. Whether demand is fixed or variable, the additional planning needed for automation and metrics is minimal. In AWS, supplying resources dynamically in this way is considered a best practice for reducing cost; the table below summarizes the main options, and a scheduled-scaling sketch follows it.

Practice Implementation Steps
Schedule Scaling Configuration
  • When the changes in demand are predictable, time-based scaling can help in offering a correct number of resources.
  • If the creation and configuration of resources are not fast to respond to the demands generated, schedule scaling can be used.
  • Workload analysis can be configured using AWS Auto Scaling and even predictive scaling can be used to configure time-based scheduling.
Predictive Scaling Configuration
  • With predictive scaling, one can increase instances of Amazon EC2 in the Autoscaling group at an early stage.
  • Predictive analysis helps applications start faster during traffic spikes.
Configuration of Dynamic Automatic Scaling
  • Auto scaling can help in configuring the scaling as per the active workload in the system
  • Auto-scaling launches the correct resources level after the analysis and then verifies the scale of the workload in the required timeframe.
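
As a small sketch of schedule-based scaling with the AWS SDK for Java v2, the example below scales a hypothetical non-production Auto Scaling group down to zero every weekday evening and back up every weekday morning. The group name, sizes, and cron expressions are placeholders (the recurrence is evaluated in UTC unless a time zone is configured for the action):

import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.PutScheduledUpdateGroupActionRequest;

public class OfficeHoursScaling {
    public static void main(String[] args) {
        String groupName = "demo-app-asg";   // placeholder Auto Scaling group name

        try (AutoScalingClient autoScaling = AutoScalingClient.create()) {
            // Scale the non-production group to zero at 20:00 on weekdays.
            autoScaling.putScheduledUpdateGroupAction(
                    PutScheduledUpdateGroupActionRequest.builder()
                            .autoScalingGroupName(groupName)
                            .scheduledActionName("scale-down-evenings")
                            .recurrence("0 20 * * 1-5")
                            .minSize(0).maxSize(0).desiredCapacity(0)
                            .build());

            // Bring it back at 07:00 on weekdays.
            autoScaling.putScheduledUpdateGroupAction(
                    PutScheduledUpdateGroupActionRequest.builder()
                            .autoScalingGroupName(groupName)
                            .scheduledActionName("scale-up-mornings")
                            .recurrence("0 7 * * 1-5")
                            .minSize(1).maxSize(4).desiredCapacity(2)
                            .build());
        }
    }
}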

2.5 Optimizing Your Cost with Rightsizing Recommendations

One of the best practices of cost optimization in AWS is rightsizing recommendations. It is a feature in Cost Explorer that enables companies to identify cost-saving opportunities. This can be carried out by removing or downsizing instances in Amazon EC2 (Elastic Compute Cloud). 

Rightsizing recommendations is a process that will help in analyzing the Amazon EC2 resources of your AWS and check its usage to find opportunities to lower the spending. One can check the underutilized Amazon EC2 instances in the member’s account in a single view to identify the amount you can save and after that can take any action. 

2.6 Utilize EC2 Spot Instances

Utilizing Amazon EC2 Spot Instances is known as one of the best practices of AWS cost optimization that every business organization must follow. In the AWS cloud, this instance enables companies to take advantage of unused EC2 capacity.

Spot Instances are generally available at up to a 90% discount in the cloud market in comparison to other On-Demand instances. These types of instances can be used for various stateless, flexible, or fault-tolerant applications like CI/CD, big data, web servers, containerized workloads, high-performance computing (HPC), and more.

How Spot Instances Work Amazon EC2
Source: Amazon

Besides this, as Spot Instances are closely integrated with AWS services like EMR, Auto Scaling, AWS Batch, ECS, Data Pipeline, and CloudFormation, a company has to select how it wants to launch and maintain the apps running on Spot Instances. For this, the aspects below need to be taken into consideration; a small launch sketch follows the list.

  • Massive scale: Spot Instances offer major advantages for operating at massive scale on AWS, enabling hyperscale workloads to run at significant cost savings or to be accelerated by running many tasks in parallel. 
  • Low, predictable prices: Spot Instances can be purchased at discounts of up to 90% compared with On-Demand Instances. This lets a company provision capacity across Spot, Reserved, and On-Demand Instances using EC2 Auto Scaling in order to optimize workload cost. 
  • Easy to use: When it comes to Spot Instances, launching, scaling, and managing them by utilizing the AWS services like ECS and EC2 Auto Scaling is easy. 
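
For illustration, here is a minimal sketch of requesting Spot capacity through a single RunInstances call with the AWS SDK for Java v2. The AMI ID is a placeholder, and production workloads would more commonly request Spot capacity through Auto Scaling groups or EC2 Fleet:

import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.InstanceMarketOptionsRequest;
import software.amazon.awssdk.services.ec2.model.InstanceType;
import software.amazon.awssdk.services.ec2.model.MarketType;
import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;

public class SpotLaunchExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            RunInstancesResponse response = ec2.runInstances(RunInstancesRequest.builder()
                    .imageId("ami-0123456789abcdef0")          // placeholder AMI ID
                    .instanceType(InstanceType.T3_MICRO)
                    .minCount(1)
                    .maxCount(1)
                    // Request Spot capacity instead of On-Demand.
                    .instanceMarketOptions(InstanceMarketOptionsRequest.builder()
                            .marketType(MarketType.SPOT)
                            .build())
                    .build());

            response.instances().forEach(instance ->
                    System.out.println("Launched Spot instance: " + instance.instanceId()));
        }
    }
}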

2.7 Optimizing EC2 Auto Scaling Groups (ASG) Configuration

Another best practice of AWS cost optimization is to configure EC2 auto-scaling groups. Basically, ASG is known as a collection of Amazon EC2 instances and is treated as a group of logical approaches for automatic scaling and management of tasks. ASGs have the ability to take advantage of Amazon EC2 Auto Scaling features like custom scaling and health check policies as per the metrics of any application.

Besides this, it also enables one to dynamically add or remove EC2 instances from predetermined rules that are applied to the loads. ASG also enables the scaling of EC2 fleets as per the requirement to conserve the cost of the processes. In addition to this, you can also view all the scaling activities by either Auto Scaling Console or describe-scaling-activity CLI command. In order to optimize the scaling policies to reduce the cost of scaling the processes up and down, here are some ways.

  • When scaling up, add instances less aggressively, monitoring the application to see whether anything is affected.
  • When scaling down, reduce instances gradually so that just enough capacity remains to maintain the current application load.

This is how AWS auto scaling works:

AWS Auto Scaling Works
Source: Amazon

2.8 Compute Savings Plans

Compute Savings Plans are very beneficial when it comes to cost optimization in AWS. They offer the most flexibility to businesses using AWS and can help reduce costs by up to 66%. Compute Savings Plans are automatically applied to EC2 instance usage regardless of the size, OS, family, or region of the instances. For example, one can change instances from C4 to M5 under a Compute Savings Plan, or move a workload from EC2 to Lambda or Fargate. 

This is a snapshot of how AWS Savings Plans rates are computed:

AWS Saving Plans Rates
Source: Amazon

2.9 Delete Idle Load Balancers

One of the best practices of AWS cost optimization is to delete idle load balancers. To do that, first check the Elastic Load Balancing configuration to see which load balancers are not being used. Every load balancer running in the system incurs cost, and one that has no backend instances or network traffic is not in use and is simply costing the company money. This is why the first step is to identify load balancers that are not in use, for which you can use AWS Trusted Advisor. This tool identifies load balancers with a low number of requests; after identifying balancers with fewer than 100 requests in a week, you can remove them to reduce cost. A scripted check is sketched below.
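
Besides Trusted Advisor, one way to script such a check is sketched below with the AWS SDK for Java v2: it flags load balancers whose target groups have no registered targets. This is only a starting point; balancers with a low request count still need the request-metric check described above:

import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.DescribeTargetGroupsRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.DescribeTargetHealthRequest;

public class IdleLoadBalancerReport {
    public static void main(String[] args) {
        try (ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create()) {
            elb.describeLoadBalancers().loadBalancers().forEach(lb -> {
                // Count registered targets across all target groups of this load balancer.
                long targets = elb.describeTargetGroups(DescribeTargetGroupsRequest.builder()
                                .loadBalancerArn(lb.loadBalancerArn())
                                .build())
                        .targetGroups().stream()
                        .mapToLong(tg -> elb.describeTargetHealth(
                                        DescribeTargetHealthRequest.builder()
                                                .targetGroupArn(tg.targetGroupArn())
                                                .build())
                                .targetHealthDescriptions().size())
                        .sum();

                if (targets == 0) {
                    System.out.println("Candidate for deletion (no targets): " + lb.loadBalancerName());
                }
            });
        }
    }
}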

2.10 Identify and Delete Orphaned EBS Snapshots

Another best practice for AWS cost optimization is to identify and remove orphaned EBS snapshots. To understand this and learn how to delete such snapshots, let's go through the points below and see how the AWS CLI lets businesses find snapshots that can be deleted.

  • The very first thing to do is use the describe-snapshots command. This helps developers get a list of the snapshots available in the account, including private snapshots and the public snapshots owned by other Amazon Web Services accounts for which volume permissions have been granted. To filter down to snapshots created before a certain date, add a JMESPath expression as shown in the command below.
    aws ec2 describe-snapshots 
    --query "Snapshots[?(StartTime<=`2022-06-01`)].[SnapshotId]" --output text 
    
  • Now it’s time to find old snapshots. For this, one can add a filter to the command while using tags. In the below example, we have a tag named “Team” which helps in getting back snapshots that are owned by the “Infra” team. 
    aws ec2 describe-snapshots --filter Name=tag:Team,Values=Infra 
    --query "Snapshots[?(StartTime<=`2020-03-31`)].[SnapshotId]" --output text 
    
  • After this, as you get the list of snapshots associated with a specific tag mentioned above, you can delete them by executing the delete-snapshot command. 
    aws ec2 delete-snapshot --snapshot-id <snapshot-id>
    

Snapshots are generally incremental, which means that when you delete a snapshot containing data that is still referenced by another snapshot, that data is not removed but is retained in the remaining snapshot. So deleting a single snapshot may not reduce storage by much, but data blocks that are no longer referenced by any snapshot are removed and no longer incur charges.

2.11 Handling AWS Chargebacks for Enterprise Customers

The last practice on our list to optimize AWS cost is to handle chargebacks for enterprise customers. The reason is that as AWS product portfolios and features grow, enterprise customers keep migrating existing workloads to new products on AWS, and keeping the cloud charges low becomes difficult. The complexity grows further when resources and services are not tagged correctly. To help normalize these processes and reduce costs after adopting the latest AWS updates, auto-billing and chargebacks need to be implemented transparently. For this, the following steps must be taken into consideration. 

  • First of all, a proper understanding of blended and unblended costs in consolidated billing files (Cost & Usage Report and Detailed Billing Report) is important. 
  • Then the AWS account Vending Machine must be used to create AWS accounts, keeping the account details and reservation-related data in separate database tables. 
  • After that, to help the admin to add invoice details, a web page hosted on AWS Lambda or a web server is used. 
  • Then to begin the transformation process of the billing, the trigger is added to the S3 bucket to push messages into Amazon Simple Queue Services. After this, your billing transformation will run on Amazon EC2 instances.

3. AWS Tools for Cost Optimization

Now, after going through all the different practices that can be taken into consideration for AWS cost optimization, let us have a look at different tools that are used to help companies track, report, and analyze costs by offering several AWS reports.

  • Amazon S3 Analytics: It enables software development companies to automatically carry out analysis and visualization of Amazon S3 storage patterns which can eventually help in deciding whether there is a need to shift data to another storage class or not. 
  • AWS Cost Explorer: This tool enables you to check the patterns in AWS and have a look at the current spending, project future costs, observe Reserved Instance utilization & Reserved Instance coverage, and more. 
  • AWS Budgets: It is a tool that allows companies to set custom budgets that trigger alerts when costs exceed the pre-decided budget. 
  • AWS Trusted Advisor: It offers real-time identification of business processes and areas that can be optimized. 
  • AWS CloudTrail: With this tool, users can log activity in their AWS infrastructure, continuously monitor it, and retain a history of the actions performed by the account, making it easier to act on findings that can help reduce cost. 
  • Amazon CloudWatch: It enables the companies to gather the metrics and track them, set alarms, monitor log files, and automatically react to changes that are made in AWS resources. 

4. Conclusion

As seen in this blog, there are many different types of AWS cost optimization best practices that can be followed by organizations that are working with the AWS platform to create modern and scalable applications for transforming the firm. Organizations following these practices can achieve the desired results with AWS without any hassle and can also stay ahead in this competitive world of tech providers.

The post AWS Cost Optimization Best Practices appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/aws-cost-optimization/feed/ 0
A Complete Guide to React Micro Frontend https://www.tatvasoft.com/blog/react-micro-frontend/ https://www.tatvasoft.com/blog/react-micro-frontend/#respond Tue, 05 Dec 2023 08:15:33 +0000 https://www.tatvasoft.com/blog/?p=12274 It is a difficult and challenging task for developers to manage the entire codebase of the large scale application. Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies.

The post A Complete Guide to React Micro Frontend appeared first on TatvaSoft Blog.

]]>

Key Takeaways

  1. When developers from various teams contribute to a single frontend monolith on top of a microservices architecture, it becomes difficult to maintain the large-scale application.
  2. To manage the large-scale or complex application, breaking down the frontend into smaller and independently manageable parts is preferable.
  3. React is a fantastic library! One can create robust Micro-Frontends using React and tools like Vite.
  4. Micro Frontend with React provides benefits like higher scalability, rapid deployment, migration, upgradation, automation, etc.

It is a difficult and challenging task for developers to manage the entire codebase of the large scale application. Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies

Micro frontends involve breaking down the frontend side of the large application into small manageable parts. The importance of this design cannot be overstated, as it has the potential to greatly enhance the efficiency and productivity of engineers engaged in frontend code. 

Through this article, we will look at micro frontend architecture using React and discuss its advantages, disadvantages, and implementation steps. 

1. What are Micro Frontends?

The term “micro frontend” refers to a methodology and an application development approach that ensures that the front end of the application is broken down into smaller, more manageable parts which are often developed, tested, and deployed separately from one another. This concept is similar to how the backend is broken down into smaller components in the process of microservices.

Read More on Microservices Best Practices

Each micro frontend consists of code for a subset (or “feature”) of the whole website. These components are managed by several groups, each of which focuses on a certain aspect of the business or a particular objective.

Being a widely used frontend technology, React is a good option for building a micro frontend architecture. Along with the React, we can use vite.js tool for the smooth development process of micro frontend apps. 

What are Micro frontends

1.1 Benefits of Micro Frontends

Here are the key benefits of the Micro Frontend architecture: 

Key Benefit Description
Gradual Upgrades
  • It might be a time-consuming and challenging task to add new functionality to a massive, outdated, monolithic front-end application.
  • By dividing the entire application into smaller components, your team can swiftly update and release new features via micro frontends.
  • Using multiple frameworks, many portions of the program may be focused on and new additions can be deployed independently instead of treating the frontend architecture as a single application.
  • By this way, teams can improve overall dependencies management, UX, load time, design, and more.
Simple Codebases
  • Many times, dealing with a large and complicated code base becomes irritating for the developers.
  • Micro Frontend architecture separates your code into simpler, more manageable parts, and gives you the visibility and clarity you need to write better code.
Independent Deployment
  • Independent deployment of each component is possible using Micro frontend.
Tech Agnostic
  • You may keep each app independent from the rest and manage it as a component using micro frontend.
  • Each app can be developed using a different framework, or library as per the requirements.
Autonomous Teams
  • Dividing a large workforce into subgroups can increase productivity and performance.
  • Each team of developers will be in charge of a certain aspect of the product, enhancing focus and allowing engineers to create a feature as quickly and effectively as possible.

1.2 Limitations of Micro Frontends

Here are the key limitations of Micro Frontend architecture: 

Limitations Description
Larger Download Sizes
  • Micro Frontends are said to increase download sizes due to redundant dependencies.
  • Larger download sizes derive from the fact that each app is built with React or a similar library/framework and must download its dependencies whenever a user accesses that particular page.
Environmental Variations
  • If the production container differs from the development container, it can be devastating: the micro frontend may malfunction or behave differently after release to production.
  • The universal style, which may be a component of the container or other micro frontends, is a particularly delicate aspect of this problem.
Management Complexity
  • Micro Frontend comes with additional repositories, technologies, development workflows, services, domains, etc. as per the project requirements.
Compliance Issues
  • It might be challenging to ensure consistency throughout many distinct front-end codebases.
  • To guarantee excellence, continuity, and accountability are kept throughout all teams, effective leadership is required.
  • Compliance difficulties will arise if code review and frequent monitoring are not carried out effectively.

Please find a Reddit thread below discussing the disadvantages of Micro frontend.

(Reddit comment by u/crazyrebel123 in r/reactjs)

Now, let’s see how Micro Frontend architecture one can build with React and other relevant tools. 

2. Micro Frontend Architecture Using React

Micro frontends are taking over the role of monolithic design, which has served as the standard in application development for years. Monolithic design has a long history of popularity, and many prominent software developers and business figures remain enthusiastic supporters. Yet as time goes on, new technologies and concepts emerge that improve on what everyone is used to.

The notion of a “micro frontend” in React is not unique; rather, it represents an evolution of previous architectural styles. Revolutionary trends in social media, cloud technology, and the Internet of Things build on the foundation of microservices architecture, helping the micro frontend approach quickly gain ground in the industry.

Because of the switch to continuous deployment, the micro frontend with React provides additional benefits to enterprises, such as:

  • High Scalability
  • Rapid Deployment
  • Effective migration and upgrading
  • Technology-independence
  • No issue with the insulation
  • High levels of deployment and automation
  • Reduced development time and cost
  • Fewer threats to safety and dependability

Let’s go through the steps of creating your first micro frontend architecture using React: 

3. Building Micro Frontend with React and Vite

Let’s have a look at step by step process of how we can build microfrontend with React and Vite.

3.1 Set Up the Project Structure

To begin with, let’s make a folder hierarchy.

# Create folder named React-vite-federation-demo
# Folder Hierarchy 
--/packages
----/application
----/shared

The following instructions will put you on the fast track:

mkdir React-vite-federation-demo && cd ./React-vite-federation-demo
mkdir packages && cd ./packages

The next thing to do is to use the Vite CLI to make two separate directories: 

  1. application, a React app that will use the components, 
  2. shared, which will make them available to other apps.
#./React-vite-federation-demo/packages
pnpm create vite application --template react
pnpm create vite shared --template react

3.2 Set Up pnpm Workspace

Now that you know you’ll be working with numerous projects in the package’s folder, you can set up your pnpm workspace accordingly.

A package file will be generated in the package’s root directory for this purpose:

touch package.json

Write the following code to define various elements in the package.json file. 

{
  "name": "React-vite-federation-demo", 
  "version": "1.1.0",
  "private": true,   
  "workspaces": [
    "packages/*"
  ],
  "scripts": {
    "build": "pnpm  --parallel --filter \"./**\" build",
    "preview": "pnpm  --parallel --filter \"./**\" preview",
    "stop": "kill-port --port 5000,5001"
  },
  "devDependencies": {
    "kill-port": "^2.0.1",
    "@originjs/vite-plugin-federation": "^1.1.10"
  }
}

This package.json file is where you specify shared packages and scripts for developing and executing your applications in parallel.

Then, make a file named “pnpm-workspace.yaml” to include the pnpm workspace configuration:

touch pnpm-workspace.yaml

Let’s indicate your packages with basic configurations:

# pnpm-workspace.yaml
packages:
  - 'packages/*'

Packages for every application are now available for installation:

pnpm install

3.3 Add Shared Component (Set Up “shared” Package)

To demonstrate, let’s create a basic button component and include it in our shared package.

cd ./packages/shared/src && mkdir ./components
cd ./components && touch Button.jsx

To identify button, add the following code in Button.jsx

import React from "react";
import "./Button.css";

export default ({ caption = "Shared Button" }) => (
  <button className="shared-button">{caption}</button>
);

Let’s add CSS file for your button:

touch Button.css

Now, to add styles, write the following code in Button.css

.shared-button {
    background-color:#ADD8E6;;
    color: white;
    border: 1px solid white;
    padding: 16px 30px;
    font-size: 20px;
    text-align: center;
}

It’s time to prepare the button to use by vite-plugin-federation, so let’s do that now. This requires modifying vite.config.js file with the following settings:

import { defineConfig } from 'vite'
import React from '@vitejs/plugin-react'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    React(),
    federation({
      name: 'shared',
      filename: 'shared.js',
      exposes: {
        './Button': './src/components/Button'
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5000,
    strictPort: true,
    headers: {
      "Access-Control-Allow-Origin": "*"
    }
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

Set up the plugins, preview, and build sections in this file.

3.4 Use Shared Component and Set Up “application” Package

The next step is to incorporate your reusable module into your application’s code. Simply use the shared package’s Button to accomplish this:

import "./App.css";
import { useState } from "react";
import Button from "shared/Button";

function Application() {
  const [count, setCount] = useState(0);
  return (
    <div className="App">
      <h1>Application 1</h1>
      <Button />
      <button onClick={() => setCount((count) => count + 1)}>
        count is {count}
      </button>
    </div>
  );
}

export default Application;

The following must be done in the vite.config.js file:

import { defineConfig } from 'vite'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'
import React from '@vitejs/plugin-react'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    React(),
    federation({
      name: 'application',
      remotes: {
        shared: 'http://localhost:5000/assets/shared.js',
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5001,
    strictPort: true,
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

In this file, you also configure the federation plugin as a consumer: the remotes entry points to the shared.js bundle exposed by the shared package on port 5000, while the rest of the configuration follows the same format as before.

Application Launch

The following commands will help you construct and launch your applications:

pnpm build && pnpm preview

Our shared React application may be accessed at “localhost:5000”:

Launch Your Application

At “localhost:5001”, you will see your application with a button from the shared application on “localhost:5000”:

4. Conclusion

Micro Frontends are unquestionably cutting-edge design that addresses many issues with monolithic frontend architecture. With a micro frontend, you may benefit from a quick development cycle, increased productivity, periodic upgrades, straightforward codebases, autonomous delivery, autonomous teams, and more.

Given the high degree of expertise necessary to develop micro frontends with React, we advise working with professionals. Be sure to take into account the automation needs, administrative and regulatory complexities, quality, consistency, and other crucial considerations before choosing the micro frontend application design.

The post A Complete Guide to React Micro Frontend appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/react-micro-frontend/feed/ 0
.NET Microservices Implementation with Docker Containers https://www.tatvasoft.com/blog/net-microservices/ https://www.tatvasoft.com/blog/net-microservices/#respond Thu, 23 Nov 2023 07:19:28 +0000 https://www.tatvasoft.com/blog/?p=12223 Applications and IT infrastructure management are now being built and managed on the cloud. Today's cloud apps require to be responsive, modular, highly scalable, and trustworthy.
Containers facilitate the fulfilment of these needs by applications.

The post .NET Microservices Implementation with Docker Containers appeared first on TatvaSoft Blog.

]]>

Key Takeaways on .NET Microservices

  1. The microservices architecture is increasingly being favored for large and complex applications built from independent, individual subsystems.
  2. Container-based solutions offer significant cost reductions by mitigating deployment issues arising from failed dependencies in the production environment.
  3. With Microsoft tools, one can create containerized .NET microservices using a custom and preferred approach.
  4. Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself.
  5. An essential aspect of constructing more secure applications is establishing a robust method for exchanging information with other applications and systems.

1. Microservices – An Overview

Applications and IT infrastructure management are now being built and managed on the cloud. Today’s cloud apps need to be responsive, modular, highly scalable, and reliable.

Containers help applications meet these needs. To put it another way, putting an application in a container without first deciding on a design pattern is like setting off for a new place without directions: you could get where you’re going, but it probably won’t be the fastest way.

This is where .NET microservices come in. With the help of a reliable .NET development company offering microservices, software can be built and deployed in a way that meets the speed, scalability, and dependability needs of today’s cloud-based applications.

2. Key Considerations for Developing .NET Microservices

When using .NET to create microservices, it’s important to remember the following points:

2.1 API Design

Since microservices depend on APIs for inter-service communication, it’s crucial to construct those APIs with care. RESTful APIs are the accepted norm for developing APIs and should be the default consideration. Plan for versioning and make sure your APIs stay backward compatible so that existing clients don’t break.
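As a minimal sketch (not from the original tutorial) of what URL-based versioning can look like in an ASP.NET Core minimal API, where the /api/v1/orders route and its payload fields are purely illustrative:

// Program.cs - illustrative only; endpoints and fields are hypothetical.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// v1 stays unchanged so existing clients keep working.
app.MapGet("/api/v1/orders/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Created" }));

// v2 adds a field without removing anything that v1 consumers rely on.
app.MapGet("/api/v2/orders/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Created", Currency = "USD" }));

app.Run();

Here, the v2 endpoint only adds data, which is what keeps the change backward compatible for v1 clients.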

2.2 Data Management

Because most microservices use their own databases, keeping data consistent and maintainable can be difficult. For data access within each service, you might want to look into Entity Framework Core, a popular object-relational mapper (ORM) for .NET.
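As a rough sketch of per-service data ownership with EF Core, an Orders microservice could define its own context as shown below; the Order entity, OrdersContext, and the "OrdersDb" connection string are illustrative assumptions, and the Microsoft.EntityFrameworkCore.SqlServer package is assumed to be installed:

// Illustrative only: a DbContext owned exclusively by one microservice.
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string Product { get; set; } = string.Empty;
    public decimal Total { get; set; }
}

public class OrdersContext : DbContext
{
    public OrdersContext(DbContextOptions<OrdersContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();
}

// Registered at startup, e.g. in Program.cs (the connection string name is assumed):
// builder.Services.AddDbContext<OrdersContext>(options =>
//     options.UseSqlServer(builder.Configuration.GetConnectionString("OrdersDb")));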

Microservices also need to be tested extensively to ensure their reliability and robustness. For unit testing, you can use xUnit together with Moq, and for API testing, you can use Postman.
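As a minimal, hypothetical sketch of a unit test with xUnit and Moq (the IPriceProvider and CartService types below are invented for illustration and are not part of the tutorial's code):

using Moq;
using Xunit;

// Hypothetical types used only to demonstrate mocking a dependency.
public interface IPriceProvider
{
    decimal GetPrice(string sku);
}

public class CartService
{
    private readonly IPriceProvider _prices;
    public CartService(IPriceProvider prices) => _prices = prices;
    public decimal Total(string sku, int quantity) => _prices.GetPrice(sku) * quantity;
}

public class CartServiceTests
{
    [Fact]
    public void Total_Multiplies_Price_By_Quantity()
    {
        // Mock the dependency so the test stays isolated to CartService.
        var prices = new Mock<IPriceProvider>();
        prices.Setup(p => p.GetPrice("SKU-1")).Returns(10m);

        var service = new CartService(prices.Object);

        Assert.Equal(20m, service.Total("SKU-1", 2));
    }
}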

Monitoring and logging are crucial for understanding the health of your microservices and fixing any problems that develop. You can use tools such as Azure Application Insights for this.
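A minimal sketch of wiring up Application Insights in an ASP.NET Core service might look like the following; it assumes the Microsoft.ApplicationInsights.AspNetCore NuGet package is installed and a connection string is configured in appsettings.json:

// Program.cs - illustrative only.
var builder = WebApplication.CreateBuilder(args);

// Collects request, dependency, and exception telemetry for this microservice.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

// A simple endpoint that will also show up in request telemetry.
app.MapGet("/health", () => Results.Ok("Healthy"));

app.Run();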

If you want to automate the deployment of your microservices, use a continuous integration and continuous delivery (CI/CD) pipeline. This helps guarantee the steady, repeatable delivery and deployment of your microservices.

3. Implementation of .NET Microservices Using Docker Containers

Here are the steps to Implement .NET Microservices Using Docker

3.1 Install .NET SDK

Let’s begin from scratch. First, install .NET 7 SDK.

Once you complete the download, install the package and then open a new command prompt and run the following command to check .NET (SDK) information: 

> dotnet

If the installation succeeded, you should see an output like the following in command prompt: 

.NET SDK Installation

3.2 Build Your Microservice

Open a command prompt at the location where you want to create the new application. 

Type the following command to create a new app named “DemoMicroservice”:

> dotnet new webapi -o DemoMicroservice --no-https -f net7.0 

Then, navigate to this new directory. 

> cd DemoMicroservice

What do these commands mean? 

Command: Meaning
dotnet new webapi: Creates a new application of type webapi (a REST API endpoint).
-o DemoMicroservice: Specifies the directory where your app “DemoMicroservice” is created.
--no-https: Creates an app that runs without an HTTPS certificate.
-f net7.0: Indicates that you are creating a .NET 7 application.

3.3 Run Microservice

Type this into your command prompt:

> dotnet run

The output will look like this: 

run microservices

The Demo Code: 

Several files were generated in the DemoMicroservice directory, giving you a simple service that is ready to run.

The following screenshot shows the content of the WeatherForecastController.cs file, which is located in the Controllers directory. 

Demo Microservices

Launch a browser and enter http://localhost:<port number>/WeatherForecast once the program shows that it is listening on that address.

In this example, the service is listening on port 5056. The following image shows the output at http://localhost:5056/WeatherForecast.

WeatherForecast Localhost

You’ve successfully launched a basic service.

To stop the service from running locally using the dotnet run command, type CTRL+C at the command prompt.

3.4 Role of Containers

In software development, containerization is an approach in which a service or application, its dependencies, and configurations (in deployment manifest files) are packaged together as a container image.

The containerized application may be tested as a whole and then deployed to the host OS in the form of a container image instance.

Software containers are like cardboard boxes: they are a standardised unit of software deployment that can hold a wide variety of programs and dependencies, and they can be moved from location to location. 

This method of software containerization allows developers and IT professionals to easily deploy applications to many environments with few code changes.

If this seems like a scenario where containerizing an application may be useful, it’s because it is. The advantages of containers are nearly identical to the advantages of microservices.

The deployment of microservices is not limited to containerized applications. Microservices may be deployed via a variety of mechanisms, such as Azure App Service, virtual machines, or other hosting options. 

Containerization’s flexibility is an additional perk. Creating additional containers for short-lived jobs lets you scale up swiftly. From the application’s perspective, instantiating an image (by creating a container) is quite similar to instantiating a service or a web application.

In a nutshell, containers improve the whole application lifecycle by providing separation, mobility, responsiveness, versatility, and control.

All of the microservices you create in this tutorial will be deployed to a container for execution; more specifically, a Docker container.

3.5 Docker Installation

3.5.1. What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to automate the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. Docker is free to use, with premium tiers for additional features. 

Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself. Docker images may be executed in a container format on both Linux and Windows.

3.5.2. Installation Steps

Docker is a platform for building containers, which package an app together with its dependencies and configuration. Follow the steps below to install Docker: 

  • First, download the installer (.exe file) from the Docker website.
  • Docker’s default configuration for Windows uses Linux containers. When prompted by the installer, simply accept the default settings.
  • You may be asked to sign out of the system after installing Docker.
  • Make sure Docker is up and running.
  • If Docker is already installed, verify that it is at least version 20.10.

Once the setup is complete, launch a new command prompt and enter:

> docker --version

If the command executes and some version data is displayed, then Docker has been set up properly.

3.6 Add Docker Metadata

A Docker image is created by following the instructions provided in a text file called a Dockerfile. You need a Docker image to deploy your program in the form of a Docker container.

Get back to the app directory

Since the preceding step included opening a new command prompt, you will now need to navigate back to the directory in which you first established your service.

> cd DemoMicroservice

Add a Dockerfile

Create a file named “Dockerfile” with this command:

> fsutil file createnew Dockerfile 0

To open the Dockerfile, execute the following command: 

> start Dockerfile

In the text editor, replace the Dockerfile’s current content with the following:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY DemoMicroservice.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DemoMicroservice.dll"]

Note: Keep in mind that the file must be named Dockerfile, not Dockerfile.txt or anything else.

Optional: Add a .dockerignore file

A .dockerignore file limits the set of files that are read during the ‘docker build’ process. Fewer files means a faster build.

If you’re acquainted with .gitignore files, the following command will create a .dockerignore file for you:

> fsutil file createnew .dockerignore 0

You can then open it in your favorite text editor manually or with this command:

> start .dockerignore

Then add the following content to it:

Dockerfile
[b|B]in
[O|o]bj

3.7 Create Docker Image

Start the process with this command:

> docker build -t demomicroservice .

The docker build command creates a Docker image from the instructions in the Dockerfile.

The following command will display a catalogue of all images on your system, including the one you just made.

> docker images

3.8 Run Docker Image

Here’s the command you use to launch your program within a container:

> docker run -it --rm -p 3000:80 --name demomicroservicecontainer demomicroservice

To connect to a containerized application, go to the following address: http://localhost:3000/WeatherForecast 

demo microservices with docker weatherforecast

Optionally, the following command lets you observe your running container from a different command prompt: 

> docker ps
docker ps

To cancel the docker run command that is managing the containerized service, enter CTRL+C at the prompt.

Well done! You have built a tiny, self-contained service that can be easily deployed and scaled with Docker containers.

These elements provide the foundation of a microservice.

4. Conclusion

Modern .NET, from its inception as .NET Core to the present day, was designed from the ground up to run natively in the cloud. Its cross-platform compatibility means your .NET code will execute regardless of the operating system your Docker image is built on. .NET is also fast, with the ASP.NET Core Kestrel web server consistently performing strongly in industry benchmarks. With these strengths, .NET is well worth incorporating into your microservices projects.

FAQs

Why is .NET Core good for microservices?

.NET enables developers to break a monolithic application into smaller parts and deploy each service separately, which not only helps businesses get the product to market faster but also makes it easier to adapt to changes quickly and flexibly. For this reason, .NET Core is considered a powerful platform for creating and deploying microservices. Other major reasons it is a good option for microservices include: 

  • Easier maintenance, as microservices built with .NET Core can be tested, updated, and deployed independently.
  • Better scalability, since .NET Core lets each service scale independently to meet traffic demands.

What is the main role of Docker in microservices?

In a microservices architecture, .NET app developers can create applications that are independent of the host environment by encapsulating each microservice in a Docker container. Docker enables developers to package the applications they create into containers, where each container bundles the executable component with the operating system libraries needed to run the microservice on any platform. 

The post .NET Microservices Implementation with Docker Containers appeared first on TatvaSoft Blog.

]]>
https://www.tatvasoft.com/blog/net-microservices/feed/ 0