Should we follow new technology trends and rewrite our existing systems?

The lifetimes of custom business systems seem to be getting shorter. One reason for this is the rapidly evolving technology world, which gives frequent new opportunities for business development. The decision to create a new system is often driven by limitations of an existing system that negatively affect or threaten a business’s growth. This is especially the case when competitors have already developed new technologies. 

There are also cases when the decision to change an existing system is dictated only by technology trends. For example, if an existing system is based on the web services architecture, it may be rewritten to a more popular microservice architecture with a newer platform and better frameworks. Usually, such a decision would be influenced by non-functional requirements like improved performance, increased frequency and flexibility of releases, and reduced runtime errors.

Is it worth rewriting an existing system in this example? Let’s analyze it against tangible criteria.

  • ROI
    Building a new system is a huge undertaking that requires resources and many months of commitment. In the example above, the cost of resources and the cost of shifting resources from other projects are disproportionately high compared with profits. Additionally, if the new system is built only on technical requirements, it may not be able to address new business challenges without substantial efforts. This could result in a considerable loss. 

  • Priority
    If we calculate ROI for the example above, it is significantly lower than the ROI for other backlog initiatives that are based on business requirements. Therefore, the project should have a lower priority than other initiatives; this often means that it will never start.

  • Risks
    Large projects carry more significant risks. The example project has an additional risk of termination by the business once its ROI and priority are evaluated and compared against other initiatives.

In summary, it is typically not worth rewriting an existing system based on requirements driven only by technology trends.  Low ROI results in such projects being low priority and high risk. But what if a system has performance issues, scalability issues, or other non-functional requirements?

First, all requirements should be added to the backlog and an initial analysis should be performed to determine the cost of a possible solution, ROI, and priority compared to other backlog items. In some cases, the full scope of non-functional requirements cannot be determined. For example, fixing a highly visible performance issue often reveals another one.  Performance should be improved on an ongoing basis; tasks should be added to the backlog based on results of a completed optimization in a previous iteration. With this approach, the tasks can be executed by dedicating 20% of the team’s time (velocity).

So – when is it worth rewriting an existing system?

It is worth creating a new system if key business requirements cannot be implemented in the current system, or if the old technology significantly increases the cost and/or time of implementing the requirements. It is important to gather the key business requirements to make good design decisions, such as choosing the architecture, technologies, and platform. However, it is essential not to invest too much in a project whose lifetime will likely be shorter than expected.

In conclusion, we would like to share a summary of the approach we took to help a client who planned to rewrite a critical business system to address performance issues. The project goal was to improve performance by rewriting the system using the newest technology stack. Fortunately, we were aware of this plan because the client was going to involve our consultants in the project. Instead of starting the new project, we convinced the client to take a different approach. We helped resolve the performance problems by going through cycles of analysis/investigation, development/performance fix, and release that were aligned with the current system’s Scrum iterations. In the end, we improved the system performance by 40%, which was almost what the client aimed to achieve in the planned rewrite project. Thanks to this improvement, the client had time to analyze the business requirements for the future system, make intelligent design decisions, and beat their competition.

Beware of relying on future data in machine learning

Machine learning is almost always connected with analyzing historical data in a way that will allow us to predict the future. However, it is easy to build a model that erroneously relies on future data. Sometimes we might catch this mistake at the beginning of model development; other times, it will go unnoticed until the model is complete. I will present two cases where we should be really careful not to use future data.

Stock performance based on company statements

Companies listed on a stock exchange publish various financial statements: income statements, cash flow statements, balance sheets, etc. One may try to predict a company’s performance using this data. Where is the risk of using future data in this case?

Let’s assume we have a database of historical financial statements. Each statement has a start and end date, which define the period it covers. We would like to create a training sample from this data. Let’s say we want to predict a company’s performance based on the data we have on January 2, 2020. In our database, we have an annual statement for the period 2019-01-01 – 2019-12-31.

Though it seems we could make predictions given this data, we can’t. Companies need time to prepare financial statements, so a statement is not available the day after its period ends. If we do not account for this, the model will learn to look at the newest statement at the beginning of the year. For example, on January 2, 2021, the statement for the year 2020 is not yet available, but the model will expect it. That may lead to inaccurate results.

To correct this and properly train a model, we need to look further back. For instance, we might only use statements available two weeks prior. With January 2, 2020 as our starting point, we would look in our database for statements with an end date of December 19, 2019 or earlier. We should keep asking ourselves the question: what data was available at a given point in time? Then the same data will be available at prediction time, which is usually today’s date.
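To make this concrete, below is a minimal pandas sketch of such point-in-time filtering. The table layout and the fixed two-week publication lag are assumptions for illustration; in practice, use the actual filing dates if you have them.

```python
import pandas as pd

# Hypothetical table of annual statements: one row per company per period.
statements = pd.DataFrame({
    "company": ["ACME", "ACME"],
    "period_end": pd.to_datetime(["2018-12-31", "2019-12-31"]),
    "revenue": [1_200_000, 1_350_000],
})

# Assumed lag: a statement becomes visible only two weeks after its period ends.
statements["available_from"] = statements["period_end"] + pd.Timedelta(days=14)

def statements_available_at(as_of: str) -> pd.DataFrame:
    """Return only the statements that were already published at `as_of`."""
    return statements[statements["available_from"] <= pd.Timestamp(as_of)]

print(statements_available_at("2020-01-02"))  # only the 2018 row; 2019 not out yet
print(statements_available_at("2020-01-15"))  # now the 2019 statement appears
```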

Backtesting portfolio performance

The Markowitz portfolio theory is a well-known portfolio construction method. It adjusts the weights of given stocks within a portfolio to optimize its performance. If we want an idea of how well a portfolio performs, we backtest it; in other words, we check how it would have performed in the past. To show the problem of relying on future data, let’s proceed with an example:

We want to invest in three companies:
● Mr. Cooper Group Inc (COOP)
● Educational Development Corporation (EDUC)
● Newmont Corporation (NEM)

To decide how to divide our capital, we will use data from the past three years. Using Markowitz’s theory, we get the following portfolio:

COOP: 35%, EDUC: 45%, NEM: 20%

Then we can easily check how such a portfolio would have performed had we started investing three years ago. The backtest would show that it performs quite well, but only because we relied on future data: three years ago, the data we used to compute that portfolio wasn’t known yet. For the next three years, other proportions will be optimal, but we don’t know them. The danger of not realizing we are using future data is becoming overly optimistic about future investments.
To backtest properly, we have to think in terms of a specific strategy rather than particular portfolio proportions. Such a strategy might be: “use the Markowitz portfolio based on the past three months, rebalance each month”. This way we make sure to always use data that was available at a given point in time, and our backtesting is more reliable.
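A minimal walk-forward sketch of such a strategy might look like this. The `optimal_weights` function is a stand-in for any Markowitz-style optimizer (inverse-variance weights keep the example short), and `returns` is assumed to be a daily pandas DataFrame; the point is what data each fit is allowed to see.

```python
import pandas as pd

def optimal_weights(history: pd.DataFrame) -> pd.Series:
    """Stand-in for a Markowitz optimizer (here: inverse-variance weights).
    What matters in this sketch is *which* data the optimizer sees."""
    inv_var = 1.0 / history.var()
    return inv_var / inv_var.sum()

def walk_forward_backtest(returns: pd.DataFrame, lookback_days: int = 63) -> pd.Series:
    """Each month, fit weights only on the ~3 months of returns preceding that
    month, then hold them for the month. No future data enters any fit."""
    monthly_pnl = {}
    for month_end, month_returns in returns.groupby(pd.Grouper(freq="M")):
        past = returns.loc[:month_returns.index[0]].iloc[:-1].tail(lookback_days)
        if len(past) < lookback_days:      # not enough history yet; skip this month
            continue
        weights = optimal_weights(past)
        monthly_pnl[month_end] = (month_returns @ weights + 1).prod() - 1
    return pd.Series(monthly_pnl)

# Usage, assuming a daily price DataFrame with these tickers as columns:
# returns = prices[["COOP", "EDUC", "NEM"]].pct_change().dropna()
# print(walk_forward_backtest(returns))
```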

Always think about what data was available at a given point in time

It is crucial to understand your data when dealing with machine learning, and with data science in general. It doesn’t matter how sophisticated the model is; when you feed it garbage data, you get garbage predictions. This becomes even more important when time is involved in the data. One has to be really careful not to use future data. As this can be quite tricky, it is better to ask yourself one more time: am I using future data?

How to build a rule-based drug recommendation system that is effective and understandable to business people?

If we want to create a recommendation system for a service or platform, we can approach the goal in many ways. Looking at current trends, in particular the fashion for deep learning, we may get the impression that only very complex algorithms lead to success. However, it is not so. We can apply recommender systems in scenarios where many users interact with many items, and the system will help suggest items that have been chosen by similar users.

The challenge with more complex approaches is that they can sometimes be difficult to tune and interpret. In other words, they can be very powerful but require a lot of knowledge to implement properly. Association analysis, on the other hand, is relatively light on math concepts and easy to explain to laypeople. In addition, it is an unsupervised learning tool that looks for hidden patterns, so there is limited need for data prep and feature engineering. It is a good start for certain cases of data exploration and can facilitate more insightful approaches to the data.

The first technique we will show here is called association analysis, which attempts to find common patterns of items in large data sets. An important assumption is that we built the unsupervised rules on the GPI at the 10-digit level and on the drug name level. We didn’t want to separate recommendations based on dose; for example, the GPI for Lipitor Oral Tablet 10 MG.

The main idea in apriori rules is that we look for association rules between drugs based on the basket (the set of products that people buy). It is necessary to remember that we are looking for users who buy more than one item. In the case of prescription users (SingleCare clients in this case), the number of drugs they buy is not as big as in everyday supermarket situations. Therefore, we first chose only the customers who bought more than one drug. For simplicity, we created an artificial basket based on one year of user history. Based on the experiment and its results, we chose the customers who buy more than 4 drugs per year. Keep in mind that after preprocessing, we create rules from 3 important components: member_id, drug_name, count_of_drug (for each drug & client separately).
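For illustration, here is a minimal sketch of this pipeline using the mlxtend library. The toy data and thresholds are made up for the example; only the three components named above (member_id, drug_name, count_of_drug) mirror the actual setup.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-year purchase history: one row per member/drug with a count.
purchases = pd.DataFrame({
    "member_id": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "drug_name": ["amoxicillin", "vitamin C", "ibuprofen", "lipitor", "metformin",
                  "amoxicillin", "vitamin C", "ibuprofen", "aspirin", "omeprazole"],
    "count_of_drug": [1, 2, 1, 3, 2, 1, 1, 2, 1, 1],
})

# Keep only members who bought more than 4 drugs in the year, as described above.
per_member = purchases.groupby("member_id")["drug_name"].nunique()
active = purchases[purchases["member_id"].isin(per_member[per_member > 4].index)]

# One-hot encode each member's yearly "basket".
basket = active.pivot_table(index="member_id", columns="drug_name",
                            values="count_of_drug", fill_value=0) > 0

# Mine frequent itemsets, then directed rules (antecedent -> consequent).
itemsets = apriori(basket, min_support=0.01, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=1.2)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```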

Important terminology:

Support is the relative frequency with which a rule shows up. In many instances, you may want to look for high support in order to make sure the relationship is useful. However, there may be instances where low support is useful if you are trying to find “hidden” relationships. Support, confidence, and lift are the measures of interest that describe the usefulness and certainty of the rules. A support of 5% means that 5% of transactions in the database follow the rule.

Confidence is a measure of the reliability of the rule. A confidence of 0.5 would mean that in 50% of the cases where amoxicillin and vitamin C were purchased, the purchase also included ibuprofen. For product recommendation, a 50% confidence may be perfectly acceptable, but in a medical situation this level may not be high enough.

Lift is the ratio of the observed support to the support expected if the two sides of the rule were independent. The basic rule of thumb is that a lift value close to 1 means the items are completely independent. Lift values > 1 are generally more “interesting” and could be indicative of a useful rule pattern.
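Written out as a small sketch, with illustrative counts chosen to match the 50% confidence example above:

```python
def support(n_ab: int, n_total: int) -> float:
    """Fraction of all transactions that contain both sides of the rule."""
    return n_ab / n_total

def confidence(n_ab: int, n_a: int) -> float:
    """Of the transactions containing A, the fraction that also contain B."""
    return n_ab / n_a

def lift(n_ab: int, n_a: int, n_b: int, n_total: int) -> float:
    """Observed co-occurrence relative to what independence would predict."""
    return support(n_ab, n_total) / ((n_a / n_total) * (n_b / n_total))

# 1,000 baskets; amoxicillin + vitamin C appear together in 100 of them;
# 50 of those also contain ibuprofen; ibuprofen appears in 250 baskets overall.
print(support(50, 1000))         # 0.05 -> 5% of transactions follow the rule
print(confidence(50, 100))       # 0.5  -> the 50% confidence discussed above
print(lift(50, 100, 250, 1000))  # 2.0  -> well above 1, an "interesting" rule
```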

The most interesting part of the algorithm is that rules have a direction: one drug is the antecedent and the other is the consequent. A patient who buys a cancer drug (antecedent) also buying a painkiller (consequent) indicates a strong rule. The inverse doesn’t hold: not everyone suffering from pain has cancer. We are able to filter out this kind of recommendation automatically with the apriori algorithm’s parameters. Now, based on the strongest rules (filtered by the lift metric), we are able to report some interesting results:

  1. More than 60% of the rules are related to popular drugs (the top 50 of all drugs available).
  2. More than 50% of the drugs that are recommended are cheaper for clients.
  3. Over 40% of the strong rules recommended by apriori increase profitability for the company.

The above conclusions constitute a strong argument for implementing a recommendation system based on association rules.

9 Most Common Mistakes when Migrating from Monolith to Microservices

With the microservices architecture gaining a lot of traction in the software world, more and more companies are migrating from their existing monolith to microservices architecture. Typically, this is a wise move, but great care needs to be taken in order to complete this process successfully. Below, we have gathered the nine most common issues we see when working with our customers on monolith to microservices migration.

1. Having nothing to migrate from 

While in some specific systems it might make sense to start development with a microservice architecture, typically it is better to start off with a monolith with well-defined module boundaries and migrate to microservices when the product is more mature. During early development, the approach, requirements, and designs shift a lot. Plus, based on market response, product managers might decide to pursue a different market or niche. Additionally, decision-makers frequently do not have a complete understanding of the requirements from the beginning of the project.

A microservices architecture, due to its distributed and loosely coupled nature, makes introducing system-wide changes much harder than with a monolith. Such groundbreaking changes need to be synchronized across the team and across each microservice. More development effort is required to implement the changes in the code and modify APIs used between services. It can take a lot of time for the system to stabilize after such a migration.

To better understand if your company is ready to migrate to microservices, take a look at this article. It expands on this section with a deeper explanation of who should consider migrating to microservices.

2. Not defining clear objectives and timeline for migration

In larger organizations (those with more than 10 or 20 employees), things tend to fall through the cracks if not managed and tracked against goals. This is particularly true in the case of refactoring initiatives: there is always more immediately pressing work to do than dealing with technical debt and making non-functional changes. Therefore, it is very important to define a strategy and determine precise goals for the microservices migration effort.

The product managers and other stakeholders, when provided a clear plan and goals to achieve, will be able to navigate large numbers of customer requests and issues that come up along the way to accomplish the goals laid out ahead of time. Even if the migration does not go completely to plan (which is quite likely), setting out objectives beforehand is still helpful in getting an idea of where the company is, where it needs to go, and what resources will be required. If the project was not completed in the original timeframe, project managers will ask questions and execute corrective actions to get back on track. 

On the other hand, if the migration effort has no concrete scope or timeline, it will be much easier to push back important tasks into the undefined future when the team will have more time and fewer paying customer requests. Unfortunately, this is a shortsighted approach: if the system were fulfilling customer expectations, the microservices migration would not be needed in the first place. In this case, the migration is needed to better serve customers, and a faster migration to microservices is better for the overall business. 

3. Not enough (or too much) planning

In addition to the goals and timelines mentioned in the previous section, in order to be able to reach goals in a timely manner, a plan needs to be developed, including the technology side (e.g. technology stack, platform services, tools, etc.), organizational considerations (team organization, project management methodology), and functional analysis (what needs to be migrated, what has the most priority, what changes make sense to introduce in the migration process).

Keep in mind that the plan cannot be too detailed: the planning process itself should not require inordinate time, and many possibilities cannot be planned for adequately in a fast-paced market. Companies need to keep migration plans at a level that allows them to estimate key milestones and deliverables (e.g. specific microservices deployments), but plans should also be flexible so that changes can be made as the process moves along.

If you want to learn more about how to develop an optimal plan to scale your product with microservices, you will find this article interesting.

4. No unified approach to microservices platform

Even though the microservices architecture is all about decoupling and decentralization, some amount of standardization, especially if developed early in the migration process, can be very beneficial. The obvious advantage is that the work related to designing and implementing common services like logging, metrics, health checks, and testing can be done once and then reused by multiple teams. Also, in the short term, when standards are kept consistent between microservices, it is easier to move developers across teams or train new ones if requirements change.

In the long term, some teams can decide to move away from the common tooling. For close to 90% of the teams, the standard setup should be enough to allow them to work effectively. In case of any divergence from the standard, the cost of developing and maintaining the fork should also be taken into account, which might offset the potential efficiency gains.

5. Very tight coupling of services

One common error in microservices architecture design is applying the design patterns of the monolith to the microservices platform. While those patterns could work, they will prevent the company from taking full advantage of the resilience of microservices.

For example, I have seen companies implement a form of distributed three-phase commit transactions across their microservices ecosystem. In this scenario, changes made to the underlying data models are not made permanent until all microservices taking part in the transaction confirm successful execution. While this approach ensures data consistency and immediate failure feedback, it requires that all services are operational, in a healthy state, and not experiencing overload. If any of these issues affects even one of the microservices (even one that is not critical for the transaction), the whole system might be rendered nonoperational, and immediate intervention will be needed to bring it back to order.

If the transaction were implemented as a series of separate operations on microservices, the transaction might be partially completed even if one of the components could not complete its task. In addition, if the issue is permanent, and the transaction cannot be fulfilled at all, a series of compensating actions should be issued to bring the system to the original state.
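A minimal Python sketch of that compensating-action (saga) style is below; the three steps and their compensations are hypothetical stand-ins for real service calls.

```python
# Each step pairs an action with a compensating action that undoes it.
# If a step fails, the compensations of all completed steps run in reverse.

def reserve_stock(order):   print("stock reserved")      # e.g. inventory service call
def release_stock(order):   print("stock released")

def charge_card(order):     raise RuntimeError("payment declined")  # simulated failure
def refund_card(order):     print("card refunded")

def ship_order(order):      print("order shipped")
def cancel_shipment(order): print("shipment cancelled")

SAGA_STEPS = [
    (reserve_stock, release_stock),
    (charge_card, refund_card),
    (ship_order, cancel_shipment),
]

def run_saga(order) -> bool:
    completed = []
    for action, compensation in SAGA_STEPS:
        try:
            action(order)
            completed.append(compensation)
        except Exception as exc:
            print(f"step failed ({exc}); compensating...")
            for undo in reversed(completed):
                undo(order)   # bring the system back to its original state
            return False
    return True

run_saga({"id": 42})  # stock reserved -> payment declined -> stock released
```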

6. Relying too much on synchronous communication between microservices

One of the key factors that increases the resiliency of any microservices-based system is favoring asynchronous communication over synchronous communication. Synchronous communication is typically a remote procedure call from one service to the other one, like fetching some data from a RESTful endpoint or causing something to occur with a gRPC endpoint. It requires both services to be up and running at the time of the call and it generally also keeps the calling microservice waiting for the result or confirmation of the action being performed. This increases requirements on both the availability and the computing resources of the platform hosting the microservices.

An asynchronous communication channel is most often a message sent by means of a message broker platform to a specific channel. The message broker assures that the messages are durable and are delivered to the microservices subscribed to listen to them. The consumer microservice, upon receiving the message, can act on it and also send a follow-up message or a response back using the same means. This setup differs from synchronous communication in that it puts less strict requirements on the underlying platform: the microservices do not need to be up and running at the same time (the consumer can catch up on the messages later on when it is back online).

Also, the hardware requirements are less demanding: the sender microservice thread is not blocked; when the confirmation or response comes as a message, it will be picked up in the same way as all other communication. Also, if there is a load spike on the service, the incoming messages will just queue up on the channel, instead of resulting in connection timeouts or failures. Moreover, in case of a spike, it is much easier and faster to scale up the consumers by just launching additional instances to process the excess messages in the queue instead of setting up load balancers and reconfiguring the backend instances.
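For illustration, here is a minimal producer/consumer pair using RabbitMQ through the pika library; the queue name and message body are made up, and any broker with durable queues would play the same role. The two halves would normally run as separate processes.

```python
import pika

# --- Producer: fire-and-forget; does not care whether the consumer is up. ---
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)     # queue survives broker restarts
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),   # persist the message
)
connection.close()

# --- Consumer: can start later and catch up on the queued messages. ---
def handle_order(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)      # ack only after success

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()
```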

However, there are types of communication in a microservices system that do not play well with asynchronous calls. Those include calls that require immediate feedback to the user (e.g., search-as-you-type or real-time user action feedback) and calls whose main purpose is to return data. Converting those calls to asynchronous does not bring any benefits: queuing a message for a call that requires real-time feedback does not help. Moreover, it introduces additional overhead related to message processing and increased complexity that would affect maintenance effort.

7. Increasing complexity too much by writing too many microservices

While one of the basic principles for microservice system design is “single responsibility”, as in “design your microservices so that one microservice is responsible for a single thing”, it is often applied in an excessively narrow way. 

Systems of mid-level complexity will quite often grow to hundreds of microservices within the first year of development. Each concept, data model, or small group of functions is quite often created as a separate service, which results in uncontrolled growth of the ecosystem. It can be challenging for developers, QA, DevOps staff, and infrastructure teams to keep track of and maintain all of those services in the long term.

Depending on the project, a better approach to the problem of decomposing a system into microservices could be to think in terms of domains and bounded contexts as defined by domain-driven design. Then, the number of services can be minimized by choosing to implement each domain in just one or a few microservices. While the resulting services might not be quite so small, they are still responsible for a single area of the business domain, so most changes required by the business can be contained within a single service. This approach maximizes the benefits of microservices while minimizing the drawbacks. 

8. Not including monolith changes in the migration plan

An important benefit of using a microservices architecture is that you can gradually migrate your system from a monolith to a microservices-based application instead of building a new system from scratch while wasting a lot of resources simultaneously maintaining a legacy system. In that case, it is crucial to carefully plan all aspects of coexistence between the microservices and the monolith for an extended period of time. Generally, the monolith will require substantial changes upfront to play well with microservices, but this effort will pay off later on when you are able to manage the monolith in a similar way to a microservice. The important aspects of monolith changes include designing or extending an existing API for interservice communication, ensuring multiple monolith instances can run together, and creating a containerization configuration for your monolith so that it can run next to the other services.

9. Rushing the migration without appropriate expertise

A microservices architecture adds another level of complexity onto other complexities that are present in any software project. For a microservices project to succeed and actually benefit from the microservice architecture, it has to be designed and built according to general best practices adjusted to the particular use case. Resilience, eventual consistency, saga pattern, asynchronous messaging – all of these concepts need to be carefully planned, introduced to the system, and maintained properly for the system to work.

Therefore, it is extremely important that the developers engaged in the migration have previous experience in microservices design or at least have plenty of time to investigate and research the topic. Having a team of microservices veterans that have a proven track record of navigating migration projects through uncertainties is the best approach to make sure that your project is a success. Here you can read some success stories of how others utilized external microservice teams. If you do not have engineers at your company with the requisite experience, we would be happy to help.

Although we covered as much ground as possible in this short article, there is plenty more to be said on the topic of microservices migration. If you found this article useful or interesting, take a look at our ebook with more details on the migration process as well as real-world case studies on implementing microservices in companies of all sizes.

NG Logic Proud to be Named a Top Development Firm by Clutch

Here at NG Logic, we know how difficult it can be to achieve your goals while also making sure your staff remains healthy during the ongoing COVID-19 pandemic. Rest assured that we’re here to help craft wonderful innovative IT solutions that will help you grow your business during these troubling times.

We design custom applications to perform specific functions for both web solutions and desktop programs. We’re involved in the entire process from planning and needs assessment all the way to design, implementation, and maintenance.

In recognition of our hard work and dedication, we’ve been named a 2020 Clutch leader in Warsaw, Poland. Clutch is a B2B ratings authority that helps businesses find partners for their latest projects.

We’d like to extend a special thanks to our clients for making this award possible. They took the time to speak with Clutch analysts to provide insight into how effective a partner we are. We were graded on quality, attention to project timelines, and overall project management skill. We’re happy to say we’ve received a 4.6 rating on a 5-star scale. Take a look at our most recent review below:

“We feel the Clutch Award is a confirmation of our commitment to making our customers successful with their software projects.” – Marcin Wudarczyk, CEO 

In addition to Clutch, we’ve also been highlighted by The Manifest and Visual Objects, two other B2B resource platforms. The Manifest, a how-to guide and business data resource, names us among their top developers to partner with. Visual Objects, a portfolio site, similarly names us a leading force in the web development space. 

We’re over the moon to receive this award. Once again, a huge thank you to our clients and the Clutch team for making this award possible. Contact us today to start your latest project with us!

COVID-19 – How tech companies are doing their part to help during the global coronavirus pandemic

During the hard times of the global coronavirus pandemic, every day we get more and more negative news from all around the world. Many people are afraid for their health, jobs, businesses, and even lives. Companies, both small and large, are dealing with the recession in many different industries.

On a positive note, in these hard times many tech companies are utilizing their resources and business models to help out others. Allowing employees to work fully remotely and utilizing online communication technologies helps flatten the curve through social distancing.

Helping local businesses

Many companies are utilizing their already established business models to help out local businesses. This kind of approach allows local business owners to get through these times while minimizing their cash flow limitations. Dine-in restaurant business is down nearly 100%, and food delivery is down by 30%-50%. For many local restaurants, this amounts to a nearly 80% overnight drop. This is why our engineers are working hand in hand with Raise to help local businesses all across the US by distributing gift cards that can be used after the coronavirus restrictions are over. With this “pay-it-forward” type of gift card, much-needed cash flow can be at least partly restored to keep local restaurants in business. If you are interested, you can read the Raise case study here.

Folding@home – helping scientists fight the virus

Countless scientists all across the world are working hard on fighting the coronavirus. To make that work as efficient as possible, a great deal of computing power is needed. This is why the Folding@home project, which focuses on disease research, was created. It allows everyone to do their part in fighting diseases together by “lending” some of their computing power to the scientists and speeding up their calculations.

Many users all around the world – both individuals and enterprises – are joining this program to help move the research forward at a faster pace. You can join the program here and become a part of the movement.

8 Problems a Software House Should Take Off a CTO’s Head

Being a CTO is a very fulfilling role. It offers the chance to shape innovative products in terms of their technology, MVPs, and overall design. Being a CTO means taking full responsibility for the company’s overall technical development, including product management and scalability, the tech stack, team growth and management, and often overseeing the next version of the product. All of this might get somewhat overwhelming, and when it does, it might be a good idea to find a partner who can take some of the problems off a CTO’s head.

Bringing on board an external partner with a lot of experience can make a CTO’s job so much easier. Of course, not everything can and should be outsourced, but there are certain things that a software house can take off a CTO’s head. It’s a very individual thing, and it all comes down to what kind of challenges you and your company are facing at the moment and what the defined goals are.

A good software house should act more as a consulting partner that works hand-in-hand with you rather than just a place where you outsource your development. There are way more things that a software house can do for a CTO to make his life much easier. 

Building the MVP

An MVP, or Minimum Viable Product, is an early version of a product that satisfies the needs of early customers and, at the same time, serves as a base for further development by utilizing early feedback. It’s often the starting point for a company’s product design and helps channel all the ideas that pop up during the early stage. Many companies struggle with defining their MVP: it’s always a key question of functionality and overall early product shape. Without proper experience, it is very easy to overlook major components or put your focus (and resources) in the wrong place.

The importance of an MVP can’t be overstated. It is crucial to define the minimum components that need to be implemented upon the product launch. These minimum requirements need to be defined in such a way that the product can create an early user base and can be scaled up in the later stages of its lifecycle. Finally, a good MVP should be designed in such a way that it can be transitioned into the V2 of the product with minimum effort. 

Since building a good MVP is no easy task, many companies decide to bring external partners on board to help them deal with the project. An experienced partner can bring a lot to the table: working with this kind of company takes away a lot of the guesswork at the beginning, and you don’t have to reinvent the wheel. Granted, finding a reliable partner might not be an easy task either. Here you can find some info on how to make the process more optimized.

Sourcing talents

Whether you are trying to find developers, testers, system architects, data scientists, or other professionals, getting the right talent might get somewhat tricky at times. It’s not uncommon for companies to struggle for weeks or even months to build up a quality team. It’s not an easy task, and it can generate many obstacles along the way. As a result, many companies simply fail at finding the right talent for their project. This might be due to a failure to understand and meet candidates’ needs, or simply the result of a limited local (or regional) talent pool. Apart from finding and recruiting the right candidates, talent management is a part that often takes a lot of a CTO’s time.

Team augmentation or team outsourcing might be the right solution to manage this task and optimize the whole process. Many software houses offer not only custom software design but also team outsourcing services, often through offshoring, which brings a much broader talent pool on board. When talking about team offshoring, not all regions are equal. Picking a distant region to source engineers from, and finding a partner that can deliver on that, might be one of the key decisions in a team development strategy.

One of the regions that seems to have engineers available in fairly large numbers and at affordable rates is Central and Eastern Europe. Many tech companies consider this part of the world worth checking out when sourcing IT engineers of any kind. You can read more on offshoring to CEE here.

Product management

A Product Manager and a CTO have vastly different perspectives and different responsibilities towards the product they are working on. In terms of product management, a CTO’s main focus is development and delivery. Finding the right software architecture and appropriate technology stack is one of the crucial elements of this task. Very often it makes sense to outsource building a new product or updating an existing one, as it requires a lot of experience, knowledge and is highly time-consuming. 

Finding the right partner, one who can act as a consultant with broad experience in delivering similar products, might make a CTO’s life so much easier. Keep in mind that they won’t do everything for you, but they can certainly take a lot of load off your back, especially the more operational, everyday, time-consuming objectives. This leaves more room for overseeing the broad picture and focusing on the strategic approach.

Application architecture

Choosing the correct architecture for an application is not an easy task, whether we are talking about designing a new application or migrating a pre-existing system to a new type of architecture. It can get somewhat overwhelming pretty fast. There are many things to consider, and the answer might not always be clear. Each solution will have its pros and cons, and at times there might not be one definitive best way to proceed.

Architecture is one of the most important parts of software design. A well-designed software architecture makes a solid foundation for your project: it allows for a high degree of scalability and increases performance while reducing costs by avoiding code duplication. On the other hand, an incoherent architecture might cause a lot of trouble and make you lose a lot of money as a result.

Microservice-based architecture seems to be the right choice for many companies. Even though it’s not perfect and certainly not a universal solution for every case scenario, it seems that it’s ticking many of the right boxes. If you would like to know more about microservices and designing proper software architecture for your system you can read more on that here.

If you are planning to migrate your architecture to microservices you can get some help here.

QA and testing

When it comes to ensuring high software quality, starting early is key. It might save you a lot of time and resources: an error or bug that is not detected early enough during development might cost 1,000 times more to get rid of later in the project.

Let’s assume rectifying an error in the business requirements stage costs $100. It could cost 10X more in the system requirements stage, 100X more in the high-level design stage, and even 1,000X more if it’s done later still: the cost of removing the error in the implementation stage might reach $100,000.

You can read a comprehensive insight into the subject of QA and testing here, explaining why it’s one of the most important stages in software development.

DevOps

A modern DevOps process is a cornerstone of rapid, seamless releases of your product multiple times per day. It requires a shift in organizational culture and tools, and an appropriate configuration of infrastructure/platform, and it results in increased accountability and efficiency of the development teams.

However, building an in-house DevOps/platform team is not always feasible, partly due to the sheer cost of hiring experienced DevOps engineers, partly due to the limited availability of talent on the local market. In such a case, outsourcing DevOps might speed up the transition to a DevOps culture; on the other hand, it’s not without its shortcomings, like the necessity of bringing contractors into your internal processes and the need to manage communication with the development teams.

The way of dealing with this subject really depends on your company’s particular situation and goals. In-house DevOps requires significant staffing-related resources, such as workspace, equipment, etc.

Still, when outsourcing DevOps, companies tend to face fewer failures and a rapid decrease in lead time (the time from defining the requirements for a new feature to its release to production). At the same time, this approach allows for significant cost-cutting and risk control, thanks to a flexible cooperation model.

Overseeing version 2.0

After creating the MVP and marketing a first version of the product that is decently designed, well received by the market, and brings clients and investors on board, it’s time to move forward. There is high potential in the product, but it needs to be more polished, so further development is needed to scale it up. Very often, a crucial milestone in a company’s development cycle is creating the 2.0 version of its product and bringing it to market. It’s often a test of whether a company can bring its product to the next level and scale itself up.

Scaling up the product and the company in the process is the part when even the most experienced CTOs often opt for some help from an external vendor. Some of the best software houses have the proper experience and know-how to be able to manage and lead the entire process of developing the V2. In some cases, there is no need to outsource the entire process but rather only some of its parts. Either way working with a trusted partner might bring a huge added value to the entire process. 

Many companies, when thinking about the V2, start with redesigning their product’s architecture, designing it in such a way that new functionalities can be added on the fly, with room for potential further development. There are many ways to achieve the goal, but microservices seem to be one of the most often used. If you want to get some help in redesigning your software architecture, you should check this out.

Team management

Changing their role in the company and moving from a more technical perspective to more management-related responsibilities is where many CTOs tend to struggle the most, especially when it comes to team management. That’s where team outsourcing might come in handy: it takes a lot of responsibilities off a CTO’s head and lets them focus more on the technology side of business strategy.

Team outsourcing comes in many forms, and what this kind of cooperation should look like really depends on the company’s needs and capabilities. Some companies decide to outsource the entire project to an external vendor, including the entire project management, development, and testing. This approach leaves you with somewhat less control over the project’s development, but in exchange you don’t have to worry about every operational detail along the way. Another way of dealing with team outsourcing is team augmentation. For many companies this kind of approach, where you keep the project management on your side, seems to work pretty well. The trade-off here is that you take full responsibility for the operational side of the project, which means that your know-how needs to be in line with the project’s demands.

No matter which approach is chosen, many companies decide to offshore their team management needs. This helps to source talents from outside of the local talent pool while optimizing the payroll budget at the same time. Team offshoring brings a lot to the table for both startups and more mature companies. If you want to read more on the subject you can get some additional info on offshoring success stories here.

IT product quality

How to guarantee IT product quality?

Ensuring the highest IT product quality can get tricky and somewhat overwhelming pretty fast, which is why it’s very important to start thinking about it as early as possible. Building quality-focused processes from the top, through testing and QA analysis as well as modern methodologies, will help ensure product quality early on without overinflating the budget for this aspect of software development. Extraordinary software quality will lead to better cost-efficiency and above-average performance of your product.

Starting to implement quality control processes early in the project might save you a lot of time, effort, and resources in the later phases. If an error or a bug is not detected early enough and pops up later on, it might cost even 1,000 times more to rectify in a later stage of the project.

So if rectifying an error might cost you $100 in the business requirements stage, it might be 10X more in the system requirements stage, 100X more in the high-level design stage, and all the way up to $100,000 in implementation. That’s why structured testing and QA processes, along with test automation, should be a staple of any IT project when thinking about long-term quality and cost management.

Test Automation

Test automation is a major time and money saver, especially where it is difficult to test things manually: it does not require human intervention and can be run unattended by software that checks the code for compliance with an implemented set of rules. It has become a standard in the IT industry. Automation is far less prone to errors, as no (or minimal) human involvement is required. This also means much higher speed and increased coverage when it comes to test execution.

When we are talking about application testing, there are three main levels to focus on, with unit testing being the first and most basic one. Here, all the individual components are tested separately to determine whether they perform as per the requirements; the units are the smallest parts of the whole system, with several inputs and (usually) one output. One level higher is integration testing, where individual modules (units) are put together and tested that way, to determine the quality of the interactions between them and confirm that it’s in line with the system specification. The highest level of software testing is end-to-end, or functional, testing. Here the entire system is tested to exercise system dependencies and make sure that data integrity is maintained across all software components.
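As a small illustration of the first level, here is a unit test sketch in pytest style; the `apply_discount` function is a made-up example unit, not code from a real project.

```python
# The "unit": the smallest testable part, a few inputs and one output.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The tests: run with `pytest`; they check the unit against its requirements.
import pytest

def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_keeps_price():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 120)
```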

Continuous Integration

Continuous Integration is another good practice for ensuring the highest quality of your product, especially when code integration from multiple contributors is needed. It allows developers, both in-house and outsourced, to integrate their code multiple times per day, with every check-in being verified automatically. Integrating code regularly allows for early problem targeting and quick error detection, which leads to less backtracking and saves tremendous amounts of time. This is extremely important for organizations that plan to scale up rapidly, be it in terms of team size, codebase, or infrastructure, as it improves the feedback loop and enhances communication. After all implementation and testing work on a particular branch is done, a merge request is used to merge the source code between branches.

Keeping in mind that any code is written by human beings, and as such is prone to errors and mistakes, peer code reviews help tremendously in keeping quality in check. By utilizing code review tools, parts of the source code are checked by one or several people as a double-check process to minimize errors as the product is being developed.

Agile

Incorporating a modern and well-verified methodology (like Agile) into your product development process will increase flexibility in creating your system, which leads to a shorter time to market with fewer errors along the way. With frameworks that help address complex problems during product development (like Scrum), it’s fairly easy to keep quality high in your projects by implementing best practices like daily standups to address any problems or issues that occur along the way and to distribute knowledge among the team members.

Scrum is a framework that also implements processes that regularly show the amount of work and the stories that have been completed (the Demo), keep the backlog clean and refined (Refinement), and conclude the sprint with potential changes applied to the next one (the Retrospective). All this ensures a high quality of your IT project by implementing best practices in the form of commonly used methodologies and frameworks.

Continue doing what works

Test, review and evaluate. See what worked and what didn’t. Evaluate your code by testing constantly to catch any potential bugs as early as possible to ensure the high quality of your product and maintain relatively low costs. Revise what was working and if it can be used later on down the road. Keeping a structured record of your QA process will help to see the bigger picture and start thinking about the long-term quality of your product.

Thinking long-term about your product in general will help you manage your product quality. Using state-of-the-art, modern technologies and architecture is crucial if you want to make your product future-proof. The technology stack is a key component of any IT product, determining, among other things, how easily the product can be updated into its second version.

Adopting a microservice-based architecture, where individual components can be developed independently from each other, helps a lot with long-term, high-quality product development, ensuring that short-term goals won’t interfere with strategic decisions.

Since having a high-quality product might be quite a challenge, bringing on board an external vendor with vast experience in analogous projects and a similar technical background might add an additional layer of know-how to the project. Team augmentation through outsourcing comes with a lot of advantages. Apart from the additional expertise that an external partner brings, which often leads to faster and smoother product development, in many cases it’s a far more cost-efficient way of sourcing talent, especially when offshoring is taken into consideration.

An additional factor worth taking into consideration when deciding on project outsourcing is the risk diversification that an external partner brings to the table. However, choosing the right vendor for your IT project might not be as easy as it seems. If you would like to learn more about how to verify a software house, you can read more on this subject here.

How outsourcing microservices helps to scale tech products

Let’s run through an example scenario. Say a company has their first MVP on the market. It is well-received by the market and brings clients and investors on-board. The product is well-designed and it works, but it has room for improvement. The product has great potential, but V2 needs to be more polished and further development is needed to scale it up. Since the live product may already have a considerable user base, any changes implemented into the system should be done seamlessly and without downtime. The new architecture should be designed in such a way that allows the addition of new functionalities and leaves room for further development.

That’s where microservices come in

Microservice-based architecture brings a lot to the table when it comes to application scalability. It’s highly versatile and flexible, which makes it perfect for developing products that require quick changes and new functionalities on the fly. It also gives the freedom to develop particular services and deploy them independently from each other and without changing the entire system. This means that different parts of the code can be developed using different technologies. That type of design approach results in a system that is more resilient to failures – if one unit goes down, it doesn’t take the entire system along with it.

Since microservices are highly scalable and extremely fault-tolerant, they naturally increase business agility. They allow an organization to focus more on business needs and product development rather than projects, as they can be thought of as a representation of business functionalities. This type of approach is crucial when it comes to future-proofing a product. In today’s world, business needs – both technological needs and market demands – can change drastically very quickly. It is important to invest in an architecture that can meet these demands. Microservices are a viable solution that allows businesses to easily reshape parts of a system as needed.

Scaling up an existing system, especially moving it to V2, may be a challenge. Choosing the right architecture for the job is of key importance. For many companies, microservices check all the right boxes when it comes to system scaling. This should not be a surprise, since they allow for easy and rapid scalability thanks to unit-based autonomy and make development fast and hassle-free thanks to technology independence.

To outsource or not to outsource?

Since microservice-based architecture can be developed by independent teams working on different functionalities, the ability to outsource it is in its nature. Outsourcing this kind of development comes with many benefits. For many companies, lowering project costs is one of the major factors that makes outsourcing appealing. Cutting expenses by working with an external vendor comes in many forms; it’s not only staff costs. Using an outsourced team also reduces costs associated with assets such as additional office space and hardware. Finding a more affordable workforce outside the local talent pool is something worth considering when talking to different vendors. Granted, less expensive should not mean poor quality. Many offshore companies can deliver high-quality performance at prices considerably lower than local ones, thanks to high specialization and the resulting process optimization.

Outsourcing can provide more value than just cost-cutting.  Project efficiency should not be sacrificed for low cost; a good partner should bring efficiency to the table along with a reasonable price. Outsourcing microservice architecture development improves in-house team performance; the in-house team is able to focus on key tasks, and non-core activities do not get in the way. When outsourcing, a company works with an experienced partner who understands the business needs and has a team who has implemented similar solutions dozens of times (often with multiple case studies to draw from). Development will likely be faster and more agile, since the company does not have to go through trial and error guesswork and reinvent the wheel. Outsourcing part (or all of) the work to an external vendor that can provide high-quality specialists with extensive experience in their respective fields is an easy way to bring additional knowledge to a company.

There are times when finding the right specialists in the local or even regional pool of talent is a hassle. Lack of specialists in a certain area or high market rates may be limiting factors when it comes to company development and product scaling. If that is the case, working with a partner who can provide a source of highly qualified talent is a good path forward. Many companies have offshored their microservice development with considerable success. Some of those success stories can be read here. However, the bottom line is that sourcing developers from a global talent pool and not being limited to the local job market gives companies the cutting edge for product scaling. 

In today’s fast-moving world, having a flexible business model that can overcome market fluctuations is one of the main indicators of a company’s potential scalability. The ability to freely add or reduce resources when needed is a highly desirable feature of any company, and it is one reason why outsourcing has become a highly popular form of organizational development in recent years. Being able to reduce or scale-up a development team depending on project peaks enables a company to rapidly react to emerging demands.

…but there’s always a downside

Sure, it’s not all sunshine and rainbows. Bringing on board an unreliable partner might cause more harm than good. That is why it is extremely important to do a background check before beginning a partnership; here is a step-by-step guide on how to do that the right way. Remember also that in addition to receiving knowledge from an external vendor, knowledge will (sometimes) need to be shared as well. A well-designed NDA comes in handy to protect both parties’ best interests.

Keep in mind that outsourcing microservice architecture may result in less company control over the project, in particular over some functionalities and parts of the product. To minimize this risk, ensure that proper product management is in place. Technical documentation should be written and delivered so that the in-house team can take over if needed. Building up a microservice-based architecture may get complex and somewhat overwhelming at times. Choose a partner with significant experience in the area and a proven track record who has been on the market for a while.

 

To sum it all up

 

There is no golden key that opens all locks, and no single solution is perfect for every scenario. What makes the most sense for a company in terms of product scaling is unique to each situation. That being said, developing a microservice-based architecture through outsourcing has been the preferred approach for many companies that want to bring their tech products to the next level. It is a cost-efficient solution on many levels: it supports cash flow management at both the operational and strategic levels, services can be added and removed without redesigning the entire system each time, and development headcount can be managed freely based on project peaks. It is a flexible solution that allows frequent system adjustments to stay in line with company strategy and market demand.

 

Granted, this approach is not without shortcomings. Microservices can be hard to build and integrate into pre-existing architecture, and working with external partners is not always as straightforward as it should be. Coordinating all aspects of a solution is challenging, but most potential risks can be minimized by choosing a reliable partner. That is why it is critical to work with a company (or companies) with vast experience in a similar area in order to achieve a seamless business scale-up.

If you are interested in moving forward with a microservice architecture, read on to learn who should consider migrating, how to proceed, and what your next steps should be.

Who should consider migrating to microservices?

The microservice-based approach produces an architecture that is more distributed and unit-focused than a traditional monolith. Favoring flexibility and scalability, microservices may be the perfect solution for companies that want to quickly bring their systems to the next level. Granted, one size does not fit all, so it is good to keep in mind the intended uses and limitations of any strategy. If microservices are a good fit, they can become key to a company’s growth when applied correctly.

 

Why you should care about microservices

 

When scaling up, it is crucial to consider how to incorporate additional functionalities into the pre-existing system. The key is to do this without completely overhauling the monolith, which becomes increasingly difficult as a system grows. With the microservice approach, services can easily be added to and removed from the system depending on the current load. This enables more efficient resource allocation and saves time, since multiple teams can work simultaneously on a single software project without stepping on each other’s toes. The result is faster development and higher overall scalability.
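
To make the idea concrete, here is a minimal sketch of what one such independently deployable service might look like, assuming Python with the Flask framework; the “pricelist” responsibility, endpoint, and product names are hypothetical.

    # A minimal, hypothetical "pricelist" microservice (Python + Flask).
    # Each service owns one narrow responsibility and can be deployed,
    # scaled, or removed without touching the rest of the system.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Illustrative in-memory data; a real service would own its own datastore.
    PRICES = {"basic-plan": 9.99, "pro-plan": 29.99}

    @app.route("/prices/<product_id>")
    def get_price(product_id):
        if product_id not in PRICES:
            return jsonify(error="unknown product"), 404
        return jsonify(product=product_id, price=PRICES[product_id])

    if __name__ == "__main__":
        # Run one instance; more replicas can be started behind a load
        # balancer when traffic grows, and stopped again when it subsides.
        app.run(port=5001)

Because the service exposes nothing but a small HTTP contract, instances can be started or stopped independently as the load changes.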

 

One great thing about microservice architecture is that system migration can begin right away. New functionalities can be built outside the monolith, assuming a sound architectural approach is taken so as not to create a distributed monolith in the process. When implemented correctly, microservices allow new functionalities to be added without rebuilding the entire system; this makes microservices a low-entry-barrier solution. Another approach, in lieu of adding new functionalities on top of the pre-existing system, is migrating the system chunk by chunk to microservices. This approach decomposes the system gradually until it is no longer monolithic and consists entirely of separate units. Regardless of the approach selected, immediate benefits are gained, such as lower maintenance costs and shorter release cycles.
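
As an illustration of the first path – building new functionality outside the monolith – the following sketch shows a thin routing layer (often called a strangler-fig proxy) that sends migrated paths to new services and everything else to the old system. It assumes Python with Flask and the requests library; all addresses and path prefixes are hypothetical.

    # Hypothetical routing layer placed in front of a monolith.
    # Paths that have been migrated go to new microservices; every
    # other request is forwarded to the legacy monolith unchanged.
    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)

    # Assumed addresses; in practice these come from service discovery.
    MONOLITH = "http://legacy-monolith:8080"
    MIGRATED = {"/cart": "http://cart-service:5002"}

    @app.route("/<path:path>", methods=["GET", "POST"])
    def route(path):
        prefix = "/" + path.split("/")[0]
        target = MIGRATED.get(prefix, MONOLITH)
        upstream = requests.request(
            method=request.method,
            url=f"{target}/{path}",
            data=request.get_data(),
            headers={k: v for k, v in request.headers if k != "Host"},
        )
        return Response(upstream.content, status=upstream.status_code)

As more paths are migrated, entries move from the monolith to the MIGRATED table until the old system can be retired.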

Independence is the key feature here; each microservice is an autonomous unit and can be treated as a separate piece of software. Each unit can implement a single domain model for a functional element such as notifications, cart, pricelist, or document generation. While each microservice can be developed independently, loose coupling between the microservices is needed to ensure proper communication within the system. Loosely coupling individual elements makes system changes easier and prevents regression issues in other parts of the application (such as those often experienced in monolithic applications).
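
A hedged sketch of what loose coupling looks like in code: the cart service below talks to the pricelist service only through its published HTTP contract, never by reaching into its database or internals. The service address and JSON shape are assumptions carried over from the earlier sketch.

    # Hypothetical client inside the cart service. The only thing it
    # knows about pricing is one endpoint and its JSON shape; the
    # pricelist team is free to change everything behind that contract.
    import requests

    PRICELIST_URL = "http://pricelist-service:5001"  # assumed address

    def price_for(product_id: str) -> float:
        resp = requests.get(f"{PRICELIST_URL}/prices/{product_id}", timeout=2)
        resp.raise_for_status()
        return resp.json()["price"]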

 

A system architecture built from independent elements has many advantages. Since each microservice can be developed and deployed separately, developers always have the option to use the best possible technology for the task at hand. Independence of elements also makes the entire system more resilient: if one function stops working, it does not bring the entire system down with it, which is much harder to achieve in a traditional monolithic architecture. Additionally, independent elements enable faster deliveries. Since the system is broken into small microservices, it is much easier to complete development and testing before releasing a new version. This arrangement also makes it easier for multiple teams to work simultaneously to quickly deliver and implement new functionalities and bug fixes.
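
The resilience claim can be shown in a small sketch: if a non-critical service fails, the caller degrades gracefully instead of failing the whole request. The recommendations service and its endpoint below are hypothetical.

    # Hypothetical graceful degradation. If the recommendations service
    # is unavailable, the product page still renders, just without
    # suggestions, instead of the whole request failing.
    import requests

    RECS_URL = "http://recommendations-service:5003"  # assumed address

    def recommendations_for(user_id: str) -> list:
        try:
            resp = requests.get(f"{RECS_URL}/users/{user_id}", timeout=0.5)
            resp.raise_for_status()
            return resp.json()["items"]
        except requests.RequestException:
            # A failing recommendations service must not take the page
            # down; fall back to an empty list and serve core content.
            return []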

 

Start-ups scaling up their product – and business in the process

 

Rapidly scaling up the business is a top concern for many start-ups after developing the first version of their product. Scaling up typically requires a second version (V2) of the product or system, and choosing the right technology – and architecture – is often the main factor that determines success. Many companies therefore choose microservices as the architectural base for moving a monolith to V2. A monolithic architecture can be an easy way to develop the first version of a product: it requires less effort than other approaches and allows for a shorter MVP time-to-market. Once the product has proven viable in the market, scaling it effectively and quickly becomes critical. More advanced architecture solutions are typically required, and microservices are often the most effective of them. They enable more development in shorter periods of time by allowing different functionalities to be developed separately from one another. Different teams can work concurrently and independently to develop different solutions and add them to the pre-existing monolith.

 

A microservice-based system is especially valuable for performance-sensitive online products. When rapidly scaling up an online product, the amount of traffic the system can handle must be considered, so it is important to design the appropriate architecture from the very beginning. Since each microservice scales horizontally with ease, it is simple to increase capacity for systems that receive heavy traffic, whether steadily or in spikes. Incorporating microservice-based architecture to extend the functionality of a system is a low-entry-barrier path to scalability, since it does not require redesigning the entire monolith from the ground up. New solutions can be added on top of existing code, and immediate benefits can be gained from the improved performance of the new functions.
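
One practical prerequisite for this kind of easy horizontal scaling is keeping each service stateless. A common pattern, sketched below under the assumption of a shared Redis instance and the redis Python client, is to push per-session state into an external store so that any replica can serve any request.

    # Hypothetical stateless request handling. Because no state lives
    # in the process itself, any number of identical replicas can sit
    # behind a load balancer during traffic spikes.
    import redis

    store = redis.Redis(host="shared-redis", port=6379)  # assumed address

    def record_visit(session_id: str) -> int:
        # INCR is atomic in Redis, so concurrent replicas never clash.
        return store.incr(f"visits:{session_id}")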

 

Business scalability is one of the main things investors look for in start-ups. A well-designed, highly scalable company built on detailed market research attracts investors and brings in capital, and a well-rounded product with real scalability potential is a crucial part of every tech business model. At the core of every piece of software is its architecture, which should allow the product to reach its full potential easily and seamlessly. In many cases, microservices are the best way for start-ups to achieve seamless scalability in their products and overall business models.

 

Mature companies bringing their system to the next level

 

As a company matures, its processes and products become more complex, and it gets harder to incorporate new functionalities into existing systems. This is especially true if the new functions need to be fully integrated with the rest of the system without extensive downtime. Microservices allow new functionalities to be implemented in pre-existing systems without complete overhauls, so customers can immediately benefit from new features added to an old system. Many corporations rely heavily on microservices for their global systems because millions of users worldwide can immediately use new services that are seamlessly incorporated into the core system.

 

After adding new functionalities to an old system, a natural next step is to migrate the system to a microservice-based architecture. This ensures that the “new” system will be more flexible and able to easily incorporate changes as new functionalities and needs emerge. Typically, a plan is established for migrating the system to microservices chunk by chunk. Part of this plan involves establishing which units need to be migrated first, which can be replaced with new services, and which do not need to be migrated at all. Before proceeding with migration, a comprehensive system architecture review is needed to make the system microservice-ready. For some systems, a full migration to microservices is not possible or desirable. This is especially true for very old systems that have not been modernized over the years and are no longer well understood. Systems with a limited user base are also poor candidates, because further investment in them rarely makes economic sense. In these cases, it may be more reasonable to retire such systems entirely and rebuild them on modern architecture. Even then, they can still benefit from the addition of microservices to the existing or redesigned system.

Microservices are versatile and able to improve almost any system. They can be used to completely redesign old, bulky systems bit by bit, or to add new units on top of a pre-existing monolith. This distinct set of features is what led global organizations like Autodesk and the Red Cross to use microservices as part of their international strategies.

 

Moving from monolith to microservices – doing it the right way

 

Whenever companies, large or small, decide to move to a microservice-based architecture, there is prep work to be done before migration. Before designing the system architecture, the domain needs to be mapped out and its boundaries and shortcomings understood. There is a good chance, especially if the system has grown bloated, that it is not fully comprehensible in its current form; a well-performed mapping makes the structure easier to understand. Only after properly mapping the system should the new architecture be designed. During design, consider whether the monolith will coexist with the microservices; if so, the new architecture must be compatible with the monolith so that the two can communicate effectively.

 

When diving into microservices, take one step at a time. Design the platform and infrastructure required to run the microservices first, as they need a more advanced approach to deployment and monitoring. Typically, microservices are deployed on container orchestrators like Kubernetes, Docker Swarm, or Mesos, which provide flexibility in defining how the services are deployed, run, and monitored. Service mesh frameworks (e.g. Istio, Conduit) can also be used; they provide configuration options that are useful for microservice projects.
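
As a small illustration of the flexibility such orchestrators provide, the sketch below scales a deployment up and down using the official Kubernetes Python client; the deployment name, namespace, and replica count are assumptions, and a configured kubeconfig is taken for granted.

    # Hypothetical scaling of a microservice on Kubernetes using the
    # official Python client (pip install kubernetes). Assumes an
    # existing "cart-service" deployment in the "shop" namespace.
    from kubernetes import client, config

    config.load_kube_config()  # reads the local kubeconfig
    apps = client.AppsV1Api()

    # Raise the replica count for a traffic peak; scaling back down
    # later is the same call with a smaller number.
    apps.patch_namespaced_deployment_scale(
        name="cart-service",
        namespace="shop",
        body={"spec": {"replicas": 5}},
    )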

One of two paths can be taken when starting a microservice journey. An existing monolith can be decomposed chunk by chunk until the entire system has been migrated to separate microservices. Alternatively, new functionalities can be added as new microservices on top of a pre-existing monolith. This second approach keeps the old system intact and builds new infrastructure around it; it is the best choice when the current system is bulky or old and not well understood, because in such cases it is preferable to build new functionalities around the monolith.

 

Regardless of which path is taken, it is crucial to establish a communication channel for information exchange between the old monolith and the new microservices; without it, the system cannot work properly. The recommended way of doing this is to make the monolith microservice-aware so that it “plays nicely” with the new entities. However, this is not always possible and depends on the monolith’s original structure.
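
One way to make the monolith microservice-aware, sketched here with hypothetical names and endpoints, is to wrap calls to the new service in a small adapter inside the monolith, so the rest of the legacy code never knows whether a feature is served locally or remotely.

    # Hypothetical adapter inside the monolith. Legacy code keeps
    # calling send_notification(); only this adapter knows the feature
    # now lives in an external notifications microservice.
    import requests

    NOTIFICATIONS_URL = "http://notifications-service:5004"  # assumed address

    def send_notification(user_id: str, message: str) -> bool:
        resp = requests.post(
            f"{NOTIFICATIONS_URL}/notify",
            json={"user": user_id, "message": message},
            timeout=2,
        )
        return resp.ok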

 

First steps are always the hardest, and it is important not to dive in head-first when approaching this subject. Let engineering teams learn from their first microservice development, even if it means learning from their mistakes. Allocate additional time to review the code and the key decisions regarding system design and implementation; this will likely result in a better final system. When starting a new project, it is often necessary to spend more time on things that could be done faster in a mature environment. That time is not wasted – lessons are learned.

Invested time begins to pay off once the architecture and development approach have been tested, validated, and optimized. Subsequent microservices can be built at a much faster pace by applying the established framework. To make the process even more efficient and cost-optimized, many companies decide to bring an external team on board; often the entire process is offshored to remote or external teams to further streamline the transition. There are countless success stories of this approach among both start-ups and large corporations.