
Prevent Your Products From Being Built on Legacy Systems

Andrew Gauvin
October 1, 2021

Defining a legacy system is not so simple, but the term usually refers to systems that no longer receive support or maintenance, are built on outdated technology that is incompatible with more current solutions, or are no longer available for purchase.

You may be perplexed as to why companies still use them, since having your product labeled a legacy system by your team or clients can be a nightmare for any product manager. 

As we mentioned in the previous post, your product doesn’t need to be 10 years old to be considered a legacy system by your customers or tech teams. Incredibly, this label could actually stick from day 1 of your product going live.

We’ll set the context for preventative and corrective measures your product team can take to keep your product from becoming legacy technology. We’ll look at two high-level categories of reasons your system may obtain the Legacy label: evolving user expectations and technological architecture missteps.

Legacy Systems Born from Evolving Customer Expectations

Product managers and their software architects can help avert the legacy system label by first reviewing, in more detail, which customer expectations have changed over time:

  • user interfaces,
  • security,
  • performance,
  • analytics vs. reporting,
  • open APIs, and
  • cost of ownership from licensing and developer costs

Evolving User Interface Expectations

The most obvious change in customer expectations has been around the user interface. Here is a brief history of the trends that have driven software to obtain the Legacy label:

’90s customers wanted a mouse. Thirty years ago, with the dominance of Windows in more and more homes and workplaces, the obvious legacy system problem for most product managers was that users didn’t want to learn and interact with traditional “console” interfaces: MS-DOS, Unix, or mainframe applications. The UIs of most business software did not include a graphical user interface (GUI). Though keyboard/console-driven interfaces may arguably have been more efficient for many “power users”, the usability and training burden on staff was lessened by building mouse-driven GUIs.

’00s customers wanted their software in the browser. Twenty years ago, legacy system pains were driven by users’ expectation of accessing their business data through a web browser, both inside the organization and externally.

’10s customers expected mobile apps. In the last 10 years, the problem has been delivering services to mobile devices. Companies have upgraded their web interfaces with “mobile-first” or responsive web UIs, but more and more are being pushed to “go native”, building mobile apps for the iOS and Android platforms that can exploit the device’s hardware (sensors, camera, etc.).

The most obvious emerging expectation for more and more business applications is “hands-free” interaction (i.e., not typing), using the sensors and camera of mobile devices and the power of the cloud AI interfaces offered by Apple’s Siri and Amazon’s Alexa.

Evolving Security Expectations

For the first few years, Gmail wasn’t running on SSL. Most PCs didn’t have a password prompt. Most Wi-Fi hotspots were not protected by a password. Most information technology professionals knew for years that these problems existed before the products, policies (and laws) of companies caught up to close some of these obvious gaping holes.

More innovation and the inevitable emergence of more powerful computers will keep creating critical exploits. Examples of new threats include sophisticated “phishing” driven by artificial intelligence and the massive increases in computing power (quantum) that will provide ways to crack the most secure systems and fool the most technically sophisticated users.

Evolving Performance Expectations

Waiting overnight for reports or hours for queries to run has become unthinkable for most users, even on the largest datasets. They have become accustomed to cloud-based services for search, email, and analytics that return results instantly over very large datasets. Analysts, traders, and executive decision-makers all know their time is incredibly valuable, and waiting for answers to “simple” queries has become unacceptable.

Evolving Analytics Expectations

Data exploration tools such as Google Analytics and a wave of more affordable Online Analytical Processing (OLAP) BI tools (e.g., Tableau, QlikView, Microsoft PowerPivot, Google Data Studio) have become available beyond the heavyweight enterprise BI software. Business users increasingly expect to analyze their data in real time to inform decisions.

Giant systems optimized for transactional processing could be queried directly to create monthly, weekly, or maybe daily “canned” reports, but they could not take the load of ad-hoc queries from dozens, hundreds, or thousands of users, each expecting responses within a couple of seconds.
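As a concrete illustration of the corrective pattern, here is a minimal sketch, assuming Node.js with the pg driver and hypothetical orders and analytics_daily_sales tables, of pre-aggregating transactional data into a separate reporting store so that interactive dashboards never hammer the OLTP system:

// Minimal sketch: offload analytics from the transactional database by
// pre-aggregating into a rollup table. Table names are hypothetical.
import { Pool } from "pg";

const oltp = new Pool({ connectionString: process.env.OLTP_URL });      // transactional DB
const analytics = new Pool({ connectionString: process.env.OLAP_URL }); // reporting DB

// Run once a night (e.g., from cron): summarize one day's orders.
export async function rollupDailySales(day: string): Promise<void> {
  const { rows } = await oltp.query(
    `SELECT product_id, SUM(amount) AS revenue, COUNT(*) AS order_count
       FROM orders
      WHERE created_at::date = $1
      GROUP BY product_id`,
    [day]
  );

  for (const row of rows) {
    await analytics.query(
      `INSERT INTO analytics_daily_sales (day, product_id, revenue, order_count)
       VALUES ($1, $2, $3, $4)
       ON CONFLICT (day, product_id) DO UPDATE
         SET revenue = EXCLUDED.revenue, order_count = EXCLUDED.order_count`,
      [day, row.product_id, row.revenue, row.order_count]
    );
  }
}

// Dashboards read the small rollup table and get answers in milliseconds,
// no matter how many users hit them concurrently.
export async function revenueByProduct(day: string) {
  const { rows } = await analytics.query(
    `SELECT product_id, revenue FROM analytics_daily_sales WHERE day = $1`,
    [day]
  );
  return rows;
}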

Evolving Expectations About How to Access Data

Having applications available for users from the browser was incredibly useful, but once the business data was obviously available outside the corporate walls, partners and customers wanted to “consume the data” outside the confines of the provided user interface. They wanted to “mesh” the data provided by one partner with their own data, do their own analysis, and resell the data for other purposes. Selling business software now means providing many ways for your customers to access the data and logic of your system: access to the underlying database, various APIs, and SDKs.
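To make this concrete, here is a minimal sketch, assuming Node.js with Express, of the kind of read-only partner API customers now expect alongside the UI; the route, the API-key check, and the fetchInvoices() helper are hypothetical stand-ins:

// Minimal sketch of exposing system data through an open, read-only API.
import express, { Request, Response } from "express";

const app = express();
const apiKeys = new Set(["partner-demo-key"]); // issued per customer/partner

// Hypothetical data access; in a real system this would hit your database.
async function fetchInvoices(customerId: string) {
  return [{ id: "INV-1", customerId, total: 125.0, currency: "USD" }];
}

app.get("/api/v1/invoices", async (req: Request, res: Response) => {
  const key = req.header("x-api-key");
  if (!key || !apiKeys.has(key)) {
    return res.status(401).json({ error: "invalid API key" });
  }
  const invoices = await fetchInvoices(String(req.query.customerId ?? ""));
  res.json({ data: invoices }); // JSON that partners can mesh with their own data
});

app.listen(3000, () => console.log("Partner API listening on :3000"));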

Evolving Cost of Ownership Expectations

Finally, IT departments have evolved their expectations about how much it is acceptable to spend on hardware infrastructure, software licenses, and internal or outsourced developers:

Hardware infrastructure has evolved from huge mainframes to less expensive Unix/PC/Linux servers, and eventually to cloud services. The largest companies today are offering their newest services relying on underlying technology deployed to all three of these environments.

Even with the offshoring and globalization of software development, developing and maintaining custom software in-house has become increasingly expensive. Platform-as-a-service offerings such as Salesforce have thrived mostly on the promise of lowering the cost of creating in-house software.

Software licensing costs have had to evolve as computing infrastructure and the number of users have changed. Your software product could be considered legacy based on its licensing model alone.

Legacy Systems Born from Poor Software Architecture Decisions

Beyond external market expectations, there are some classic pressures and biases that software architects face when creating a new technology product. The two most common mistakes are:

Cost-Reducing Silver Bullet Technologies

Some technology selections promise to massively reduce the effort needed to build your software. One silver bullet promises to eliminate the need to hire talented developers at all; the other promises to turn a single talented developer into an entire team.

Promise to turn anyone into a developer

Some technologies promise to reduce the cost of development by avoiding the need to hire expensive developers: the experienced and formally trained computer scientists. Examples include Microsoft Access and Salesforce, which promise that “software engineers” can be born after a few days at a training seminar or after flipping through a “For Dummies” book.

Promise to turn good developers into Superheroes

Other technologies promise strong developers (i.e., the expensive ones) that they can eliminate the “drudgery” of some coding tasks, giving them the power of dozens of average developers. Various code generation frameworks spring up every few months, and many, many ambitious young software developers create their own. Many of these tools are great when proven (e.g., Rails and its many clones) and used for the business problem they were designed for.

However, when you draw outside the lines with new ideas or requirements, it takes a very sophisticated developer to debug the creations of these tools, and you are often left with “write-only code”: an unmaintainable piece of software.

(Relatedly, you need to make sure your outsourcing partner isn’t choosing the path of most effort. It is true that creative ambitious developers have been stuck with “heavy” architectures that support projects with tens of thousands of billable dev hours. Frameworks/technologies have been successfully pushed by the unholy alliance between the large consulting firms and the large product companies.)

Resume-Driven Architecture Biases

Another prime category of architecture mistakes is those driven by the resumes (past and dreamed-of) of the team making the software architecture decisions. Your team can lack ambition or have too much:

The team lacks ambition. They base the architecture decision solely on what they know best, regardless of the business problem or technology maturity. This unambitious team uses the technology they know in the way they have always used it, without much reflection or learning from the market. Many developers will stick with such technology even when it is obviously flawed in some major way or no longer popular (which will increase hosting and long-term maintenance problems).

The team is too ambitious. They base the architecture decision on what will look best on their resume. Your team can often choose technology that gives them “street cred” among their developer peers… and, even more importantly, for their next gig or resume. Plus, it’s simply fun for smart folks to learn new stuff (at least for those not working under a fixed-price, fixed-time contract). The popularity of a technology is important, both for recruiting developers for a growing team and for the comfort level of customers who might want to host your product themselves. However, the technology choice should clearly be “mature”.

Tier by Tier, Tear by Tear :( Examples of Architecture Decisions Gone Bad

We’ll conclude this post by looking at some specific examples of legacy system architecture decisions at each tier of a modern application.

The Legacy User Interface Tier Selections

Although browser-driven web applications have been around for nearly 20 years, the technology trends have been strong and sweeping. This was first driven by the “browser wars”, with Microsoft keeping JavaScript/HTML/CSS unstandardized between IE and alternative browsers. Unhelpfully, the problem was also being “solved” by competing ideas from Sun Microsystems/Oracle (Java Applets) and Macromedia/Adobe (Flash).

Then, once the browser stabilized as a software platform, we still saw huge turnover in popular-today, dead-tomorrow web UI frameworks. We assume this was mostly caused by two forces:
a) many web developers are not formally trained as software developers,

b) formally trained computer science developers kept coming up with ways to make web UI development feel like “real development”. So we’ve seen professional web-tier technology trends for “web applications” come and go extremely quickly:

  • ExtJS (later called Sencha) - made web applications work like PC desktop applications, and it was pushed by Yahoo, so it must be better.
  • GWT - Google’s in-house tech used for its advertising platform, promising Java developers the ability to create HTML/CSS/JavaScript UIs
  • Backbone.js - a framework that Ruby on Rails developers could love
  • Angular - a framework developed by some folks at Google; it’s complex, so it must be better :)
  • React - a framework developed by Facebook, so it must be even better
  • The need to build mobile experiences, combined with a severe shortage of developers with the skills to learn/support native development (in Objective-C or Java).

The Legacy Middle Tiers

Mobile backend as a service is the most recent example. User interface developers wanted to build and deploy their applications without having to learn or build the middle tiers. Some of the darlings in this space were quickly acquired (Parse by Facebook, Firebase by Google), and Parse was promptly left to die.

A few years ago, Facebook announced that Parse, with hundreds of thousands of live mobile apps using it, would shut down. Thousands of developers had to migrate their back-ends just to keep their apps running.

The Legacy Data Tiers

Relational databases have shown their age, but they have proven themselves as reliable backbones for scalable transactional systems. Yet teams reach for NoSQL databases for problems that relational databases would handle just fine.

They also reach for neural network databases for analytics where star schemas hosted on traditional relational databases would be sufficient.
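For reference, here is a minimal sketch of the kind of star schema that is often sufficient, modeled as TypeScript types along with the sort of aggregate query a BI tool would issue; all table and column names are hypothetical:

// Minimal sketch of a star schema on a traditional relational database.
// One narrow fact table references small dimension tables; BI tools
// aggregate over it with plain SQL.

export interface DimDate     { dateKey: number; day: string; month: string; year: number }
export interface DimProduct  { productKey: number; name: string; category: string }
export interface DimCustomer { customerKey: number; region: string; segment: string }

// Fact rows are narrow: foreign keys to dimensions plus numeric measures.
export interface FactSales {
  dateKey: number;
  productKey: number;
  customerKey: number;
  quantity: number;
  revenue: number;
}

// The kind of query a dashboard issues against this schema: group the
// fact table by a couple of dimension attributes and sum the measures.
export const revenueByCategoryAndMonth = `
  SELECT p.category, d.month, SUM(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date d    ON d.date_key    = f.date_key
   GROUP BY p.category, d.month
   ORDER BY d.month, revenue DESC;
`;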

Avoid Being Labeled ‘Legacy Software’

If you’re concerned your software might be labeled as a legacy system, you can start by going over a practical checklist to see if your tech teams are taking the right steps. Alternatively, you can get in touch with our team to find out what more you can do to avoid offering legacy technologies to your users - giving your software and business a much longer life span.