The way the world creates code is changing. Developers must now contend with larger volumes and varieties of code—while producing quality software at high velocity—to deliver business value.
In the face of this increased complexity, developers’ ability to efficiently write and make changes to their enterprise’s code to meet tight deadlines and stringent quality and security requirements is paramount. Developer productivity in the era of so-called "big code" is mission-critical.
Understanding these changes, and what is driving them, can help developers stay competitive and deliver quality software fast. In this eWEEK Data Points article, Quinn Slack, co-founder and CEO of universal code search provider Sourcegraph, presents four trends fueling the changing landscape of big code.
Data Point No. 1: Volume
The amount of code in the world is growing rapidly because software has become the fundamental driver of innovation in nearly every industry. Developers increasingly work with larger, more interdependent codebases that mix proprietary and open source code.
Traditional developer tools, such as editors and IDEs (integrated development environments), are beginning to fall short as a result of this increased volume. These tools were designed for individual developers working on a single repository, rather than for software teams working on multiple repositories and developing large codebases at scale.
Data Point No. 2: Variety
Before big code, companies were Microsoft shops using Visual Studio and .NET products, or Linux shops using only the LAMP stack; each organization relied on a single technology stack, with its code stored in one place.
Today, developers use whatever the right technologies are for the job. This has resulted in a major uptick in the variety of programming languages, code hosts, repositories, version control systems, services and APIs at developers’ disposal.
To create highly competitive products, organizations need to uncover ways to navigate and analyze their massive stores of code, regardless of system, repository or language. Developers require the ability to efficiently find the information they need to do their jobs in today’s collaborative, multidimensional development environment. Failure to manage this variety can cause programming productivity to suffer.
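To make the multi-repository search problem concrete, here is a minimal sketch of scanning several repository checkouts for a pattern, regardless of language. The function name, repository paths and file extensions are illustrative assumptions, not part of any particular product:

```python
import re
from pathlib import Path

def search_repos(repo_paths, pattern, extensions=(".py", ".go", ".java")):
    """Scan every source file under the given repository roots for a regex
    and yield (file path, line number, matching line) tuples."""
    regex = re.compile(pattern)
    for root in repo_paths:
        for path in Path(root).rglob("*"):
            # Skip directories and files outside the language allowlist.
            if path.suffix not in extensions or not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # Unreadable file; move on.
            for lineno, line in enumerate(text.splitlines(), start=1):
                if regex.search(line):
                    yield (str(path), lineno, line.strip())

# Hypothetical usage across two local checkouts:
# for hit in search_repos(["repos/billing", "repos/auth"], r"handle_login"):
#     print(hit)
```

A linear scan like this illustrates the problem but does not scale to big code; dedicated code search tools build an index ahead of time so that queries across thousands of repositories return in milliseconds.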
Data Point No. 3: Velocity
Accelerated delivery cycles—and disciplines such as agile development—mean code is changing faster and being shipped virtually every day. Teams are under pressure to deliver quality software continuously, and any development lags can mean late releases, poor quality, frustrated teams, unhappy customers and noncompetitive products.
Data Point No. 4: Value
The era of big code highlights the value of efficient software development. Code is at the root of countless innovations that improve people’s lives every day and has quickly become the core intellectual property of most companies. Developers contribute to this business value directly by delivering high-quality software.
As the software development landscape continues to evolve, organizations that prioritize finding ways to mitigate the challenges associated with volume, variety, velocity and value will come out on top.
If you have a suggestion for an eWEEK Data Points article, email [email protected].