Introduction to REST architecture

This article is a brief introduction to the REpresentational State Transfer (REST) architecture. It is intended for aspiring/junior software developers and other technical professionals who would like to have a better understanding of REST.

Representational State Transfer is an architectural style for creating and consuming web services. Services that conform to its constraints are referred to as “RESTful.” REST is common in microservice architectures because it allows independent systems to interoperate. It is also stateless: each request carries all the information needed to process it, and requests and responses do not rely on prior messages.


One common use for RESTful web services is to access data stored in a database. To use a REST API, the endpoint URL and its available resources and actions must be known. For example, an endpoint might allow GET actions to retrieve and display data about movies.


In this example, a client sends a GET request (a safe method, meaning it is not intended to modify the target system) to the API endpoint, specifying the desired response media type in the Accept header. There are a variety of ways to send a request programmatically or through tools such as curl or Postman, but for now we will focus on the concepts.

This request assumes we already know the movie ID: the record with ID 12345 is requested from the /movies/ endpoint.

GET /movies/12345 HTTP/1.1
Accept: application/json


A successful response includes an HTTP 200 OK status, along with headers describing the type and length of the content. The body of the response follows the headers; in this case, the returned data is formatted as JSON.

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: ...

{
  "movie": {
    "movie_id": 12345,
    "movie_title": "The Matrix",
    "genres": ["Action", "Sci-Fi"],
    "links": {
      "update": "/movies/12345/update",
      "delete": "/movies/12345/delete"
    }
  }
}

The response body includes details about the requested movie (12345), as well as links to additional endpoint actions such as updating or deleting this specific record.
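To make this concrete, here is a minimal sketch in Python of decoding such a response body. The field values mirror the illustrative example above; they are not from a real API.

```python
import json

# A response body shaped like the illustrative movie example above.
body = """
{
  "movie": {
    "movie_id": 12345,
    "movie_title": "The Matrix",
    "genres": ["Action", "Sci-Fi"],
    "links": {
      "update": "/movies/12345/update",
      "delete": "/movies/12345/delete"
    }
  }
}
"""

movie = json.loads(body)["movie"]   # decode the JSON body into a dict
print(movie["movie_title"])         # -> The Matrix
print(movie["links"]["update"])     # hypermedia link for a follow-up request
```

In practice the body would come back from an HTTP library rather than a string literal, but the decoding step is the same.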

Characteristics of system design

There are a few fundamental characteristics of system architecture that influenced the development of REST.

Colloquially referred to as the “-ilities,” there are several key characteristics of a solid technical architecture. Many of these principles are related to each other as they must work together to achieve their intended functionality. However, these characteristics do sometimes force design trade-offs due to competing needs.

The following are characteristics of a high-quality system architecture:

  • Performance
  • Scalability
  • Simplicity
  • Mutability
  • Visibility
  • Portability
  • Reliability


Performance

Technical performance (such as system, network, or storage throughput) plays an important role in the performance of the overall system. However, a well-architected system is primarily focused on user-perceived performance, i.e., minimizing the latency between interaction and response.


Scalability

A well-designed system should be able to adjust to demand. Without scalability, an increase in users can negatively impact the overall performance of a system. By simplifying and decentralizing internal components, the system should be able to redistribute load across multiple service providers.


Simplicity

Simplicity enables each component of the system to evolve independently and makes changes easier for developers to implement.


Mutability

Sometimes referred to as extensibility, this quality attribute makes future changes easier to implement, potentially without affecting the system’s operational status.


Visibility

Visibility refers to transparency between modular components: an intermediary should be able to monitor, and if necessary mediate, the interactions between them.


Portability

For a system to be portable, it should be able to operate in multiple environments, remaining agnostic to the architecture’s underlying hardware or software platform.


Reliability

Reliability is all about maximizing the Mean Time Between Failures: the ability of a system to be resilient in the face of total or partial failures. This can be supported by avoiding design bottlenecks, implementing redundancy and monitoring, and creating operational procedures such as a maintenance schedule.

Implementation of REST

Client-server model

The client/server model is a conceptual framework for separating tasks between a service provider (server) and a requester (client). In the context of REST, a server is responsible for storing, accessing, and performing operations on data or resources, while a client is responsible for requesting and subsequently formatting the desired data or resources.


Stateless communication

Stateless communication requires that every request made to the server include all the information needed to understand it. Each request must be independent of any previous messages, as clients are responsible for managing state. This is a design trade-off: it can increase the number of network requests, but it supports the architectural attributes of visibility, reliability, and scalability.
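As a rough sketch of what statelessness looks like in practice (the token and paths here are made up), each request is assembled with its full context every time, rather than relying on a server-side session:

```python
def make_request_lines(method, path, token):
    """Build the raw lines of a self-contained HTTP request."""
    return [
        f"{method} {path} HTTP/1.1",
        "Accept: application/json",
        f"Authorization: Bearer {token}",  # credentials resent every time; no session
        "",                                # blank line ends the header section
    ]

first = make_request_lines("GET", "/movies/12345", "demo-token")
second = make_request_lines("GET", "/movies/67890", "demo-token")
print(first[2] == second[2])  # -> True: the same full context accompanies each request
```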


Cacheable data

Certain data provided by the server can be marked as cacheable, so that the client can reuse it for follow-up requests. This helps alleviate the architectural drawbacks of stateless communication. However, it can negatively impact reliability, as cached data can become stale or fall out of sync with the server.
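For illustration, a client-side freshness check might look like the following sketch. It assumes the server sends a standard Cache-Control: max-age header; the helper function names are my own.

```python
import re
import time

def max_age_seconds(cache_control):
    """Extract the max-age directive from a Cache-Control header value."""
    match = re.search(r"max-age=(\d+)", cache_control)
    return int(match.group(1)) if match else None

def is_fresh(fetched_at, cache_control, now=None):
    """Return True if a response fetched at `fetched_at` may still be reused."""
    limit = max_age_seconds(cache_control)
    if limit is None:
        return False              # no max-age: treat as not reusable in this sketch
    if now is None:
        now = time.time()
    return (now - fetched_at) < limit

fetched = 1_000_000.0
print(is_fresh(fetched, "public, max-age=3600", now=fetched + 60))    # -> True
print(is_fresh(fetched, "public, max-age=3600", now=fetched + 7200))  # -> False
```

A real client would also honor directives like no-store and revalidation headers, but the basic idea is the same: reuse cached data only while the server says it is fresh.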

Layered system

In the same way that the client/server model supports separation of concerns, the REST architecture allows for a layered system that prevents components from managing anything outside of their own scope. Additional layers for security and network management (e.g., proxies, load balancers, firewalls, and other intermediary devices) can be introduced to further separate business logic from clients, which remain unaware of the internal workings.

Uniform interface

Broadly speaking, “RESTful” architectures are meant to simplify and decouple the architecture so that each component can be developed independently. But they are also meant to provide a uniform interface, and four additional constraints help deliver it.

Resource identification

Because data can be presented in multiple formats (HTML, XML, JSON) that are separate from the server’s stored data, each resource must be uniquely identified when requested from the server, typically by a URI.

Resource manipulation via representation

Once a client has received a resource from the server, it has everything necessary to make changes to that resource (assuming sufficient permission to do so). For example, a resource containing a list of customers includes the customer IDs, which can be used to construct an additional request to modify or delete that customer’s data on the server.
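As a sketch of this idea (the /customers path is illustrative, not mandated by REST), the IDs contained in a representation are enough to construct follow-up requests:

```python
# A representation received from the server (values are made up).
customers = [
    {"customer_id": 7, "name": "Ada"},
    {"customer_id": 9, "name": "Grace"},
]

# Each ID is enough to build a request that modifies or deletes that record.
delete_requests = [
    f"DELETE /customers/{c['customer_id']} HTTP/1.1" for c in customers
]
print(delete_requests[0])  # -> DELETE /customers/7 HTTP/1.1
```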

Self-descriptive messages

Each request provides the server with all relevant information necessary to complete the client’s desired operation.

Hypermedia as the engine of application state

Responses from the server should include links that allow a client to dynamically discover additional operations that are available to it. For example, if a client has requested to read details about a customer ID, the server would respond with links to modify or delete that record in the server database.
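Continuing the movie example from earlier, a client might discover the available operations like this (a sketch; the link structure is illustrative):

```python
import json

# A server response embedding hypermedia links, shaped like the earlier example.
body = (
    '{"movie": {"movie_id": 12345, '
    '"links": {"update": "/movies/12345/update", '
    '"delete": "/movies/12345/delete"}}}'
)

links = json.loads(body)["movie"]["links"]

# The client learns what it can do next from the response itself,
# rather than from hard-coded knowledge of the API.
available_actions = sorted(links)
print(available_actions)  # -> ['delete', 'update']
```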

Using GillesPy2 to simulate inhalation anthrax

While working on my Computer Science degree, I was invited to participate in an undergraduate research project that allowed me to combine my interests in programming and biology. I converted a predictable, but simplified, model of inhalation anthrax into a model that accounts for the probabilistic nature of biological systems (i.e., deterministic to stochastic). The project was a collaboration between two UNC Asheville professors: Dr. Brian Drawert (Computer Science) and Dr. Megan Powell (Mathematics).

Dr. Drawert is a researcher in systems biology, a field that attempts to model and analyze complex biological systems using computational algorithms. He is also the author of GillesPy2, an open-source software project designed to let scientists easily create those models and use stochastic simulations to study outcomes.

Dr. Powell specializes in infectious disease dynamics. Her primary research focus is inhalation anthrax, a disease caused by the bacterium Bacillus anthracis, the same bacterium used in several high-profile acts of terrorism in the early 2000s.

By translating Dr. Powell’s deterministic model of inhalation anthrax into the stochastic form used by GillesPy2, we were able to deliver new insights on the model and make improvements to software quality & usability.

The following abstract is from a research paper I wrote for the UNC Asheville Computer Science department detailing the results of our joint research & development project.


GillesPy2 is a scientific software package designed for computationally modeling and simulating biological processes. Traditional simulations are done using deterministic methods, whereas GillesPy2 is based on the stochastic simulation algorithm presented by Daniel T. Gillespie [2], which introduces probability as a driving force behind simulations. By using stochastic methodologies, scientists can more accurately represent stages of growth in biology models.

Developing user-friendly software is a challenging process, and often the best way to improve the quality of software is for developers to use it as a customer would – a practice known as “dogfooding.” Thus, in preparation for the public release of GillesPy2, we created a new stochastic model of inhalation anthrax (an often fatal infection caused by the B. anthracis bacterium) based on a deterministic model presented by Day et al. [1].

In this paper, we document the methods used to convert the deterministic model to stochastic form. By using GillesPy2 through the eyes of a research scientist developing a model, we were able to provide a high level of software quality assurance by discovering a number of bugs and other usability issues. As part of the quality assurance process, we also implemented automated testing of source code to prevent the reintroduction of resolved issues.

Finally, with the introduction of a stochastic model of lung-borne anthrax infections, we began investigating research questions such as how the early immune response affects pathogenesis of infection, how levels of late-stage bacterial load are affected by initial conditions and whether the number of spores consumed by white blood cells determines survival rate.

  1. Day, Judy, Friedman, Avner, and Schlesinger, Larry S. “Modeling the host response to inhalation anthrax.” Journal of Theoretical Biology 276(1), 2011: 199–208.
  2. Gillespie, D. T. (1976). “A general method for numerically simulating the stochastic time evolution of coupled chemical reactions.” Journal of Computational Physics 22(4): 403–434. Bibcode:1976JCoPh..22..403G. doi:10.1016/0021-9991(76)90041-3.
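To give a feel for the stochastic stepping described in the abstract, here is a minimal sketch of Gillespie’s algorithm for a single decay reaction A → B with rate constant k. This is my own illustration of the technique, not GillesPy2 code.

```python
import random

def ssa_decay(a0, k, t_end, rng):
    """Gillespie SSA for A -> B decay: returns the count of A at time t_end."""
    t, a = 0.0, a0
    while a > 0:
        propensity = k * a                 # total reaction propensity
        t += rng.expovariate(propensity)   # exponentially distributed waiting time
        if t > t_end:
            break                          # next event falls past the horizon
        a -= 1                             # one A molecule decays
    return a

rng = random.Random(42)                    # seeded for reproducibility
remaining = ssa_decay(a0=100, k=0.1, t_end=10.0, rng=rng)
print(remaining)
```

Each run draws different event times, so repeated simulations yield a distribution of outcomes rather than a single deterministic trajectory; that distribution is what the stochastic approach lets scientists study.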

Analysis of 007: James Bond Films

The James Bond films are the longest-running series in cinematic history, with a total of 24 films since Sean Connery’s debut as James Bond in 1962’s Dr. No. It is one of the highest-grossing franchises of all time, with an estimated worth of approximately $15 billion USD.

This is a data analysis project that I’ve considered doing for a while. In 2006, when Daniel Craig took the reins in Casino Royale, I was curious whether the movie had been more financially successful than its predecessors, despite the outcry of viewers who expressed uncertainty about the first “blonde Bond.” With the latest installment of 007 arriving in theaters this fall, I thought it would be a great time to dig into the films’ data to see what additional insights I could discover.

Read more on Medium

Exploring a Science Fiction Utopia

The year was 2048 A.D. when scientists made a huge breakthrough in observational astronomy. Astrophysicists had been skeptical for decades, but a naked singularity had finally been discovered as part of the Alpha Centauri system. A singularity such as this could house a wormhole allowing us to travel through hyperspace and into the far reaches of the universe. It was close enough to visit, and being a naked singularity there was no event horizon to be sucked into. The possibility for a major scientific breakthrough was huge! After a decade of intensive efforts by Earth’s leading physicists and aerospace engineers to build a craft to visit Sigma Centauri – as it was now being called – a successful launch finally happened in the year 2060 CE. A crew of 3 scientist-astronauts volunteered for a “Return Improbable” mission. It took 40 years for the Seeker I probe and the Seekers to reach ∑ Centauri. Initial telemetry data indicated the singularity was approximately 50 billion solar masses!


Self-reflection on Kacey’s Dream

The following is a self-reflection that was appended to Kacey’s Dream, an essay I wrote for a class (“Science Fiction & Utopia”) based on an idea for a sci-fi short story I had several years ago. That original idea was in turn inspired by a dream I myself had after staying up too late one night while studying astronomy and supernovae for fun (yes, I’m that nerdy).

Note: Shevek is a character from Ursula K. Le Guin’s 1974 utopian science-fiction novel The Dispossessed. The prompt for the essay involved creating a “Frankenstein” utopia combining elements of other novels, so you’ll also see elements of other utopian fiction included in Kacey’s Dream.

Kacey’s Dream

Kacey bolted upright in her bed, soaked in sweat and trembling in terror. “What the hell was that about?!” she said to herself. The frightened young woman reached for the notepad she kept on her nightstand ever since she started living on her own, only three short weeks ago. That’s when the “nightmares” first started. Before too much time had passed, Kacey started writing down the details of the dream by emulating her sleep and placing herself back there mentally.