Evolution Of The Remote Procedure Call

In the early days of software development, remote procedure calls (RPCs) were used to allow one computer to execute a procedure on another computer over a network. The RPC model allowed developers to write code as if the procedure was being executed on the local machine, simplifying the process of building distributed systems.

Initially, RPCs were implemented using raw HTTP sockets, which allowed data to be transmitted between computers using the HTTP protocol. However, this approach had limitations: it did not provide a way to define the structure and data types of the transmitted information.

To address this issue, developers began using XML to define the structure of the data being transmitted over the network. This allowed for more complex data types to be transmitted, making it possible to build more sophisticated distributed systems.

As the use of XML-based RPCs grew, developers started using web services, which provided a standard way to exchange data over the internet using a variety of protocols, including SOAP. Web services became popular for building distributed systems, as they provided a way to interoperate between different platforms and languages.

With the release of Windows Communication Foundation (WCF), Microsoft introduced a new way to implement RPCs that was more flexible and efficient than earlier approaches. WCF allowed developers to build distributed systems using a variety of protocols and message formats, including REST, which had become popular for building web-based APIs.

In recent years, developers have also begun using lightweight, efficient technologies such as MQTT and gRPC for implementing RPCs. These are designed to be efficient and easy to use, making them well-suited for building distributed systems that require high performance and low latency.

As the use of RPCs has evolved, developers have been able to build increasingly sophisticated distributed systems that are able to connect and communicate with other systems in complex ways.

The Godfather - Electronic Data Interchange (EDI)

Electronic Data Interchange (EDI) is a standardized format for exchanging business documents electronically, such as invoices, purchase orders, and shipping notices. EDI allows organizations to communicate and exchange information in a consistent and efficient manner, reducing the need for paper-based processes and improving the speed and accuracy of business transactions.

EDI is used in a variety of industries, including healthcare, manufacturing, and retail. It is often used to automate the exchange of data between trading partners, such as suppliers and customers, enabling them to quickly and easily exchange information related to orders, shipments, and invoices.

EDI is typically implemented using specialized software and infrastructure, including EDI translators, which convert EDI documents into a format that can be understood by the systems of the trading partners. EDI can be used to transmit data over a variety of communication channels, including the internet, private networks, and dedicated connections.

While EDI has been widely adopted as a standard for exchanging business documents, it has been largely superseded by newer technologies such as web services, which provide more flexibility and enable real-time data exchange. However, EDI continues to be used in many industries and remains an important part of the modern business landscape.

HTTP Sockets

Somewhere in the late 80s to early 90s, transmitting data over HTTP via sockets became commonplace. While this paved the way for advancements in client-server and distributed computing, it did have its shortcomings. A quick look at HTTP sockets:

HTTP sockets are a low-level network communication mechanism that can be used to send and receive data over the web while implementing a variety of different protocols.

The use of HTTP sockets provided a simple and flexible way to implement communication between systems across a variety of programming languages and platforms, while supporting different types of data such as text, binary, and multimedia. That said, HTTP sockets had their share of pitfalls: they required a detailed understanding of complex network programming, error handling and security posed significant challenges, and they did not perform well for larger and more complex data transfers.
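The low-level nature described above can be seen in a small sketch. The following Python example (illustrative only; it spins up a throwaway local server so the request has something to talk to) hand-crafts an HTTP request at the socket level, where framing, headers, and response parsing are entirely the caller's responsibility:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the example quiet
        pass

# A throwaway loopback server, just so the socket below has a peer.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# At the socket level, the HTTP request is just bytes we assemble by hand.
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n")
    raw = b""
    while chunk := sock.recv(4096):
        raw += chunk

# Parsing the response -- status line, headers, body -- is also our problem.
status_line = raw.split(b"\r\n", 1)[0]
body = raw.split(b"\r\n\r\n", 1)[1]
print(status_line)
server.shutdown()
```

Every detail a higher-level RPC framework would hide (request framing, header handling, connection teardown) is explicit here, which is exactly the burden the paragraph above describes.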

Stuck In The Middle Of Middleware - DCOM / CORBA

Moving into the mid-90s saw the introduction of protocols that required the additional overhead of middleware systems. These technologies served their purpose but were notorious for needing extra care and feeding to keep running consistently.

Distributed Component Object Model (DCOM)

Distributed Component Object Model (DCOM) was introduced in the mid-1990s and was widely used for building distributed systems in the Windows ecosystem.

DCOM is a Microsoft technology that was designed to allow software components to communicate with each other over a network, enabling developers to build distributed applications that could run on multiple computers.

DCOM used a client-server model, in which clients made requests to servers, which executed the requested operations and returned the results. DCOM used a variety of protocols, including RPCs, to enable communication between clients and servers.

Common Object Request Broker Architecture (CORBA)

Common Object Request Broker Architecture (CORBA) was developed in the late 1980s as a way to enable interoperability between different computing systems. It was widely used for building distributed systems in a variety of industries, including finance, telecom, and defense. 

CORBA is a standard for building distributed systems that allows software components to communicate with each other regardless of the programming language or operating system they are running on.

CORBA uses a client-server model, in which clients make requests to servers, which execute the requested operations and return the results. CORBA uses an object request broker (ORB) to enable communication between clients and servers, allowing them to invoke operations on each other as if they were local objects.

Despite their widespread use, both DCOM and CORBA have been largely superseded by newer technologies such as SOAP, REST, and gRPC.

Pre-Soak The SOAP With XML

One of the earliest and most influential foundations for later RPC protocols is the XML format, developed in the late 1990s, which gave textual data much-needed structure.

XML then became the backbone of an RPC protocol known as XML-RPC: a lightweight protocol that uses XML to encode its data and HTTP as a transport mechanism. It allows a client to call methods on a remote server and receive the results in a simple and straightforward way.

XML-RPC was an important development because it provided a way for different systems to communicate with each other using a common format (XML) and a widely available protocol (HTTP). This made it possible to build distributed systems that could interoperate across different platforms and programming languages.
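The call-a-remote-method-like-a-local-one idea is easy to see in practice. A minimal sketch using Python's standard-library XML-RPC support (server and client on loopback, with a hypothetical `add` method):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register an ordinary function so remote callers can invoke it.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Client side: the proxy makes the remote method look like a local call,
# marshalling the arguments into XML and POSTing them over HTTP.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)
print(result)  # 5
server.shutdown()
```

Under the hood, `proxy.add(2, 3)` is serialized into an XML `methodCall` document and sent as an HTTP POST, which is exactly the "common format plus widely available protocol" combination described above.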

However, XML-RPC had some limitations, and as a result, SOAP was developed as a more robust and feature-rich alternative. 

Suds And Duds - SOAP

SOAP (Simple Object Access Protocol) is a protocol for exchanging information in a decentralized, distributed environment. It is based on XML and uses HTTP for communication between client and server. SOAP web services use the SOAP protocol to send and receive messages, usually for the purpose of exposing or consuming web-based services.

RPC (Remote Procedure Call) is a method of requesting a service from a computer located in a different place. It is used to call functions or methods on a remote server as if they were local.

Pros of SOAP web services:
  • Widely-used and well-established protocol, so it has good support and a large developer community.
  • Platform-agnostic, so it can be used with a variety of programming languages and systems.
  • Supports multiple protocols (e.g. HTTP, SMTP) for communication.
  • Built-in error handling mechanism.
  • Built-in tooling in popular IDEs (e.g. Visual Studio) made for simple, tightly integrated interoperability.
Cons of SOAP web services:
  • Messages can be verbose and complex, which can make them less efficient than other options.
  • More rigid in structure than some other options, which can make it more difficult to work with.
  • Larger payload size compared to others.
Overall, SOAP web services are useful technologies for building distributed systems, but they may not be the best choice in every situation. It is important to consider the specific needs and requirements of your project when deciding which technology to use.
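The verbosity noted in the cons list is visible even in a minimal request envelope. A sketch using only the Python standard library to build one (the `urn:example` namespace and `GetPrice` operation are hypothetical; a real service defines its own operations in a WSDL):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# The envelope/body wrapper is mandatory boilerplate on every SOAP message.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# The actual call: a hypothetical GetPrice operation in a made-up namespace.
call = ET.SubElement(body, "GetPrice", {"xmlns": "urn:example"})
ET.SubElement(call, "Item").text = "widget"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

Even this trivial one-argument call carries namespace declarations and nested wrapper elements, which is where the larger payload sizes relative to plain JSON-over-HTTP come from.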

Rise Of REST

REST (Representational State Transfer) is a software architectural style that defines a set of constraints for designing web-based systems. It is often used to build APIs (Application Programming Interfaces) that allow systems to communicate with each other over the web.


In the context of RPC (Remote Procedure Call), a REST API can be used to expose a set of functions or methods that can be called remotely by clients. These functions are typically implemented on a server and can be accessed by clients using HTTP requests.

REST APIs are designed to be easy to use and understand, and they are based on a few key principles:
  • They use a uniform interface: REST APIs use the same set of HTTP verbs (e.g. GET, POST, PUT, DELETE) to perform different actions, and they use a common syntax for representing resources (e.g. URLs).
  • They are stateless: REST APIs do not maintain client state on the server, so each request is independent of the others.
  • They are cacheable: REST APIs can be designed to allow clients to cache responses in order to improve performance.
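The uniform interface and statelessness above can be sketched in a few lines. This toy dispatcher (the in-memory `items` store and routes are hypothetical, standing in for a real HTTP framework) shows the same verbs acting on URL-identified resources, with each request self-contained:

```python
# Toy REST-style dispatcher: verbs act uniformly on resources named by URL.
items = {}

def handle(method, path, payload=None):
    resource, _, item_id = path.lstrip("/").partition("/")
    if resource != "items":
        return 404, None
    if method == "GET" and item_id:
        return (200, items[item_id]) if item_id in items else (404, None)
    if method == "PUT" and item_id:
        items[item_id] = payload          # create or replace: idempotent
        return 200, payload
    if method == "DELETE" and item_id:
        items.pop(item_id, None)          # deleting twice is still fine
        return 204, None
    return 405, None

# Statelessness: each call carries everything the server needs to act on it.
print(handle("PUT", "/items/1", {"name": "widget"}))
print(handle("GET", "/items/1"))
print(handle("DELETE", "/items/1"))
print(handle("GET", "/items/1"))
```

Note how no per-client session is kept: the second GET returns 404 purely because the resource is gone, not because of anything remembered about the caller.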
Pros of using a REST API:
  • Widely supported and easy to use.
  • Flexible and can be used with a variety of different data formats (e.g. XML, JSON).
  • Scalable and can handle a large number of requests.
Cons of using REST API:
  • Does not provide as much built-in support for features like security and error handling as some other options.
  • Not as efficient as other options for certain types of data transfer.
  • Requires more manual HTTP interaction from a development standpoint; client-generation tooling is less mature than SOAP's WSDL-based tooling.
Overall, REST APIs are a popular and widely used option for building distributed systems and implementing RPC, but they may not be the best choice in every situation. It is important to consider the specific needs and requirements of your project when deciding which technology to use.

Google gRPC - Knows No Stranger

gRPC (gRPC Remote Procedure Calls) is a high-performance, open-source, universal RPC framework that enables clients to send requests to servers and receive responses over a network connection. It is designed to be language- and platform-agnostic, allowing for communication between a wide variety of client and server implementations.

gRPC is based on the HTTP/2 protocol and uses Protocol Buffers, a binary serialization format, to encode and decode data. This allows for efficient and compact data transmission, as well as the ability to support streaming data.

Some of the key benefits of gRPC include:
  • High performance: gRPC is designed to be fast and efficient, with low overhead and support for parallelism.
  • Language and platform agnosticism: gRPC supports a wide range of programming languages and platforms, allowing for communication between different implementations.
  • Compact and efficient data transmission: gRPC uses Protocol Buffers for efficient binary serialization, resulting in smaller payload sizes and faster transmission times.
  • Support for streaming data: gRPC supports both unary (single request/response) and streaming (multiple requests/responses) RPCs, allowing for the efficient transmission of large amounts of data.
  • Built-in security: gRPC includes support for secure transport via TLS/SSL, as well as other authentication and authorization mechanisms.
Overall, gRPC offers a high-performance and efficient way to facilitate communication and data exchange between clients and servers, making it well-suited for use in a wide range of applications.
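In gRPC, the service contract (including the unary and streaming call shapes listed above) is declared in a Protocol Buffers `.proto` file, from which client and server stubs are generated. A minimal illustrative contract (the service and message names below are hypothetical, not from any real API):

```protobuf
syntax = "proto3";

// Hypothetical greeting service for illustration only.
service Greeter {
  // Unary RPC: one request, one response.
  rpc SayHello (HelloRequest) returns (HelloReply);
  // Server-streaming RPC: one request, a stream of responses.
  rpc StreamGreetings (HelloRequest) returns (stream HelloReply);
}

message HelloRequest { string name = 1; }
message HelloReply   { string message = 1; }
```

Because both sides generate code from this single definition, the contract is enforced at compile time in each language, which is a large part of gRPC's cross-platform story.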

Anti-Social Social Club Players

Microsoft .NET Remoting

Microsoft .NET Remoting is a technology that allows applications to communicate with each other over a network in a distributed environment. It is a powerful tool for building distributed systems and can be used as an alternative to other RPC (Remote Procedure Call) protocols.

Some benefits of using .NET Remoting over other RPC protocols include:
  • Comprehensive and feature-rich framework: .NET Remoting provides a wide range of capabilities, including support for multiple communication channels (e.g. HTTP, TCP), a variety of data formats (e.g. binary, XML), and advanced features like object lifetime management and security.
  • Tightly integrated with the .NET platform: .NET Remoting is designed to work seamlessly with the .NET framework, which makes it easier to use for developers who are already familiar with the platform.
  • Highly flexible and configurable: .NET Remoting allows developers to customize many aspects of the communication process, including the transport protocol, the format of the data, and the activation and lifetime of objects.
As cool as Microsoft .NET Remoting is, it is not without its limitations: it is only available on the .NET Framework, so it cannot be used with other programming languages, and like the others it may not be as efficient as some options for certain types of data transfer.

Distributed Ruby (DRb)

Distributed Ruby (DRb) is a technology that allows software components written in the Ruby programming language to communicate with each other over a network, enabling developers to build distributed systems. DRb uses remote procedure calls (RPCs) to enable communication between components, allowing them to invoke operations on each other as if they were local objects.

DRb is a simple and lightweight RPC technology that is easy to use and well-suited for building distributed systems in the Ruby ecosystem. It supports a variety of communication channels and protocols, including TCP and UNIX sockets, and can be used to build both client-server and peer-to-peer distributed systems.

DRb is widely used in the Ruby community and is often used in conjunction with other Ruby libraries and frameworks, such as Rails, to build distributed applications. It is an important part of the Ruby ecosystem and has been widely adopted by developers for building distributed systems.

Java Remote Method Invocation (Java RMI)

Java Remote Method Invocation (Java RMI) is a technology that allows software components written in the Java programming language to communicate with each other over a network, enabling developers to build distributed systems. Java RMI uses remote procedure calls (RPCs) to enable communication between components, allowing them to invoke operations on each other as if they were local objects.

Java RMI is a powerful and flexible RPC technology that is widely used in the Java ecosystem. It supports a variety of communication channels and protocols, including TCP and HTTP, and can be used to build both client-server and peer-to-peer distributed systems.

Java RMI is an integral part of the Java platform and has been widely adopted by developers for building distributed systems. It is often used in conjunction with other Java libraries and frameworks, such as Spring and EJB, to build distributed applications.

Sensor Overload - MQTT

MQTT (originally MQ Telemetry Transport) is a lightweight, publish-subscribe messaging protocol that is designed to be used on low-bandwidth, high-latency networks. It is commonly used in Internet of Things (IoT) and Machine-to-Machine (M2M) communications, as it allows devices to communicate with each other efficiently and in a resource-efficient manner.

One of the key benefits of MQTT is its efficiency, as it uses a small code footprint and minimal network bandwidth. This makes it well-suited for use in constrained environments, such as those found in IoT devices and M2M communications.

MQTT also has a simple publish-subscribe messaging model, which allows devices to send and receive data without the need for direct communication between them. This allows for a decentralized communication system, where devices can publish data to a central broker, which can then distribute the data to any subscribed devices.
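The decoupling described above is the essence of the model. A toy in-memory broker sketch (a real deployment would use an MQTT broker such as Mosquitto and a client library; the topic name here is hypothetical) shows publishers and subscribers communicating only through topics:

```python
from collections import defaultdict

# Toy stand-in for an MQTT broker: routes messages by topic so that
# publishers and subscribers never need to know about each other.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = Broker()
readings = []

# A dashboard subscribes to a sensor topic...
broker.subscribe("sensors/temperature", lambda t, p: readings.append(p))

# ...and a device publishes a reading without knowing who is listening.
broker.publish("sensors/temperature", 21.5)
print(readings)  # [21.5]
```

In real MQTT the broker is a separate process and subscriptions support wildcard topics, but the routing-by-topic decoupling is the same.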

Another benefit of MQTT is its ability to support Quality of Service (QoS) levels, which allow for reliable message delivery even over unreliable networks. MQTT supports three QoS levels:
  1. QoS 0: At most once delivery. This means that the message may be delivered zero or one time.
  2. QoS 1: At least once delivery. This means that the message will be delivered at least one time, but may be delivered more than once in some cases.
  3. QoS 2: Exactly once delivery. This means that the message will be delivered exactly one time.
MQTT-RPC (MQTT Remote Procedure Call) layers a request/response pattern on top of MQTT, allowing remote procedure calls (RPCs) to be made over an MQTT connection. With MQTT-RPC, a client publishes a request message to a server and receives a response message in return. This allows for a simple and efficient way to perform RPCs over an MQTT connection.

Overall, the MQTT protocol and MQTT-RPC offer a lightweight and efficient way to facilitate communication and data exchange between devices, making them well-suited for use in IoT and M2M applications.

Where We Going Next?

Remote procedure calls (RPCs) have had a significant impact on computing, particularly as the use of mobile devices has increased in recent years.

Without the advancement of RPC capabilities, it is safe to say that cloud computing would not be where it is today, and on the consumer level it is safe to say that we would not have the elaborate ecosystem of mobile device applications at our disposal.

We shall see what new types of communication medium become available, especially as more and more machine learning and artificial intelligence demands are made with more and more need for sensor data.
