Why is RPC bad?

You could use POST as well, of course. If you don't want to wire everything together in your application explicitly, you could use a framework with GraphQL support to do it for you. Write a Groovy client or use your favourite language: Grab "com. I used raw-http, a Java library with zero dependencies that I wrote myself to make this kind of HTTP-based prototype easy!

Notice how a GraphQL "query" can look just like an RPC call, with the difference that it describes what the response should look like. That's because, unlike with gRPC, generating the types does not seem to be the standard approach in GraphQL, even though it is supported via graphql-java-type-generator or the more polyglot graphql-code-generator; if you're into TypeScript, Swift or Scala, there's also apollo-code-gen.
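To make the comparison concrete, here is a minimal sketch of sending a GraphQL query as a plain HTTP POST from Java, with no generated client code. The endpoint URL and the `hello` field are hypothetical, for illustration only:

```java
// Minimal sketch: a GraphQL query posted as JSON over plain HTTP.
// The endpoint URL and the "hello" field are made up for illustration.
import java.net.URI;
import java.net.http.HttpRequest;

public class GraphQLPost {
    // Builds the JSON body for a GraphQL request. Unlike a classic RPC
    // call, the query names the exact fields the response should contain.
    static String queryBody(String name) {
        String query = "{ hello(name: \"" + name + "\") { message } }";
        return "{\"query\": \"" + query.replace("\"", "\\\"") + "\"}";
    }

    public static void main(String[] args) {
        String body = queryBody("Mary");
        // A plain POST is enough; no generated client code is required.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphql")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(body);
        // HttpClient.newHttpClient().send(request, ...) would execute it,
        // but that needs a running server, so we only print the body here.
    }
}
```

The whole "client" is one string and one POST, which is exactly why a GraphQL query can pass for an RPC call at first glance.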

But all these type generators seemed clunky and under-documented to me, so I decided not to use them in this example. In summary, GraphQL is very powerful but not very simple. Matching the server code to your data storage may be challenging, but there are helper libraries and frameworks that can make this easier, and even databases that support GraphQL natively. Still, losing any kind of type-safety and a language-idiomatic way to make calls on the client is a major bummer.

Anyway, GraphQL may be appropriate for clients that deal with complex data that changes frequently, or for back-ends that need to use several data sources to serve a single request, I guess.

Thrift has been open-source since a whitepaper describing it was published, back when gRPC's predecessor, Stubby, was still closed-source, and it now has support for nearly 20 different languages. It has since become an Apache project. Download and install the Thrift compiler; building it from source can complicate the build on CI servers. The tutorial tells us to extract the tarball, enter the thrift directory and run the build. What it doesn't say is that the build will run for several minutes, as it builds and tests the Thrift implementation for several languages!

Well, never mind. Apparently, if you're on Ubuntu, you can just run apt install thrift-compiler. Create a Thrift file defining the data and service(s): namespace java com. Run gradle build to generate the Java files. Implement the server-side service: package com.
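The Thrift file referenced above might look something like this minimal sketch; the namespace and field numbering here are hypothetical, reconstructed from the truncated snippet:

```thrift
// hello.thrift - a hypothetical IDL for the example service.
// The java namespace below is a guess; the original was truncated.
namespace java com.example.thrift

service HelloService {
    // Returns a greeting for the given name.
    string hello(1: string name)
}
```

Running the Thrift compiler (or a Gradle plugin) against a file like this generates the HelloService.Iface interface the server implements and the HelloService.Client class the client calls.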

Implement the server exposing the service: the server code imports Thrift's TServer, TThreadPoolServer and TServerSocket classes, wraps a TServerSocket in a TThreadPoolServer.Args and serves the HelloService handler.

Implement the client: the client wraps a TBinaryProtocol (an implementation of TProtocol) around the transport and calls the generated HelloService.Client, printing the results with System.out. As with the other examples, you can now run the server in one shell and the client in another, which should print the hello messages as expected: Hello Mary, Hello again Mary. Thrift is really, really similar to gRPC.

It's amazing that Google and Facebook both figured that the RPC solutions existing at the time were not enough, and came up with something so similar, more or less independently (the Thrift whitepaper does mention Protocol Buffers, but as it was closed-source at the time, it's impossible to tell if they had access to the RPC design Google was using). However, I do think that both of their solutions are sub-optimal in that they require more boilerplate than I think is justifiable for simple projects, including a custom IDL and its compiler, and that all code written to integrate with their frameworks cannot be re-purposed to use another RPC implementation without a lot of work.

Secondly, having not only the data types but also the service base classes generated by protoc is quite a big limitation, as all the code implementing the services needs to be written specifically for gRPC.

Hence, I thought that a simpler, more JVM-friendly (but still usable on other platforms) RPC mechanism based on Protocol Buffers, which are great for serialization, was really needed. The disadvantage is that it's not as easy to use with non-JVM languages. Anyway, even though this is not nearly as mature and flexible as the other alternatives, it's simpler. Create a service interface representing the remote service: package com.
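To illustrate the general shape of such a mechanism, here is a toy sketch using only the JDK. This is not the library described in this post: it uses JDK serialization over a local socket instead of Protocol Buffers, and a dynamic proxy plus reflection to dispatch calls against a plain Java interface.

```java
// Toy sketch of a JVM-friendly RPC mechanism, for illustration only.
// NOT the library described above: JDK serialization replaces Protobuf.
import java.io.*;
import java.lang.reflect.*;
import java.net.*;

public class TinyRpc {
    // The "remote service" is just a plain Java interface.
    public interface HelloService { String hello(String name); }

    public static class HelloImpl implements HelloService {
        public String hello(String name) { return "Hello " + name; }
    }

    // Server: reads a (methodName, args) pair and dispatches via reflection.
    static Thread serve(ServerSocket serverSocket, Object service) {
        Thread t = new Thread(() -> {
            try (Socket s = serverSocket.accept();
                 ObjectInputStream in = new ObjectInputStream(s.getInputStream());
                 ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
                String method = (String) in.readObject();
                Object[] args = (Object[]) in.readObject();
                Method m = null; // naive lookup by name; fine for a sketch
                for (Method cand : service.getClass().getMethods())
                    if (cand.getName().equals(method)) m = cand;
                out.writeObject(m.invoke(service, args));
            } catch (Exception e) { throw new RuntimeException(e); }
        });
        t.start();
        return t;
    }

    // Client: a dynamic proxy that ships each call over the socket.
    @SuppressWarnings("unchecked")
    static <T> T client(Class<T> iface, int port) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface},
            (proxy, method, args) -> {
                try (Socket s = new Socket("localhost", port);
                     ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                     ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                    out.writeObject(method.getName());
                    out.writeObject(args);
                    out.flush();
                    return in.readObject();
                }
            });
    }

    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(0); // any free port
        serve(serverSocket, new HelloImpl());
        HelloService remote = client(HelloService.class, serverSocket.getLocalPort());
        System.out.println(remote.hello("Mary")); // prints "Hello Mary"
        serverSocket.close();
    }
}
```

The point of the sketch is the symmetry with local code: the service is an ordinary interface, so the same implementation could be called in-process or re-wired onto another transport without touching its code.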

Create a server-side implementation of the service: package com. Create a server exposing the service (the server code uses the RemoteServices class): package com.

Overall, in the world of system design, symmetry is highly desirable and asymmetry is tolerated when necessary. RPC is at best tolerated, and best used only in the corner of the world where it is applicable. Its advocates have been trying to generalize from a special case that is too restrictive.

I was excited to learn what this bold new networking concept was all about. That has always been the case. To my shock, there was nothing beyond that. And to add insult to injury, all of the peer-to-peer protocols were client-server, not peer protocols at all. If there is a message the application understands, then it was expected. Some models have kludged this by having a request with multiple replies, which might never have a final response. Interrupts are part of any operating system.

If RPC is more fundamental, then it must handle interrupts. Polling fits the RPC model. For this problem, Tanenbaum describes two situations: a terminal concentrator and sharing a file server.

He indicates that RPC can work nicely for the terminal server, with the concentrator requesting input from the terminal, in other words, polling. Not really. I have written this code. It really needs to be interrupt-driven; otherwise too much time is lost polling terminals with nothing to send, and it severely limits how many terminals can be supported. Even with a fast typist and echoing every character remotely (some systems do that), terminals are very slow devices. Tanenbaum should look at Telnet more closely.

Contrary to what most textbooks say, Telnet is not a remote log-in protocol but a terminal device-driver protocol. And while most terminal-to-host protocols were asymmetric and would seem to lean toward RPC, Telnet is symmetric, making it a character-oriented IPC facility. This brilliant insight greatly increased its flexibility and created a model that made other issues simple.

Again, not all problems are symmetric, but the solutions are better when they are. Telnet did have to solve the half-duplex terminal problem, and here again they found a brilliant way to see both cases as extremes of the same problem. Because of limited kernel memory, Telnet had to be a user process. Telnet can expect input at any time, from either the network or the user. To implement it, they had to have two processes, one for incoming traffic and one for outgoing, and hack stty and gtty for the necessary coordination between them.

The next thing that was done was to design and implement real IPC for Unix. A similar experience occurred years later trying to do IPC on Apollo workstations, which only had an asymmetric RPC-like mailbox facility.

The result was cumbersome, complex and a pain to build code on. The idea that a process can only be a client or only a server is religion, not engineering. Procedures do! Yeah, okay, I remember: Fortran function calls always have to return a value. Again, the lack of language infrastructure. But most of the other objections are really more of the same: not being precise and not defining a model, not making an RPC a procedure call, not providing the distributed OS support for the RPC, and so on.

Concluding Remarks. It has always seemed that what underlies the whole RPC phenomenon is a deep-seated fear among computer scientists of asynchrony. They go to great lengths to keep everything very deterministic, very linear, when the problems are precisely the opposite.

Is it cacheable or not? Does REST really win? Unfortunately, REST became a marketing buzzword. As such, you might not get cacheability, the API might have a bunch of wacky conventions, and there might not be any links for you to use to discover the next available actions.

It is not advisable of course, but it is possible. A huge source of confusion for people with REST is that they do not understand "all the extra faffing about", such as hypermedia controls and HTTP caching.
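As a tiny illustration of the HTTP-caching side of that "extra faffing about", here is a sketch of the ETag revalidation bookkeeping a client does. The If-None-Match and ETag header names are standard HTTP; the cache itself and the URL are illustrative:

```java
// Minimal sketch of ETag-based HTTP revalidation, the kind of "extra
// faffing about" REST clients get from plain HTTP. The header names
// (ETag, If-None-Match) are standard; the cache itself is a toy Map.
import java.util.HashMap;
import java.util.Map;

public class EtagCache {
    // Maps URL -> last seen ETag value.
    static final Map<String, String> etags = new HashMap<>();

    // Headers to send on the next GET: ask the server to reply
    // "304 Not Modified" if the representation hasn't changed.
    static Map<String, String> requestHeaders(String url) {
        Map<String, String> headers = new HashMap<>();
        String etag = etags.get(url);
        if (etag != null) headers.put("If-None-Match", etag);
        return headers;
    }

    // Record the ETag a response came back with.
    static void remember(String url, String etag) {
        etags.put(url, etag);
    }

    public static void main(String[] args) {
        String url = "http://example.com/orders/1"; // illustrative URL
        System.out.println(requestHeaders(url)); // first GET: no If-None-Match
        remember(url, "\"v42\"");                // server replied with ETag: "v42"
        System.out.println(requestHeaders(url)); // revalidation sends If-None-Match
    }
}
```

A 304 response then lets the client reuse its cached body, which is precisely the kind of win an RPC-style "execute this remote call" mindset leaves on the table.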

They do not see the point, and many consider RPC to be the almighty. To them, it is all about executing the remote code as fast as possible, but REST (which can still absolutely be performant) focuses far more on longevity and reduced client coupling.

REST can theoretically work over any transport protocol that gives it the ability to fulfill its constraints, but in practice no transport protocol other than HTTP offers the needed functionality. REST has no specification, which is part of what leads to this confusion, nor does it have concrete implementations. That said, there are two large, popular specifications which provide a whole lot of standardization for REST APIs that choose to use them:

If an API advertises itself as using one of these, there is a chance it is a good one. Otherwise, go at it yourself with a plain old HTTP client and you should be OK with a little bit of elbow grease. Still, GraphQL is one of the fastest-growing API ecosystems out there, mostly due to some of the confusion outlined above. You ask for specific resources and specific fields, and it will return that data in the response.


