SHAttered: SHA-1 collisions

I was reading this post about Google generating the first SHA-1 collision, which at first glance seemed fine. At first I didn’t think there was anything surprising there. We already know that hashing algorithms will collide; we’ve known that since they were invented. This isn’t news, nor is it surprising. However they go on to say stuff like:

You could alter the contents of, say, a contract, and make its hash match that of the original. Now you can trick someone into thinking the tampered copy is the original. The hashes are completely the same.

Well … that’s not so easy to do, I thought. Just because you can generate a collision with 2 different pieces of data, it doesn’t mean that those 2 pieces of data resemble each other. For example one could theoretically generate a hash of the Mona Lisa and a hash of Starry Night and find that they are the same, but if you’re trying to convince someone that the Mona Lisa is the same as Starry Night you have a long way to go. You really need a way to tamper with the original data, in this example the Mona Lisa, and vary it subtly, say with a little darkening around the eyes, so that on close visual inspection the two look identical. Additionally the hashes of the 2 blobs of data, the Mona Lisa and its darkened sibling, must be the same. So I was sceptical when I initially read it, but it turns out that they’ve managed to come up with a technique that can alter data, at least in some specific cases, so that the copy hashes to the same value as the original and looks like the original. That is pretty amazing.

At the crack of dawn


I hope I can keep this up. I’ve been following Jocko Willink on Twitter and reading his book Extreme Ownership. Jocko has an insane, by anyone’s standards, start to the day. He wakes up at 4:30 and goes to his basement gym and works out. Additionally he only goes to bed around 23:00, which means he gets only five and a half hours of sleep. How the hell do you get through the day on so little sleep? Surely you will crash at 17:00? When I first heard about his routine I thought: not for me, I’ll never be able to do it. It just made more sense to go to the gym at a sensible time after work and do your exercise then. And this is what I did. I’d go to the gym for a run between 19:00 and 21:00, well after the work day ended. Several years ago I had an even more disciplined routine of weights and running, however I lost the rhythm and struggled to get back into it. But after constantly seeing Jocko’s snaps at 4:30 on Twitter I think the message subliminally affected me, and I found myself thinking before bed: go to the gym in the morning. Note that I say ‘I found myself thinking’ and not ‘I decided’. One day last week, I did. I’d damaged my foot running, so I found a new gym with a pool and had a new weights program done. Then I woke up one day at 7:30 and went to the gym. The next morning was a little earlier at 7:00. Then at 6:30. And I’ve done this 6 times since. Two days on, one day off. I didn’t go straight into 6:30 wakeups. I broke it down from 7:30, to 7:00, to 6:30. I probably won’t do 4:30 wakeups since the gym doesn’t open till 6 and I like the swim – it’s a little luxury.

What I’ve noticed so far:

  • I don’t have a mid-afternoon slump anymore. Normally about 15:00 or 16:00 I’d get tired at work, but this isn’t happening anymore.
  • Last night I woke up at 3:30 and lay in bed trying to sleep till 6:00. I still went to the gym even on 4 hours sleep and I felt fine during the day. Of course I’d rather get a full night of quality sleep, but I have severe sleep apnea so I don’t fall asleep easily, the quality of sleep is bad and I tend to wake up during the night and struggle to sleep again. However I’m not going to let it stop me.
  • When forming a new habit it helps to mentally prepare by repeating to yourself what you want to do. When your last thought before going to bed is ‘I will go to the gym when I wake up’ it really is so easy to do. It’s a form of mental programming, or programming your subconscious mind. Maybe there’s some hard research out there that backs this up, but whenever I’ve needed to form a new habit, the first thing I do is mentally repeat to myself what I want to do during the weeks prior. That way your subconscious mind takes control and tells you what to do.
How to kill jobs in Elixir

    Sometimes a previous run of a process fails but doesn’t exit cleanly, especially when one is doing the whole edit, compile, debug cycle. For example, I have a program that registers an agent by module name:

    {:ok, pid} = Agent.start_link(fn -> {[], tokens} end, name: __MODULE__)

    If the calling program exits and I restart it I get this error:

    (MatchError) no match of right hand side value: {:error, {:already_started, #PID<0.233.0>}}

    To remedy this you need to kill the process that has already started. Process manipulation functions are in the Process module.

    iex(131)> h Process.exit/2
    
                                 def exit(pid, reason)
    
    Sends an exit signal with the given reason to the pid.
    
    The following behaviour applies if reason is any term except :normal or :kill:
    
      1. If pid is not trapping exits, pid will exit with the given reason.
      2. If pid is trapping exits, the exit signal is transformed into a
         message {:EXIT, from, reason} and delivered to the message queue of pid.
      3. If reason is the atom :normal, pid will not exit (unless it is the
         calling process's pid, in which case it will exit with the reason
         :normal). If it is trapping exits, the exit signal is transformed into a
         message {:EXIT, from, :normal} and delivered to its message queue.
      4. If reason is the atom :kill, that is if exit(pid, :kill) is called,
         an untrappable exit signal is sent to pid which will unconditionally exit
         with exit reason :killed.
    
    Inlined by the compiler.
    
    Examples
    
    ┃ Process.exit(pid, :kill)

    The next question is how to create the pid. IEx has a pid helper function that takes the process numbers from the error message.

    iex(132)> Process.exit(pid(0,233,0), :kill)

    Note that a similar process applies if using Erlang.

    Did you know Apples contain cyanide?

    Did you know apples contain cyanide? Actually, it’s only half true. They contain a cyanide compound in their seeds that only becomes toxic when you eat a stupid amount of them, or when you have to pay for an Apple MacBook 2016. OMG! WTF!!!

    Why Apple? Maybe it’s the fancy LCD touch bar that I’m sure the folk at Apple designing it agonized over for 3-hour daily meetings every day of every month for a year, while sipping their water bottles, presenting keynotes covering topics like: How big should the icons be? How many should we allow? Should a finger obscure an icon or should some of its edges be left protruding? What’s the average finger width? Is that male or female? Child or adult? Human or animal? Do animals have fingers and shouldn’t we be using the term paws instead? No doubt the design team at Apple are paid handsomely for their intellectual prowess and ability to provide insight into these deep questions. And, no doubt, that’s why the product costs so much.

    This is a genuine problem for people who need a Unix-like operating system supported by a big vendor. Apple is the only game in town. If you want a Unix-like OS without Apple then be prepared to support it yourself. Buying a Windows laptop isn’t an option for me unless I want to run my Linux development machine in a virtual machine in addition to the 3 or 4 other virtual machines I need to run to test the code for the clusters I work on. What about buying a Windows laptop and installing Linux over it? I instant messaged Dell on their customer support website and they basically said installing Linux would void the warranty. I wasn’t keen to void the warranty. I had a look on the net and System76 popped up. They build laptops with Ubuntu installed, except it would have to be shipped from the USA to Australia, and I wasn’t keen on shipping that far, or shipping back there if I needed to use the warranty. I wanted something local. Then I saw Metabox, a small independent laptop manufacturer in Perth. Their laptops have a ‘no operating system’ option, which meant I could install Linux on it. The price for a laptop with the same specs as an Apple Mac was $2500 less. So for about $1500 I got:

    • 525GB SSD
    • Nvidia 960M
    • 16 GB RAM
    • 15″ 1920×1080 display
    • Intel i7-6700HQ
    • No operating system

    I figured this would suit me so I took a chance on a small operator and bought it. Physically I like the laptop. It’s chunky. It feels sturdy. It’s not trying to be sleek or a Macbook clone, since it’s a gaming laptop (no idea why people want to game on a laptop – doesn’t make sense to me. Maybe they travel a lot.) The keyboard is great to type on.

    The next trick was to get Linux installed on it. I’ve become used to using i3 as a window manager in Linux. It’s very good for those who don’t like to use the mouse, similar to ratpoison in that sense. Manjaro has an i3 community edition that installs i3 out of the box, and being an Arch user Manjaro is familiar since it’s an Arch derivative. So I downloaded the latest Manjaro i3 edition, popped it onto a USB, booted the laptop into the installer and installed. Installation was straightforward, but on reboot the login display manager didn’t load and I was left with a console to log in at. I suspected it was something to do with the Nvidia and Intel graphics cards conflicting. Recalling a similar problem I had with an older laptop, which had an Intel i915 and an Nvidia GPU built in, I read the thread above again. This time I checked the BIOS for an option to disable the hybrid GPU and forced it to use Nvidia. In the BIOS the options show as “MSHYBRID” or “DISCRETE”. I changed it to “DISCRETE”. Then I reinstalled Manjaro using the non-free drivers (the Manjaro installer allows you to choose between free or non-free. I was surprised by this, but it makes life easier when dealing with Nvidia, since I’d rather use the Nvidia drivers over the reverse-engineered Nouveau stuff). The only shortcoming of this solution is I have to use the Nvidia GPU, which means that battery life might be compromised. I’m not sure if it works with an external display either, since I haven’t tested that yet. So for $2500 less I get a Unix-like laptop. I don’t think Apple can justify their laptop being worth $2500 more. It’s unclear what I’d be paying for. Better support?

    Written in Asciidoc using Vim.

    Median Tracking

    Median tracking is the term given to the process of tracking the median of an array as elements are inserted into it. Remember that the median is not the average value but the value in the middle. So, for example, in an array of 3 items [1,2,3] the median element, assuming 0-based indexing, is the item at index 1, which is 2. Now we already know that a heap is a good data structure for tracking the minimum or maximum of a set of values. With a heap we know that we can read an array and push elements onto the heap as they are read and, if the heap is a max heap, the largest of the elements will always be at the top. If it’s a min heap the smallest of the elements will be at the top, ready to be popped whenever we want. We can exploit this utility using 2 heaps to track the median value. One heap is a max heap containing the lower half of the numbers seen so far and the other is a min heap containing the upper half. Thus, the top of the max heap is the maximum of all the lower values and the top of the min heap is the minimum of all the upper values. The median element is then one of those two values. If the heap sizes are uneven then pick the element at the top of the larger heap to be the median. If the heap sizes are even then pick one heap and always use the top of that heap. The algorithm is shown in C++ below.
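
    A minimal sketch of the two-heap idea, using std::priority_queue with illustrative names:

    #include <iostream>
    #include <queue>
    #include <vector>
    #include <functional>

    // Track the running median with two heaps: 'low' is a max heap holding the
    // lower half of the numbers, 'high' is a min heap holding the upper half.
    // The heaps are kept within one element of each other, so the median is
    // always sitting at the top of one of them.
    class MedianTracker {
        std::priority_queue<int> low;                                        // max heap
        std::priority_queue<int, std::vector<int>, std::greater<int>> high;  // min heap

    public:
        void insert(int x) {
            if (low.empty() || x <= low.top())
                low.push(x);
            else
                high.push(x);

            // Rebalance so the sizes differ by at most one.
            if (low.size() > high.size() + 1) {
                high.push(low.top());
                low.pop();
            } else if (high.size() > low.size() + 1) {
                low.push(high.top());
                high.pop();
            }
        }

        int median() const {
            // Uneven sizes: the median is on the larger heap.
            // Even sizes: always take the top of the max heap.
            return low.size() >= high.size() ? low.top() : high.top();
        }
    };

    int main() {
        MedianTracker tracker;
        for (int x : {5, 2, 8, 1, 9}) {
            tracker.insert(x);
            std::cout << "median so far: " << tracker.median() << "\n";
        }
    }

    On the input 5, 2, 8, 1, 9 this prints running medians of 5, 2, 5, 2, 5.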

    The full source code is here

    Additionally it would be trivial to extend this idea to track the kth smallest number by limiting the size of the max heap to be k.

    Karger’s Min Cut

    A minimum cut of a graph is the smallest set of edges that, if removed, would disconnect the graph. Karger’s min cut algorithm is a process that takes a given graph and finds these edges. It’s a randomized algorithm because the edge it picks at each stage is randomly chosen. That edge is removed and one endpoint of the edge collapses onto the other endpoint. The edges attached to the collapsed vertex are reattached to the vertex it was collapsed onto. The process repeats, choosing another edge at random and collapsing its endpoints. The process is illustrated in the following sequence of diagrams.

    Initial Graph

    Here the edge {1,3} is randomly selected and we decide to collapse vertex 3 to vertex 1.

    Collapse edge 1 3

    This results in the following graph where vertex 3 has disappeared into vertex 1 and its edges are now effectively attached to vertex 1. Now we select one of the edges {1,2} and collapse vertex 2 onto vertex 1.

    Collapse edge 2 1

    This results in the following graph with a loop from vertex 1 to vertex 1. This was the other {2,1} edge in the previous graph. Self loops like this are removed. This is the final graph. Once we get to 2 vertices we stop. We can see that the number of edges remaining is 2, therefore the minimum cut of the graph has 2 edges.

    Final

    Note that this isn’t the only possible outcome. The algorithm, because it randomly picks edges, could have come up with a cut of 3 instead. Therefore you have to run the algorithm multiple times to increase the probability of finding the minimum cut.

    My first attempt at this was in C++. I modelled the adjacency list of vertices using a vector and each element contained a vector of edges that were connected to the vertex. Each edge contained two endpoints which were pointers to vertices. The adjacency list of edges was a vector of these edges. My first attempt didn’t work. I got my head in a spin trying to logically reason about pointers from vertices to edges that contained pointers back to vertices, and the whole thing didn’t work properly. I was surprised that I couldn’t get it right, and frustrated since I’d spent a few days on it. Eventually I bit the bullet and decided to start from scratch again, but this time by choosing a simpler representation of the graph. I still used an adjacency list but instead of pointers used the vertex labels. For example, the vertex list for the graph above is [1 => [4,3,2], 2 => [1,3], 3 => [2,4,1], 4 => [1,3]]. The corresponding edge list is [{1,2},{2,3},{3,4},{4,1},{1,3}]. Then, with a pen and paper, I stepped through what would happen if an edge was removed and the endpoints collapsed. It allowed me to observe the changes to the lists themselves and come up with a different algorithm for collapsing a vertex, one that was not based on the pictorial representation above, that I coded in C++. The steps are listed below, with a sketch of one contraction step after the list.

    • Pick a random edge, call the endpoints i and j. Let j be the vertex that is collapsing.
    • Remove edge {i,j} from the list of edges.
    • In the list of edges replace all occurrences of j with i.
    • Remove self loops from the edge list.
    • Find the vertices connected to vertex j, excluding i.
    • Append those vertices to vertex i’s list.
    • Remove vertex j from the list of vertices.
    • For each vertex, replace all occurrences of j with i in that vertex’s list of vertices.
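
    A minimal sketch of a single contraction step using this edge list / vertex list representation (illustrative only; instead of removing edge {i,j} up front it relabels j to i and then drops it along with the other self loops):

    #include <algorithm>
    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <map>
    #include <utility>
    #include <vector>

    using Edge = std::pair<int, int>;
    using EdgeList = std::vector<Edge>;
    using VertexList = std::map<int, std::vector<int>>;

    // Contract one randomly chosen edge {i, j}: vertex j collapses into vertex i.
    void contract(EdgeList &edges, VertexList &vertices) {
        Edge e = edges[std::rand() % edges.size()];
        int i = e.first, j = e.second;

        // Replace every occurrence of j with i in the edge list, then drop the
        // self loops. The contracted edge itself becomes {i, i} and is dropped too.
        for (auto &edge : edges) {
            if (edge.first == j) edge.first = i;
            if (edge.second == j) edge.second = i;
        }
        edges.erase(std::remove_if(edges.begin(), edges.end(),
                                   [](const Edge &ed) { return ed.first == ed.second; }),
                    edges.end());

        // Move j's neighbours (except i) onto i, remove j, and rewrite every
        // remaining vertex's list to refer to i instead of j.
        for (int v : vertices[j])
            if (v != i) vertices[i].push_back(v);
        vertices.erase(j);
        for (auto &kv : vertices)
            std::replace(kv.second.begin(), kv.second.end(), j, i);
    }

    int main() {
        std::srand(static_cast<unsigned>(std::time(nullptr)));

        // The example graph from the diagrams above.
        EdgeList edges = {{1, 2}, {2, 3}, {3, 4}, {4, 1}, {1, 3}};
        VertexList vertices = {{1, {4, 3, 2}}, {2, {1, 3}}, {3, {2, 4, 1}}, {4, {1, 3}}};

        // One run of Karger: contract until two vertices remain. The surviving
        // edges are the cut found by this run; repeat many runs and keep the
        // smallest to find the minimum cut with high probability.
        while (vertices.size() > 2)
            contract(edges, vertices);
        std::cout << "cut size for this run: " << edges.size() << "\n";
    }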

    I reimplemented the algorithm in Elixir and decided not to publish the source code since this is an assignment question from the Stanford Algorithm Design course on Coursera.

    Written in Asciidoc using Vim.

    No Debugging Programming in Elixir

    At last I have found a language that I do not need to debug. I can just write the code and have it run on the input data, and it will give me the correct result. The language is Elixir (elixir-lang.org). This is the Holy Grail!

    So this is what happened. I’m studying algorithms and doing a course, and one of the assignments was to write an inversion counter. So what’s an inversion? It’s basically where, in a list of numbers, a number is out of sequence with respect to another number in the list.

    Example of no inversions

    [1,2,3,4]

    Example of inversion. Here 3 is out of order.

    [1,3,2,4]

    Here there are 6 inversions.

    [4,3,2,1]

    A brute force algorithm that calculates this would have to check each number against all the other numbers in the list and therefore would be O(n^2), which is inefficient. The trick is to piggyback on mergesort, which has a running time of O(n log n). The reason this works is because inversions can be detected during the merge step of the mergesort. When writing the merged array from the two subarrays, every time you emit an element from the right subarray while elements remain in the left subarray, each of those remaining left elements counts as one inversion. At the end of the day a few subtle modifications to the mergesort algorithm are all that’s needed. And that’s what I did in Elixir. I stumbled along and my tired fingers and sore eyes cranked out some Elixir. The biggest surprise was it worked (ignoring compilation errors) right from the start. I didn’t have an input case that produced the wrong answer for the assignment. I didn’t have to go back and rewrite any of the pattern matches. I didn’t have to fix the recursion because I’d put the calls in the right place. Here it is.

    The problem with the solution above is that it is slow. On an input set of 100000 numbers it took 30 seconds to compute. It contains a hidden operation that makes the summarize function O(n^2). The hidden operation is the ++ that appends each element to the end of the list. To do this it has to walk to the end of the list, which is an O(n) operation, turning the summarize function from an O(n) function into an O(n^2) one. The way to fix it is to prepend each element to the head of the list and then reverse the list at the end, as shown below.

    Applying algorithms intuitively

    Algorithms need to be internalized such that you can produce a solution without conscious effort, similarly to how you produce words without thinking explicitly about them. In fact thoughts are created in the subconscious and the conscious mind only becomes aware of them. There’s no conscious control of them. However what you do have control over is what you program into your subconscious via learning, repetition and active problem solving. I believe that this can be applied to different problem domains such that, after training for a period of time, one can intuitively, at a subconscious level, see a real-world problem and map it to a set of algorithms that solve it.

    Enodes – Architecture of a Scalable Distributed System

    Note that this document is a work in progress and represents my ideas about how a scalable, distributed, highly available cluster should work.


    What is Enodes

    • A platform to enable a distributed highly available heterogeneous manageable cluster.
    • A platform for Internet of Things devices.
    • A platform for building network elements.

    Let’s analyze the first description. There are a lot of terms in the sentence “A platform to enable a distributed highly available heterogeneous manageable cluster” that need to be defined.

    • Distributed. Nodes in the system can be geographically distant. They need not be on the same local area network.
    • Highly Available. Services running in the system are highly available. The loss of one instance of a service does not affect the delivery of the service. Overloading one instance of a service does not block clients from using that service.
    • Heterogeneous. Nodes in the system do not have to be on the same hardware or same version of software.
    • Manageable. Facilities are provided for installing applications and for upgrading and downgrading them, with preservation of transforms to each application’s configuration.
    • Cluster. A set of computing units or nodes connected in a network.

    Availability or 99.9999% uptime
    Say we want to have a highly available service. Let’s pretend it’s a database. We want it to tolerate a failure on the host machine so that it is still available. The simplest solution is to have it running on another host machine that acts as a standby. But what if numerous requests come in, overloading the active database? Well, let’s make the standby node active as well. Now we just need a load balancer at the front that round-robins connections from clients between the two active databases. Simple. No need for overly complicated redundancy policies. Now what if we want to scale out to three nodes in our cluster? Easy, we just add another node, make it active and configure our load balancer to round-robin connections between the three nodes in our cluster. If one node dies the other two will pick up the slack. Again no need for complicated redundancy groups. Availability is ensured by having multiple instances of the service running at any one time. If the services need some sort of logic to decide which is the master instance they can build it themselves, or Enodes can provide a service that makes that decision.

    Data consistency
    Ok great, so we have a cluster of nodes and we have a way to ensure that a service is always available. But what about the configuration data of that service? In other words, which node is the authority about the information stored? And if we have a service running on multiple nodes surely all instances of the same service should share the same view of the data? Surely the values need to be consistent between databases? I say rethink that assumption. How do you think Facebook and Amazon ensure such high availability and scalability? (Hint: they don’t ensure consistency, only that eventually the system will tend to a consistent state.) It’s up to the application designer to decide which part of their data they definitely want to keep consistent, and in the case where there is contention, it’s up to the application to handle it. In short it becomes a user interface problem. This model is not as bad as it sounds and in practice is far better than trying to enforce consistency on a database that has to scale and be available. The alternative, forcing consistency, requires every client application to agree to the transaction that is about to be processed, slowing the responsiveness of every application since they all have to wait. This has the side effect of reducing availability. Even if a client has established a connection with the database service, while the database is busy answering another client’s request that database is effectively offline. This has been thought through in the past; if it hadn’t, Amazon, Google, Facebook etc would not be in business.

    Even git uses this conceptual model. Every time we use git (https://git-scm.com/) we are sacrificing consistency and gaining availability and scalability. We sacrifice consistency because we can make changes to our local repository without informing the upstream repository. When we commit a change locally, the local and upstream repositories diverge. When someone else commits a change locally, all other local repositories are no longer in sync. Importantly git doesn’t take any steps to enforce consistency. Consistency happens as part of the workflow and as part of the user interface. The users of git bear the responsibility and do the work required to maintain consistency. We know that our local and upstream repositories will eventually be consistent because after everyone pushes their local commits, and everyone pulls from the upstream repository, all repositories will be consistent.

    Similarly, Apache Cassandra is a database that does what I’ve just outlined: http://cassandra.apache.org/. All nodes will eventually tend towards a consistent state. The https://wiki.apache.org/cassandra/ArchitectureOverview has this to say: “Cassandra weak consistency comes in the form of eventual consistency which means the database eventually reaches a consistent state. As the data is replicated, the latest version of something is sitting on some node in the cluster, but older versions are still out there on other nodes, but eventually all nodes will see the latest version.” In Cassandra’s case the applications, and not the user as is the case with git, bear the responsibility of resolving conflicts. For another view on the tradeoffs around data consistency see https://highlyscalable.wordpress.com/2012/09/18/distributed-algorithms-in-nosql-databases/.


    Databases

    These are the properties we want a configuration database to satisfy.

    • Scale from 1 to thousands of nodes.
    • High availability.
    • Eventually consistent (always allow writes).
    • Tolerate network partitions.
    • Throughput and speed of read/write transactions aren’t a concern. It’s intended to be used as a configuration database and configuration data doesn’t change frequently.

    SQL

    Traditional SQL databases are strongly consistent and can be made highly available. They tend to be run on a handful of top-of-the-line servers, depending on application demands. Scaling out with them is hard as they favour strong consistency, so the more servers you add, the more time it takes for the service as a whole to maintain its consistency guarantees on transactions. They would be unsuitable for the Enodes platform, since we require horizontal scalability.

    NoSQL

    Key Value database
    These are databases like Riak and Redis that store keys and values. Simple queries are straightforward but more complicated queries like “I want key x if key y satisfies some condition” require more code on the application side. Essentially the application has to implement the code that SQL queries give you for free. However the benefit of these databases is that they scale horizontally very easily, work well as a cache, and are good for structureless/schemaless data.
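
    As a toy illustration of that application-side work, here is a sketch where a std::map stands in for the key-value store and the keys and condition are hypothetical:

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // A std::map standing in for a key-value store such as Riak or Redis.
    using KVStore = std::map<std::string, std::string>;

    // "Give me key x, but only if key y has a required value." In SQL this is a
    // one-line WHERE clause; against a key-value store the application has to
    // fetch and test the keys itself.
    std::optional<std::string> get_x_if_y(const KVStore &db,
                                          const std::string &x,
                                          const std::string &y,
                                          const std::string &required_y_value) {
        auto it_y = db.find(y);
        if (it_y == db.end() || it_y->second != required_y_value)
            return std::nullopt;  // condition on y not satisfied
        auto it_x = db.find(x);
        if (it_x == db.end())
            return std::nullopt;  // x not present
        return it_x->second;
    }

    int main() {
        KVStore db = {{"service/state", "active"}, {"service/address", "10.0.0.5"}};
        if (auto addr = get_x_if_y(db, "service/address", "service/state", "active"))
            std::cout << "address: " << *addr << "\n";
    }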

    Graph databases
    These databases are specialized to store information in a graph. They are application specific and used to model social networks or other data with a graph-like structure. I’ve got a feeling that this sort of database is the kind of database you turn to when you already know that you need it. In other words, if you’re questioning whether you need this, then you don’t.

    Document databases
    These store data in what essentially looks like structs or records from a programming language, or a JSON object. They map easily to the code one might write for an application, since almost every language I’ve seen (except Bash) has some facility to define structs or records.

    Column family databases
    These are large volume, high availability databases, like Cassandra, that are used when all your data will not fit on a single server, which means you probably have several hundred terabytes of data. In the case of an Enodes node, well, there will definitely not be more configuration data than the node can handle, so I think we can safely ignore these types of databases.

    A note on monitored/read-only data

    Monitored data requires fast reads and writes and has a different set of properties from config data. Most noticeably it’s not configurable. The user can’t change the value of a database entry that is used for monitoring some aspect of a system, like the packets per second on a NIC. It’s not persisted either. I’m not sure what to use for monitored data yet. It would be possible to use the same database as the configuration database, but that’s not ideal.


    Hypothetical Use Cases

    To give you an idea of what Enodes helps with, the following use cases are outlined as a thought experiment.

    Running a web server
    The typical scenario is an organization running a web server that delivers content to external clients. Enodes will manage changes to the configuration of the server. Transforms to the service’s config are preserved so that a rollback can be performed. Additionally, a snapshot of the content associated with that instance of the config is taken. This allows a proper rollback of the data; it’s not sufficient to only record the configuration changes. Dependencies between configuration and associated binaries are also tracked. For example, assume that the Apache Web Server (httpd) has a dependency on OpenSSL. The current system is running version A of httpd and version A of OpenSSL. Then one day OpenSSL implements version 320 of the TLS protocol and releases version B of OpenSSL. Later, the Apache developers release a version, B, of httpd that supports version B of OpenSSL. They also add another key to the httpd.conf configuration file. This means that this config can only work with version B of httpd and requires the new version of the OpenSSL shared object on the host machine. Anyway, the system admin performs the upgrade, installing version B of httpd and version B of OpenSSL and updating the httpd.conf file to use the new configuration option. A few days later, for some reason, they decide to roll back httpd. Maybe an exploit was discovered in the latest version of httpd, something they couldn’t test for before upgrading. Enodes allows them to do this safely because the configuration data associated with version A of httpd is preserved. This state becomes the state that all nodes in the system will tend to. Content can also be snapshotted against the version of the software installed.

    Installing software
    Installing a new piece of software changes the configured state of the system. Software is first installed on any node in the system, and the other nodes, following that node, then automatically install the software. In cases where the installation fails an alarm is raised.

    Upgrading
    Upgrading changes the configured state of an existing software package. The old state is snapshotted (including config data and content), the binaries are updated and the new config put in place. Configuration transforms are executed. Again this is done on a single node in the Enodes system first. All other nodes then tend to this node’s configuration.

    Downgrading
    The software binaries are downgraded to the previous version. The previous configured state is restored. Other nodes tend to this configuration.

    Removal
    The application binaries are removed from the running instance of the node. Configuration state is preserved. All nodes will tend to this new state. A system admin can always roll back the system config to a time when the application was installed.

    In production testing
    Say you want to roll out the latest version of WordPress. You have an Enodes system running but want to test this latest version of WordPress. Enodes will allow you to 1) select the nodes to isolate and 2) roll out the upgrade to WordPress on the isolated nodes. Then you can test the new version of WordPress by configuring your network to redirect a percentage of requests to the isolated set of nodes.


    Configuration data

    The Enodes database is CouchDB. It is intended to store system configuration information, like the packages installed on the Enodes system and their configuration. Changes to the configuration are not immediately synchronized. Instead, because of the way the database replicates, the state of the system will eventually become consistent. By trading away immediate consistency, Enodes can provide scalability and availability. The following is a quote from the OpenSAF IMM Programmers Guide and describes a design Enodes will avoid:

    “During the time when sync is in progress, the IMMSv is not writable. The IMMNDs will reject any request that implies a mutation to the repository with SA_AIS_ERR_TRY_AGAIN. Read requests are of course still serviced. Updates to cached non-persistent runtime attributes are also allowed during sync.”


    Open Issues

    How are services handled that require redundancy but where not every instance of the service can be in use? For example, how would Enodes handle a DHCP service?


    Links

    Beating the CAP Theorem Checklist: http://ferd.ca/beating-the-cap-theorem-checklist.html

    Time, Clocks, and the Ordering of Events in a Distributed System: http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf

    The Byzantine Generals Problem: http://research.microsoft.com/en-us/um/people/lamport/pubs/byz.pdf

    CAP Twelve Years Later: https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed