Multiple Oracles - Dynamically & Statically Sized Oracles

We will use three buffers: two dynamically sized buffers focused on speed, one for the push-based reporters and one for the OCWs, and one global, stricter, statically sized buffer for the on-chain oracle, focused on security. The oracle on the OCWs will collect data from various APIs. The oracle on the push-based reporter buffer will accept transactions through an extrinsic. Their results can then be aggregated in the main Kylin runtime Oracle. Since the results of the OCW are intended for the next block, this could mean a sample rate of 12 seconds. Alternatively, users can read from the OCW buffer every 6 seconds, aware that they are trading security for speed. The faster aggregation allows a manifest specifying data provenance to be published before the data itself.
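
To make the layout concrete, here is a minimal Rust sketch of the three buffers, assuming a plain numeric report type; all type names and capacities are illustrative, not Kylin's actual runtime types.

```rust
use std::collections::VecDeque;

/// Hypothetical numeric report type; the real runtime would also carry
/// provenance metadata (reporter ID, timestamp, signature).
type Report = u64;

/// Dynamically sized ring buffer tuned for speed: old samples are evicted
/// as new ones arrive, and capacity can be changed at runtime.
struct DynamicBuffer {
    samples: VecDeque<Report>,
    capacity: usize,
}

impl DynamicBuffer {
    fn new(capacity: usize) -> Self {
        Self { samples: VecDeque::with_capacity(capacity), capacity }
    }

    fn push(&mut self, r: Report) {
        if self.samples.len() == self.capacity {
            self.samples.pop_front(); // FIFO eviction
        }
        self.samples.push_back(r);
    }
}

/// Statically sized buffer for the on-chain oracle: the fixed length `N`
/// is part of the type, so it cannot grow or shrink after deployment.
struct StaticBuffer<const N: usize> {
    samples: [Option<Report>; N],
    next: usize,
}

impl<const N: usize> StaticBuffer<N> {
    fn new() -> Self {
        Self { samples: [None; N], next: 0 }
    }

    fn push(&mut self, r: Report) {
        self.samples[self.next] = Some(r); // overwrite the oldest slot
        self.next = (self.next + 1) % N;
    }
}

fn main() {
    let mut ocw = DynamicBuffer::new(8);         // OCW pull reporters (speed)
    let mut push = DynamicBuffer::new(8);        // push-based reporters (speed)
    let mut runtime = StaticBuffer::<16>::new(); // strict on-chain buffer (security)
    ocw.push(101);
    push.push(99);
    runtime.push(100);
    println!("ocw buffer now holds {} sample(s)", ocw.samples.len());
    println!("runtime slot 0: {:?}", runtime.samples[0]);
}
```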

Note: A similar concept of multiple oracles has been used as a fallback mechanism by Liquity - https://www.liquity.org/

Kylin Runtime Oracle

This oracle is centered on security and democratization. Reporters feeding this oracle will be oracles themselves: an OCW pull-based pool of API reporters and a push-based pool from the community at large. Mirroring attacks occur when data feeders freeload on another feeder's response to minimize their own cost of data provision. The first step in addressing these attacks is to identify the reporter in the SLA, which the querier signs. Furthermore, mechanisms should be put in place to ensure the confidentiality of the data sent by the data feeders. A commitment scheme achieves this confidentiality: each data feeder sends a commitment to the plain data as an encrypted message to the receiver (the Kylin runtime Oracle buffer). Later, the sender reveals the original plain data, and its authenticity is verified against the commitment.
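
A minimal sketch of such a commit-reveal scheme, using SHA-256 over the value and a nonce (via the `sha2` crate); the function names are illustrative, not Kylin's actual API.

```rust
// Cargo.toml: sha2 = "0.10"
use sha2::{Digest, Sha256};

/// Commit phase: the data feeder publishes H(value || nonce) instead of
/// the plain value, so other feeders cannot mirror it.
fn commit(value: u64, nonce: [u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(value.to_le_bytes());
    h.update(nonce);
    h.finalize().into()
}

/// Reveal phase: the buffer checks that the revealed value and nonce
/// hash back to the earlier commitment before accepting the value.
fn verify_reveal(commitment: [u8; 32], value: u64, nonce: [u8; 32]) -> bool {
    commit(value, nonce) == commitment
}

fn main() {
    let nonce = [7u8; 32]; // in practice, a fresh random nonce per report
    let c = commit(42_000, nonce);
    assert!(verify_reveal(c, 42_000, nonce));
    assert!(!verify_reveal(c, 41_999, nonce)); // a mirrored/forged value fails
}
```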

Kylin OCW Pull Oracle

The off-chain workers are part of the Kylin collator without being part of the runtime. They have direct access to storage and can be leveraged to prevent some of the common attacks. They are also, by their nature, able to run intensive and demanding computations without affecting the runtime. Their work will be verified by ZK proofs, and their results can either be used right away (speed) or sent to the Kylin runtime Oracle buffer for subsequent treatment (security), as the consumer chooses in the SLA.
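
The speed/security choice can be modeled as a routing decision on a verified OCW result. The following is an illustrative sketch; `DeliveryMode`, `OcwResult`, and the proof flag are hypothetical stand-ins, not Kylin's actual types.

```rust
/// Hypothetical SLA choice between the two delivery paths described above.
enum DeliveryMode {
    /// Use the OCW result immediately (speed, weaker guarantees).
    Immediate,
    /// Forward the result to the Kylin runtime Oracle buffer for
    /// aggregation with other reporters (security).
    Buffered,
}

struct OcwResult {
    value: u64,
    proof_ok: bool, // placeholder for a verified ZK proof
}

fn route(result: OcwResult, mode: DeliveryMode, buffer: &mut Vec<u64>) -> Option<u64> {
    // Reject results whose off-chain computation proof did not verify.
    if !result.proof_ok {
        return None;
    }
    match mode {
        DeliveryMode::Immediate => Some(result.value), // answer the consumer now
        DeliveryMode::Buffered => {
            buffer.push(result.value); // aggregated later with other reports
            None
        }
    }
}

fn main() {
    let mut runtime_buffer = Vec::new();
    let fast = route(
        OcwResult { value: 42, proof_ok: true },
        DeliveryMode::Immediate,
        &mut runtime_buffer,
    );
    assert_eq!(fast, Some(42));
    let _ = route(
        OcwResult { value: 43, proof_ok: true },
        DeliveryMode::Buffered,
        &mut runtime_buffer,
    );
    assert_eq!(runtime_buffer, vec![43]);
}
```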

Kylin Push Reporters Oracle

Push reporters are contributors calling an extrinsic to submit a value to the chain. Their values are selected at random to populate the pool, from which a final value is calculated before being sent to the Kylin runtime Oracle buffer. There is a cost to calling this extrinsic, but the reward absorbs this cost once the reporter's value enters the buffer and is used to calculate the final result.
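
A hedged sketch of that flow, with random pool selection (the `rand` crate) and a median as the illustrative final-value calculation; the `Submission` type and sampling parameters are assumptions, not Kylin's actual extrinsic types.

```rust
// Cargo.toml: rand = "0.8"
use rand::seq::SliceRandom;

/// A push reporter's submission via the (hypothetical) extrinsic.
#[derive(Debug, Clone)]
struct Submission {
    reporter: u32,
    value: u64,
}

/// Randomly select `k` submissions to populate the pool; random sampling
/// makes it expensive for a single reporter to dominate a round.
fn sample_pool(submissions: &[Submission], k: usize) -> Vec<Submission> {
    let mut rng = rand::thread_rng();
    submissions.choose_multiple(&mut rng, k).cloned().collect()
}

/// The pooled values are reduced to a single result (median here) before
/// being sent on to the runtime Oracle buffer.
fn finalize(pool: &mut Vec<Submission>) -> Option<u64> {
    if pool.is_empty() {
        return None;
    }
    pool.sort_by_key(|s| s.value);
    Some(pool[pool.len() / 2].value)
}

fn main() {
    let submissions: Vec<Submission> = (0..10)
        .map(|i| Submission { reporter: i, value: 1_000 + i as u64 })
        .collect();
    let mut pool = sample_pool(&submissions, 5);
    println!("final value: {:?}", finalize(&mut pool));
}
```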

FIFO / circular buffer sampling for numeric types

Aggregation buffers in the form of a FIFO queue are the traditional data structure used to evaluate data coming from multiple sources. Calculating one buffer sample per block can give different results depending on the algorithm used. In any case, users should be able to configure which algorithm they want to use, and that algorithm should be specified in the config/SLA. The algorithm is defined by the data-structure parameters and the rules applied to the reporters. The following factors will govern our research on the aggregation buffers (a minimal buffer sketch follows the list):

  • Various inputs: push and OCW pull reporter values

  • Variable queue size

  • Random sample rate

  • Weighted/Moving average

  • Mean

  • Median

  • Median filters (neighboring pixel value)

  • Quantization

  • Time-weighted median price

  • Median Absolute Deviation

  • Accept a value once the same value fills over 2/3 of the buffer

  • Ban period if an OCW or push reporter's value is above or below the accepted spread

  • No OCW or pull-based reporter should be allowed to submit twice in the same sample period. Select storage.

  • Binary/Quicksort for median

  • Aggregation error management

  • Time-weighted average price
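
As a starting point for that research, here is a minimal, self-contained sketch of a FIFO/circular aggregation buffer combining a few of the factors above: a variable queue size, median aggregation, and rejection of values outside an accepted spread (a ban-period candidate). All names and parameters are illustrative.

```rust
use std::collections::VecDeque;

/// Minimal FIFO aggregation buffer over a numeric type.
struct AggBuffer {
    samples: VecDeque<u64>,
    capacity: usize,
    /// Accepted deviation from the current median, in basis points.
    max_spread_bps: u64,
}

impl AggBuffer {
    fn new(capacity: usize, max_spread_bps: u64) -> Self {
        Self { samples: VecDeque::with_capacity(capacity), capacity, max_spread_bps }
    }

    fn median(&self) -> Option<u64> {
        if self.samples.is_empty() {
            return None;
        }
        let mut sorted: Vec<u64> = self.samples.iter().copied().collect();
        sorted.sort_unstable();
        Some(sorted[sorted.len() / 2])
    }

    /// Returns false when the value lies outside the accepted spread;
    /// a rejected reporter would be a candidate for a ban period.
    fn push(&mut self, value: u64) -> bool {
        if let Some(m) = self.median() {
            let spread = m * self.max_spread_bps / 10_000;
            if value > m + spread || value < m.saturating_sub(spread) {
                return false;
            }
        }
        if self.samples.len() == self.capacity {
            self.samples.pop_front(); // FIFO eviction of the oldest sample
        }
        self.samples.push_back(value);
        true
    }
}

fn main() {
    let mut buf = AggBuffer::new(8, 500); // 5% accepted spread
    for v in [100, 101, 99, 102] {
        assert!(buf.push(v));
    }
    assert!(!buf.push(200)); // outlier rejected -> ban-period candidate
    println!("median: {:?}", buf.median());
}
```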

API Pooling for inbound pull of data feed

It is highly recommended to aggregate the results given by multiple APIs in order to get the most accurate data. When a smart contract requests off-chain state/data, the pull-based inbound oracle receives the request from the caller contract, gathers the state from off-chain components, and sends the state back to the caller in a transaction. This process is transparent, as it is implemented using smart contracts that communicate via delegated calls. A pull-based inbound oracle's performance is constrained by the transaction rate of the blockchain network and the time needed to collect external data once the request reaches the oracle. Response time depends on network speeds, which can cause a bottleneck; this will be mitigated by the OCW's immediate access to the runtime.
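
An illustrative sketch of pooling over several APIs, with the HTTP layer abstracted behind a trait so the example stays self-contained; the median outvotes any single faulty API. `ApiSource` and `FixedSource` are hypothetical names, not a real client library.

```rust
/// Abstract over concrete HTTP clients so the sketch stays self-contained;
/// in the OCW, this would be an off-chain HTTP request to each API.
trait ApiSource {
    fn fetch(&self) -> Option<u64>;
}

struct FixedSource(Option<u64>); // stand-in for a real API endpoint

impl ApiSource for FixedSource {
    fn fetch(&self) -> Option<u64> {
        self.0
    }
}

/// Pull from every configured API and aggregate with a median, so a
/// single faulty or malicious API cannot move the final answer.
fn pooled_value(sources: &[Box<dyn ApiSource>]) -> Option<u64> {
    let mut values: Vec<u64> = sources.iter().filter_map(|s| s.fetch()).collect();
    if values.is_empty() {
        return None;
    }
    values.sort_unstable();
    Some(values[values.len() / 2])
}

fn main() {
    let sources: Vec<Box<dyn ApiSource>> = vec![
        Box::new(FixedSource(Some(42_010))),
        Box::new(FixedSource(Some(42_000))),
        Box::new(FixedSource(None)),         // unreachable API is skipped
        Box::new(FixedSource(Some(90_000))), // outlier outvoted by the median
    ];
    assert_eq!(pooled_value(&sources), Some(42_010));
}
```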

Extrinsic Pooling for inbound push of data feed

A push-based inbound oracle uses off-chain components to monitor external state/data changes and proactively inject any updates into the blockchain, enhancing performance because the calling entity does not need to wait for the oracle to collect data. In traditional systems, if the use case needs to resolve temporal conflicts, the history oracle pattern can complement the push-based inbound oracle by providing historical values of off-chain data. In these patterns, how the data came into existence is invisible and cannot be mediated; however, because the system can provide the user with a generated manifest, and because of the rules of a strict and discriminatory buffer queue, the effects of data manipulation will be lessened. In the case of inbound push-based reporters, participants could be required to stake the transaction fee amount several times over before earning the rewards obtained in fees from the buffer snapshot within that time frame, protecting the system against DDoS attacks and preventing collusion, because the manipulation cost is higher than the rewards obtained.
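
As a toy model of that break-even argument (all parameters and the profitability rule below are assumptions for illustration, not Kylin's actual fee schedule):

```rust
/// Hypothetical economic parameters for a push reporter round.
struct RoundEconomics {
    stake_per_submission: u64,
    submissions_before_reward: u64,
    reward_per_snapshot: u64,
}

impl RoundEconomics {
    /// Manipulation only pays if the attacker's expected gain exceeds the
    /// capital locked up front minus the honest reward; this check captures
    /// that break-even rule in its simplest form.
    fn manipulation_unprofitable(&self, expected_gain: u64) -> bool {
        let cost = self.stake_per_submission * self.submissions_before_reward;
        expected_gain <= cost.saturating_sub(self.reward_per_snapshot)
    }
}

fn main() {
    let round = RoundEconomics {
        stake_per_submission: 10,
        submissions_before_reward: 5,
        reward_per_snapshot: 12,
    };
    // Locked cost is 50; honest reward is 12, so any manipulation worth
    // less than 38 is irrational for the attacker in this toy model.
    assert!(round.manipulation_unprofitable(30));
    assert!(!round.manipulation_unprofitable(60));
}
```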

Zero-Knowledge Proofs

The calculations on the OCW occur off-chain and will need to be verified. This is a perfect use case for zero-knowledge proofs. A ZKP is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is valid, without conveying any information beyond the fact that the statement is indeed true.

Filtering, arbitration, and triggering

Data should be vetted for irrelevant/illegal content before it reaches the blockchain. A keeper or arbitrator can initially filter the data for integrity and pertinence while watching the state to trigger appropriate actions on sibling blockchains. This entity can act as an automated custodian, allowing regular maintenance tasks to be outsourced in a trust-minimized and decentralized manner.
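
A minimal sketch of such a vetting pipeline, where each predicate screens one class of data before it can reach the chain; the checks shown are placeholders, not an exhaustive policy.

```rust
/// A keeper's vetting pipeline: each predicate rejects one class of
/// irrelevant/illegal data before it can reach the chain.
type Check = fn(&str) -> bool;

fn not_empty(data: &str) -> bool {
    !data.trim().is_empty()
}

fn within_size_limit(data: &str) -> bool {
    data.len() <= 1_024 // hypothetical on-chain size cap
}

/// Runs the configured checks in order; data is only forwarded on-chain
/// (and downstream triggers fired) when every check passes.
fn vet(data: &str, checks: &[Check]) -> bool {
    checks.iter().all(|check| check(data))
}

fn main() {
    let checks: Vec<Check> = vec![not_empty, within_size_limit];
    assert!(vet("{\"price\": 42000}", &checks));
    assert!(!vet("   ", &checks));
}
```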
