Fundamentals 101: Video White Paper
What is proof-of-individuality?

Proof-of-individuality, or proof-of-uniqueness, is the work required to prove that a user holds exactly one account in a given network. It deters fake accounts because a user cannot easily create more than one profile. Unlike other networks, it also eliminates paid-for accounts, because the use of biometrics makes accounts non-transferable. Anonymous proof-of-individuality means that a single account can be identified as belonging to a unique user without revealing that user's real-world identity. While Project Oblio's distribution may not rely on anonymized biometrics, future iterations are expected to include leading encryption techniques, such as functional encryption, to enable this. Project Oblio's proof-of-individuality algorithm is a metric: it calculates a trust level indicating how likely an account is to belong to a unique user, rather than proving uniqueness definitively. A number of financial incentives are in place to make generating and maintaining a fake account more costly than competing methods of influencing the network.
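The white paper describes the algorithm as a trust metric but does not specify its form. As a minimal sketch, assume hypothetical verifier nodes each report a biometric-match score in [0, 1], and the scores are combined into a single trust level as a stake-weighted average (the function and weighting scheme here are illustrative, not the actual protocol):

```python
def uniqueness_trust(node_scores, node_stakes):
    """Stake-weighted average of per-node match scores.

    node_scores: each node's estimate (0..1) that the account is unique.
    node_stakes: weight given to each node's vote (e.g., tokens staked).
    Returns a trust level in [0, 1], not a definitive uniqueness proof.
    """
    total_stake = sum(node_stakes)
    return sum(s * w for s, w in zip(node_scores, node_stakes)) / total_stake

# Three nodes vote on whether an account belongs to a unique user:
trust = uniqueness_trust([0.9, 0.8, 0.4], [100, 100, 50])
print(round(trust, 2))  # 0.76
```

A weighted average keeps the output a graded trust level rather than a binary verdict, matching the metric-not-proof framing above.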
What's a dub?

Dubs are mini-challenges that simultaneously prove a user isn't a bot and that they're a unique human. Functionally, they're a bit like "I'm not a robot" tests. Dubs are decentralized because each challenge is seeded from a random block hash. When a user completes a dub, the network must evaluate whether it was performed correctly or faked by a bot or duplicate user. This requires a database of the user's previously submitted dubs to compare against (covered in a later video). The trust score of a dub (e.g., for backing whether an internet comment is authentic) is computed by a network of decentralized nodes running machine learning algorithms. Anyone can become a node in this network -- you don't have to rely on a company (which could then generate fake accounts) to perform the verifications. The decentralized nature of this process is imperative for ensuring that no single entity is incentivized, or even capable, of generating fake accounts.

A simple example of a dub: the decentralized network first agrees on a list of ~20,000 words. A random most-recent block hash from an external network, such as the Ethereum blockchain, is used to select 5 random words from this list. When a user visits the website, they're asked to say these 5 words within about 30 seconds. If they can't say the words within that time, a new most-recent Ethereum block hash is chosen, and the user is asked to say a new set of 5 random words. Because the user's voice is a biometric, it proves the user is unique within whatever website they're trying to comment on. Because the words are seeded from a random block, it proves the user is "alive" (i.e., not a bot -- they must have generated the words within 30 seconds, assuming their transaction propagates). Someone might object that this is hocus pocus: an A.I. could easily be trained to say these 5 random words within the allotted 30 seconds.
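The word-selection step described above can be sketched in a few lines. This is an illustrative implementation, not Project Oblio's actual code: the word list is truncated, and the hash-to-index derivation (re-hashing the block hash with a counter) is an assumed scheme chosen so that every node derives the same five words from the same block hash:

```python
import hashlib

# Hypothetical word list; the network would agree on a list of ~20,000 words.
WORD_LIST = ["apple", "river", "candle", "orbit", "meadow",
             "signal", "harbor", "velvet", "tundra", "quartz"]

def select_dub_words(block_hash: str, word_list, n_words: int = 5):
    """Deterministically derive n_words challenge words from a block hash.

    Because the derivation is deterministic, no central party chooses the
    challenge, yet every node can verify which words were required.
    """
    words = []
    for i in range(n_words):
        # Re-hash the block hash with a counter to get independent indices.
        digest = hashlib.sha256(f"{block_hash}:{i}".encode()).hexdigest()
        index = int(digest, 16) % len(word_list)
        words.append(word_list[index])
    return words

challenge = select_dub_words("0xabc123...", WORD_LIST)
print(challenge)  # five words, identical on every node for this block hash
```

If the user fails the 30-second window, the same function is simply re-run with the next most-recent block hash.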
But if network nodes are trained against such fake voice data, faking a dub becomes a much harder task. The network only works if there is a market that pays people for fake data to train against, and, ultimately, if the financial incentive to submit fake data for payment to network nodes is greater than the incentive to submit that fake data as a new account. Project Oblio has a number of dubs in the pipeline that are both more secure and more user-friendly than saying 5 random words out loud. We hope to implement them before the conclusion of our airdrop.
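The incentive condition above can be stated as a simple inequality. The function and all parameter values below are hypothetical; they only illustrate the comparison the network's incentive design would need to keep true for realistic attacker economics:

```python
def attacker_prefers_selling(data_payment: float,
                             fake_account_profit: float,
                             fake_account_cost: float) -> bool:
    """True when selling fake voice data to training nodes pays better
    than operating a fake account (profit net of maintenance costs).

    All parameters are hypothetical values, not protocol constants.
    """
    return data_payment > fake_account_profit - fake_account_cost

# Example: a 10-token payment for training data beats a fake account
# that nets 12 tokens but costs 5 tokens to maintain (10 > 7):
print(attacker_prefers_selling(10.0, 12.0, 5.0))  # True
```

When this inequality holds, rational attackers are drawn into strengthening the detection models rather than attacking the network with fake accounts.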
How do you identify fake dubs and fake accounts? (Other Projects)

A common theme in science fiction is that A.I. will eventually spin out of control, becoming more powerful than any structure created by humans. By creating financial ecosystems in which strong, communal A.I. are resistant to attacks by smaller, adversarial A.I., leading blockchain researchers are building communities where honest nodes are incentivized to work together to overpower the bad ones.

OpenMined is the leader in distributing A.I. over multiple nodes while keeping each participant's submitted data private to the submitter. However, it is unclear whether its approach can support a financial network in which people are motivated to act honestly through financial rewards. https://www.openmined.org/

What OpenMined lacks in financial incentives, Truebit makes up for. Truebit aims to scale computation on the Ethereum blockchain by creating markets where people are paid for submitting and analyzing data honestly. https://truebit.io/

uPort is creating an identity system that could act as a transferable proof-of-individuality system (accounts can be bought and sold). However, it is still easy to generate bots and fake accounts within the static biometrics collected by uPort. https://www.uport.me/

Project Oblio is like OpenMined in that it distributes machine learning processes over multiple nodes, and it plans to implement advanced encryption techniques post-Distribution. It is like Truebit in that it creates financial incentives at each step of the process for the honest submission and approval of data. It is a simplified version of uPort, in which users can have identifiable accounts if they choose to. And it is the pioneer of dubs: decentralized, uniqueness-detecting biometrics.
How do you incentivize an honest, communally strong A.I.?

Two methods of creating a communally strong A.I. are proof-of-cognition and secure multi-party computation. We've implemented proof-of-cognition in private with some high-quality results, and we'll be using it in the Project Oblio test network / initial distribution. Proof-of-cognition was designed for a blockchain-like network and is fully compatible with financial incentives. Secure multi-party computation is used by projects like OpenMined and is more common in academia due to its wider use cases outside of decentralized networks. While it offers immediate options for privacy and scaling, it is harder to financially incentivize nodes to act honestly under it. With more research, it should be possible to develop anonymized proof-of-cognition protocols. If proof-of-cognition turns out to be vastly inferior to secure multi-party computation, Project Oblio will adopt a secure multi-party-computation consensus algorithm instead. However, the initial distribution of Arrows relies on proof-of-cognition, hence the "pending" label next to most metrics of Karma.
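To make the secure multi-party computation alternative concrete, here is a minimal sketch of one standard building block, additive secret sharing, which is the kind of primitive frameworks like OpenMined build on. This is a textbook illustration, not Project Oblio's or OpenMined's implementation:

```python
import random

PRIME = 2**61 - 1  # field modulus; all arithmetic happens mod this prime

def share(secret: int, n_parties: int):
    """Split a secret into n additive shares.

    Any n-1 shares look uniformly random, so no subset short of all
    parties learns anything about the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

# Parties can sum two secrets without any party seeing either one:
a_shares = share(42, 3)
b_shares = share(100, 3)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142
```

Privacy here comes from the mathematics itself, which is why the text notes that multi-party computation offers immediate privacy options; the open problem it flags is paying the parties to follow the protocol honestly.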
Decentralized Neuroscience...Learn more here: projectoblio.com