How do you incentivize an honest, communally strong A.I.? Two methods of building one are proof-of-cognition and secure multi-party computation. We've implemented proof-of-cognition in private with some high-quality results, and we'll be using it in the Project Oblio test network / initial distribution. Proof-of-cognition was designed for a blockchain-like network and is fully compatible with financial incentives. Secure multi-party computation is used by projects like OpenMined and is more common in academia, owing to its wider range of use cases outside decentralized networks. While it offers immediate options for privacy and scaling, it is harder to financially incentivize nodes to act honestly under it. With more research, it should be possible to develop anonymized proof-of-cognition protocols. If proof-of-cognition turns out to be vastly inferior to secure multi-party computation, Project Oblio will switch to a secure multi-party computation consensus algorithm. However, the initial distribution of Arrows relies on proof-of-cognition, hence the "pending" label next to most metrics of Karma.
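To make the secure multi-party computation option concrete, here is a minimal sketch of additive secret sharing, one common MPC primitive. This is an illustration only, not Project Oblio's or OpenMined's actual protocol: each party splits its input into random shares, parties sum the shares they hold, and only the aggregate is revealed; no single party learns another's input.

```python
import random

PRIME = 2**31 - 1  # illustrative field modulus

def share(secret, n=3):
    """Split a secret into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the shared value by summing shares modulo PRIME."""
    return sum(shares) % PRIME

# Three parties each secret-share an input; party i holds the i-th
# share of every input and sums its shares locally. Combining the
# local sums reveals the total, but no individual input.
inputs = [12, 34, 56]
all_shares = [share(x) for x in inputs]
local_sums = [sum(col) % PRIME for col in zip(*all_shares)]
assert reconstruct(local_sums) == sum(inputs) % PRIME
```

The difficulty the paragraph above points at is visible even here: nothing in the arithmetic rewards a party for submitting a correctly computed share, which is why layering financial incentives on MPC is harder than on a blockchain-native scheme.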