SingularityNET Integrated OpenCog’s Atomspace Hypergraph

We have previously discussed how SingularityNET integrates the agents of its Social Intelligence Services, which may make it possible for anyone, including non-professionals, to create, launch, train, and educate their own personal AI agent.

While technological developments like this give the SingularityNET platform critical first-mover advantages, we would like to highlight some of the network’s core components, starting with OpenCog.

Check out our whitepaper to dive deeper.

OpenCog gives SingularityNET access to valuable features that other platforms cannot offer.

The open-source OpenCog project provided the first AI node and component in SingularityNET. Its unique primary feature is the Atomspace component, a graph database well suited to hosting neuromorphic data structures, ontologies, and semantic graphs.

The Atomspace hypergraph allows the creation of hybrid networks consisting of subsymbolic and symbolic segments. This is possible because of the Atomspace’s unique ability to store multi-layer hierarchical hypergraphs built from atoms.

In these hypergraphs, a link at one layer can serve as a node at higher tiers, while at lower layers the same connection may unfold into an internal subgraph representation. In addition, every atom may carry numerous contextual properties, used for activation spreading, probabilistic inference, or other purposes depending on the chosen algorithm.
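To make the layering concrete, here is a minimal Python sketch of the idea (illustrative only, not the actual Atomspace API): a link is itself an atom, so it can appear as a node inside a higher-level link, and every atom carries a dictionary of contextual properties.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)
class Atom:
    """Base unit of the hypergraph; carries contextual properties."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass(eq=False)
class Link(Atom):
    """A link is itself an Atom, so it can serve as a node at a higher layer."""
    outgoing: tuple = ()  # atoms this link connects (plain nodes or other links)

# Layer 1: plain nodes.
cat, animal = Atom("cat"), Atom("animal")

# Layer 2: a link over those nodes, with a contextual property
# (e.g. a probabilistic strength usable by an inference algorithm).
inherits = Link("inherits", outgoing=(cat, animal),
                properties={"strength": 0.9})

# Layer 3: the layer-2 link serves as a node inside a higher-level link.
belief = Link("believed-by", outgoing=(inherits, Atom("Alice")))

print([a.name for a in belief.outgoing])  # -> ['inherits', 'Alice']
```

The lower-layer “internal subgraph” view falls out of the same recursion: any link’s outgoing set can be expanded until only plain nodes remain.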

The capabilities of OpenCog are now being integrated

One piece of OpenCog now being integrated into SingularityNET is the RelEx component, which extracts semantic relationships from English text. For example, for the sentence “Alice threw the ball,” RelEx produces binary relations along the lines of _subj(throw, Alice) and _obj(throw, ball).

RelEx Component

Another part of OpenCog in active development is the Unsupervised Language Learning (ULL) project, which employs a hybrid approach unifying symbolic and sub-symbolic AI techniques. ULL’s goal is to provide semantic comprehension capabilities for many human and domain-specific languages.

ULL or Unsupervised Language Learning

At the moment, only a few of the world’s languages have formal grammars, like literary English with Link Grammar; most do not. Meanwhile, vast volumes of digitized texts lie in storage awaiting semantic comprehension.

Solving this issue conventionally would require thousands of person-years of computational linguists’ work to build grammars for these languages, or thousands more to annotate training corpora for machine grammar learning.

SingularityNET can address these problems more quickly and efficiently than previously thought possible. Our team is currently building a pipeline that uses sub-symbolic approaches such as category learning, K-means clustering for word-sense disambiguation, SVD, word embeddings, and AdaGram, alongside probabilistic inference for forming grammatical link types and extracting semantic relationships from parse trees.
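As a hedged illustration of the sub-symbolic half of such a pipeline (toy counts and scikit-learn stand in for the project’s actual tooling; AdaGram and real corpora are not shown), SVD can compress word co-occurrence counts into dense embeddings, which K-means then groups into candidate word categories:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Toy word-by-context co-occurrence counts (rows: words, cols: context words).
words = ["cat", "dog", "run", "jump", "red", "blue"]
counts = np.array([
    [10, 2, 0, 1, 0, 0],   # cat
    [9,  3, 1, 0, 0, 0],   # dog
    [1,  8, 7, 0, 0, 0],   # run
    [0,  7, 9, 1, 0, 0],   # jump
    [0,  0, 1, 8, 9, 2],   # red
    [0,  0, 0, 9, 8, 3],   # blue
], dtype=float)

# SVD compresses the sparse counts into dense word embeddings.
embeddings = TruncatedSVD(n_components=3).fit_transform(counts)

# K-means groups the embeddings into candidate grammatical/semantic categories.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
for word, label in zip(words, labels):
    print(f"{word}: category {label}")
```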

It is important to note that the OpenCog language learning pipeline implements a continuous self-reinforcing loop in which knowledge learned in earlier iterations seeds later iterations and is adjusted incrementally. We have already demonstrated success by extracting semantic and grammatical categories from the unannotated English Gutenberg corpus.

Once the Language Learning framework is capable of handling an existing language, it may be used to power SingularityNET’s Language Learning Nodes. This is a critical technological advance that may be difficult for other projects to match, and early success may create growing competitive advantages.

A core component of developing intelligent systems is the ability to acquire knowledge incrementally. Since our system is self-learning, these early advantages may compound into exponential benefits over time.

The incremental development of a system’s knowledge should be part of any solution. This idea has largely been forgotten since the notion of the Baby Turing Test was suggested more than thirty years ago; only a few people in the modern AI community are aware of the approach, which is why only a few projects implement it.

The approach reflects the incremental nature of an intelligent being’s development. In this test, you start with a newborn “black box” capable of very little, then feed it parental feedback, knowledge, and data, gradually increasing the complexity of your inputs.

Once the process is complete, the “black box” should be able to exhibit the intelligent behavior that the basic Turing Test requires.

Overcoming development challenges with iterative design

We will implement this paradigm in Agents using a test-driven development approach adapted for AI software processes. Our scripts may incrementally add new knowledge to newly created agents, while test assertions validate that the agents’ intelligence increases incrementally as they acquire new experience.
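A minimal sketch of what such a test might look like, assuming a hypothetical Agent interface with learn() and answer() methods (the real agent API is not specified in this post):

```python
# test_incremental_learning.py -- a sketch; Agent, learn(), and answer()
# are hypothetical stand-ins for the real agent interface.

class Agent:
    """Toy agent that memorizes fact triples fed to it."""
    def __init__(self):
        self.facts = {}

    def learn(self, subject, relation, obj):
        self.facts[(subject, relation)] = obj

    def answer(self, subject, relation):
        return self.facts.get((subject, relation))

def test_agent_knowledge_grows_with_curriculum():
    agent = Agent()

    # Stage 1: simple facts.
    agent.learn("cat", "is-a", "animal")
    assert agent.answer("cat", "is-a") == "animal"

    # Stage 2: more complex input builds on what is already known.
    agent.learn("animal", "can", "move")
    assert agent.answer("animal", "can") == "move"

    # Later stages must never erase earlier competence.
    assert agent.answer("cat", "is-a") == "animal"
```

Each curriculum stage adds knowledge and asserts both the new capability and the retention of everything learned before it.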

As mentioned earlier, the importance of this was discovered in OpenCog’s Unsupervised Language Learning project. When trying to acquire a language’s grammar and ontology from large, uncontrolled, unannotated corpora, we noticed that statistical biases can accumulate in the text and lead to misleading initial premises at the start of the learning curve.

One example is counted word co-occurrences, which may lead to misleading minimum spanning tree parses within the existing approach; the incorrect parse trees then cause problems in grammar learning. We combat these challenges by beginning with smaller lexicons and limited sentence lengths: two-term constructions with intransitive verbs, then low-ambiguity three-term constructions with transitive verbs. From there, we keep increasing sentence length and the lexicon’s richness by adding adverbs and adjectives.
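For context, here is a minimal sketch of a co-occurrence-driven spanning-tree parse, using networkx and made-up mutual-information weights (the pipeline’s actual scoring is more elaborate). The parse is the tree over the sentence’s words that maximizes the total pairwise score, so biased counts translate directly into a wrong tree, which is what the curriculum above guards against.

```python
import networkx as nx

# Made-up pairwise mutual-information scores for "the cat chased a mouse".
mi = {
    ("the", "cat"): 2.1, ("cat", "chased"): 3.4, ("chased", "mouse"): 3.1,
    ("a", "mouse"): 2.0, ("the", "chased"): 0.3, ("cat", "mouse"): 0.9,
}

G = nx.Graph()
for (a, b), weight in mi.items():
    G.add_edge(a, b, weight=weight)

# The parse is the spanning tree that maximizes total mutual information.
parse = nx.maximum_spanning_tree(G)
print(sorted(parse.edges(data="weight")))
```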


Overall, the development plan for creating intelligent systems will do the following:

  • Allow functional diversity, converging symbolic and sub-symbolic approaches.
  • Facilitate social computations powered by reputation-based consensus.
  • Allow incremental experiential learning through explicit and implicit feedback and inputs.
  • Be open-source within open ecosystems.
  • Provide rich environments infused with data from various sources.

We’re Only Getting Started

Our team has worked tirelessly to assemble an AI ecosystem with significant competitive advantages. In the coming weeks, we will continue demonstrating how we can lower the barriers to AI development faster than any other project.
