
We Help Machines Understand People

Bayon AI specializes in high-level neuromorphic computing research.
We reverse-engineer the human mind using a top-down, inside-out approach, in an attempt to uncover the specific primary agents that constitute the society of mind. We bring together semantics, linguistics, psychology, and philosophy into a coherent, high-resolution model of the human psyche.

The Vision

We believe that in the future, people will communicate with machines intuitively. People will no longer ‘use’ computers; instead, they will interact with computers – just as they interact with other people.

For that to be possible, computers, as well as people, must understand the inner workings of the human mind, conscious and unconscious alike.

Bayon AI is here to help facilitate this coming transition.

 

Mission

Here at Bayon, we believe in semantics. True, it’s difficult and complex, but without semantic capabilities, computers and people will never truly understand each other. We believe that semantics is an inseparable part of intelligence and see no real way around it. And since it’s better to fail at doing the right thing than to succeed at doing the wrong thing, that’s what we’ll do.

We are here to close the gap between human and machine by explaining to computers what it means to be human.

Strategy

We base our mental model on the work of Marvin Minsky, who described the human mind as a collection of agents he called a society of mind. These mental agents are organized in groups and layers and collaborate as a society.

It is these agents that we’re trying to identify and map individually, bringing together semantics, linguistics, psychology, and philosophy into a single coherent, high-resolution model of the distributed system that is the human psyche.

Methods

Thankfully, we all come equipped with a state-of-the-art semantic supercomputer mounted in our heads. This is why we choose to focus on the top-down, or inside-out, approach rather than the bottom-up approach that’s so popular nowadays.

Indeed, this approach is by no means new, as philosophers have been trying to figure out the constants of perception for millennia. However, with new technological advancements and a set of adjusted reverse-engineering methodologies, we set off to explore the inner mechanisms of the humanOS.

The Model, in a Nutshell

Semantics and the Primary Building Blocks of Human Perception

This document is a work in progress.
Last update: June 2020

Data vs. Architecture

Our mind, the human mind, contains our knowledge. It stores within it everything we know about our external world, our internal world, and everything in between.

But just like in any other data storage, some of these entities are data, while some are architecture. Some things we know because we learned them, and some we simply know because they are part of the structure that contains the data.

Take relational databases, for example: we have columns, rows, and tables. In those, we can store whatever data we see fit. But even if there’s no data in the system, there will still be columns, rows, and tables. These exist not to represent something, but as a way to represent something. They are the primary entities of that specific data storage.

If you were to take your data out of a relational database and move it into a graph DB, for example, then you would lose all the columns, rows, and tables. Instead, the storage architecture would now give you vertices and edges to work with.
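
To make the distinction concrete, here is a minimal Python sketch (the names are purely illustrative) of the same fact held under the two storage architectures. Notice that the primary entities exist either way, with or without data:

    # Relational view: the architecture gives us tables, columns, and rows.
    relational = {
        "people": {
            "columns": ["name", "lives_in"],
            "rows": [["Jack", "Paris"]],  # the data itself
        }
    }

    # Graph view: the same fact, but the architecture now provides
    # vertices and edges instead of tables, columns, and rows.
    graph = {
        "vertices": {"Jack", "Paris"},
        "edges": {("Jack", "lives_in", "Paris")},
    }

    # Moving the data across architectures preserves the fact but swaps
    # the primary entities used to hold it.
    for name, city in relational["people"]["rows"]:
        assert (name, "lives_in", city) in graph["edges"]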

But what does the architecture give you when you move data into the human mind, and what do we lose when taking data out of the mind?
Well… that’s what we’re here to try and figure out.

The Patterns of a Primary Entity

Every data storage needs primary entities in order to exist, and we can only store data using the primary entities available on any given system. These are the entities we lose when transferring information out of the mind – or the entities we gain when moving information into the mind.

These primary entities we’re looking for must have a few characteristics that set them apart from all other “data” entities.

First, primary entities aren’t knowledge. They are the things that knowledge is made of. They aren’t data stored in the brain; they are the structure of the brain itself. They are something you have simply because you use the system.

Therefore, primary entities must be fixed and independent of any definition.

Primary entities are the constants, and they remain the same through the entire life cycle of the system. Primary entities (1) do not need to be learned, (2) can’t be forgotten, and (3) can’t be changed or manipulated.

That would also entail that, since they are ingrained in the structure, they are (4) independent of any definition.

Unlike learned concepts, which need to be defined before use, primary entities can be used without defining them. Furthermore, any definition we might attempt to impose on them will serve only as a description of our subjective experience of them, as it cannot bind or change the entities in any way.

These are the traits I used in order to hunt down the primary entities and set them apart from other entities.

 

Primary entities aren’t islands. They exist as parts of greater primary functional units of mind – unconscious sub-systems – that require these entities in order to function.

You’ll notice that I’ve taken the liberty of already dividing the primary entities into 4 clusters. Each cluster deals with a different entity type and contains a bottom-up and a top-down interpretation of it.

Note that the following are not definitions but merely descriptions of undefinable terms. Please bear with me as you try to connect with the meaning of the term rather than the words.

Primary Entities – Nodes

The first type of primary entity we would expect to find in a neural network structure would be the node.

Cluster 1 – Actions

Actions – Things that can be done (concrete)

Procedures – A monitored and prioritized sequence of actions. (action 1, wait [for x], then action 2)

Cluster 2 – Things

Concrete Objects – a specific implementation of a concept / the source of a concept.

  • This house

Abstract Concepts – a generalization of an entity type.

  • A house

Cluster 3 – One of

The pickers are invisible entities that have no properties to describe. We can only know about them by studying their function.

The X | First | Last

Picker – returns a single entity out of a closed list

  • the biggest spoon in the drawer

Finder – returns a single entity out of the entire knowledge base (by certain conditions)

  • The president
  • The best thing to do right now

Cluster 4 – All (Collections of similars / Several of the Same)

Group – A collection of known entities

  • Plants (that are) in the pot
  • My family

Fetcher (query) – a collection of entities which qualify under certain conditions

  • All the spoons that fell on the floor
  • Everything that happened today
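
As a rough illustration, here is one possible way to encode these four node clusters in Python. The class names and fields are my own shorthand, not part of the model itself:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:            # Cluster 1: a thing that can be done
        name: str

    @dataclass
    class Procedure:         # Cluster 1: a monitored, prioritized sequence of actions
        steps: List[Action]

    @dataclass
    class Concept:           # Cluster 2: an abstract generalization ("a house")
        name: str

    @dataclass
    class ConcreteObject:    # Cluster 2: a specific implementation ("this house")
        concept: Concept

    @dataclass
    class Picker:            # Cluster 3: returns one entity out of a closed list
        candidates: list
        choose: Callable[[list], object]

    @dataclass
    class Fetcher:           # Cluster 4: all entities qualifying under a condition
        condition: Callable[[object], bool]

    # "The biggest spoon in the drawer" as a Picker over a closed list:
    drawer = [("spoon", 19), ("spoon", 23), ("fork", 18)]
    biggest = Picker(drawer, lambda xs: max(xs, key=lambda x: x[1]))
    print(biggest.choose(biggest.candidates))   # ('spoon', 23)

In the same spirit, a Group would be a plain collection of known entities, and a Finder a Fetcher-like query that returns a single entity from the entire knowledge base.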

 

Primary Entities – Connections

The second type of entity we would expect to find in a neural network structure would be the connection. Primary connections seem to be slightly more complex than we might expect as they sometimes act more like junctions – containing 3 ends and possibly even more. These ends connect to other primary node entities or other primary connections in a combination that carries a certain meaning.

Much like the primary nodes, these can be divided into 4 clusters, each containing a top-down and a bottom-up interpretation.

Cluster 1 – Doing

Event – An Action that happened
(Subject did Action [on Object])

  • He was running
  • He went home

Task – A Procedure to be executed

Cluster 2 

Association – The mutual appearance of two separate entities.
(Entity1 [association type] Entity2)

  • Hammers go with nails (General Association – Correlation)
  • I can use that hammer (Current Association – Availability/Affordance)
  • Sleep influences alertness (Associated as Influence)
  • Dropping a mug causes the mug to break (Causal Association)
  • After the rain comes the sun (Temporal Association)

Comparison – the difference between two entities
(A is more/less/equals X than B)
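
Before moving on to clusters 3 and 4, here is a hedged sketch, in the same illustrative Python notation, of connections as junctions rather than simple A-to-B edges:

    # An Event junction has three ends: Subject did Action [on Object].
    ran = {"type": "event", "subject": "he", "action": "run"}    # "He was running"
    wrote = {"type": "event", "subject": "he",
             "action": "write", "object": "book"}                # "He wrote a book"

    # An Association joins two entities and records the kind of association.
    hammers = {"type": "association", "kind": "correlation",
               "a": "hammer", "b": "nails"}                      # "Hammers go with nails"

    # A Comparison: A is more/less/equally X than B.
    spoons = {"type": "comparison", "dimension": "size",
              "more": "this spoon", "less": "that spoon"}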

Cluster 3 – Parts & Members

Reference – Spatial relation / Abstract Affiliation + Hierarchy as Position in Relation to Reference
(Entity [relation (on / in / near / away from / wear / hold…)] Reference Entity)

  • A member of a group
  • An action within a process
  • An object within a container (alienable)
  • An object/concept part of object/concept (inalienable)

Inheritance – classification of an entity to an existing concept
(Entity [type of] Concept)

  • This is a horse
  • A horse is an animal
  • This procedure implements that method / What he’s doing is called reverse-engineering

Cluster 4 

Metadata
(A is B | A’s R is B)

  • The couch is gray | The couch’s color is gray
  • This action is easy | This action’s difficulty is easy
  • This group is big | This group’s size is big

Mental Actions / Exec. Functions

(Still not sure how to describe these)
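
Clusters 3 and 4 fit the same hypothetical notation, one junction per relation (the mental actions are left out, since they are still undescribed above):

    # Reference: Entity [relation] Reference Entity
    reference = {"type": "reference", "entity": "plant", "relation": "in", "ref": "pot"}

    # Inheritance: Entity [type of] Concept ("A horse is an animal")
    inheritance = {"type": "inheritance", "entity": "horse", "concept": "animal"}

    # Metadata: A's R is B ("The couch's color is gray")
    metadata = {"type": "metadata", "owner": "couch", "attribute": "color", "value": "gray"}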

 

Linguistics – Structuring Sentences on a Neural Network

Now that we’ve familiarized ourselves with the primary entities of the human knowledge base, let’s see how these entities can be represented in language.

In the realm of linguistics, we’ll take the top-down approach as well, as we apply the structure of our neural network to the creation of linguistic representations.

Basic Nodes – C1 & C2

The most basic structure in a neural network is the node – in our case, the C1 & C2 primary nodes. These nodes use labels, or names, to represent them.

These nodes are represented as the individual words of the language.

  • Jack, tree, table, person, jogging, movement, cutting, chopping, squeeze, and so on.

or word combinations

  • Jack Dorsey, apple tree, rocket launching
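
As a toy illustration (an assumed structure, not a claim about any particular implementation), a lexicon can then be read as a mapping from labels to the nodes they name:

    # Words and word combinations are labels pointing at C1/C2 nodes.
    lexicon = {
        "Jack": {"cluster": 2, "type": "concrete object"},
        "tree": {"cluster": 2, "type": "abstract concept"},
        "jogging": {"cluster": 1, "type": "action"},
        "apple tree": {"cluster": 2, "type": "abstract concept"},  # one node, two words
    }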

Complex Nodes – C3 & C4

The more complex C3 & C4 nodes may also be labeled and named, but there are usually agreed-upon patterns of word combinations that could represent them.

These naming conventions may define the structure and describe the commonalities of the group, the order of the hierarchy, and/or the required qualifications.

  • C3 Picker: “The biggest spoon in the drawer”
  • C3 Finder: “The best author”
  • C4 Group: “The (whole) family”
  • C4 Fetcher: “The spoons in the drawer”

The Basic Primary Connections

Now that we have actions, things, groups, and hierarchies, we need to put them together. This is where the next entity type comes in – the connection. But by connection I don’t mean the usual straightforward A-to-B kind.

Mental connections are more like junctions, as they usually connect between 3 different entities (S-V-O), and sometimes even 4.

Simple Connections between C1 & C2 nodes

  • Events: “He was writing.” “He wrote a book.”
  • Position: “It is on/in/under the couch.”
  • Metadata: “This action/plan/person is difficult.”

Simple Connections with C3 & C4 nodes

These sentences increase in complexity when combined with more complex nodes.

  • The best author in the world wrote a book
  • The knives in the drawer are sharp
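
In the sketch notation used above, “The best author in the world wrote a book” is a single Event connection whose subject end happens to be a complex C3 node:

    # The subject slot holds a Finder node instead of a simple label.
    best_author = {"type": "finder", "concept": "author",
                   "rank": "best", "scope": "the world"}
    sentence = {"type": "event", "subject": best_author,
                "action": "write", "object": "book"}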

Conjunctions – Members of the Same Node

These connections, just like any other entities, can be collected together by a C3 or C4 node, represented in language as conjunctions.

  • Addition: “Eat healthily, and exercise regularly.”
    With emphasis/priority: “This is fine, but that is not.”
  • Options: “Go out, or stay home.”
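
In the same hypothetical notation, a conjunction is a C3/C4 node whose members are whole connections:

    eat = {"type": "event", "subject": "you", "action": "eat healthily"}
    exercise = {"type": "event", "subject": "you", "action": "exercise regularly"}

    # Addition: "Eat healthily, and exercise regularly."
    addition = {"type": "group", "members": [eat, exercise]}

    # Options: "Go out, or stay home."
    options = {"type": "picker", "candidates": [
        {"type": "event", "subject": "you", "action": "go out"},
        {"type": "event", "subject": "you", "action": "stay home"},
    ]}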

Compound Sentences – Connections Between Connections

The formal basic graph structure of connections between nodes is simple. The mind, however, is quite a bit more complex. In the mind’s knowledge base, connections are not only possible between nodes, but also between other connections.

Such connections would be represented in language as compound sentences; a sketch follows the examples below.

  • Comparison: “I ride a bike better than you ride a bike.”
  • Association: “I was walking with my dog.”
  • Affiliation: “Death is part of life.”
  • Sequence: “First listen, then ask questions.”
  • Requirement: “In order to run, you must first learn to walk.”
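
Here is the promised sketch: once a connection’s ends may themselves be connections, nesting falls out of the same notation:

    # "I ride a bike better than you ride a bike."
    mine = {"type": "event", "subject": "I", "action": "ride", "object": "bike"}
    yours = {"type": "event", "subject": "you", "action": "ride", "object": "bike"}
    comparison = {"type": "comparison", "dimension": "skill",
                  "more": mine, "less": yours}

    # "In order to run, you must first learn to walk."
    requirement = {"type": "requirement",
                   "goal": {"type": "event", "subject": "you", "action": "run"},
                   "prerequisite": {"type": "event", "subject": "you", "action": "walk"}}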

Additional Connection Detail

Another great benefit of having connections emerge out of other connections is the ability to add custom data to connections, just as we do with nodes. These details convey additional information about the specific nature of the connection, such as time, place, context, opinion, intensity, etc.

  • Certainty: “He was (definitely) running.”
  • Speed: “He was running (very fast).”
  • Time: “He was running (last night).”

Since these additions are ‘owned’ by the connection and refer to the clause as a whole, their position in the sentence is merely a convention. Yoda-speak (e.g., “very fast he was running”), even though it breaks conventions, is still completely comprehensible, since all we’re really looking for in a sentence is the existence of nodes, the connection between them, and the additional connection details.
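
Because the details belong to the connection rather than to any word position, a sketch can attach them as a single attribute map on the whole clause:

    # One Event; the detail entries are owned by the connection itself.
    running = {"type": "event", "subject": "he", "action": "run",
               "details": {"certainty": "definite",   # "He was definitely running"
                           "speed": "very fast",      # "He was running very fast"
                           "time": "last night"}}     # "He was running last night"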

* When it comes to subject and object, though, placement in English does matter, and the subject must come first. In other languages, there might be additional cues and modifications that identify the subject and the object, making their position optional as well.

 

 

Semantics – Codifying Opposites

Mental Redundancy

The human mind, being a complex distributed system that evolved as independent layers over millions of years, is by no means free of redundancy. By redundancy, I simply refer to the ability of different brain areas to store the same kind of information – or, in the abstract mental sense, the ability to codify the exact same concepts in various ways.

The first redundancy I would like to address, as I (and most ancient philosophers) believe it is the most important perception upgrade of all, is the perception of opposition.

Codifying Opposition

The human mind has two different ways of codifying opposing forces. One as a connection, and the other, as a collection.

The connection of opposition we will refer to as binary opposition. It is the simpler, more basic, and somewhat primitive way of codifying opposition: it encapsulates two entities and the (negative) relationship between them.

The collection, however, can encapsulate various levels of the same concept – some similar and some different, some opposing and some amplifying. It is a unifying container that can hold various levels of similar concepts and thereby retain not only information about opposition, but also about the degree of difference and the order in which different concepts carry a common absolute property.

For example, let’s look at the concepts of good and bad.

Naturally, since good and bad are opposites, they are perceived as two opposing binary concepts, defining bad as simply the opposite of good.

But as we learn that there are different levels of good and bad, and that some things may be neither good nor bad, we begin to perceive these concepts not as opposing forces but as different levels of one core unifying principle of goodness – some positive, some negative, and some perfectly balanced.

Furthermore, making the effort to unify these forces within one’s own mind elevates one’s perception from the merely binary positive-or-negative to a level better equipped to comprehend the complexity of the world around us.
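
A minimal sketch of the two codings, with illustrative names and an illustrative ordering: the binary form keeps only the opposition, while the scale also keeps the order and the degree of difference:

    # 1. Binary opposition: a (negative) connection between two entities.
    opposition = {"type": "opposition", "a": "good", "b": "bad"}

    # 2. A unifying scale: an ordered collection of levels of one core concept.
    goodness = ["terrible", "bad", "neutral", "good", "excellent"]

    def compare(scale, a, b):
        """Positive if a ranks above b; the scale retains the order."""
        return scale.index(a) - scale.index(b)

    print(compare(goodness, "good", "bad") > 0)   # True: 'good' outranks 'bad'
    print(compare(goodness, "bad", "terrible"))   # 1: the degree of difference survives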

Unifying Concepts

How do we know when concepts should be united, you ask?
Well, that’s a good question!

To determine which concepts should be unified, I use the principle of non-opposition. Here is the principle, as explained by my favorite philosopher, Socrates: “the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time”.
From that ancient logic – or perhaps merely that description of our shared innate logic – it follows that if seemingly separate entities can’t hold contradictory states regarding the same object at the same time, then they must actually be two facets of the same single entity.
This means that if something can’t be both good and bad at the same time, then these two concepts must actually be two different facets of the same concept – just as something can’t be both long and short, or heavy and light.
Coding reality in such a way would allow an agent to perceive reality in a more meaningful and accurate manner, making it easier to later apply common sense.

Other United Scales

Some other concepts that can be perceived as scales, even though you wouldn’t usually think of them that way, are:

* Survival [Decay – Survive – Flourish]

* Life [Dead – Sick – Healthy – Lively – Exuberant]

* Truth [Impossible – Unlikely – Possible – Probable – Definite]

* Days of the week [Monday… Sunday]

* All human emotions

To Be Continued… 
