Figure | System configuration and operational principle of metaAgent.
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS
Caption
(a) System configuration: the metaAgent takes a collection of SPMs as its cerebellum, while its cerebrum is composed of a multi-agent discussion among four domain experts (sensory expert, planning expert, grounding expert and coding expert). In addition, a memory module is included in the cerebrum for storing the knowledge graph, the 3D visual-semantic map and past experiences. (b) Operational principle: metaAgent autonomously accomplishes the user’s command by sequentially performing four operations: i) the sensory expert gathers the multi-modal sensor data (radio, audio, text, image) and synthesizes the information in natural language, ii) the planning expert generates a set of executable subtasks, iii) the grounding expert assigns each subtask an action function and the associated devices, iv) the coding expert generates the action policy. The coding expert produces two types of natural-language output: an ‘external’ output for communication with the human user, and an ‘inner’ output for consideration by the planning and grounding experts. (c) and (d) Two experimental examples of semantic coding patterns. Here, given the knowledge graph, the semantic coding patterns convey the semantics of the SPM: a given space or space-time coding pattern is responsible for ‘generating a radiation beam pointing towards the router’ in (c) and ‘generating a binary-phase-shift-keying modulated signal for Alice’ in (d), respectively.
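The four-step loop in (b) can be pictured as a simple sensory-to-coding pipeline. The sketch below is an illustration only, under assumed names: every class, function and device identifier (Memory, sensory_expert, planning_expert, grounding_expert, coding_expert, "SPM-1") is hypothetical and not the authors' implementation.

```python
# Illustrative sketch of the four-step loop in panel (b); all names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Cerebrum memory: knowledge graph, 3D visual-semantic map, past experiences."""
    knowledge_graph: dict = field(default_factory=dict)
    semantic_map: dict = field(default_factory=dict)
    experiences: list = field(default_factory=list)


def sensory_expert(sensors: dict) -> str:
    """Step i: summarize multi-modal sensor data (radio, audio, text, image) in natural language."""
    return "; ".join(f"{kind}: {reading}" for kind, reading in sensors.items())


def planning_expert(observation: str, command: str) -> list:
    """Step ii: decompose the user's command into executable subtasks."""
    return [
        f"locate the target mentioned in '{command}'",
        "configure the SPM coding pattern",
        "report the result to the user",
    ]


def grounding_expert(subtasks: list, memory: Memory) -> list:
    """Step iii: assign each subtask an action function and the associated devices."""
    return [{"subtask": t, "action": "set_coding_pattern", "device": "SPM-1"} for t in subtasks]


def coding_expert(grounded: list) -> tuple:
    """Step iv: generate the action policy; return ('external', 'inner') natural-language outputs."""
    external = f"Executing {len(grounded)} subtasks on the programmable metasurface."
    inner = "Policy generated; feedback routed to the planning and grounding experts."
    return external, inner


if __name__ == "__main__":
    memory = Memory()
    obs = sensory_expert({"radio": "router detected at 30 deg", "audio": "user asked to reach Alice"})
    subtasks = planning_expert(obs, "generate a radiation beam pointing towards the router")
    grounded = grounding_expert(subtasks, memory)
    external_msg, inner_msg = coding_expert(grounded)
    print(external_msg)
```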
Credit
Shengguo Hu et al.
Usage Restrictions
Credit must be given to the creator.