ASCI 604 ERAUDB Human Automation Cognitive Coupling Research Paper


9.3 Research Brief: 
Reconciling human and machine agents so that they can work jointly as a cognitive team can be addressed by updating the Sense-Model-Plan-Act (SMPA) cycle of the classical goal-based agent to facilitate mental modeling of the human in the loop, thereby enabling a truly cognitive teaming agent. Specifically, the Human Model (HuM) and the Human Mental Model (HuMM) must be introduced as key components of the agent's deliberative process. The changes to Model follow directly from the requirement of human mental modeling. Coarsely speaking, changes to Sense contribute to the recognition of the teaming context, changes to Plan contribute to the anticipation of team behavior, and changes to Act contribute to the determination of proper actions at both the action and motion levels. In practice, these four functionalities are tightly integrated in the agent's behavior loop.
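The updated SMPA cycle with HuM and HuMM components can be sketched as follows. This is a minimal illustrative skeleton, not a reference design: the class names follow the text, but every field, threshold, and action label is an invented assumption.

```python
from dataclasses import dataclass, field

@dataclass
class HumanModel:            # HuM: relatively stable facts about the human
    capabilities: set = field(default_factory=set)
    preferences: dict = field(default_factory=dict)

@dataclass
class HumanMentalModel:      # HuMM: evolving estimate of the human's mental state
    beliefs: dict = field(default_factory=dict)
    intentions: list = field(default_factory=list)
    workload: float = 0.0

class CognitiveTeamingAgent:
    """Hypothetical sketch of the extended Sense-Model-Plan-Act loop."""

    def __init__(self):
        self.hum = HumanModel()
        self.humm = HumanMentalModel()
        self.world_state = {}

    def sense(self, observations):
        # Proactive sensing: update both the world state and HuMM estimates.
        self.world_state.update(observations.get("environment", {}))
        self.humm.workload = observations.get("human_workload", self.humm.workload)

    def model(self):
        # Combine environmental state and mental-state estimates into one team state.
        return {"world": dict(self.world_state), "human": self.humm}

    def plan(self, state):
        # Placeholder policy: real planning would also weigh how actions
        # affect the human's mental state; the 0.8 threshold is invented.
        return ["assist_human"] if state["human"].workload >= 0.8 else ["act_on_world"]

    def act(self, plan):
        return plan[0] if plan else "idle"

agent = CognitiveTeamingAgent()
agent.sense({"environment": {"entrance": "blocked"}, "human_workload": 0.9})
action = agent.act(agent.plan(agent.model()))
print(action)  # → assist_human
```

The point of the sketch is only the data flow: Sense writes into both the world state and the HuMM, and Plan reads both before Act commits to anything.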
Sense – The agent can no longer sense passively, merely to check that the preconditions of an action are satisfied, or to confirm that the world is updated accordingly after it applies an action. In teaming scenarios, the agent needs to proactively make complex sensing plans that interact closely with the other functionalities – Model and Plan – to maintain a correct estimate of the mental state (such as the intentions, knowledge, and beliefs) of its human teammates in order to infer their needs. For example, how the robot should behave depends on how much and what type of help the human requires, which in turn depends on observations of the human teammates, such as their behavior and workload. Furthermore, the inference about the human's mental state should be informed by the model the robot maintains of the human's capabilities and preferences. Note that directly asking humans (i.e., explicit communication) is a specific form of sensing.

[Figure: An updated view of the architecture of a cognitive teaming agent, acknowledging the need to account for the human's mental state by means of what we refer to as Human Mental Modeling, or HuMM.]
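One common way to cast this kind of intent inference is a Bayesian belief update over the human's possible intentions; the sketch below assumes this framing, and the intent labels and likelihood values are invented for illustration.

```python
# Bayesian update of the robot's belief over a human teammate's intention.
# Intents and numbers are illustrative, not from the original text.

def update_intent_belief(prior, likelihoods):
    """prior: {intent: P(intent)}; likelihoods: {intent: P(observation | intent)}."""
    unnormalized = {i: prior[i] * likelihoods.get(i, 0.0) for i in prior}
    total = sum(unnormalized.values())
    return {i: p / total for i, p in unnormalized.items()}

prior = {"triage_victim": 0.5, "explore_hallway": 0.5}
# Observation: the human heads toward the medical kit.
obs_likelihood = {"triage_victim": 0.9, "explore_hallway": 0.2}

posterior = update_intent_belief(prior, obs_likelihood)
inferred = max(posterior, key=posterior.get)
print(inferred)  # → triage_victim
```

Sensing actions (including explicit questions to the human) would supply the observations that feed such an update; the HuM's capability and preference information would shape the priors.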
Model – Correspondingly, the state, i.e., "what the world is like now," needs to include not only environmental states but also the mental states of the team members, which comprise not only cognitive and affective states such as the human's task-relevant beliefs, goals, preferences, and intentions, but also, more generally, emotions, workload, expectations, and trust. "What my actions do to the world" then needs to include the effects of the robot's actions on the team members' mental states, in addition to the effects on their physiological and physical states and on the observable environment. "How the world evolves" now also requires rules that govern the evolution of the agents' mental states based on their interactions with the world (including information exchange through communication). "What it will be like" is thus an updated state representation that captures not only world-state changes and agent physiological and physical state changes based on their actions and current states, but also the mental-state changes caused by the agent itself and the other team members.
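A minimal way to render this coupled state and transition rule is a function that updates environmental and mental components separately, so that unobserved physical actions leave the human's beliefs stale. The effect tables below are invented for the sketch.

```python
# Sketch of a transition function over a state that couples the environment
# with the human's beliefs ("how the world evolves"); effects are illustrative.

PHYSICAL_EFFECTS = {"move_block": {"entrance": "clear"}}
MENTAL_EFFECTS = {"move_block": {"entrance": "clear"}}  # belief change if observed

def transition(state, action, human_observes=True):
    nxt = {"env": dict(state["env"]), "human_beliefs": dict(state["human_beliefs"])}
    nxt["env"].update(PHYSICAL_EFFECTS.get(action, {}))
    if human_observes:  # only observed actions update the human's beliefs
        nxt["human_beliefs"].update(MENTAL_EFFECTS.get(action, {}))
    return nxt

s0 = {"env": {"entrance": "blocked"}, "human_beliefs": {"entrance": "blocked"}}
s1 = transition(s0, "move_block", human_observes=False)
print(s1["env"]["entrance"], s1["human_beliefs"]["entrance"])  # → clear blocked
```

The divergence in the output is exactly the situation the Model component must track: the world has changed, but the human's mental model of it has not.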
Plan – "What action I should do" now involves more complex decision-making that must again also consider the human's mental state. Furthermore, since the robot's actions can now influence not only the state of the world but also the mental states of the humans, the planning process must also consider how actions may influence those mental states and even how to affect or shape them. For example, in teaming scenarios it is important to maintain a shared mental state between teammates. This may require the robots to generate behavior that is expected or predictable to the human teammates, such that they would be able to understand the robot's intention. This can, in fact, be considered an implicit form of signaling or communication. On the other hand, a shared mental state does not necessarily mean that every piece of information needs to be synchronized. Given the limits on human cognitive load, sharing only necessary information is more practical between teammates working on different parts of the team task. A properly maintained shared mental state can contribute significantly to the efficiency of teaming, since it reduces the need for explicit communication.
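Plan selection that trades off cost against predictability can be sketched as scoring each candidate plan by its length plus a weighted distance from what the human expects. The distance measure, weight, and plan labels below are all invented assumptions.

```python
# Illustrative "predictable planning" sketch: prefer plans that are both
# cheap and close to the human's expected plan. Scoring is invented.

def plan_distance(plan, expected):
    # Count mismatched steps plus any length difference.
    return sum(1 for a, b in zip(plan, expected) if a != b) + abs(len(plan) - len(expected))

def select_plan(candidates, expected_plan, predictability_weight=1.0):
    def score(plan):
        return len(plan) + predictability_weight * plan_distance(plan, expected_plan)
    return min(candidates, key=score)

candidates = [
    ["shortcut_through_debris", "reach_victim"],        # cheaper but surprising
    ["clear_hallway", "walk_hallway", "reach_victim"],  # longer but expected
]
expected = ["clear_hallway", "walk_hallway", "reach_victim"]

print(select_plan(candidates, expected, predictability_weight=2.0))
```

With a high weight the robot picks the longer, expected plan (implicit signaling of its intent); with the weight at zero it reverts to the cheapest plan regardless of how surprising it looks.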
Act – In addition to physical actions, we now also have communication actions that can change the mental states of the humans by changing their beliefs, intentions, etc. Actions that affect the human's mental state do not have to be linguistic (direct); stigmergic actions that instrument the environment can also inform the humans and thereby change their mental states. Given that an action plan is eventually realized via the activation of effectors through motor commands, Act must be tightly integrated with Plan. While Plan generates the sequence of actions to be realized, the motor commands can create different motion trajectories to implement each action, which can in turn affect how the plan is interpreted, since different realizations exert different influences on the human's mental state depending on the context.
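The distinction between physical, communicative, and stigmergic actions can be made concrete by which state components each one updates. The action names and effects below are invented, loosely echoing the medical-kit episode in the scenario that follows.

```python
# Sketch: action types distinguished by what they change.
# "place_kit_in_hallway" is stigmergic (changes the world so the human will
# later learn from it); "tell_kit_location" is purely communicative
# (changes beliefs without changing the world). All names are illustrative.

def apply_action(state, action):
    env, beliefs = dict(state["env"]), dict(state["human_beliefs"])
    if action == "place_kit_in_hallway":
        env["kit_location"] = "hallway"          # world changes
    elif action == "tell_kit_location":
        beliefs["kit_location"] = env.get("kit_location")  # beliefs change
    return {"env": env, "human_beliefs": beliefs}

s = {"env": {"kit_location": "robot"}, "human_beliefs": {}}
s = apply_action(s, "place_kit_in_hallway")
s = apply_action(s, "tell_kit_location")
print(s["human_beliefs"]["kit_location"])  # → hallway
```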
An Exemplary Human-Robot Teaming Scenario
To better illustrate how mental modeling of teammates can contribute to the different capabilities needed for cognitive teaming agents, we will now consider scenarios from a human-robot team performing a USAR (urban search and rescue) task, where each subteam i consists of one human Hi and one robot Ri.
For subteam 1: Based on the floor plan of the building in its search area, R1 realizes that the team needs to use an entrance to a hallway to start the exploration. R1 notices that a heavy object blocks the entrance to the hallway. Based on its capability model of H1 (i.e., what H1 can and cannot lift) and H1's goal, R1 decides to interrupt its current activity and move the object out of the way. H1 and R1 then continue exploring different parts of the area independently until H1 discovers a victim and informs R1. R1 understands that H1 needs a medical kit to conduct triage on this victim as soon as possible, but knows that H1 does not know where a medical kit is located. Since R1 has a medical kit already, but cannot deliver it due to other commitments, it places its medical kit along the hallway that it expects H1 to go through and informs H1 of the kit's presence.
For subteam 2: Based on the floor plan of the building in its search area, R2 finds that all the entrances are automatic doors that are controlled from the inside. Since a connection cannot be established due to the power loss, the team needs to break a door open first. R2 infers that H2 is about to break a door open based on the teaming context and its observations. Since it knows that breaking the door open may cause a board to fall on H2, R2 moves to catch the board preventatively. Once H2 and R2 are inside, however, H2 is uncertain about the building's structural integrity and has no information on which parts may easily collapse. R2 has access to the building structure information and proposes a plan to split the search in a way that minimizes human risk.
For both subteams: As both teams are searching their areas, they receive information about a third area to be explored. Since neither H1 nor H2 is finished with their current search task, each assumes that the other will take care of the third area. Since R1 understands H1's and H2's current situations, and expects to be done with its part of the task soon, R1 decides to work on the third area, as it does not expect H1 to need any help. R1 informs H1. H1 is OK with it and informs H2 that team 1 is working on the third area. When R1 arrives at the third area, it notices new situations that require certain equipment from team 2. R1 communicates with R2 about the availability of the missing items. R2 quickly predicts team 2's equipment needs and anticipates that those items will not be needed for a while. After getting the OK from H2 to lend the equipment to R1, R2 drives off to meet R1 halfway and hands over the equipment, and R1 returns to the third area with the newly acquired equipment. H1 is not informed during this process, since R1 understands that H1 has a high workload. Once the equipment is no longer needed, R1 meets up with R2 again, returning the equipment in time for use by H2.

Based on the above scenario, we can see that mental modeling of the other teammates by a cognitive robotic teammate is critical to the fluent operation of the team. For example, R1 needs to understand the capabilities of H1 (i.e., what H1 can and cannot lift); both R1 and R2 need to be able to infer the intentions of their human teammates. The modeling may also include the human's knowledge, beliefs, mental workload, trust, etc. This human mental modeling for cognitive teaming between humans and robots connects with the three capabilities we introduced in Section I as critical to the functioning of human-human teams, and forms the basis of the updated agent architecture in Fig. 3, as follows:
C1. Recognizing the teaming context to identify the status of the team task and the states of the teammates: For example, based on the floor plan of the building, R1 realizes that the team needs to use an entrance to a hallway to start the exploration. R2 finds that all the entrances are automatic doors controlled from the inside; consequently, it infers that the team needs to break a door open first. This inference process takes into account the modeling of the teammate's state (e.g., the intention to enter the building).
C2. Anticipating team behavior under the current context: For example, given that a heavy object blocks the entrance to the hallway, R1 infers that the human will look for a way to clear the object. R2 infers that H2 is going to break a door open based on the teaming context and its observations. This prediction takes into account the modeling of the human's capabilities and knowledge of the teaming context.
C3. Taking proper actions to advance the team goal while accounting for the teammates: For example, after anticipating the human's plan, the robots should proactively help the humans (e.g., R1 helps H1 by moving the object away), while taking into account the modeling of the human's capabilities, mental workload, and expectations.

Remark: C3 above includes not only actions that contribute to the team goal, but also actions for maintaining teaming situation awareness (e.g., making explanations). As such, C3 feeds back to C1, and the three capabilities in turn form a loop that should be constantly exercised to achieve fluent teaming. Furthermore, although we have been focusing on implicit communication (e.g., through observing behaviors) to emphasize the importance of mental modeling, explicit communication (e.g., using natural language) is also an important part of the loop. Another note is that since both implicit and explicit communication can update the modeling of the other teammates' mental states as discussed, they are anticipated to evolve the teaming process in the long term.
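The C1 → C2 → C3 loop can be summarized as three chained stages, each consuming the previous stage's output. Every stage below is a stub with invented labels drawn loosely from the subteam-2 episode; a real agent would implement each with the Sense/Model/Plan/Act machinery above.

```python
# Minimal sketch of the C1 -> C2 -> C3 capability loop; all values invented.

def recognize_context(observations):          # C1: teaming context and states
    return {"task": observations["task"], "human_state": observations["human"]}

def anticipate_behavior(context):             # C2: predict the human's next move
    return "break_door" if context["task"] == "enter_building" else "continue_search"

def choose_action(predicted_human_action):    # C3: act to support the prediction
    support = {"break_door": "catch_falling_board"}
    return support.get(predicted_human_action, "proceed_with_own_task")

obs = {"task": "enter_building", "human": "approaching_door"}
action = choose_action(anticipate_behavior(recognize_context(obs)))
print(action)  # → catch_falling_board
```

In a running agent this chain would execute continuously, with C3's actions (including explanations) feeding new observations back into C1.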
