Friday, September 11, 2009

REAL-TIME ROLE COORDINATION FOR AMBIENT INTELLIGENCE

ABSTRACT
We propose group communication for agent coordination within "active rooms" and other pervasive computing scenarios featuring strict real-time requirements, inherently unreliable communication, and a large but continuously changing set of context-aware autonomous systems. Messages are exchanged over multicast channels, which are reminiscent of chat rooms in which everybody hears everything being said. The issues that have to be faced (e.g., changing user preferences and locations, performance constraints, redundancy of sensors and actuators, agents on mobile devices continuously joining and leaving) require the ability to dynamically select the "best" agents for providing a service in a given context. Our approach is based on the idea of an implicit organization, which is the set of all agents willing to play a given role on a given channel. An implicit organization is a special form of team with no explicit formation phase and a single role involved; no middle agent is required. A set of protocols, designed for unreliable group communication, is used to negotiate a coordination policy and to coordinate the team. We sketch a general computational model for an agent participating in an implicit organization.

INTRODUCTION
We use a form of group communication, called channeled multicast, for coordinating agents. Channeled multicast often reduces the amount of communication needed when more than two agents are involved in a task, and allows overhearing of the activity of other agents. Overhearing, in turn, enables the collection of contextual information, pro-active assistance, monitoring, and even partial recovery of message losses in specific situations, and is exploited by the protocols described in this paper. Our current implementation of channeled multicast is based on IP multicast; it therefore features almost instantaneous message distribution on a local area network but suffers from occasional message losses. The objective of this paper is to describe our current work on an agent coordination technique based on channeled multicast. Rather than addressing the well-known problems of task decomposition or subgoal negotiation, we focus on achieving robustness and tolerance to failure in a setting where agents can be redundant, communication is unreliable, hardware can be switched off, and so on. Such an environment can evolve faster than the agents execute their tasks. We want to avoid centralized or static solutions like mediators, facilitators or brokers; rather, we aim at a fully distributed and flexible system, without necessarily looking for optimality.
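As a rough illustration of how a channel of this kind can be built on IP multicast, the minimal Python sketch below joins a multicast group and sends and receives plain text messages. The group address, port, and helper names (open_channel, say, overhear) are our own illustrative assumptions, not part of the implementation described in the paper.

import socket
import struct

MCAST_GRP = "239.255.42.99"   # hypothetical channel address
MCAST_PORT = 5007

def open_channel() -> socket.socket:
    # Join the multicast group: every member overhears everything said on the channel.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def say(sock: socket.socket, message: str) -> None:
    # Send a message to the whole channel; delivery is best effort (no guarantee).
    sock.sendto(message.encode("utf-8"), (MCAST_GRP, MCAST_PORT))

def overhear(sock: socket.socket) -> str:
    # Block until the next message on the channel arrives; losses must be tolerated.
    data, _sender = sock.recvfrom(4096)
    return data.decode("utf-8")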

IMPLICIT ORGANIZATIONS
In multi-agent systems, we define a role as a communication-based API, or abstract agent interface (AAI), i.e. one or more protocols aimed at obtaining a cohesive set of functions from an agent. An agent may play more than one role, simultaneously or at different times, depending on its capabilities and the context. We call an implicit organization a set of agents tuned on the same channel to play the same role, possibly in different ways, and willing to coordinate their actions. This situation is commonly managed by putting a broker or some other form of middle agent in charge of supervising the organization. By contrast, our objective is to explore the advantages and disadvantages of an approach based on unreliable group communication, in a situation where agents can come and go fairly quickly, their capabilities can change or evolve over time, and it is not necessarily known a priori which agent can achieve a specific goal without first trying it out. An implicit organization is a special case of team. An implicit organization is in charge of defining its own control policy, that is, how a sub-team is formed within the organization in order to achieve a specific goal, and how the intentions of this sub-team are established.
ROLE-BASED COMMUNICATION
In this initial work, we assume that any request (by which we mean any REQUEST or QUERY) generates a commitment by an implicit organization to perform the necessary actions and answer appropriately. Thus, in principle the interactions between requesting agents and implicit organizations are straightforward, and can be summarized in a simple UML sequence diagram: a generic Requester agent addresses its request to a role R on a channel, and the corresponding implicit organization replies appropriately.
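As a rough sketch of what such a role-addressed message could look like on the wire, the snippet below builds a REQUEST whose receiver is a role rather than an individual agent. The field names and the JSON encoding are our own assumptions, not the paper's message format.

import json

def make_role_request(sender: str, role: str, content: str) -> bytes:
    # A REQUEST addressed to a role: whoever plays the role on the channel
    # overhears it, and the implicit organization as a whole is committed to answer.
    return json.dumps({
        "performative": "REQUEST",
        "sender": sender,        # e.g. "UserAssistant033"
        "receiver-role": role,   # e.g. "CitationFinder"
        "content": content,      # e.g. "find(Michelangelo)"
    }).encode("utf-8")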
Plain Competition
This policy is nothing more than "everybody races against everybody else, and the first to finish wins". It is by far the easiest policy of all: no pre-work or post-work coordination is required, while the on-going coordination consists in overhearing the reply sent by whoever finishes first. In summary, the policy works as follows: when a role receives a request, any agent able to react starts working on the job immediately. When an agent finishes, it sends back its results. The other agents overhear this answer and abort their own work on the same job. A race condition is possible: two or more agents finish at the same time and send their answers. This is not a problem, since the requester must accept whatever answer comes first and ignore the others.
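A minimal sketch of one member's behaviour under this policy follows, assuming hypothetical do_job and send_inform callables for the application logic and the channel primitive. In this simplification an aborted job is simply never answered; a real agent would interrupt the computation instead of discarding its result.

import threading

class PlainCompetitionMember:
    def __init__(self, do_job, send_inform):
        self.do_job = do_job            # callable: job -> result (may be slow)
        self.send_inform = send_inform  # callable: (job_id, result) -> publish to the role
        self.aborted = set()            # job ids whose answer we already overheard

    def on_request(self, job_id, job):
        # No pre-work coordination: everybody races and starts working at once.
        threading.Thread(target=self._work, args=(job_id, job), daemon=True).start()

    def _work(self, job_id, job):
        result = self.do_job(job)
        if job_id not in self.aborted:        # nobody has answered yet
            self.send_inform(job_id, result)  # the first to finish wins

    def on_overheard_answer(self, job_id):
        # On-going coordination: overhearing another member's reply aborts our own attempt.
        self.aborted.add(job_id)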
Simple Collaboration
This policy consists of collaboration among all participants to synthesize a common answer from the results obtained by each agent independently. It does not require any pre-work coordination. The policy works as follows. As in Plain Competition, all agents able to react start working on the job as soon as the request is received. The first agent to finish advertises its results with an INFORM to the role, and moves to the post-work phase. Within a short timeout, all other members of the sub-team must react by sending, again to the role, either an INFORM with their own results, or an INFORM stating that they are still working, followed by an INFORM with the results when finally done. The first agent collects all these messages and synthesizes the common result in a goal-dependent way.
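A sketch of the post-work phase as seen by the first agent to finish, under the assumption that overheard INFORMs arrive as small dicts on a queue.Queue and that merge() is the goal-dependent synthesis function; all names are illustrative.

import queue
import time

def collect_and_merge(own_result, informs, n_other_members, merge, timeout_s=2.0):
    partial_results = [own_result]
    deadline = time.monotonic() + timeout_s
    pending = n_other_members
    while pending > 0:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                                # give up on silent members
        try:
            msg = informs.get(timeout=remaining)
        except queue.Empty:
            break
        if msg["type"] == "RESULT":              # an INFORM carrying results
            partial_results.append(msg["result"])
            pending -= 1
        elif msg["type"] == "STILL_WORKING":     # keep waiting for that member
            deadline = time.monotonic() + timeout_s
    return merge(partial_results)                # goal-dependent synthesis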
Multicast Contract Net
This policy is a simplification of the well-known Contract Net protocol, where the Manager is the agent sending a request to a role, and the award is determined by the bidders themselves, since everybody knows everybody else's bid. Thus, this policy effectively involves coordination only in the pre-work phase, while neither on-going nor post-work coordination is required. The policy has three parameters: the winning criterion (lowest or highest bid), a currency for the bid (which can be any string), and a timeout within which bids must be sent. The policy works as follows. As soon as a request arrives at the role, all participating agents send their bid to the role. Since everybody receives everybody else's offer, each agent can easily compute which one is the winner. At the expiration of the timeout for the bid, the winning agent declares its victory to the role with an INFORM repeating its successful bid, and starts working. Some degraded cases must be handled. The first case happens when two or more agents send the same winning bid. The second case happens because of a race condition when the timeout for the bid expires, or because of the loss of messages; as a consequence, it may happen that two or more agents believe themselves to be winners. This is solved by an additional, very short wait after declaring victory, during which each agent believing to be the winner listens for contradictory declarations from others. In spite of these precautions, it may still happen that two or more agents believe themselves to be winners and attempt to achieve the goal independently. The winner declaration mechanism, however, reduces this probability to the square of the probability of losing single messages, since at least two consecutive messages (the bid from the real winner and its victory declaration) must be lost by the others.
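Since every member overhears every bid, each one can decide the award locally. The sketch below shows that local decision; the lexicographic tie-break on the agent identifier is an assumed way of handling the equal-winning-bids case, not one prescribed by the paper.

def decide_winner(bids, winning_criterion="lowest"):
    # bids maps agent identifier -> numeric bid in the negotiated currency.
    best = min if winning_criterion == "lowest" else max
    winning_value = best(bids.values())
    tied = sorted(agent for agent, value in bids.items() if value == winning_value)
    return tied[0]   # deterministic: all members compute the same winner

# Example: everybody overheard the same three bids, so everybody agrees on "agentB".
assert decide_winner({"agentA": 700, "agentB": 250, "agentC": 450}) == "agentB"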
Master Slave
This policy has many similarities with the Multicast Contract Net; the essential difference is that a master decides which agent is delegated to achieve a goal, rather than a bidding phase. The master is elected by the policy negotiation protocol. Typically, agents that support this policy either propose themselves as masters, or accept any other agent as master but refuse to be master themselves; this is because the master is necessarily part of any goal-specific sub-team, i.e. it must always be able to react to any request and must have an appropriate logic for selecting the slave delegated to achieve a goal. The policy works as follows. On reception of a request, all agents of the sub-team send an INFORM to the role declaring their availability. The master agent collects all declarations and issues an INFORM to the role nominating the slave, which acknowledges by repeating the same INFORM. Message loss is recovered by the master through a simple timeout between its declaration and the reply, repeating the INFORM if necessary. Of course, there is no way that two agents end up believing they are the delegated slave at the same time.
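A master-side sketch of the nomination step with loss recovery follows, assuming hypothetical send_inform and wait_for_ack channel helpers and a tunable retry count.

def nominate_slave(send_inform, wait_for_ack, slave_id, max_retries=3, timeout_s=1.0):
    # INFORM the role of the chosen slave and repeat the INFORM until the slave
    # acknowledges by echoing it back, or until we give up.
    for _ in range(max_retries):
        send_inform({"performative": "INFORM", "nominated-slave": slave_id})
        if wait_for_ack(slave_id, timeout_s):   # the slave repeats the same INFORM
            return True
    return False   # persistent loss: the master may pick another slave or renegotiate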
NEGOTIATING A POLICY
We discuss here how an implicit organization establishes its own coordination policy. The negotiation protocol works in two phases: first, the set of policy instances common to all agents is computed; then, one is selected. The protocol can be restarted at any time. In the normal case, i.e. when the protocol is not restarted mid-way, the total number of messages sent on a channel for a negotiation is equal to the number of agents in the implicit organization multiplied by a tunable parameter, plus one. Some simple examples of policy instances are the following (a possible in-memory representation is sketched after the listing):
Name = MulticastContractNet
  Param = Currency          constraint = (one of Dollar, Euro)        suggested = (Euro)
  Param = Winning Criteria  value = lowest
  Param = Bid Timeout       constraint = (in 100...2000)              suggested = (1000)

Name = Master Slave
  Param = Master            constraint = (not one of Agent1, Agent2)  suggested = (Agent3, Agent4)
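One possible in-memory representation of such policy instances, together with the first phase of the negotiation (intersecting what everybody supports), is sketched below. The class layout is our own simplification, and intersecting only policy names deliberately ignores the per-parameter constraints shown in the listing.

from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyParam:
    name: str                 # e.g. "Currency"
    constraint: frozenset     # e.g. frozenset({"Dollar", "Euro"})
    suggested: tuple = ()     # e.g. ("Euro",)

@dataclass(frozen=True)
class Policy:
    name: str                 # e.g. "MulticastContractNet"
    params: tuple = ()        # tuple of PolicyParam

def common_policy_names(all_supported):
    # Intersect the sets of policies supported by every agent in the organization.
    if not all_supported:
        return set()
    name_sets = [{p.name for p in supported} for supported in all_supported]
    return set.intersection(*name_sets)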
We show in the algorithm below how the negotiation is triggered. As said above, the negotiation is done in two straightforward steps. In the first, a mutual belief about the policy instances common to the entire organization is established. This can be easily achieved by having each agent send INFORMs to the role about what it is able to support, and intersecting the contents of all received messages. The second step consists in having one agent, called the oracle, select one policy among those common to everybody and notify the organization of its decision by means of a simple ASSERT sent to the role. The oracle can be any agent, either a member of the organization or external to it.
The algorithm presented below:
(1) allows for any external agent (such as a network monitor or an application agent interested in enforcing certain policies) to intervene just after the common policies have been established;
(2) provides a default oracle election mechanism if an external one is not present; and,
(3) handles conflicting oracles by forcing a renegotiation. For the negotiation protocol to work in an unreliable environment, one additional piece of information and a few more messages than those already mentioned are needed.
All messages concerning policy negotiation are marked with a Negotiation Sequence Number (NSN). An NSN is the identifier of a negotiation process and is unique during the lifetime of an organization. NSNs form an ordered set; an increment(nsn) function returns an NSN that is strictly greater than its input nsn. In our current implementation, an NSN is simply an integer; the first agent joining an organization sets the NSN of the first negotiation to zero. The goal of the NSN is to help guarantee the coherence of the protocols.
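A trivial sketch of the NSN as just described, i.e. an integer with a strict-increment operation; the wrapper type is only for readability and is our own choice.

from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class NSN:
    value: int = 0   # the first agent joining an organization starts at zero

def increment(nsn: NSN) -> NSN:
    # Return an NSN strictly greater than its input.
    return NSN(nsn.value + 1)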
Implementing the Protocol
This section describes, as pseudo-code, the negotiation algorithm performed by each agent of an implicit organization. A few primitives are used to send messages (INFORM, ASSERT, QUERY) to the role for which the policy is being negotiated; for simplicity, we assume that the beliefs being transmitted or queried are expressed in a Prolog-like language. For readability, the role is never explicitly mentioned in the following, since the algorithm works for a single role at a time. Each agent has three main beliefs related to the policies: the set of policies it supports, the set of policies believed to be common to all agents in the organization, and the current policy, which is the result of the negotiation. These three beliefs and the state of the negotiation are represented by the following variables:
negotiationState: {UNKNOWN, NEGOTIATING, DECIDED};
supportedPolicies: set of Policy;
commonPolicies: set of Policy;
currentPolicy: Policy;
negotiatingAgents: set of Agent_Identifier;
myNSN: Negotiation_Sequence_Number;
myself: Agent_Identifier;
When the agent starts playing a role, it needs to discover the current situation of the organization, in particular its current NSN. This is done by sending a query to the role about the current policy. If nothing happens, after a while the agent assumes to be alone, and forces a new negotiation to start; potential message losses are recovered during the rest of the algorithm.
on_Start () {
  set negotiationState = UNKNOWN;
  set myNSN = MIN_VALUE;
  set myself = getOwnAgentIdentifier ();
  REQUEST ( CURRENT_POLICY(?, ?) );
  suspend until timeout;
  if ((currentPolicy == nil)
      AND (negotiationState == UNKNOWN))
    negotiate ( increment(myNSN),
                supportedPolicies,
                set_of(myself) );
}

on_Request ( CURRENT_POLICY (input_NSN, input_policy) ) {
  if ((input_NSN == ?) AND (input_policy == ?)
      AND (currentPolicy != nil))
    INFORM ( CURRENT_POLICY (myNSN, currentPolicy) );
  else ...
}
The negotiation process mainly consists of an iterative intersection of the policies supported by all agents, which any agent can start by sending an INFORM with its own supported policies and an NSN higher than that of the last negotiation (see negotiate() below). Conversely, if the agent receives an INFORM on the common policies whose NSN is greater than the one known to the agent, it infers that a new negotiation has started, and joins it. The iterative step consists of intersecting the contents of all INFORMs on the common policies received during the negotiation. If the resulting set of common policies is empty, i.e. no policy can be agreed upon, the agent notifies a failure condition, waits for some time to allow network re-configurations or agents to leave, and then restarts the negotiation.
on_Inform ( COMMON_POLICIES ( input_NSN,
                              input_policies, input_agents ) ) {
  if (input_NSN > myNSN)
    negotiate ( input_NSN,
                intersect (supportedPolicies, input_policies),
                union (input_agents, set_of(myself)) );
  else
    if ((input_NSN == myNSN) AND
        (negotiationState == NEGOTIATING)) {
      commonPolicies =
        intersect (commonPolicies, input_policies);
      negotiatingAgents =
        union (negotiatingAgents, input_agents); }
    else
      if (negotiationState == DECIDED)
        INFORM ( CURRENT_POLICY (myNSN, currentPolicy) );
      else
        // input_NSN < myNSN and we are still negotiating:
        // remind the lagging agent of the on-going negotiation
        INFORM ( COMMON_POLICIES (myNSN, commonPolicies,
                                  negotiatingAgents) );
}
A negotiation is normally started by the agent setting the common policies to those it supports, unless it joins a negotiation started by somebody else. Whatever the initial parameters, during the first phase of a negotiation the agent informs the channel about the policies it believes to be common, then waits for a period during which it collects INFORMs from the other members of the organization, as described above. This process is repeated (at most) max_repeats times, so that any lost message can be recovered by having agents repeat more than once what they know about the common policies. Of course, max_repeats depends on the reliability of the transport media in use for the channels: if the reliability is very high, the max_repeats value is low (two or three). The set of negotiating agents, which is not exploited by this algorithm, may be used in the future for more sophisticated recovery.

procedure negotiate ( negotiation_NSN: NSN,
                      initial_policies: set of Policy,
                      initial_agents: set of Agent_Identifier ) {
  // (re)initialize the negotiation state from the parameters
  set myNSN = negotiation_NSN;
  set commonPolicies = initial_policies;
  set negotiatingAgents = initial_agents;
  set currentPolicy = nil;
  set negotiationState = NEGOTIATING;
  INFORM ( COMMON_POLICIES (myNSN, commonPolicies,
                            negotiatingAgents) );
  repeat max_repeats times {
    suspend until timeout;
    if (currentPolicy != nil)
      break;   // out of the 'repeat' block
    INFORM ( COMMON_POLICIES (myNSN, commonPolicies,
                              negotiatingAgents) );
  }
  suspend until (timeout OR currentPolicy != nil);
  if ((currentPolicy == nil) AND
      (myNSN == negotiation_NSN) AND
      (LowestId (negotiatingAgents) == myself)) {
    // default oracle election: the agent with the lowest identifier decides
    set currentPolicy = random_choice (commonPolicies);
    set negotiationState = DECIDED;
    ASSERT ( CURRENT_POLICY (myNSN, currentPolicy) );
  }
}
When an ASSERT of the current policy is received, the agent does a few checks to detect inconsistencies, for instance that the NSN does not refer to a different negotiation, or that two independent oracles have not attempted to set the policy. If everything is fine, the assertion is accepted, causing negotiate() to finish (see above). Otherwise, the assertion is either refused or triggers a new negotiation.
on_Assert ( CURRENT_POLICY ( input_NSN, input_policy ) ) {
  if (input_NSN == myNSN) {
    if (((currentPolicy == nil) AND
         commonPolicies.contains (input_policy)) OR
        (currentPolicy == input_policy)) {
      set currentPolicy = input_policy;
      set negotiationState = DECIDED; }
    else
      negotiate ( increment(myNSN), supportedPolicies,
                  set_of(myself) );
  }
}

on_PolicyReminder_Timeout () {
  // periodically re-announce the current policy on the channel
  INFORM ( CURRENT_POLICY (myNSN, currentPolicy) );
  set policy_reminder_timeout = getPolicyReminder_Timeout ();
}
on_Inform ( CURRENT_POLICY (input_NSN, input_policy) ) {
  if (input_NSN > myNSN)
    negotiate ( increment(input_NSN),
                supportedPolicies, set_of(myself) );
  else
    if (input_NSN < myNSN) {
      if (negotiationState == DECIDED)
        INFORM ( CURRENT_POLICY (myNSN, currentPolicy) ); }
    else
      if (input_policy != currentPolicy)
        negotiate ( increment(myNSN),
                    supportedPolicies, set_of(myself) );
}
Finally, when an agent leaves the channel, it has a social obligation to start a new negotiation process, to allow the others to adapt to the new situation. This is done by triggering the negotiation process with an INFORM about the common policies, with an incremented NSN and the policies set to the special value any, which means "anything is acceptable".
on_Leave () {
  set myNSN = increment(myNSN);
  INFORM ( COMMON_POLICIES (myNSN, set_of("any"), {empty set}) );
}

PRACTICAL EXAMPLE
We elaborate an example chosen to show some practical implicit organizations and the usage of the policies discussed.
Collaborative Search Engines
A CitationFinder accepts requests to look for a text in its knowledge base and returns extracts as XML documents. For the sake of illustration, we model searching as an action (e.g., scanning a database) rather than as a query on the internal beliefs of the agent.
An example of interaction is:

  REQUEST
    From: UserAssistant033
    Receiver: CitationFinder
    Content: find (Michelangelo)

  DONE
    To: UserAssistant033
    Content: done ( find (Michelangelo),
                    results (Michelangelo born in Italy, ..., ...) )
Typically, different CitationFinders work on different databases. Any of the coordination policies presented above seems acceptable for this simple role. Particularly interesting is Simple Collaboration, where the first agent done with searching accepts to be the merger of the results; indeed, in this case, merging is just concatenating the results of all agents, as sketched below. Consider, for instance, the situation where CitationFinders run on board PDAs or notebooks. A user entering a smart office causes her agent to tune into the local channel for its role; consequently, in a typical peer-to-peer fashion, a new user adds her knowledge base to those of the others in the same room. This could be easily exploited to develop a computer-supported collaborative work (CSCW) system. In this case, collaboration may be enforced by the CSCW agent acting as the oracle during policy negotiation.
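A sketch of the goal-dependent merge for this role under Simple Collaboration, assuming each member returns its extracts as small XML fragments; the element names are illustrative, not the paper's schema.

import xml.etree.ElementTree as ET

def merge_citation_results(xml_fragments):
    # Merging is just concatenation: wrap every member's extracts under one root element.
    merged = ET.Element("results")
    for fragment in xml_fragments:
        merged.append(ET.fromstring(fragment))
    return ET.tostring(merged, encoding="unicode")

# Example: two CitationFinders, each contributing one extract from its own knowledge base.
print(merge_citation_results([
    "<extract>Michelangelo born in Italy</extract>",
    "<extract>Michelangelo sculpted the David</extract>",
]))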


CONCLUSIONS AND FUTURE WORK
We proposed implicit organizations for the coordination of agents able to play the same role, possibly in different ways, exploiting group communication and overhearing in environments where messages may be occasionally lost and agents can come and go very frequently. We presented a protocol for negotiating a common coordination policy, outlined a general organizational coordination protocol, and discussed an example. In the near future, we will focus on practical experimentation and application to our domain, i.e. multi-media, multi-modal cultural information delivery in smart rooms (“active museums”).





