Governance in Open Multi-Agent Systems - LES PUC-Rio


Agent Reputation Trust (ART) Testbed
Andrew Diniz da Costa
[email protected]
Introduction
• Trust is the assurance, the certainty, of one who believes in someone's probity (honor, integrity of character, honesty).
• Reputation is a concept attributed to a person by the society in which they live, as a measure of their degree of trustworthiness.
• Open multi-agent systems are societies of heterogeneous agents.
• Hence the importance of mechanisms to identify agents that do not behave properly.
Introduction
• Why model trust and reputation?
  – Agents must choose whom to interact with.
  – The goal is to enable agents to make the right choice.
• There are many algorithms in the trust and reputation area.
  – How can they be compared?
  – What are their main characteristics?
• ART Testbed
  – Competition between agents
  – Independent experiments
Overview of the ART-Testbed Competition
Domain: Art Appraisal
• Agents are painting appraisers with varying levels of expertise in different artistic eras.
• Clients request appraisals of paintings from different eras.
• Appraiser agents may ask other appraisers for opinions.
• Appraiser agents may buy reputation information from other appraisers.
• The objective is to produce the most accurate appraisal possible.
Appraiser Agent
[Diagram: the Zé Carioca LES appraiser agent holds an expertise level for each painting era (era1, era2, ..., era10, e.g. 1.0, 0.1, 0.5, 0.7) and exchanges opinions about paintings with Competitor Agent 1 and Competitor Agent 2.]
Agent Transactions
Important Concepts
• Analysis time
  – Analyzing a painting for a client
  – Analyzing a painting for a requested opinion
• Opinion generation
  – Information based on the analysis time
  – The appraisal is the weighted average $p^{*} = \frac{\sum_i w_i \, p_i}{\sum_i w_i}$, where $w_i$ is the weight and $p_i$ the appraisal of opinion $i$ (see the sketch after this slide).
  – Report the value
• Weights
  – Weight of the agent's own appraisals
  – Weight of the competitors' opinions
• Winner
  – The agent with the most money at the end of the game.
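The appraisal formula above is just a weighted average. As a minimal illustration (class and method names are my own, not taken from the ART-Testbed API), an appraiser could combine its own appraisal with bought opinions like this:

```java
// Minimal sketch of the weighted appraisal p* = sum_i(w_i * p_i) / sum_i(w_i).
public final class OpinionAggregator {

    /**
     * Combines the agent's own appraisal and the opinions bought from
     * competitors into a single value. All names here are illustrative.
     *
     * @param weights    w_i: confidence placed in each opinion source
     * @param appraisals p_i: appraisal value reported by each source
     */
    public static double aggregate(double[] weights, double[] appraisals) {
        if (weights.length != appraisals.length || weights.length == 0) {
            throw new IllegalArgumentException("one weight per appraisal is required");
        }
        double weightedSum = 0.0;
        double weightTotal = 0.0;
        for (int i = 0; i < weights.length; i++) {
            weightedSum += weights[i] * appraisals[i];
            weightTotal += weights[i];
        }
        return weightedSum / weightTotal;   // p*
    }

    public static void main(String[] args) {
        // Own appraisal (weight 1.0) plus two bought opinions with lower weights.
        double[] w = {1.0, 0.5, 0.1};
        double[] p = {120.0, 150.0, 300.0};
        System.out.println("p* = " + aggregate(w, p));
    }
}
```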
Rules
• The number of sessions is between 100 and 200.
• Expertise levels in each era may change during the game.
• Depending on the game, there may be a limit on opinion and reputation requests.
• Depending on the game, the agent may or may not be allowed to use its own expertise in each era; in that case appraisals are generated from the requested opinions.
The Zé Carioca LES Agent
• An intelligent appraiser agent.
• Produces good appraisals of the paintings requested by clients.
• Good strategies.
• Finalist in 2007.
Simulator
[Screenshots of the simulator.]
Competition
• 17 agents (1 was not approved) from 13 different institutions
• Two phases
  – Preliminary
  – Final
• Preliminary phase (May 10-11)
  – 8 agents from different institutions
  – 15 agents provided by the competition itself (5 “bad”, 5 “neutral”, 5 “honest”)
  – 100 sessions
• Final phase (May 16-17)
  – Only the 5 best agents from the preliminary phase
  – 15 agents provided by the competition itself (5 “bad”, 5 “neutral”, 5 “honest”)
  – 200 sessions
Preliminary Phase
Final Phase
1) Electronics & Computer Science, University of Southampton
2) Department of Math & Computer Science, The University of Tulsa
3) Department of Computer Engineering, Bogazici University
4) Agents Research Lab, University of Girona
5) Pontifícia Universidade Católica do Rio de Janeiro
Final Remarks
• Possible future work:
  – Improve the agents that were created and competed in 2007 and 2008.
  – Create new agents.
• Group working on reputation:
  – 2 professors
  – 5 M.Sc. students
• ART-Testbed 2009 awaits us.
A Hybrid Diagnostic-Recommendation Approach for Multi-Agent Systems
Andrew Diniz da Costa
Motivation
• Governance Framework
• Multi-agent systems are societies of autonomous and heterogeneous agents, which can work together to achieve similar or different goals.
• Why does an agent fail to achieve a goal?
• Example: a buyer wants to buy a product from a seller.
  – If the goal was not achieved, what was the reason?
  – What should be done?
Motivation
• The reputation concept is related to diagnosis and recommendation.
• Ubiquitous computing systems provide many situations that require diagnoses and recommendations.
Difficulties of Diagnosing and Providing Alternative Executions
• We analyzed a set of points that deserved our attention during the creation of the new module:
  1. Deciding how to analyze the execution of the agents
  2. Selecting data for diagnosing
  3. Determining strategies for diagnoses
  4. Determining trustworthy agents
  5. Determining strategies for recommendations
  6. Representing profiles of agents
  7. Different devices (cell phones, laptops, PDAs)
     – Hardware limitations
  8. Types of connection
     – Connection speed (56Kbps, 512Kbps, etc.), IP
General Idea
[Diagram: Requester agent, Mediator agent, Diagnostic agent, Recommendation agent; messages: (1) request the name of the Diagnosis agent, (2) <<create>>, (3) send the Recommendation name, (5) provide the name of the Diagnosis agent.]
General Idea
[Diagram: Requester agent, Diagnostic agent, Recommendation agent and a plan database; messages: (2) provide the diagnosis result, (3) provide advice.]
General Idea
[Diagram: Requesters A and B, Mediators A and B, Diagnosis Agents A and B, and Recommendation Agents A and B exchange “request”/“provide” messages for Diagnosis Type 1, Diagnosis Type 2 and a Recommendation Type.]
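The three diagrams above only sketch the mediation exchange at a high level. Purely as an assumption about how such an exchange might look in code (these interfaces do not appear in the original slides and are not part of DRP-MAS), a requester could obtain the responsible agents' names from a mediator through something like:

```java
// Rough, hypothetical sketch of the mediation exchange shown in the diagrams above.
public interface Mediator {

    /**
     * (1) A requester asks for the name of the Diagnosis agent responsible for a
     * given diagnosis type; the mediator (2) creates it if needed and
     * (5) returns its name to the requester.
     */
    String requestDiagnosisAgentName(String diagnosisType);

    /** (3) The mediator also informs the name of the Recommendation agent. */
    String requestRecommendationAgentName(String recommendationType);
}
```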
Architecture
[Diagram: the DRP-MAS framework (Mediation, Diagnosis, Recommendation, Artificial Intelligence Toolset) and the Application layer.]
DRP-MAS (Artificial Intelligence Toolset)
[Diagram: the AI DRPMAS toolset with Reputation, Forward Chaining, Backward Chaining and Fuzzy Logic inference for diagnoses, and the Bigus API*.]
* Bigus, J. P., Bigus, J., 2001. Constructing Intelligent Agents Using Java, 2nd edition.
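As a rough illustration of the kind of inference the toolset wraps, the snippet below implements a naive forward-chaining loop over string facts. It is deliberately generic and is not the Bigus API cited above.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Generic forward-chaining illustration: rules fire while they add new facts.
public final class ForwardChainingDemo {

    /** A rule fires when all of its premises are already known facts. */
    record Rule(Set<String> premises, String conclusion) {}

    static Set<String> forwardChain(Set<String> facts, List<Rule> rules) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {                 // repeat until no rule adds a new fact
            changed = false;
            for (Rule r : rules) {
                if (known.containsAll(r.premises()) && known.add(r.conclusion())) {
                    changed = true;
                }
            }
        }
        return known;
    }

    public static void main(String[] args) {
        List<Rule> rules = new ArrayList<>();
        rules.add(new Rule(Set.of("resource-missing"), "plan-failed"));
        rules.add(new Rule(Set.of("plan-failed", "low-quality"), "diagnosis: insufficient resources"));
        System.out.println(forwardChain(Set.of("resource-missing", "low-quality"), rules));
    }
}
```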
Performing Diagnosis I/IV
• Goal: to perform diagnosis.
• The analysis is performed based on a set of information provided by the Requester agent (an application agent).
Information that can be provided:
• Goal
  – The goal that was not achieved
• Plan executed
  – The plan executed by the agent
• Resources
  – It may be the case that a resource could not be found, could not be used, its amount was not sufficient, …
• Profile
  – The agent’s profile
Performing Diagnosis II/IV
Information that can be provided (continued):
• Quality of service
  – A degree used to qualify the execution of the plan
• Partners
  – The agents with whom the agent has interacted
• Services requested
  – The services used by the agent
• Belief base
  – The agent’s knowledge base
• Devices
  – The devices used by the customers
• Connection
  – The type of connection used
A sketch of this information set is given below.
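A hedged sketch of how the information listed on the two slides above could be bundled into a single request object; every name here is an assumption made for illustration, not a DRP-MAS class:

```java
import java.util.List;

// Hypothetical container for the data a Requester agent hands to the Diagnosis agent.
public record DiagnosisRequest(
        String goal,                    // the goal that was not achieved
        String planExecuted,            // the plan executed by the agent
        List<String> resources,         // resources missing, unusable or insufficient
        String profile,                 // the agent's profile
        double qualityOfService,        // degree qualifying the plan execution
        List<String> partners,          // agents with whom the agent interacted
        List<String> servicesRequested, // services used by the agent
        List<String> beliefBase,        // the agent's knowledge base
        String device,                  // device used by the customer (cell phone, laptop, PDA)
        String connection) {}           // type/speed of connection (56Kbps, 512Kbps, ...)
```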
Performing Diagnosis III/IV
• The strategy used to make the diagnoses is a hot-spot (flexible point).
• However, the framework provides a set of APIs* to help with the diagnosis:
  – backward chaining,
  – forward chaining, and
  – reasoning with fuzzy logic.
• The framework provides a default strategy that:
  – compares the amount of resources used with the desired amount;
  – analyzes the quality of the execution.
* Bigus, J. P., Bigus, J., 2001. Constructing Intelligent Agents Using Java, 2nd edition.
Performing Diagnosis IV/IV
• The diagnoses that the default strategy can provide are:
  – the wrong amount of resources was used;
  – several problems happened at the same time;
  – it was not possible to identify the problem.
A sketch of such a default strategy is given below.
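A minimal sketch of a strategy along the lines of the default one described above, mapping the two checks (resources and execution quality) to the three possible diagnoses; class and method names are assumptions, not the framework's actual hot-spot API:

```java
// Hypothetical default diagnosis strategy, for illustration only.
public class DefaultDiagnosisStrategy {

    public enum Diagnosis { WRONG_AMOUNT_OF_RESOURCES, SEVERAL_PROBLEMS, UNKNOWN_PROBLEM }

    public Diagnosis diagnose(double resourcesUsed, double resourcesDesired,
                              double qualityOfService, double minAcceptableQuality) {
        boolean wrongAmount = resourcesUsed != resourcesDesired;
        boolean lowQuality  = qualityOfService < minAcceptableQuality;

        if (wrongAmount && lowQuality) {
            return Diagnosis.SEVERAL_PROBLEMS;          // more than one problem at the same time
        }
        if (wrongAmount) {
            return Diagnosis.WRONG_AMOUNT_OF_RESOURCES; // wrong amount of resources was used
        }
        return Diagnosis.UNKNOWN_PROBLEM;               // could not identify the problem
    }
}
```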
Providing Recommendations
• The Recommendation agent implements the process of advising alternative ways to achieve a goal. It is composed of three steps: (i) selecting plans, (ii) verifying whether the selected plans require the agent to request information from other agents, and (iii) choosing good agents.
Selecting Plans → Verifying Selected Plans → Choosing Agents
Selecting Plans
• The strategy used to select plans is a hot-spot (flexible point).
  – It depends on the diagnosis and on the information provided by the agent.
• Each plan should be associated with a set of information that describes:
  – the resources used during its execution, the desired goal, the profiles of agents that agree to execute the plan, a quality of service indicating how well previous executions of the plan performed, related diagnoses, etc. (see the sketch below).
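The per-plan information described above could be represented roughly as follows; the record and its fields are illustrative assumptions, not DRP-MAS types:

```java
import java.util.List;

// Hypothetical description attached to each plan for the selection step.
public record PlanDescription(
        String desiredGoal,
        List<String> resourcesUsed,
        List<String> acceptedProfiles,   // profiles of agents that agree to execute the plan
        double qualityOfService,         // how well previous executions performed
        List<String> relatedDiagnoses) {

    /** A plan is a candidate when it targets the goal and addresses the diagnosis. */
    public boolean matches(String goal, String diagnosis) {
        return desiredGoal.equals(goal) && relatedDiagnoses.contains(diagnosis);
    }
}
```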
Verifying Selected Plans and Choosing Agents
• If the plan indicates that the agent will need to interact with other agents, it is necessary to choose the most trustworthy agents.
• The agents are selected based on their reputations
  – using a Reputation agent;
  – we use the reputation system Report1, implemented in the Governance Framework2, and the FIRE model3.
• The agent’s profile defines the minimum accepted reputation of its partners (see the sketch below).
• At the end, the recommendations are provided.
1) Guedes, J., Silva, V., Lucena, C., 2008. A Reputation Model Based on Testimonies. In: Agent-Oriented Information Systems IV: Proc. of the 8th International Bi-Conference Workshop (AOIS 2006 post-proceedings), LNCS (LNAI) 4898, Springer-Verlag, pp. 37-52.
2) Silva, V., Duran, F., Guedes, J., Lucena, C., 2007. Governing Multi-Agent Systems. Journal of the Brazilian Computer Society, special issue on Software Engineering for Multi-Agent Systems, vol. 13, n. 2, pp. 19-34.
3) Huynh, T. D., Jennings, N. R., Shadbolt, N., 2004. FIRE: An Integrated Trust and Reputation Model for Open Multi-Agent Systems. In: 16th European Conference on Artificial Intelligence, Valencia, Spain.
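A small sketch of the partner-selection step, assuming reputation scores have already been obtained (for example from the Reputation agent) and filtering them against the minimum reputation defined in the requester's profile; names are illustrative, not framework API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical partner selection based on a minimum reputation threshold.
public class PartnerSelector {

    public List<String> chooseTrustworthyAgents(Map<String, Double> reputations,
                                                double minAcceptedReputation) {
        return reputations.entrySet().stream()
                .filter(e -> e.getValue() >= minAcceptedReputation) // keep only trusted agents
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```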
Scenarios Used
• Translation
  – Portuguese to English
• Music Marketplace
  – Buying a CD from the name of a song
[Diagram: Customer, Provider, Service, Customer.]
Technologies and Future Work
• Two versions of the DRP-MAS
  – ASF + Report Framework
  – Jadex + Report Framework and the FIRE model
• Future work
  – Extend the DRP-MAS
    • Extend the information set
    • Define new strategies for diagnosis and recommendation
  – Ubiquitous computing
    • Learning in agents
    • Complex scenarios
    • Etc.
© LES/PUC-Rio
The End!