An Agent Deception Model Based on Global Benefit Optimization
Abstract
Agent optimization aims to improve an Agent's performance so that it can achieve its intended goals more effectively. Current research on Agent optimization falls into two categories: optimization of individual Agents and optimization of groups of Agents. Most existing work targets individual Agents, and research on group-level performance optimization has so far been limited to applying trust models to improve cooperation among Agents.
     Deception is the opposite of trust and mostly has negative effects, so existing research on multi-Agent systems has focused mainly on containing deception. As research has deepened, however, deception has attracted increasing attention as a technique that can also be used positively.
     This thesis proposes an Agent deception model based on global benefit optimization, extends the GOLEM test environment into a new test environment, E-GOLEM, and implements it with ABLE, an Eclipse plug-in. The main contributions are:
     1. A deception model based on global benefit optimization is studied and proposed. When the model's applicability conditions are satisfied, global deception can be carried out, using different deception strategies in different situations to serve different deception goals.
     2. A new Agent test environment, E-GOLEM, is built on top of the GOLEM test environment, adding support for testing global deception.
     3. The E-GOLEM test environment is implemented with ABLE (Agent Building and Learning Environment), an Eclipse plug-in, and experiments on the deception model are carried out in it.
Agent optimization aims at improving an Agent's performance so that it achieves its goals more efficiently. Research on Agent optimization can be divided by its object into two classes: optimization of individual Agents and optimization of Agent collectives. Most existing work concerns the former, while the majority of work on collective optimization concentrates on improving cooperation among Agents by means of trust and reputation models.
     Deception, as the opposite of trust and reputation, usually has negative effects, so most research on deception has aimed at countering it and preventing those effects. Further research, however, has shown that deception can also have positive effects when used properly. This thesis puts forward a deception model based on global benefit optimization. To validate the model, a new test environment for deception between Agents, E-GOLEM, is proposed; it extends GOLEM and is implemented using ABLE, a plug-in of Eclipse. The main contents of the thesis can be summarized as follows:
     1. A deception model based on global benefit optimization is proposed. When the model's preconditions are satisfied, deception can be applied to optimize the global benefit, with different deception strategies chosen for different goals and situations (see the illustrative sketch after this list).
     2. A new Agent deception test environment, E-GOLEM, is proposed; it extends GOLEM and is designed to test the deception model proposed in the thesis.
     3. The E-GOLEM experimental environment is implemented with ABLE (Agent Building and Learning Environment), a plug-in of Eclipse.
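A minimal sketch of the decision rule behind contribution 1, written in plain Java. It does not use ABLE's actual API; the names (DeceptionPolicy, DeceptionOption, estimatedGlobalBenefit, and so on) are hypothetical, and the rule shown, deceive only when some deception option raises the estimated global benefit over honest behaviour by a chosen margin while its detection risk stays acceptable, is one plausible reading of the model's applicability condition rather than the thesis's exact formulation.

```java
import java.util.List;

/**
 * Illustrative sketch only: a global-benefit test an agent could run before
 * deciding whether to deceive. All names and numbers are hypothetical; this
 * is not the thesis's formal model and does not use ABLE's API.
 */
public class DeceptionPolicy {

    /** A candidate way of deceiving (e.g. withholding or distorting information). */
    public interface DeceptionOption {
        String describe();
        /** Estimated sum of all agents' utilities if this option is carried out. */
        double estimatedGlobalBenefit();
        /** Estimated probability that the deception is detected and cooperation breaks down. */
        double detectionRisk();
    }

    private final double honestGlobalBenefit; // estimated global benefit of acting honestly
    private final double riskThreshold;       // maximum acceptable detection risk
    private final double minimumGain;         // required margin over honest behaviour

    public DeceptionPolicy(double honestGlobalBenefit, double riskThreshold, double minimumGain) {
        this.honestGlobalBenefit = honestGlobalBenefit;
        this.riskThreshold = riskThreshold;
        this.minimumGain = minimumGain;
    }

    /**
     * Applicability condition: deceive only if some option clearly improves the
     * global benefit and its detection risk stays below the threshold;
     * otherwise stay honest (returns null).
     */
    public DeceptionOption choose(List<DeceptionOption> options) {
        DeceptionOption best = null;
        double bestBenefit = honestGlobalBenefit + minimumGain;
        for (DeceptionOption option : options) {
            if (option.detectionRisk() <= riskThreshold
                    && option.estimatedGlobalBenefit() > bestBenefit) {
                best = option;
                bestBenefit = option.estimatedGlobalBenefit();
            }
        }
        return best;
    }

    // Toy usage: two hypothetical deception options against an honest baseline of 10.0.
    public static void main(String[] args) {
        DeceptionPolicy policy = new DeceptionPolicy(10.0, 0.3, 1.0);
        DeceptionOption withhold = option("withhold a low-priority request", 12.5, 0.2);
        DeceptionOption exaggerate = option("exaggerate own workload", 14.0, 0.6);
        DeceptionOption chosen = policy.choose(List.of(withhold, exaggerate));
        System.out.println(chosen == null ? "act honestly" : "deceive: " + chosen.describe());
    }

    private static DeceptionOption option(String description, double benefit, double risk) {
        return new DeceptionOption() {
            public String describe() { return description; }
            public double estimatedGlobalBenefit() { return benefit; }
            public double detectionRisk() { return risk; }
        };
    }
}
```

In E-GOLEM-style experiments the benefit and risk estimates would come from the agents' models of the environment and of one another; here they are fixed numbers purely to illustrate the shape of the decision.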