Great Opportunities for Venturing into Service Robots for Elderly Care
28 March 2018 (Wednesday), 12noon to 1.30pm
NUS Faculty of Engineering, Advanced Robotics Centre, Blk E6, Level 7, Engagement Room
Title of Talk: Vacuum-Driven Soft Robotic End Effectors
Conventional robots on the assembly line have great difficulty grasping soft, delicate objects of varying topology. Instead, one can look to the many grasping strategies found in nature.
In this seminar, a new strategy is presented that employs negative pressure (“vacuumatics”) as an enabler, creating useful actuation through the deformation of soft lattice structures and yielding a variety of possible designs and grasping methods. This strategy simplifies the design of soft robotic end effectors and eliminates the need for complex mechanical design and coding. An overview of the strategies behind some existing soft robotic end effectors will also be shared and evaluated during the seminar.
About the Speaker:
Fabian Ong is a current Year 4 undergraduate at the Division of Industrial Design, National University of Singapore. He is interested in industrial design that revolves around sustainable, tangible product systems, with a focus on technical solutions that are designed with insights derived from the research process. In his spare time, he enjoys discovering global cultural norms, understanding the natural world and human behaviour.
SoftBank Robotics Europe is organizing its first Hackathon on the Pepper robot from 23 to 25 March 2018 in Paris, France.
What is your vision about making a difference in people’s life through humanoid robots?
How would you like the robot to help you or your loved ones?
Hackathon 2018 is a unique event and opportunity to showcase your creative skills on the humanoid robot Pepper and to interact with experts in the company.
Centered on the theme “Pepper: a robot for well-being?”, it is the first opportunity for developers to bring their ideas to life and to contribute to designing a new world with us.
Hackathon 2018 invites talents of all sorts to express themselves: developers and UX/marketing teams will work hand in hand to create the most inspiring and useful well-being application using Pepper’s new SDK for Android and IoT. Each team will be composed of at most 5 persons. Applications must use the new Pepper SDK for Android to interact and communicate with the robot. IoT devices will be provided, but you can bring your own. You are also welcome to use third-party SDKs and tools. Registration fees will be 100% reimbursed after the event for participants who take part in the hackathon.
The winning team will earn great IoT hardware.
For more information and registration visit the website: https://www.ald.softbankrobotics.com/en/hackathon2018
For any query, contact email@example.com
Enhanced Performance and Autonomy for Field Robots Through Safe Learning with Degraded Sensing in Unstructured, Uncertain and Changing Environments
Erkan Kayacan, Ph.D.
University of Illinois at Urbana-Champaign, IL, USA
Nowadays, the complexity of robotic system design is increasing enormously, as we demand ever higher levels of intelligence and autonomy. Such systems must be capable of autonomously adapting to variations in the operating environment while still accomplishing their tasks, even in highly uncertain and unstructured environments. They must be able to learn from experience, adapt to changing conditions, and seamlessly exchange information with humans. Traditional controllers have important limitations: i) the inability to optimally tune controller coefficients when the dynamics are complex and only vaguely known; ii) the inability to adapt control parameters to changing system parameters and varying environmental conditions; iii) the inability to handle constraints on the system; and iv) the failure to account for interactions between subsystems. These drawbacks result in suboptimal control performance. Advanced techniques are therefore required for naturally constrained, nonlinear, multi-input multi-output systems. This talk addresses nonlinear model predictive control (NMPC) and nonlinear moving horizon estimation (NMHE), which are computationally intensive and require real-time solutions, as answers to the aforementioned problems, and presents their applications in field robots.
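The receding-horizon idea behind NMPC can be illustrated with a deliberately simplified sketch: at each step an optimizer solves a short-horizon tracking problem under an input constraint, applies only the first control, and re-solves at the next step. The one-dimensional dynamics, weights and horizon below are invented for illustration and stand in for a field robot's full nonlinear model:

```python
# Toy receding-horizon (MPC-style) controller for the hypothetical
# 1-D system x[k+1] = x[k] + dt * u[k] with a bounded input.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON, U_MAX = 0.1, 10, 1.0   # step size, lookahead, input bound

def rollout_cost(u_seq, x0, x_ref):
    """Forward-simulate the horizon, accumulating tracking + effort cost."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = x + DT * u                          # simulate the model
        cost += (x - x_ref) ** 2 + 0.01 * u ** 2
    return cost

def mpc_step(x0, x_ref):
    """Solve the horizon problem, then return only the FIRST input."""
    res = minimize(rollout_cost, np.zeros(HORIZON), args=(x0, x_ref),
                   bounds=[(-U_MAX, U_MAX)] * HORIZON)
    return float(res.x[0])

# Closed loop: re-solve at every step (the "moving horizon").
x, target = 0.0, 1.0
for _ in range(50):
    x += DT * mpc_step(x, target)
print(round(x, 2))
```

Even in this toy setting the input bound is respected at every step and the closed loop converges to the reference; in NMPC proper, the rollout uses the robot's nonlinear dynamics and NMHE supplies the state estimate used as `x0`.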
About the Speaker
Erkan Kayacan received the B.Sc. and M.Sc. degrees in mechanical engineering from Istanbul Technical University, Turkey, in 2008 and 2010, respectively. In December 2014, he received the Ph.D. degree from the University of Leuven (KU Leuven), Belgium. During his PhD, he held a visiting PhD scholar position at Boston University under the supervision of Prof. Calin Belta. After his Ph.D., he became a Postdoctoral Researcher with the Delft Center for Systems and Control, Delft University of Technology, The Netherlands. He is currently a Postdoctoral Researcher with the Coordinated Science Lab and the Distributed Autonomous Systems Lab at the University of Illinois at Urbana-Champaign under the supervision of Assist. Prof. Girish Chowdhary. His research interests center on real-time optimization-based control and estimation methods, nonlinear control, learning algorithms and machine learning, with a heavy emphasis on applications to autonomous systems.
Please register your attendance at the following link by 18 December 2017:
Lunch will be provided.
Nanyang Technological University is one of the invited teams, having participated in the 2014 and 2016 RobotX Challenges.
As one of a nation’s new strategic industries, robotics R&D and its industrial application are important measures of a country’s level of scientific and technological innovation and high-end manufacturing. Major Chinese national plans, such as the 13th Five-Year Plan for Economic and Social Development of the People’s Republic of China, Made in China 2025, Industry 4.0 and the Robot Industry Development Plan (2016-2020), all strongly encourage the development of the robot industry. In response to these national policies, and to promote the healthy development of China’s robot industry, CIROS2018, the 7th China International Robot Show, will be held at the National Exhibition and Convention Center (Shanghai) on 4-7 July 2018.
Ministry of Commerce of the People’s Republic of China
China Machinery Industry Federation
China Robot Industry Alliance
CMEPO Exhibition Co., Ltd
Japan Robot Association
Korea Association of Robot Industry
Nikkan Kogyo Shimbun Ltd.
Taiwan Automation Intelligence and Robotics Association
International Federation of Robotics
German Verband Deutscher Maschinen- und Anlagenbau
American Robotics Industries Association
The Land Transport Authority (LTA) has invited all researchers to a briefing on 1 Nov 2017 about an upcoming grant call. Please see below for more information:
All interested researchers are invited to attend. Please confirm your attendance with LTA via email to LTA_Innovate@lta.gov.sg no later than noon on 31 October (Tue), providing: a) your organization, b) your name, c) your email, and d) your NRIC or passport number.
Title of Talk:
Rule Control of Teleo-Reactive, Multi-tasking, Communicating Robotic Agents
Prof. Keith Clark, Emeritus Professor @ Imperial College, UK
(joint work with Peter Robinson, University of Queensland)
Time and Venue:
10.00am, 31 October (Tuesday), 2017,
@ Meeting Room, Robotics Research Centre
Nanyang Technological University
The robotic agents are programmed in two rule-based languages: TeleoR and QuLog. The roots of TeleoR go back to the conditional action plans of the first cognitive robot, SRI’s Shakey. These led to Nilsson’s Teleo-Reactive robotic agent language T-R. QuLog is a flexibly typed, multi-threaded logic + function + action rule language. Its declarative subset is used for encoding the agent’s dynamic beliefs and static knowledge. Its action rules are used for programming agent threads and inter-agent communication. The guards of TeleoR robotic action rules are QuLog queries over the agent’s dynamic beliefs using its knowledge rules. TeleoR is a major extension of T-R. We introduce the use of TeleoR and QuLog, and the multi-threaded agent architecture, with two robot control applications:
- A multi-tasking agent controlling two independent robotic arms in multiple construction tasks. Colleagues at UNSW Sydney have ported this to a Baxter robot; see https://www.doc.ic.ac.uk/~klc/20160127-LABCOT-HIx4.mp4
- Single-task communicating agents, each separately controlling a track-following robot navigating to a destination room through doorways that are exogenously opened and closed. This is done without risk of collision and with continuous re-computation of the shortest path through doorways believed to be open. See https://www.doc.ic.ac.uk/~klc/pathFollowers.mp4. In this second application communication is as important as perception.
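The evaluation semantics of a teleo-reactive rule set can be sketched outside TeleoR itself: the rules form an ordered list of guard/action pairs, and on every perception cycle the action of the first rule whose guard holds against the current beliefs is the one kept active. The block-manipulation predicates and actions below are hypothetical illustrations, not taken from the talk:

```python
# Nilsson-style teleo-reactive evaluation: on each cycle, scan the
# ordered rules top-down and (re)activate the action of the FIRST rule
# whose guard is satisfied by the agent's current beliefs.

def tr_step(rules, beliefs):
    """Return the action of the highest-priority rule that fires."""
    for guard, action in rules:
        if guard(beliefs):
            return action
    raise RuntimeError("no rule fired; a T-R rule set should be complete")

# Hypothetical pick-and-place rule set; the last rule is the catch-all.
rules = [
    (lambda b: b["holding"] and b["at_goal"], "release"),
    (lambda b: b["holding"],                  "move_to_goal"),
    (lambda b: b["near_block"],               "grasp"),
    (lambda b: True,                          "search"),
]

print(tr_step(rules, {"holding": False, "at_goal": False, "near_block": True}))
```

Because the list is re-scanned on every cycle, an exogenous change (say, a doorway closing) immediately shifts control to a different rule, which is what makes the behaviour both goal-directed (teleo) and reactive.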
Biodata of Speaker:
Keith Clark holds first degrees in both Mathematics and Philosophy and a PhD in Computational Logic. He started teaching computer science at Queen Mary College, London in 1969. With a colleague, Don Cowell, he developed a new course on automata theory more suited to CS students, based on a novel approach proposed by Rabin and Scott. The course notes became a book, “Programs, Machines and Computation”, published by McGraw-Hill in 1975. In 1975 he moved to Imperial College to join Kowalski in setting up the Logic Programming group, which became the Logic and AI section of the Computing Department. He is now an Emeritus Professor in Computational Logic at Imperial and an Honorary Professor at UQ Brisbane and UNSW Sydney. This year he is also a Visiting Researcher at Stanford University. His research has covered theoretical results in computational logic; the design and implementation of new logic and hybrid logic programming languages; logic-based concurrent programming languages and their implementation; multi-threaded symbolic languages for programming multi-agent systems; rule languages for programming multi-tasking communicating robotic agents; and AI, agent and robotic applications of these languages. He has consulted for the Japanese Fifth Generation Project, Hewlett Packard, IBM, ICL, Fujitsu and two start-ups, one in Sweden and one in California.
San Diego, Oct. 25, 2017 (GLOBE NEWSWIRE) — Brain Corp today announced it has appointed Matt Grob to its Board of Directors as a representative of Qualcomm Ventures, the investment arm of Qualcomm Incorporated. Mr. Grob is the Executive Vice President of Technology for Qualcomm Technologies, Inc. “Brain Corp is at the forefront of the robotics revolution,” said Matt Grob. “I’m excited and honored to join Brain Corp’s Board, and I look forward to providing my expertise in Qualcomm’s computational and wireless platforms to further strengthen the company’s vision.” Mr. Grob joined Qualcomm in 1991 as an engineer; his early contributions to the company included system design, standardization and project leadership for early CDMA data services, the Globalstar satellite-based mobile voice and data system, and later the 1x EV-DO high-speed wireless Internet access technology. Mr. Grob also served as Chief Technology Officer at Qualcomm, and his experience in contextual awareness, machine learning, semiconductor technology and computer vision adds a strong technical component to Brain Corp’s leadership team. “Matt brings a deep understanding of cutting-edge mobile technology from both an engineering and a commercialization point of view. His addition is a tremendous asset to our team, further demonstrating the relationship between Brain Corp and Qualcomm. Together we intend to power the most intelligent robotic products in the world,” said Dr. Eugene Izhikevich, founder and chief executive officer of Brain Corp.
Brain Operating System (BrainOS®)
BrainOS is the foundation of Brain Corp’s technology. It is a proprietary operating system that integrates with off-the-shelf hardware and sensors to provide a cost-effective “brain” for robots; it plays the same role for robots that Android OS plays for smartphones. BrainOS includes computer vision and A.I. libraries that enable quick and efficient development of smart systems that learn and adapt to people and environments. Its navigation stack provides advanced self-driving capabilities for cluttered and dynamic indoor environments, taking robot geometry and dynamics into account to prevent collisions with obstacles, an essential safety requirement for commercial applications.
About Brain Corp.
Brain Corp. is a San Diego-based A.I. company that partners with manufacturers of commercial equipment to convert manual machines into autonomous robots. Brain Corp’s technology represents the next generation of artificial brains for robots. Brain Corp is funded by the SoftBank Vision Fund and Qualcomm Incorporated. For more information or to access videos of its robots, please visit www.braincorp.com.
LAAS-CNRS, Toulouse, November 30 – December 1st, 2017
The Anthropomorphic Motion Factory is a unique forum developed in the framework of the European ERC project Actanthrope (2014-2019). It is open to researchers and experts from various backgrounds, all involved in the study of human and humanoid motion. After the first workshop, dedicated to «Dance Notations and Robot Motion» in 2014, the second, dedicated to «Geometric and Numerical Foundations of Movements» in 2015, and the third, dedicated to «Biomechanics and Robotics», the current edition is devoted to «Rhetorics and Robotics». It will gather various perspectives on human language and the understanding and perception of robotics. The aim is to deliver new insights into how this technology is perceived by our contemporaries.
As in scientific investigation generally, new ideas, concepts and interpretations emerge spontaneously in the field of robotics. Naturally, we need representations and words to explain our discoveries, to discuss and debate them, to popularize them and to spread our understanding and knowledge. As many other scientific fields have done before it, robotics borrows some of its words from another field, one that interests us especially: the field of human intelligence. Autonomy, decision, judgement, learning, intelligence, consciousness: these words are familiar from descriptions of our own minds.
But are these robots actually making decisions? Are they actually intelligent (whatever it means)? Does the arrival of robots create a rupture in the history of machines?
In order to contribute to the debate, roboticists will describe what is new about robots and how they function. Robotics is the research field that studies computer-controlled machines and their link to the physical world. The discipline deals with the operation and uses of robots, automatic control, information processing, etc. The roboticists will then explain how these technical tools bring autonomy to robotics, how they allow decision making, learning, etc.
Rhetoricians and linguists will focus on the lexicon, i.e. they will dig deeply in the meaning of these words. When we talk about intelligent robots, do we actually mean clever robots? Would it be in fact more appropriate to talk about smart robots? What’s the difference? The rhetorician will explore the various connotations attached to this list of words. The aim is to gain in subtlety in the understanding of the language used in and about robotics.
Finally, other experts such as philosophers, anthropologists and researchers in cognition will broach the implications of robot actions from a human perspective. Mainly, they will consider how humans represent robot actions and how the attribution of intentionality works. Exploring beliefs about the mind will nourish the first two points of view.
The workshop is by invitation only. It is organized around fifteen talks, including three keynotes. To encourage interaction and discussion among participants, the number of attendees is limited to 50.
Jean-Paul Laumond (firstname.lastname@example.org, LAAS-CNRS, Toulouse)
Emmanuelle Danblon (Emmanuelle.Danblon@ulb.ac.be, GRAL, ULB, Brussels)
IEEE Transactions on Cognitive and Developmental Systems
Special Issue on Neuro-Robotics Systems: Sensing, Cognition, Learning and Control
AIM AND SCOPE
Neuro-robotics Systems (NRS) is the combined study of implementing human-like sensing, sensorimotor learning, coordination, cognition and control in autonomous robots, endowing them with cognitive capabilities that let them imitate humans and other living beings. NRS, a branch of neuroscience within robotics, is a current state-of-the-art research area, as well as an important pillar in many countries’ brain projects. It helps the next generation of robots with embodied intelligence become aware of themselves, interact with the environment and behave harmoniously with, or as, human beings. It is therefore a study that integrates recent breakthroughs in brain neuroscience, robotics and artificial intelligence into new principles for understanding, modeling and developing robotic systems. This approach promises smart, straightforward configurations of autonomous robots capable of handling complex tasks and adapting to unstructured environments. It will enable robots and robotic devices not only to do much more work, but also to be smart enough to support or augment human abilities. As a bridge between neuroscience and robotics, it encourages researchers to study and understand how to define and develop the “brain” of future robots.
This special issue aims to survey the state of the art: the latest breakthrough technologies, new research results and developments in the area of NRS. We are particularly interested in papers that describe the formulation of the various functions of NRS, including human-like sensing, fusion, cognition, learning and control, and especially topics related to system sensing, multi-dimensional information fusion, cognitive computation, and sensorimotor learning and control technology. The issue provides a platform for interdisciplinary researchers to present their findings and the latest developments in biomimetic mechatronics and robotic systems, covering relevant advances in engineering, computing, the arts and bionic sciences. Areas of interest include, but are not limited to:
- Multi-modal perception, communication and interaction
- Multi-modal neuromorphic computing
- Brain-inspired end-to-end perception and control in robots
- Knowledge representation, information acquisition, and decision making in neuro-robotics systems
- Cognitive mechanism, and intention understanding in neuro-robotics systems
- Affective and cognitive sciences for bio-mechatronics
- Augmented cognitive robot systems, neuro-mechanical systems
- Biomimetic modeling of perception and control in neuro-robotics systems
- Brain-inspired development of rehabilitation robot systems, medical healthcare robot systems, prosthetic device systems, assistive robot systems, wearable robot systems for personal cooperative assistance.
- Sensorimotor coordination and control
- Multi-modal intelligent learning and skill transfer system for multiple neuro-robotic systems
- Robotic application oriented brain-inspired artificial intelligence algorithms and platforms on modeling, sensing, cognition, learning and control
Manuscripts should be prepared according to the “Information for Authors” of the journal, found at http://cis.ieee.org/publications.html. Submissions should be made through the IEEE TCDS Manuscript Center at https://mc.manuscriptcentral.com/tcds-ieee; please select the category “SI: Neuro-Robotics Systems”.
November 30 2017 – Deadline for manuscript submissions
February 28 2018 – Notification of authors
May 31 2018 – Deadline for revised manuscripts
July 31 2018 – Final decisions
February/March 2019 – Special Issue Publication
We are pleased to announce that the second edition of MBZIRC is scheduled to take place in Abu Dhabi, UAE in the fall of 2019. The initial Challenge Description for MBZIRC 2019 is now available on the MBZIRC website at www.mbzirc.com/challenge. MBZIRC 2019 has a two-stage application process. First, teams must declare their intention to participate by registering online at www.mbzirc.com/apply (the deadline is October 15, 2017). The second stage, the “Call for Proposals”, with details on how to apply for sponsorship, will be released in November 2017. We would like feedback from the robotics community about the Challenge, and kindly ask you to complete the questionnaire at www.mbzirc.com/apply.
The robotics training program sponsored by LearnSG concluded successfully on 16 September 2017 (final lesson). About 290 students from polytechnics, Institutes of Technical Education, and secondary schools participated in this training. Please visit our training website www.robotsos.org (LearnSG Workshop) to see some photos. Thanks to all council members for your help with advice, technical support, publicity, etc.
President Xie Ming of the Singapore Robotics Society was invited to attend the annual 2017 World Robot Conference held in Beijing. It was an excellent opportunity for learning and exchange. At the opening ceremony, President Xie heard in person the address by Liu Yandong, Vice Premier of China’s State Council, and strongly agreed with the four suggestions she put forward for promoting the development of robot science and industry. In President Xie’s words, the four suggestions are:
- Depth of collaboration: Vice Premier Liu hopes that Chinese and foreign robotics scientists will carry out comprehensive and in-depth academic and research collaboration.
- Breadth of application: Vice Premier Liu hopes that Chinese and foreign robotics engineers will deeply explore the wide application of robot technology across all areas of society.
- Brightness of the ecosystem: Vice Premier Liu hopes that robotics researchers will foster a highly civilized research and industrial ecosystem, avoiding vicious and homogeneous competition.
- Height of development: Vice Premier Liu hopes that the robotics profession and industry will develop in the right direction, one that benefits social progress, avoiding low-grade robot products that undermine social ethics or harm world peace.
To maintain this momentum, President Xie suggested that the World Robot Conference take “leading the trend” as its motto and further strengthen itself in the following four areas:
- A track for educators: This is one of the key tasks for developing the substance of the robotics profession. Centered on the theme of talent cultivation, and built around education forums and competitions for school and university students, it would attract educators to the World Robot Conference.
- A track for scientists: This, too, is one of the key tasks for developing the substance of the profession. Centered on the scientific problems of robotics, and using academic conferences and forums as its platform, it would provide in-depth exchange of the best, newest and most classic methods for solving those problems, attracting researchers to the World Robot Conference.
- A track for entrepreneurs: This is one of the key tasks for extending the profession’s outreach. Centered on society’s needs, and built around CEO forums, product launches and exhibitions of results, it would showcase products, services and start-up models that meet those needs, attracting technology entrepreneurs and investors to the World Robot Conference.
- A track for decision-makers: This, too, is one of the key tasks for extending the profession’s outreach. Centered on popularizing robotics knowledge, and taking the form of expert lectures, it would share the fundamentals, frontier technologies and open questions of robotics, attracting policy-makers, officials and the general public to the World Robot Conference.
The 2017 Spacecraft Robotics Challenge is sponsored by the Air Force Research Laboratory (AFRL) Tech Engagement Office to strengthen ties between the military, industrial, and academic robotics communities, and to promote research and development in applying unstructured automation and robotic assembly concepts to small satellites. Participating entrants or teams will use an autonomous robot to perform a series of simplified satellite assembly steps based solely on computer-aided design (CAD) parts and assembly models. Participants will compete to complete five challenge levels of increasing difficulty and will be judged on the time it takes to complete each assembly. The top participants will receive prizes in the following amounts:
1st place – $6,000
2nd place – $3,000
3rd place – $1,000
IEEE Transactions on Cognitive and Developmental Systems
Special Issue on Language Learning in Humans and Robots
2nd Call for Papers
Deadline for manuscript submissions: 31 August 2017
Publication is expected in early 2018.
Children acquire language by interacting with their caregivers and others in their social environment. By the time children start to talk, their sensory-motor intelligence (visual perception, body movement, navigation, object manipulation, auditory perception and articulatory control) has already reached a high level of competence. Importantly, communication is based on representations and skills that start to develop much earlier and are shaped in the very first (social) interactions. These interactions are multimodal in nature and vary across contexts. The contexts vary not only across developmental time and situations within individuals, but also between individuals, socio-economic groups and cultures. Representations are continuously enriched through ongoing interactions and across different contexts.
Even though there are various efforts in developmental robotics to model communication, the emergence of symbolic communication is still an unsolved problem. We still lack convincing theories and implementations that show how cooperation and interaction skills could emerge in long-term experiments with populations of robotic agents, or how these skills develop in children. Importantly, the continuous acquisition of knowledge in different contexts, together with the ability to further enrich the underlying representations, provides a potentially powerful mechanism (cross-situational learning) that is already well recognized in children's learning. Still, we need to know more about how children recognize contexts and how their language learning benefits from different language use varying across contexts.
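The cross-situational mechanism mentioned above can be illustrated with a toy model: in any single situation a word co-occurs with several candidate referents, but only its true referent co-occurs with it consistently, so simple co-occurrence counting across situations recovers the lexicon. The words and scenes below are invented for illustration:

```python
# Toy cross-situational word learning: tally word-referent co-occurrences
# over many ambiguous situations, then take each word's most frequent
# co-occurrent as its meaning.
from collections import Counter, defaultdict

# Each situation pairs the set of visible referents with the words heard.
situations = [
    ({"ball", "dog"}, ["ball", "dog"]),
    ({"ball", "cup"}, ["ball", "cup"]),
    ({"dog", "cup"},  ["dog", "cup"]),
    ({"ball", "dog"}, ["dog"]),
]

cooc = defaultdict(Counter)
for referents, words in situations:
    for word in words:
        for referent in referents:
            cooc[word][referent] += 1   # every pairing is ambiguous evidence

# Resolve each word to the referent it co-occurred with most often.
lexicon = {word: counts.most_common(1)[0][0] for word, counts in cooc.items()}
print(lexicon)
```

No single situation disambiguates any word here, yet the statistics over four situations do; models of child language acquisition use the same principle over far noisier, multimodal input.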
This special issue aims at surveying the state of the art on the emergence of communication, which requires combining and integrating knowledge from diverse disciplines: developmental psychology, robotics, artificial language evolution, complex systems science, computational linguistics and machine learning. Topics relevant to this special issue include, but are not limited to:
- Psychological experiments on language learning in children
- Corpus-based approaches to language acquisition
- Language learning models for all stages of acquisition (gesture learning, early lexicon and grammar)
- Representations for language learning (sensorimotor schemas, constructions, neural networks, mirror neurons)
- Cognitive architectures and strategies for language learning
- Cross-situational learning
- Language acquisition and development of self-awareness
- Role of context in language learning
- Role of embodiment in language learning
- Role of multimodality (gesture, gaze etc) in language learning
- Role of social interaction and joint attention
- Co-development of skills, e.g. motor and language skills
- Integration of natural language grounding into perception-action cycles
- Connection with cultural and biological evolution of language
The Human Brain Project (HBP) aims to bring a large number of users to the HBP Joint Platform, to make it even more attractive to the external science community, and to foster new collaborations across the full breadth of the HBP’s Subprojects (including Neurorobotics). To this end, a substantial amount of funding is available in the next two-year funding period for partner organisations to contribute.
The HBP is asking potential new Partners to submit proposals that will directly contribute to the development of the HBP Platforms and increase the scope of their application, in terms of neuroscience and clinical research. The selected Partners will become full Partners in the HBP Consortium. The projects will run from April 2018 to March 2020. Please note that for some of the CEoIs, current Partners of the HBP Consortium are explicitly invited to write a proposal with new Partners and apply together.
Priv.-Doz. Dr. Florian Roehrbein
Program Director HBP Neurorobotics
We would like to announce the availability of a MATLAB toolbox, the KUKA Sunrise Toolbox (KST), for interfacing with KUKA Sunrise.
In collaboration with the IEEE “Transactions on Cognitive and Developmental Systems” journal, we are launching a special issue on Language Learning in Humans and Robots. We encourage all interested researchers to submit their papers.
Summer school on
ENGINEERING AND EVOLUTION OF BIO-HYBRID SOCIETIES
Graz, Austria, August 29th – 31st, 2017
The goal of this summer school is to teach state-of-the-art methods by going beyond bio-inspired systems, which focus on developing technology, to bio-hybrid systems, in which technology lives in symbiosis with living systems. Bio-hybrid systems can make the best use of the properties of both components, biological and technological. For this, both systems and their interactions need to be understood and modelled in more detail than would be necessary for classical bio-inspired systems. During this summer school, examples will come from hybrid systems involving robots and bees, fish and plants. The school will consist of four main parts: preparation, lectures, practicals, and a reporting session for students who would like to submit the results of their practical work for evaluation.
Deadline: 26th of July 2017
Notification of acceptance: 31st of July 2017