The scope of the Laboratory of Social Informatics and Dialogue Systems (LSD) covers selected problems of social informatics, dialogue systems and assistive technologies. The laboratory's research approach is based on game-theoretic modeling of cooperation, with applications in socio-economic structures. An abstract model of cooperation in socio-economic structures has been developed in the laboratory and implemented in a system that enables us to simulate a wide range of social and economic factors.
Assistive technologies belong to the traditional fields of the laboratory's research. Generally, assistive technologies are understood as information technologies that support quality of life. The laboratory focuses especially on developing support for visually impaired people.
Members
Ivan Kopeček
Associate professor
Jaromír Plhák
Lecturer
Radek Ošlejšek
Assistant professor
Luděk Bártek
Lecturer
Josef Daňa
Ph.D. student
Current Research
Modeling and Simulating Cooperation in Socio-economic Structures
Analyze the impact of (non-)cooperation in socio-economic structures.
Past Projects
Dialogue-Based Exploration of Graphics
Ask the system for picture content.
Dialogue-Based Generator of Web Pages
Tell the system what your website should look like.
Dialogue-Based Picture Generator
Tell the system what you want to draw.
Picture Sonification
Listen to color pictures.
Publications
2019
DAŇA, Josef, Ivan KOPEČEK, Radek OŠLEJŠEK and Jaromír PLHÁK. Simulating the Impact of Cooperation and Management Strategies on Stress and Economic Performance. In Proceedings of the 52nd Hawaii International Conference on System Sciences. USA, 2019. 10 pp. To appear.
2018
PLHÁK, Jaromír, Tomas MURILLO-MORALES and Klaus MIESENBERGER. Authoring Semantic Annotations for Non-Visual Access to Graphics. Journal on Technology and Persons with Disabilities, California State University, Northridge, 2018, vol. 2018, No 6, p. 399-414. ISSN 2330-4219.
2014
HAMŘÍK, Pavel, Ivan KOPEČEK, Radek OŠLEJŠEK and Jaromír PLHÁK. Dialogue-based Information Retrieval from Images. In Computers Helping People with Special Needs:14th International Conference, ICCHP 2014. LNCS, vol. 8547. Switzerland: Springer International Publishing, 2014. p. 85-92, 8 pp. ISBN 978-3-319-08595-1. doi:10.1007/978-3-319-08596-8_13.
KOPEČEK, Ivan, Radek OŠLEJŠEK and Jaromír PLHÁK. Ontology Based Strategies for Supporting Communication within Social Networks. In 17th International Conference on Text, Speech and Dialogue. Berlin, Heidelberg: Springer-Verlag, 2014. p. 571-578, 8 pp. ISBN 978-3-319-10815-5. doi:10.1007/978-3-319-10816-2_69.
2012
KOPEČEK, Ivan, Radek OŠLEJŠEK and Jaromír PLHÁK. Communicative Images – New Approach to Accessibility of Graphics. In Proceedings of the Conference Universal Learning Design. Brno: Masaryk University, 2012. p. 159-164, 6 pp. ISBN 978-80-210-6060-9.
KOPEČEK, Ivan, Radek OŠLEJŠEK and Jaromír PLHÁK. Integrating Dialogue Systems with Images. In Text, Speech and Dialogue. 15th International Conference, TSD 2012. Berlin, Heidelberg: Springer-Verlag, 2012. p. 632-639, 8 pp. ISBN 978-3-642-32789-6. doi:10.1007/978-3-642-32790-2_77.
BÁRTEK, Luděk and Ondřej LAPÁČEK. Assistive Photography. In Miesenberger, K.; Karshmer, A.; Penaz, P.; Zagler, W. (eds.). International Conference on Computers Helping People with Special Needs 2012. Berlin: Springer, 2012. p. 543-549, 7 pp. ISBN 978-3-642-31521-3. doi:10.1007/978-3-642-31522-0_81.
2011
KOPEČEK, Ivan and Radek OŠLEJŠEK. Communicative Images. In Smart Graphics, 11th International Symposium. Berlin, Heidelberg: Springer-Verlag, 2011. p. 163-173, 11 pp. ISBN 978-3-642-22570-3. doi:10.1007/978-3-642-22571-0_19.
2010
PLHÁK, Jaromír. A Context-Based Grammar Generation in Mixed Initiative Dialogue System for Visually Impaired. In Computers Helping People with Special Needs, 12th International Conference. 6180/2010. Berlin: Springer-Verlag, 2010. p. 354-360, 7 pp. ISBN 978-3-642-14099-0. doi:10.1007/978-3-642-14100-3_52.
BÁRTEK, Luděk. Editing Web Presentations by Means of Dialogue. In Lecture Notes in Computer Science, Volume 6179, Computers Helping People with Special Needs. Berlin: Springer Berlin / Heidelberg, 2010. p. 358-365, 8 pp. ISBN 978-3-642-14096-9. doi:10.1007/978-3-642-14097-6_57.
BÁRTEK, Luděk, Radek OŠLEJŠEK and Tomáš PITNER. Is Accessibility an Issue in the Knowledge Society? Modern Web Applications in the Light of Accessibility. In Organizational, Business and Technological Aspects of the Knowledge Society. Heidelberg: Springer, 2010. p. 359-364, 6 pp. ISBN 978-3-642-16323-4. doi:10.1007/978-3-642-16324-1_40.
KOPEČEK, Ivan and Radek OŠLEJŠEK. Annotating and Describing Pictures -- Applications in E-learning and Accessibility of Graphics. In Computers Helping People with Special Needs: 12th International Conference, ICCHP 2010. Berlin: Springer-Verlag, 2010. p. 124-130, 7 pp. ISBN 978-3-642-14096-9. doi:10.1007/978-3-642-14097-6_21.
2009
KOPEČEK, Ivan, Radek OŠLEJŠEK, Jaromír PLHÁK and Fedor TIRŠEL. Detection and Annotation of Graphical Objects in Raster Images within the GATE Project. In Proceedings of the 2009 International Conference on Internet Computing ICOMP 2009. USA: CSREA Press, 2009. p. 285-290, 6 pp. ISBN 1-60132-110-4.
BÁRTEK, Luděk. Generating Dialogues from the Description of Structured Data. In HCI and Usability for e-Inclusion. Heidelberg: Springer-Verlag, 2009. p. 227-235, 9 pp. ISBN 978-3-642-10307-0. doi:10.1007/978-3-642-10308-7_15.
KOPEČEK, Ivan and Radek OŠLEJŠEK. Accessibility of Graphics and E-learning. In Proceedings of the Second International Conference on ICT & Accessibility. Hammamet: Art Print, 2009. p. 157-165, 9 pp. ISBN 978-9973-37-516-2.
KOPEČEK, Ivan. Ontology and Knowledge Based Approach to Dialogue Systems. In Proceedings of the 2009 International Conference on Internet Computing ICOMP. Las Vegas: CSREA Press, 2009. p. 291-295, 5 pp. ISBN 1-60132-110-4.
2008
PLHÁK, Jaromír. Dialogue Based Text Editing. In 11th International Conference on Text, Speech and Dialogue. Berlin: Springer-Verlag, 2008. p. 649-655, 7 pp. ISBN 978-3-540-87390-7.
BÁRTEK, Luděk and Jaromír PLHÁK. Visually Impaired Users Create Web Pages. In 11th International Conference on Computers Helping People with Special Needs. Berlin: Springer-Verlag, 2008. p. 466-473, 8 pp. ISBN 3-540-70539-2.
KOPEČEK, Ivan and Radek OŠLEJŠEK. Dialogue-Based Processing of Graphics and Graphical Ontologies. In Text, Speech and Dialogue. Proceedings of 11th International Conference. Berlin: Springer, 2008. p. 601-608, 8 pp. ISBN 978-3-540-87390-7.
KOPEČEK, Ivan and Radek OŠLEJŠEK. GATE to Accessibility of Computer Graphics. In Computers Helping People with Special Needs: 11th International Conference, ICCHP 2008. Berlin: Springer-Verlag, 2008. p. 295-302, 8 pp. ISBN 978-3-540-70539-0.
KOPEČEK, Ivan and Radek OŠLEJŠEK. Hybrid Approach to Sonification of Color Images. In The 2008 International Conference on Convergence and Hybrid Information Technology. Los Alamitos: IEEE Computer Society, 2008. p. 722-727, 6 pp. ISBN 978-0-7695-3407-7.
2007
BÁRTEK, Luděk and Ivan KOPEČEK. Adapting web-based educational systems for the visually impaired. International Journal of Continuing Engineering Education and Life-Long Learning, 2007, vol. 2007, No 17, p. 358-368. ISSN 1560-4624.
BÁRTEK, Luděk, Ivan KOPEČEK and Radek OŠLEJŠEK. Setting Layout in Dialogue Generating Web Pages. In Text, Speech and Dialogue. 10th International Conference, Pilsen, Proceedings. Berlin: Springer, 2007. p. 613-620, 8 pp. ISBN 3-540-74627-7.
KOPEČEK, Ivan and Martin RAJMAN. Project Internet for All - Creating Web Presentations and Graphics by means of a Dialogue System. In Proceedings of the 2007 International Conference on Internet Computing ICOMP 2007. Las Vegas USA: CSREA Press, 2007. p. 381-384, 4 pp. ISBN 1-60132-044-2.
2006
KOPEČEK, Ivan and Luděk BÁRTEK. Web Pages for Blind People - Generating Web-Based Presentations by means of Dialogue. In Computers Helping People with Special Needs - Proceedings of ICCHP 2006. Berlin, Heidelberg: Springer, 2006. p. 114-119, 6 pp. ISBN 3-540-36020-4.
KOPEČEK, Ivan and Radek OŠLEJŠEK. Creating Pictures by Dialogue. In Computers Helping People with Special Needs: 10th International Conference, ICCHP 2006. Berlin: Springer-Verlag, 2006. p. 61-68, 8 pp. ISBN 3-540-36020-4.
KOPEČEK, Ivan and Radek OŠLEJŠEK. The Blind and Creating Computer Graphics. In Proceedings of the Second IASTED International Conference on Computational Intelligence. Anaheim, Calgary, Zurich: ACTA Press, 2006. p. 343-348, 6 pp. ISBN 0-88986-602-3.
Modeling and Simulating Cooperation in Socio-economic Structures
Analyze the impact of (non-)cooperation in socio-economic structures.
We have developed a multi-variable model based on the prisoner's dilemma game in the NetLogo simulation environment. This model allows us to study various aspects of cooperation among employees in an organization. The key parameters are organizational performance, employment fluctuation resulting from stress levels, sickness rates, and the individual performance of employees. We have also implemented the concept of a management strategy, which represents a decision on how to reward employees' cooperative behavior and individual performance. This concept is extended with management insight, i.e. a parameter describing the accuracy of the information the management relies on when rewarding employees. Less than maximal insight can be interpreted as a level of tolerance in rewarding or punishing employees' behavior.
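The NetLogo model itself is not reproduced here. Purely as an illustration of how a reward strategy and an imperfect-insight parameter can interact in a prisoner's-dilemma setting, the following Python sketch uses assumed payoffs, update rules and parameter names (coop_weight, insight, stress) that are not taken from the actual model.

```python
# Illustrative sketch (not the laboratory's NetLogo model): how a management
# reward strategy and an imperfect "insight" parameter might interact in an
# iterated prisoner's dilemma among employees. Payoff values, update rules
# and parameter names are assumptions made for this example.
import random

PAYOFF = {  # (my_move, partner_move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def reward(cooperated, performance, coop_weight, insight):
    """Mix observed cooperativeness and individual performance; with
    insight < 1 the observation of cooperativeness is sometimes wrong."""
    observed_coop = cooperated if random.random() < insight else not cooperated
    return coop_weight * (1.0 if observed_coop else 0.0) + (1.0 - coop_weight) * performance

def simulate(rounds=200, employees=20, coop_weight=0.7, insight=0.8):
    staff = [{"coop_prob": random.random(), "stress": 0.0} for _ in range(employees)]
    organizational_performance = 0.0
    for _ in range(rounds):
        for person in staff:
            partner = random.choice(staff)
            my_move = "C" if random.random() < person["coop_prob"] else "D"
            their_move = "C" if random.random() < partner["coop_prob"] else "D"
            performance = PAYOFF[(my_move, their_move)] / 5.0  # normalized to [0, 1]
            r = reward(my_move == "C", performance, coop_weight, insight)
            # Behavior drifts toward what was rewarded; low rewards raise stress,
            # and stress reduces the contribution to organizational performance.
            person["coop_prob"] = min(1.0, max(0.0, person["coop_prob"] + 0.05 * (r - 0.5)))
            person["stress"] = min(1.0, max(0.0, person["stress"] + (0.02 if r < 0.3 else -0.01)))
            organizational_performance += performance * (1.0 - person["stress"])
    return organizational_performance

if __name__ == "__main__":
    # Compare rewarding cooperativeness vs. rewarding individual performance.
    print(simulate(coop_weight=0.9), simulate(coop_weight=0.1))
```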
Initial experiments, published at the HICSS'19 conference, have shown that management strategies and the quality of insight into employees' cooperativeness and performance have a significant effect on both organizational performance and employees' wellbeing. The highest organizational performance is achieved in settings where the management focuses on rewarding the cooperativeness of employees.
Dialogue-Based Exploration of Graphics
Ask the system for picture content.
During the early stages of the research, we proposed a novel theoretical concept for dialogue-based image generation and exploration. This concept was based on the idea that well-structured semantic data driven by ontologies can be used for efficient dialogue-based information retrieval built on the formal model of Pawlak information systems. The meaningfulness of this approach was discussed and tested with blind students, for whom dialogue-based image exploration would be the most beneficial. The testing was performed using Wizard of Oz simulations. This proof of concept was published at the ICCHP conference in 2006.
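For readers unfamiliar with the formalism: a Pawlak information system describes a set of objects by attribute values, and a query selects the objects matching a conjunction of attribute-value descriptors. The minimal sketch below is only meant to convey this retrieval principle; the objects and attributes are invented for the example and do not come from the published system.

```python
# Minimal sketch of retrieval over a Pawlak-style information system: objects
# (here, things depicted in a picture) are described by attribute values, and
# a query is a conjunction of attribute-value descriptors. The objects and
# attributes are invented for this illustration.
from typing import Dict, List

PICTURE = {  # object -> {attribute: value}
    "tree":  {"type": "plant",    "position": "left",   "size": "large"},
    "house": {"type": "building", "position": "center", "size": "large"},
    "dog":   {"type": "animal",   "position": "right",  "size": "small"},
}

def query(system: Dict[str, Dict[str, str]], descriptors: Dict[str, str]) -> List[str]:
    """Return the objects whose attribute values satisfy all descriptors."""
    return [obj for obj, attrs in system.items()
            if all(attrs.get(a) == v for a, v in descriptors.items())]

print(query(PICTURE, {"size": "large"}))                        # "What is large?" -> ['tree', 'house']
print(query(PICTURE, {"type": "animal", "position": "right"}))  # "Is there an animal on the right?" -> ['dog']
```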
Over the next few years, we worked the preliminary idea out in detail. We proposed taxonomies and ontologies suitable for the semantic description and exploration of graphical content via natural language. We created a generic graphical ontology dealing with the visual aspects of graphical content and classifying depicted objects according to their relative size, shape, position, etc. We also proposed what we called the What-Where Language: a fragment of natural language with a relatively simple grammar, designed according to the graphical ontology and other specific requirements placed on the exploration of graphical content. In addition, we proposed a technical solution enabling us to integrate ontology-based annotations into existing raster and vector images. Our results were summarized and published at the ICCHP conference in 2008.
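As a generic illustration of attaching semantic annotations to a vector image, one option is SVG's standard metadata element, as in the sketch below; the annotation vocabulary (namespace, element and attribute names) is invented here and is not the annotation format published by the laboratory.

```python
# Generic illustration: attaching a semantic annotation to an SVG image via
# the standard <metadata> element. The annotation namespace, element and
# attribute names are invented for this example.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ANNOT_NS = "http://example.org/graphical-ontology"  # hypothetical namespace
ET.register_namespace("", SVG_NS)
ET.register_namespace("annot", ANNOT_NS)

svg = ET.fromstring(
    f'<svg xmlns="{SVG_NS}" width="100" height="100">'
    '<rect id="house" x="30" y="40" width="40" height="50"/></svg>'
)

# Describe the depicted object and embed the description in the image itself.
metadata = ET.SubElement(svg, f"{{{SVG_NS}}}metadata")
ET.SubElement(metadata, f"{{{ANNOT_NS}}}object",
              {"ref": "house", "class": "building", "position": "center", "size": "large"})

print(ET.tostring(svg, encoding="unicode"))
```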
In 2014, we implemented an experimental system called GATE — Graphics Accessible To Everyone. This component-based web application provided several services. A semantic module dealt with shared semantic knowledge related to the possible content of uploaded images and provided services for semantic inspection. Users were able to upload an annotated image to the system and then explore it interactively by writing questions in What-Where Language. The underlying dialogue engine leveraged the picture annotation and the knowledge dataset to provide a smooth dialogue, including standard techniques for addressing misunderstandings. The results of the system evaluation were published at ICCHP in 2014.
Years: 2006 - 2014
Dialogue-Based Generator of Web Pages
Tell the system what your website should look like.
The WebGen system is an online application that allows users with a visual disability to create web presentations in a simple and natural way, using web page forms that simulate a dialogue. It runs in a web browser with a screen reader and does not require the installation of special software or any knowledge of web technologies.
It enables users to create a web site step-by-step, using forms for data acquisition and web page templates. Users are able to enter one piece of semantic information at each step of a dialogue interaction. Based on the type of presentation, the WebGen system selects the dialogue strategy that chooses pieces of information to be acquired from the user.
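Purely for illustration, a form-based dialogue strategy of this kind can be sketched as a slot-filling loop that asks for one missing piece of information per step; the presentation types, slot names and template below are invented for this example and are not WebGen's actual ones.

```python
# Illustrative slot-filling sketch of a form-based dialogue strategy that
# asks for one piece of semantic information per step. The presentation
# types, slots and template are invented for this example.
SLOTS_BY_TYPE = {
    "personal": ["title", "about_me", "contact_email"],
    "company":  ["title", "company_profile", "products", "contact_email"],
}

def collect(presentation_type: str, ask=input) -> dict:
    """Ask for each slot of the chosen presentation type, one per step."""
    answers = {}
    for slot in SLOTS_BY_TYPE[presentation_type]:
        answers[slot] = ask(f"Please enter the {slot.replace('_', ' ')}: ")
    return answers

def render(answers: dict) -> str:
    """Fill a trivial page template with the collected answers."""
    body = "".join(f"<p><b>{k}:</b> {v}</p>" for k, v in answers.items())
    return f"<html><body><h1>{answers.get('title', '')}</h1>{body}</body></html>"

if __name__ == "__main__":
    data = collect("personal")
    print(render(data))
```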
Years: 2007-2010
Dialogue-Based Picture Generator
Tell the system what you want to draw.
We proposed a novel concept for dialogue-based image generation. This concept is based on the idea that well-structured semantic data can be used to select objects from a database and place them at the desired position in the picture. For dialogue-based navigation within the picture, we use the Recursive Navigation Grid -- a virtual grid dividing the picture canvas into nine sectors that can be easily referenced in the dialogue. The meaningfulness of our approach was discussed and tested with blind students, for whom dialogue-based image generation and exploration would be the most beneficial. The testing was performed using Wizard of Oz simulations and presented at the ICCHP conference in 2010 in Vienna.
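As a rough illustration of the idea, such a grid can be resolved recursively: each sector name narrows the current rectangle to one of its nine cells, so a short path of sector names addresses a progressively smaller region of the canvas. The sector naming and rectangle representation in the sketch below are assumptions made for this example.

```python
# Hedged sketch of a recursive 3x3 navigation grid: each named sector narrows
# the current rectangle to one of its nine cells, so a path of sector names
# addresses progressively smaller regions. Sector naming and the rectangle
# representation are assumptions made for this example.
SECTORS = {
    "top-left": (0, 0),    "top": (1, 0),    "top-right": (2, 0),
    "left": (0, 1),        "center": (1, 1), "right": (2, 1),
    "bottom-left": (0, 2), "bottom": (1, 2), "bottom-right": (2, 2),
}

def resolve(path, canvas=(0.0, 0.0, 1.0, 1.0)):
    """Resolve a sequence of sector names to a rectangle (x, y, width, height)."""
    x, y, w, h = canvas
    for name in path:
        col, row = SECTORS[name]
        w, h = w / 3.0, h / 3.0
        x, y = x + col * w, y + row * h
    return (x, y, w, h)

# The top-left cell inside the central sector of a unit canvas:
print(resolve(["center", "top-left"]))  # -> (0.333..., 0.333..., 0.111..., 0.111...)
```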
Years: 2006 - 2010
Picture Sonification
Listen to color pictures.
We proposed and implemented a novel method for the sonification of complex graphical objects, such as color photographs, based on a hybrid approach combining sound and speech communication. The transformation of colors into sounds is supported by a special color model called the semantic color model. Our results were published at the IEEE International Conference on Convergence and Hybrid Information Technology in 2008 in Busan.
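The semantic color model itself is defined in the cited paper. The sketch below only illustrates the general flavor of a hybrid scheme, mapping a pixel's hue to a tone frequency while keeping a spoken color name as the speech channel; all concrete hue ranges, frequencies and color names are invented for this example.

```python
# Hedged sketch of hybrid color sonification: map a pixel's hue to a tone
# frequency and keep a spoken color name as the speech channel. The hue
# ranges, frequency range and color names are invented for this example and
# are not the laboratory's semantic color model.
import colorsys

COLOR_NAMES = [  # (upper hue bound in degrees, spoken name) - illustrative
    (30, "red"), (90, "yellow"), (150, "green"),
    (210, "cyan"), (270, "blue"), (330, "magenta"), (360, "red"),
]

def sonify_pixel(r: int, g: int, b: int):
    """Return (tone frequency in Hz, spoken color name) for an RGB pixel."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    frequency = 220.0 + h * (880.0 - 220.0)  # map hue linearly onto a 220-880 Hz range
    name = next(n for bound, n in COLOR_NAMES if h * 360.0 <= bound)
    if s < 0.15:  # near-grey pixels get an achromatic spoken name instead
        name = "grey" if 0.2 < l < 0.8 else ("black" if l <= 0.2 else "white")
    return frequency, name

print(sonify_pixel(200, 30, 30))  # reddish pixel -> low tone, "red"
print(sonify_pixel(40, 40, 200))  # bluish pixel  -> higher tone, "blue"
```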