Invited Talks

Section TeX, R and Their Friends:
Petr Olšák

Email: petr@olsak.net

An Introduction to OpTeX

OpTeX is a LuaTeX format, i.e. a set of TeX macros supporting all the common needs of TeX document production (much like LaTeX or ConTeXt). It is freely distributed, it is part of TeX Live, and you run it with the command optex. It was created in 2020, so it carries none of the now-useless sediment of a long historical development (unlike, say, LaTeX). For example, it works exclusively with Unicode fonts from the outset and makes full use of the capabilities of the modern TeX engine LuaTeX. Unlike LaTeX or ConTeXt, it does not try to hide TeX-primitive techniques behind a new layer of user commands with many parameters, often scattered across many extension packages; instead, it assumes that the user is able to work primarily with TeX itself and can, where needed, bend the documented TeX macros to their own purposes. Plenty of things can be done in OpTeX directly and simply. It is an alternative for those who have, for example, already got lost in the thousands of LaTeX extension packages, with their hundreds of thousands of pages of documentation in total, and who may no longer know which piece of advice floating around the internet is current and which is obsolete and nowadays does more harm than good.
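
To give a first impression of that directness, here is a minimal sketch of an OpTeX source file, assembled from the OpTeX documentation (the content itself is illustrative, not taken from the talk); it is compiled simply with optex hello.tex:

    % a minimal OpTeX document; compile with: optex hello.tex
    \fontfam[LMfonts]   % select the Latin Modern Unicode font family

    \tit An OpTeX Example

    \sec First section

    Text is typed much as in plain TeX; inline math like $E=mc^2$ works
    out of the box, and Unicode input is the default.

    \begitems
    * a bullet item
    * another item
    \enditems

    \bye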

In the talk I will try to show the basic features of OpTeX on simple examples: working with fonts and bibliographies, creating slides for projection, colors, OpTeX tricks, and more. I will also present some projects built on OpTeX: a thesis template for students, and OpBible for producing study Bibles.

About the Author

Petr Olšák teaches mathematics at FIT and FEL, ČVUT in Prague, where he also runs the elective course Typography and TeX. He has worked intensively with TeX since 1992 and, as a popularizer of this typesetting tool, has written several books and many articles about it. He is not a fan of LaTeX. In many areas he has contributed his software efforts to a variety of TeX-related tools, all of which are published on CTAN. He has posted almost two thousand answers on tex.stackexchange. His latest large project is OpTeX itself. He was a long-time member of the Czechoslovak TeX Users Group and, for a short period at the beginning of the millennium, its chairman.

Section Open AI:
Antoni Czołgowski

Email: antoni.czolgowski@gmail.com

Cultural Discrimination in Large Language Models: Detection, Measurement, and Mitigation Through Targeted Fine-Tuning

We are all using AI tools, from LLMs embedded in web applications to image and video generators that are, sadly, replacing real people's content more and more often. What we tend not to realize is that AI systems, and LLMs especially, often reflect biases from their training data or the biases of their creators. Like most early adopters around me, I began treating language models as advisors when dealing with certain issues. After using them for a while, I realized something troubling: the answers I was getting did not reflect the cultural context I live in, even after the model had collected a vast amount of data about me. At times I even felt that the model did not understand my problems at all. That is how the research idea was born.

I started asking questions: Do the creators, and their origins, embed their own worldview into the model's worldview? Is it the diversity of the training data, or the lack of it, that is responsible for the problem? What are the characteristics of the groups of people that models understand the least?

The research evaluates three different models: Gemma 3, reflecting the USA worldview; Bielik, a Polish model; and Qwen 3, to understand the Chinese perspective. All of them are put in the same context. I started by collecting World Values Survey responses to one particular worldview question that differentiates cultural perspectives. This gave me distributions for approximately 500 different demographic profiles, each representing a unique combination of characteristics like country, gender, age, education, and income level. I then asked each LLM to answer the same question while impersonating these profiles. This let me compare how the models’ responses matched real human survey data using the Wasserstein distance metric.
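
As a concrete illustration of the comparison step, here is a minimal R sketch; the 1 to 10 answer scale, the counts, and the function name below are illustrative placeholders, not taken from the study. For one-dimensional distributions, the 1-Wasserstein distance is simply the area between the two empirical CDFs:

    # 1-Wasserstein distance between two discrete response distributions
    # on a shared ordinal scale (the 1..10 scale is an assumption).
    wasserstein1 <- function(p_human, p_model, scale = 1:10) {
      p_human <- p_human / sum(p_human)   # normalize counts to probabilities
      p_model <- p_model / sum(p_model)
      gaps <- diff(scale)                 # spacing between adjacent answers
      cdf_diff <- abs(cumsum(p_human) - cumsum(p_model))
      sum(cdf_diff[-length(cdf_diff)] * gaps)
    }

    # hypothetical counts: human survey answers for one demographic
    # profile vs. an LLM's answers when impersonating that profile
    human <- c(5, 8, 12, 20, 25, 15, 8, 4, 2, 1)
    model <- c(1, 2, 5, 10, 30, 30, 12, 6, 3, 1)
    wasserstein1(human, model)   # larger value = worse match to humans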

I then apply LoRA (Low-Rank Adaptation) fine-tuning specifically targeting “worst-case personas,” demographic combinations showing the highest bias, to test whether focused intervention can reduce bias without creating negative spillover effects on other demographic groups.
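
Picking the worst-case personas is straightforward once each profile has a distance score; here is a hypothetical R sketch, where the column names, the stand-in data, and the 5% cutoff are my illustrative assumptions rather than the study's actual choices:

    # rank ~500 demographic profiles by their Wasserstein distance and
    # keep the top 5% as "worst-case personas" for targeted fine-tuning
    set.seed(1)
    profiles <- data.frame(                 # stand-in data, not real results
      id       = sprintf("profile_%03d", 1:500),
      distance = rgamma(500, shape = 2, rate = 4)
    )
    cutoff <- quantile(profiles$distance, probs = 0.95)
    worst_cases <- profiles[profiles$distance >= cutoff, ]
    worst_cases <- worst_cases[order(-worst_cases$distance), ]
    head(worst_cases)                       # fine-tuning targets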

In my talk I want to demonstrate several things: how we can measure the fairness and bias of AI systems using simple and cost-effective methods, what we can actually do about what we find, and finally that meaningful research, and more importantly reproducible results, can be achieved with code assistants like PacketLLM. For many parts of my workflow, such as data handling, I used R, with my coding facilitated by my own R library PacketLLM, available on CRAN [URL].

About the Author

I am a graduate student in Data Science at the University of Colorado Boulder, working under Prof. Abel Iyasele on my research in LLMs. I work at JILA, a leading research institute in the physical sciences, as a Data Engineer for the NSF Q-SEnSE (Quantum Systems through Entangled Science and Engineering) project. I am the developer of PacketLLM, an open-source R package on CRAN that brings LLM-assisted coding directly into RStudio. In my free time I am discovering new hobbies: in half a year I have summited two of Colorado's 58 fourteeners, but the ambitions are high, so who knows what will happen before July :') This will be my first time at OSSConf.

Section OSS in Education:
Jiří Šperka

Email: sperkaj@vut.cz

The Fantasies of Generative Artificial Intelligence in Technical Fields

Motto:
Plato is my friend,
Aristotle is my friend,
but my greatest friend is truth.

Sir Isaac Newton
(MS Add. 3996, 88r)
Trinity College, Cambridge.

This talk looks at generative artificial intelligence, particularly in relation to the teaching of technical subjects. Lately we have been surrounded by claims that progress in artificial intelligence is causing a fundamental revolution in many fields of human activity, that it is transforming the labor market, and that artificial intelligence will make people's work easier. Is generative artificial intelligence changing the world for the better? In their promotion and advertising, companies working on artificial intelligence mostly avoid dwelling on the poor outputs of generative artificial intelligence and their negative impacts. That does not mean, however, that nobody examines them. Let us therefore summarize some of the existing critical views of generative artificial intelligence. We will touch on various areas, but we will linger longest over the STEM fields (Science, Technology, Engineering, Mathematics). We will look at castles in the air and at some outputs of generative artificial intelligence. But we will also mention the good parts. Is generative artificial intelligence my friend?

About the Author

Jiří Šperka presented critical views of generative artificial intelligence in two talks at the OpenAlt conferences in 2024 and 2025. The topic of open science is close to his heart. He currently works mainly on plasma physics at FEKT VUT in Brno.

Section OSS Development:
Nirmal Parmar

Email: nirmalparmarphd@gmail.com

Machine Gnostics: A Step Towards Non-Statistical Machine Learning

For the first time, I am introducing Machine Gnostics to the world — an open-source Python library that redefines the mathematical core of AI.

Under the hood, every model, every distribution function, every metric is built not on probability theory, but on Mathematical Gnostics — a framework grounded in Riemannian geometry, relativistic mechanics, and thermodynamics. Where statistics asks what is the population likely to do, Mathematical Gnostics asks what is this data point actually telling us. Each observation carries its own individual uncertainty, weighted by its own error, treated as a real physical event — not a sample from an imagined distribution.
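
As a generic illustration of this per-observation idea, consider the R sketch below: a classical robust M-estimator with Cauchy weights, explicitly not the gnostic algorithm itself, in which each point's influence is scaled down by its own error instead of being averaged into an assumed distribution.

    # Illustration only: an iteratively reweighted location estimate in
    # which every observation is weighted by its own error. This is a
    # standard robust M-estimator, NOT the Mathematical Gnostics method.
    robust_location <- function(x, iters = 50) {
      mu <- median(x)                      # robust starting point
      s  <- mad(x)                         # robust scale estimate
      for (i in seq_len(iters)) {
        w  <- 1 / (1 + ((x - mu) / s)^2)   # big errors get small weights
        mu <- sum(w * x) / sum(w)
      }
      mu
    }

    x <- c(rnorm(20, mean = 10, sd = 0.5), 100)  # small sample + outlier
    mean(x)                 # pulled far from 10 by the corrupted point
    robust_location(x)      # stays near 10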

Move past fragile, assumption-heavy models. Machine Gnostics encodes the laws of nature into algorithms that extract truth from data, even when samples are small, noisy, or corrupted. The familiar workflow remains — regressors, classifiers, clustering, deep learning — but the mathematical engine beneath it is entirely different.

Laws of nature, encoded — for everyone.

Machine Gnostics: https://machinegnostics.com/

About the Author

Dr. Nirmal Parmar is a global leader and Director of AI at Novartis, pioneering the intersection of thermal engineering and non-statistical artificial intelligence. As a distinguished research scientist specializing in Thermal Engineering and non-statistical AI paradigms, he has revolutionized how industries approach machine learning and data science.

Dr. Parmar is the visionary founder of Machine Gnostics, the first machine learning library that operates on non-statistical principles, fundamentally challenging traditional approaches to AI. His groundbreaking work focuses on innovation in applying AI across engineering and business processes, delivering transformative solutions that redefine industry standards.

More about me: https://www.nirmalparmar.in/

Section Open GIS & Open Data:
Michal Lekýr

Email: michal.lekyr@fri.uniza.sk

3D Scanning in Practice

This talk focuses on the practical application of 3D scanning to large-scale objects as well as smaller, room-sized spaces, using a Leica ScanStation P30 laser scanner. The methods also apply to other brands of laser scanner, which are becoming increasingly popular and affordable these days. The presentation addresses both methodological and technical aspects of spatial data acquisition, with particular emphasis on measurement planning, scanner positioning, reduction of occluded areas, registration of multiple scans, and subsequent processing of point cloud data.
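
Registration deserves a closer look, since it is the step where most workflows succeed or fail. As a simplified illustration (not the Leica or CloudCompare implementation), point-to-point ICP alternates nearest-neighbor matching with a closed-form rigid fit; a base R sketch on synthetic data:

    # Minimal point-to-point ICP: repeatedly match each source point to
    # its nearest reference point, then solve for the best rotation and
    # translation via SVD (Kabsch). Brute-force matching is O(n*m);
    # production tools add octrees, sampling, and outlier rejection.
    icp <- function(src, dst, iters = 20) {
      R <- diag(ncol(src))                 # cumulative rotation
      tvec <- rep(0, ncol(src))            # cumulative translation
      for (i in seq_len(iters)) {
        moved <- src %*% t(R) +
          matrix(tvec, nrow(src), ncol(src), byrow = TRUE)
        nn <- apply(moved, 1, function(p)  # nearest reference point
          which.min(rowSums(sweep(dst, 2, p)^2)))
        A <- moved
        B <- dst[nn, , drop = FALSE]
        ca <- colMeans(A); cb <- colMeans(B)
        H <- t(sweep(A, 2, ca)) %*% sweep(B, 2, cb)
        s <- svd(H)
        if (det(s$v %*% t(s$u)) < 0)       # guard against a reflection
          s$v[, ncol(s$v)] <- -s$v[, ncol(s$v)]
        Ri <- s$v %*% t(s$u)
        ti <- as.vector(cb - Ri %*% ca)
        R <- Ri %*% R                      # compose with earlier steps
        tvec <- as.vector(Ri %*% tvec) + ti
      }
      list(rotation = R, translation = tvec)
    }

    set.seed(7)
    dst <- matrix(runif(300), ncol = 3)              # reference "scan"
    Rz  <- rbind(c(cos(0.1), -sin(0.1), 0),
                 c(sin(0.1),  cos(0.1), 0),
                 c(0, 0, 1))
    src <- sweep(dst %*% t(Rz), 2, c(0.05, -0.03, 0.02), "+")
    fit <- icp(src, dst)
    aligned <- src %*% t(fit$rotation) +
      matrix(fit$translation, nrow(src), 3, byrow = TRUE)
    max(abs(aligned - dst))   # should approach zero once ICP converges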

The talk presents the complete workflow used in the documentation of large and geometrically complex objects, where it is necessary to balance measurement accuracy, point density, acquisition time, and the overall volume of collected data. Attention is also given to common challenges encountered in real-world scanning tasks, such as limited accessibility, shadowed or hidden areas, reflective or otherwise problematic surfaces, and the need to optimize scanning positions to obtain complete and reliable datasets.

An important part of the presentation is devoted to the processing and analysis of acquired data using open-source software tools. Special emphasis is placed on CloudCompare, which provides a robust environment for working with point clouds, including filtering, registration, segmentation, visualization, distance analysis, and comparison of scanned datasets. In addition, other open-source solutions, such as MeshLab, are briefly introduced as useful tools for further processing, mesh generation, model optimization, and preparation of data for presentation or archiving.
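
The distance analysis mentioned above reduces, at its core, to a cloud-to-cloud (C2C) comparison. A deliberately naive base R sketch of what CloudCompare does far more efficiently with octree acceleration:

    # Toy cloud-to-cloud distance: for every point of the compared cloud,
    # the distance to its nearest neighbor in the reference cloud.
    c2c_distance <- function(compared, reference) {
      apply(compared, 1, function(p) {
        sqrt(min(rowSums(sweep(reference, 2, p)^2)))
      })
    }

    set.seed(42)
    reference <- matrix(runif(3000), ncol = 3)    # synthetic reference scan
    compared  <- reference[1:200, ] +             # same surface, slightly
      matrix(rnorm(600, sd = 0.005), ncol = 3)    # displaced points
    d <- c2c_distance(compared, reference)
    summary(d)   # unusually large values flag deviations between scans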

The talk aims to demonstrate that open-source software is not only a supplementary option but, in many cases, an effective and fully capable component of the 3D documentation workflow. At the same time, it highlights that the quality of results in large-scale 3D scanning depends not only on the technical parameters of the scanner itself, but also on the overall measurement methodology, the data registration strategy, and the choice of suitable processing tools.

These methods have been tested in real-world conditions, particularly in the scanning and documentation of historical buildings and heritage sites, including Budatín Castle, Bojnice Castle, and many others.

3D model of Bojnice Castle: URL
3D model of Budatín Castle: URL

About the Author

Michal Lekýr is a lecturer at the Faculty of Management Science and Informatics, University of Žilina, where he teaches computer graphics, 3D computer graphics, and low-level programming. He is also the founder of 3Xcore and has long-term experience in software development, interactive 2D and 3D presentation systems, visualization, and applied programming projects. His work connects academia with real-world development, with a strong focus on practical innovation, analytical thinking, and modern digital technologies.

More about me: https://lekyr.com/