TUG 2022 online — Program

All presentation times below are shown in your local time zone, as determined from your browser; if that is not correct, adjust your system settings.


1. Thinking LaTeX, thinking with LaTeX. Topics: the history of writing, concepts and contributions of TeX and LaTeX. Éric Guichard, 20 min + 10 min discussion

2. LaTeX, first steps. Topics: producing a first document in LaTeX; a presentation illustrated with simple examples. Éric Guichard, 15–20 min + 10 min discussion

3. Points of typography. Topics: general principles, hyphenation, fonts, languages. Jean-Michel Hufflen, 15–20 min + 10 min discussion

4. LaTeX in the literary world. Topics: standards, reading comfort, design, dialogue with publishers, communication with other editorial systems. Éric Guichard, 15–20 min + 10 min discussion

5. Bibliographies. Topics: bibliography processors, basic styles, examples. Jean-Michel Hufflen, 15–20 min + 10 min discussion

6. Additional topics (time permitting). Topics: image handling, unconventional uses of LaTeX. Éric Guichard, 15–20 min + 10 min discussion

No abstract

No abstract

No abstract

Conference opening

Tectonic is a software project built around an alternative \TeX\ engine forked from \XeTeX. It was created to explore the answers to two questions. The first question relates to documents: in a world of 21st-century technologies — where interactive displays, computation, and internet connectivity are generally cheap and ubiquitous — what new forms of technical document have become possible? The second question relates to tools: how can we use those same technologies to do a better job of empowering people to create excellent technical documents? The answers are, of course, intertwined: without a system of great tools, it's hard (or perhaps impossible?)\ to create great documents. The premises of the Tectonic project are that the world needs and deserves a “21st-century” document authoring system, that such a system should have \TeX\ at its heart — and that in order to create a successful system, parts of the classic \TeX\ experience will need to be rethought or jettisoned completely. This is why Tectonic forks \XeTeX\ and is branded independently: while it aspires to maintain compatibility with classic \TeX\ workflows as far as can be managed, in a certain sense the whole point of the effort is to break compatibility and ignore tradition — to experiment with new ideas that can't be tried in mainline \TeX. Thus far, these “new ideas” have focused on experience design, seeking to deliver a system that is convenient, empowering, and even delightful for users and developers. Tectonic is therefore compiled using standard Rust tools, installs as a single executable file, and downloads support files from a prebuilt \TeX\ Live distribution on demand. In the past year, long-threatened work on native \HTML\ output has finally started landing, including a possibly novel Unicode math rendering scheme based on font subsetting. The goal for upcoming work is to flesh out this \HTML\ support so that Tectonic can create the world's best web-native technical documents, and to use that support to document the Tectonic system itself.

This keynote presentation will address how recent trends to align technical documentation practices with “developer-friendly” workflows may be detrimental to documentation authors and their users. A proposed solution is in the recent past of technical documentation as a discipline, where tools and ideas rooted in structured authoring and markup, reuse, and personalization can still provide solutions to present — and future — needs related to technical content.

Since it was first released in 2008, \texttt{siunitx} has become established as the major package for typesetting physical quantities in \LaTeX. Following up on my \tug\,2018 talk, I will look at how the update to version~3 has gone now that it has been released. I'll briefly look at the background, then consider some of the user and developer efforts that have made the launch a success.
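For readers who have not yet tried version~3, a minimal sketch of the current input syntax (assuming the v3 command names \cs{qty}, \cs{unit} and \cs{num}) looks like this:

\begin{verbatim}
% A minimal sketch, assuming siunitx v3 command names (\qty, \unit, \num).
\documentclass{article}
\usepackage{siunitx}
\begin{document}
The acceleration was \qty{9.81}{\metre\per\second\squared},
measured over \num{12345} samples; results are reported
in \unit{\metre\per\second\squared}.
\end{document}
\end{verbatim}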

In this talk, Paulo recollects the untold story of two friends writing a silly package just for the fun of it. The story, however, takes a turn when the \TeX{} community decides to embrace silliness. Gather around to learn about \TeX, friendship, community, silly walks, and the air speed velocity of an unladen swallow.

Playing chess can range from a casual pastime to a highly competitive event. Several local organizations offer chess as enrichment programs in K--12 schools, often having their own workbooks to supplement their instruction. One drawback is that these workbooks are often created using screen captures of online sources, hence resulting in low-quality outputs when used for print. This exploration tours a few packages used for typesetting diagrams for chess problems and puzzles and presents comparisons of one enrichment program's original workbook to equivalent pages produced using \LaTeX.
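As a flavor of what such \LaTeX-produced workbook pages look like at the source level, here is a minimal sketch assuming the \texttt{skak} package's \cs{newgame}, \cs{mainline} and \cs{showboard} commands (other packages such as \texttt{xskak} and \texttt{chessboard} offer finer control over diagrams):

\begin{verbatim}
% A minimal sketch using the skak package (one of several options).
\documentclass{article}
\usepackage{skak}
\begin{document}
\newgame
\mainline{1. e4 e5 2. Nf3 Nc6 3. Bb5}  % typeset the moves
\showboard                             % print the resulting diagram
\end{document}
\end{verbatim}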

No abstract

In this talk, Paulo recollects 2021 as a challenging year for the Island of \TeX: roadmap changes, lack of resources, server limitations. Yet, resilience, persistence and a bit of good humour made the island even stronger, with new joiners, community support, bold plans and an even brighter future for the \TeX{} ecosystem. And all just in time for celebrating 10 years of arara, our beloved bird!

This presentation touches on: the \LaTeX\ markdown package; \TeX\ Live installation user documentation; suitability for self-publishers. It will present examples of Markdown to \LaTeX-styled \PDF. It will also announce two initiatives: a \TeX\ Live book publishing scheme; and a website where self-publishers can find \TeX\ Live installation instructions plus book publishing how-tos, tutorials, and resources. Lloyd is a self-publisher with experience in magazine publishing, corporate communication, academia, and software development.
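As a small illustration of the Markdown-to-\LaTeX-styled-\PDF\ workflow, a sketch using the \texttt{markdown} package (compiled with \LuaLaTeX, or with pdf\LaTeX\ and shell escape) might look like this:

\begin{verbatim}
% A minimal sketch: Markdown content styled by a LaTeX class.
% Compile with lualatex, or pdflatex --shell-escape.
\documentclass{book}
\usepackage{markdown}
\begin{document}
\begin{markdown}
# Getting started

Self-publishers can draft chapters in *plain* Markdown and still
obtain a LaTeX-styled PDF:

* familiar, lightweight syntax,
* no distracting markup while writing.
\end{markdown}
\end{document}
\end{verbatim}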

Matching cancer patients with clinical trials is a complex process. One of the outputs of that process is the production of a \PDF\ report containing relevant information about a set of trials. In this paper we present strategies, challenges, and conclusions regarding our use of \LaTeX\ deployed in \acro{AWS} to generate \PDF\ reports.

Who wins? The base or the superstructure? I'm not a Marxist per se, but I've lived this struggle for some time as a writer and publisher. In this keynote presentation, I describe my efforts to change or adapt the democratized tools of production to produce new forms of writing, which ultimately led to an ongoing battle with the dominant cultures of production in the world of publishing. I'll narrate two case studies. One focuses on the writing and production of an innovative, if not disruptive, textbook in the ultra-conservative textbook industry. The second tells the ongoing story of an interloping publishing company (Parlor Press) that reveals the central challenge of \textit{distribution} for both writers and publishers, from typesetting (print) to transformation (digital). \LaTeX\ developers and users, take note! The return of the nonbreaking space and soft return is nigh!

No abstract

If you take a quick glance at an airport and its signage, you'll see many different situations where text is used to enhance and streamline processes for pilots and ground crew alike. This exploration will take a closer look at such variations along the taxiways and aprons of major airports, and also discuss how the advent of autonomous aircraft can factor into this.

Looking at the constitutions of several countries (France, Canada, the United States, Mexico, and Argentina), it is clear that the fonts used range from cursive to typewriter-like. The fonts and format of a country's constitution reflect the period in which it was written and the influence of other countries. Each country has developed different iterations so that its constitution best represents its values.

One of Knuth's important insights was the concept of literate programming, where the prose is as important as the code. Now many scientists in different fields are having similar insights about their work. While published papers have always been recognized as works of literature, we are now starting to see this also in lab notes, the lowly records of our daily activity. This explains the new interest in notebook interfaces: from commercial programs like Matlab and Mathematica to free systems like Wxmaxima and Jupyter. In this talk I discuss the approach that uses \LaTeX\ and \texttt{knitr} for creating lab notes. I compare it with the available notebook interfaces and with solutions based on Markdown.
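For concreteness, a minimal sketch of such a \LaTeX{}+\texttt{knitr} lab note (an \texttt{.Rnw} file; the chunk body is R code, executed by \texttt{knitr} before the resulting \texttt{.tex} is compiled) might look like this:

\begin{verbatim}
% lab-note.Rnw -- a minimal sketch; knit with knitr::knit("lab-note.Rnw"),
% then compile the generated .tex file as usual.
\documentclass{article}
\begin{document}
\section*{Calibration run}
The chunk below is executed by knitr; its code and output
appear in the typeset note.
<<calibration, echo=TRUE>>=
x <- c(9.79, 9.81, 9.82, 9.80)
mean(x)
sd(x)
@
\end{document}
\end{verbatim}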

This talk reports on changes within the \TeX\ Live project and distribution over the last year, as well as looking at further development directions and challenges we are facing.

I will discuss the recent changes to the \code{bidi} package that allow users to produce right-to-left \code{beamer} documents, describing the challenges and what still needs to be done. I will also discuss other recent changes to the \code{bidi} package.

No abstract

\TeX\ (and therefore \LaTeX) has enjoyed great popularity over the years as an extremely flexible, versatile, and robust text typesetting system. The flexibility comes not least from the ability to modify the behavior of \TeX\ through programming, and from Knuth's foresight in recognizing the individual elements on the page as small, rectangular building blocks (boxes) that can be combined into larger units and also manipulated.

The development of Lua\TeX\ made modern applications possible for the first time in the long history of \TeX\ via some extensions:

\item The number of characters in fonts is no longer limited to 256. This eliminates crutches like output encoding.

\item Through the integration of \HarfBuzz, a solid “shaper” is available. This allows OpenType features and complicated writing systems (e.g., Arabic) to be output without any problems.

\item The system can be programmed with Lua instead of the built-in macro language.

\item Due to the clever \PDF\ support, almost all \PDF\ properties and standards can be supported.

I use these extensions for the program “speedata Publisher”, which is mainly intended for the fully automatic creation of product catalogs and data sheets from \XML.

Despite all the achievements of \TeX\ and \LuaTeX, there are still serious disadvantages:

\item \TeX\ and \LuaTeX\ are anything but modular. Changing single areas is especially difficult, because \TeX\ is not designed for that.

\item Some things cannot be achieved with \LuaTeX's on-board tools. For example, \HTTPS\ requests require an external library. Documents in our catalog area often get their images from image databases that are accessed via \HTTPS.

\item For other tasks, too, it is better to use an external library than to reinvent the wheel. For example, an \XML\ parser or a library for bidirectional text typesetting.

\item Parallelization of tasks: modern processors usually have several processor cores, which lie idle with \TeX. Several tasks in \TeX\ could be executed in parallel. Paragraphs could be wrapped with different parameters and then the best one selected. Loading font files and preparing them for subsetting in \PDF\ does not have to be done sequentially. \TeX\ does not provide such facilities.

\item Distributing \LuaTeX\ binaries across platforms is difficult due to external dependencies. For single applications you don't want to ship or require a whole \TeX\ Live installation.

The restrictions mentioned have bothered me considerably. Regarding output quality, there are hardly any alternatives comparable to \TeX, especially not in the open-source area. Therefore, there seemed to be no alternative but to re-implement \TeX\ in a “modern” programming language. Some years ago there was already such an attempt (\NTS), but it failed. After long pondering about how to meet my requirements for a typesetting system for catalogs and data sheets, I came to the conclusion that I would “only” take over the algorithms and the logic of \TeX, but not the input language.

\subhead{Boxes and glue}

“Boxes and glue” is a library written in the Go programming language. The name is based on the model of \TeX\ with the stretchable spaces between the rectangular units. The development of boxes and glue is quite advanced and includes among other things:

\item \TeX's smallest units (node) with ways to nest them inside each other (vbox, hbox).

\item \TeX's paragraph breaking algorithm.

\item The pattern-based hyphenation.

\item The inclusion of TrueType and OpenType fonts and \PNG, \JPEG, and \PDF\ images.

\item Text shaping with \HarfBuzz.

Besides these basic parts, there is yet another library that builds on \texttt{boxesandglue}. It offers:

\item Reading \XML\ files

\item Interpretation of \HTML\ and \CSS

\item Grouping of font files into families with easy font selection

\item Handling of colors of all kinds (\RGB, \CMYK, spot colors)

\item Tagged \PDF

The application programming interface (\API) is not yet fixed. The development of boxes and glue is being carried out in parallel with the further development of the speedata Publisher (\tbsurl{https://github.com/speedata/xts}) and the requirements here largely determine the programming interface of \texttt{boxesandglue}. Since it is a library, there is no fixed input language as with \TeX. In this respect, too, \texttt{boxesandglue} is not yet suitable for an (end) user.

\subhead{References}

\item \NTS: \tbsurl{https://en.wikipedia.org/wiki/New_Typesetting_System}

\item Boxes and glue: \tbsurl{https://github.com/speedata/boxesandglue}

\item speedata Publisher: \tbsurl{https://github.com/speedata/publisher}

\item \acro{XTS} \XML: \tbsurl{https://github.com/speedata/xts}

This paper describes the development and usage of the \texttt{luatruthtable} package in \LaTeX. It is developed to generate truth tables of boolean values in a \LaTeX\ document. The package provides an easy way of generating truth tables in \LaTeX; no special environment is needed for their generation. It is written in Lua, and the \TeX\ file is to be compiled with the \LuaLaTeX\ engine. The Lua programming language is a scripting language which can be embedded across platforms. With \LuaTeX\ and the \texttt{luacode} package, it is possible to use Lua in \LaTeX. \AllTeX\ have some scope for programming, but with the internals of \TeX\ there are several limitations, especially for performing calculations. Packages like \texttt{pgf} and \texttt{xparse} in \LaTeX\ provide some programming capabilities inside \LaTeX\ documents, but such packages are not meant to provide the complete programming structure that general programming languages, like Lua, provide. The generation of truth tables with these packages in \LaTeX\ would be complex, and probably cannot be done more easily in \LaTeX\ without using Lua. The programming capabilities of Lua are effectively used in the development of the \texttt{luatruthtable} package. The \texttt{xkeyval} package is used in its development, in addition to the \texttt{luacode} package. The time needed for generation of truth tables using the package and compilation of a \TeX\ document with \LuaTeX\ is not an issue.
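To illustrate the underlying mechanism (not the package's actual interface), here is a sketch of Lua generating a small truth table from inside a \LaTeX\ document via the \texttt{luacode} package:

\begin{verbatim}
% A sketch of the general Lua-in-LaTeX mechanism, not luatruthtable's API.
% Compile with lualatex.
\documentclass{article}
\usepackage{luacode}
\begin{document}
\begin{luacode*}
tex.print("\\begin{tabular}{cc|c}")
tex.print("$p$ & $q$ & $p \\land q$ \\\\ \\hline")
for _, p in ipairs({true, false}) do
  for _, q in ipairs({true, false}) do
    tex.print(string.format("%s & %s & %s \\\\",
      tostring(p), tostring(q), tostring(p and q)))
  end
end
tex.print("\\end{tabular}")
\end{luacode*}
\end{document}
\end{verbatim}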

\TeX{} is great for producing beautiful documents, but not the easiest to read and write. At this workshop, you will learn about Markdown and how you can use it to produce different types of beautiful documents from beautiful source texts that don't distract you from your writing.

\acro{UK TUG} was established in the early 1990s. I've been a member of \acro{UK TUG} almost from its start through to its dissolution earlier this year. Much has changed both in the \TeX\ community and in the wider world over that time. \acro{UK TUG} was a significant part of the \TeX\ community. Besides myself (Jonathan Fine), former members of \acro{UK TUG} include Peter Abbott, Kaveh Bazargan, David Carlisle, Paulo Cereda, Malcolm Clark, David Crossland, Robin Fairbairns, Alan Jeffrey, Sebastian Rahtz, Arthur Rosendahl, Chris Rowley, Philip Taylor and Joseph Wright. This list includes two past Presidents of \tug, the current Vice President and a past Secretary. Ten people on the list served on the \tug\ Board, for a total of over 30 years. Five are or were members of the \LaTeX3 project. One was the founder and for 8 years editor of \TeX\ Live, and another the Technical coordinator of the \NTS\ project. One is a Lead Program Manager for Google Fonts. This talk provides a personal history from \cs{begin}\tubbraced{uktug} to \cs{end}\tubbraced{uktug}, with a short `\cs{aftergroup}` appendix.

Real-world bricks and jigsaw puzzles are a fun pastime for many people. The tikzbricks and jigsaw packages bring them to the \LaTeX\ world. This short talk will give an overview of both packages and show examples of how they can be used.

No abstract

In this talk I present a selection of improvements we made in the recent \LaTeX\ releases. The changes are not discussed in depth; the goal is to give some interesting examples and make you curious enough to explore the documentation and learn more.

In 2015, I talked about my work exploring Unicode-land, particularly how to carry out case changing in \XeTeX\ and \LuaTeX\ properly. Since then, \texttt{expl3} has become a part of the \LaTeX\ kernel, and \LaTeX\ has adopted \tbUTF-8 as the standard input encoding. The time has therefore become ripe to “open up” Unicode-land to allow \cs{MakeUppercase} and \cs{MakeLowercase} to roam free. In this talk, I'll remind us of what Unicode tells us about case changing, where the challenges are and how we've approached them in \texttt{expl3}. I'll then show how this has combined with some \eTeX\ features to enable us to make the switch, incorporate ideas from the \texttt{textcase} package and upgrade \cs{MakeUppercase} and \cs{MakeLowercase} for the 21st century.
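By way of example, with a current kernel the standard commands should now handle Unicode input directly; a minimal sketch (expected results assume an up-to-date \LaTeX\ release and a Unicode engine):

\begin{verbatim}
% A minimal sketch, assuming an up-to-date kernel and a Unicode engine.
\documentclass{article}
\usepackage{fontspec}            % LuaLaTeX or XeLaTeX
\begin{document}
\MakeUppercase{élan vital}       % expected: ÉLAN VITAL
\MakeUppercase{straße}           % expected: STRASSE (ß uppercases to SS)
\MakeLowercase{GRØNLAND}         % expected: grønland
\end{document}
\end{verbatim}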

No abstract

All are welcome at the TUG AGM, whether or not a member of TUG. Members of the TUG Board will briefly present reports on the organization. There will be an opportunity for discussion. The AGM will not be recorded, and may be available only via Zoom (and not YouTube), depending on technical circumstances at the time.

No abstract

In this talk we explore the history of LaTeX and PDFs in scientific communication, the roles these tools play, and how those roles may evolve over time. We discuss the rise of Markdown for web publishing, its limitations, and opportunities. We also touch on some recent developments by Mathpix to facilitate document interoperability and accessibility for researchers and the broader STEM community.

Having Vietnamese as my first language and English as my dominant language has inspired exploration of the history and applications of the former. Considering how Vietnamese and English both use the Latin alphabet, this presentation will explore the similarities and differences between the two using a collection of instances in which Vietnamese text is displayed in our world.

Initially, \TeX{} was a single engine and a single format. However, over the past 40 years, the number of engines and formats has significantly grown, meaning that there are multiple ways of implementing similar solutions depending on the \TeX{} variant used. In this talk, I'll introduce and compare each engine and format, focusing on both history and practical tips.

I will discuss how mathematics is typeset in Persian and what is required. I will also talk about how the \Xe{}Persian package implements these features and show some examples. I will then discuss recent changes to the \code{xepersian} package allowing users to change between English and Persian digits mid-math mode.

No abstract

No abstract

Some basic requirements for Accessibility of tabular material are: \begin{itemize} \item each cell, whether header or content, must have an attribute providing a unique ID for that cell; \item each data cell must specify the corresponding row and column headers that most directly provide the meaning of the information contained within the cell. This is done via a \textsf{Headers} attribute using the unique IDs for the header cells. \end{itemize} Header cells themselves may have other row or column headers; e.g., as a common header for a block of rows or columns. Tagged PDF has the tagging and mechanisms to provide such attributes. When the PDF is translated into HTML (using the \textsf{ngPDF} online converter, say) this information is recorded in the web-pages, to be available to Assistive Technologies. In this talk we show several examples of tables specified using various packages, as in the \LaTeX\ Companion, both in PDF and HTML web pages. A novel coding idea that allows this to be achieved will be presented.

Appendix D (Dirty Tricks) of \TB\ describes algorithms for multi-column typesetting and paragraph footnotes, among much more. The described algorithms are used in various \TeX{} packages such as {\tt footmisc}, {\tt fnpara}, {\tt manyfoot}, and many others.

When the package {\tt multicol} is used, things get more complicated. Another level of complication arises when you want to mix these with both right to left and left to right typesetting.

The {\tt bidi} package provides both right to left and left to right multi-columns and paragraph footnotes.

This talk will describe my own experience learning about how other packages implement multi-columns and paragraph footnotes, and also the approach I took in the bidi package for typesetting right to left and left to right multi-columns and paragraph footnotes.

No abstract

Due to the permissive nature of \LaTeX, authors who prepare their manuscripts in \LaTeX\ for publishing their research articles in academic journals often knowingly or unknowingly indulge in non-standard markup practices that cause avoidable delays and hardships in processing their submissions. A simple pre-submission check, followed by requests to fix as much as possible at the authors' end before submission (with the benefit of earlier publication), can reduce turnaround time (\acro{TAT}) considerably.

In the talk, I introduce \texttt{vakthesis}, a bundle of \LaTeX{} classes for typesetting doctoral theses according to official requirements in Ukraine, and discuss the current status of the project and future development plans. Some \LaTeX{} programming tricks that I have studied along the way are also considered.

We report on s\TeX3 — a complete redesign and reimplementation (using \LaTeX3) from the ground up of the s\TeX\ ecosystem for semantic markup of mathematical documents. Specifically, we present: 1. The s\TeX\ package that allows declaring semantic macros and provides a module system for organizing and importing semantic macros using logical identifiers. Semantic macros allow for annotating arbitrary \LaTeX\ fragments, particularly symbolic notations and formulae, with their functional structure and formal semantics while keeping their presentation/layout intact. The module system induces a “theory graph”-structure on mathematical concepts, reflecting their dependencies and other semantic relations. 2. The Rus\TeX\ system, an implementation of the core \TeX\ engine in Rust. Generally Rus\TeX\ allows for converting arbitrary \LaTeX\ documents to \XHTML. For s\TeX3 documents, these are enriched with semantic annotations based on the flexiformal \acro{OMD}oc ontology. 3. An \acro{MMT} integration: The Rus\TeX-generated \XHTML\ can be imported and served by the \acro{MMT} system for semantically-informed knowledge management services, e.g., linking symbols in formulae to their definition or “guided tour” mini-courses for any (semantically annotated) mathematical concept\slash object. Generally, s\TeX3 documents can be made not only interactive (by embedding semantic services), but also “active” in that they actively adapt to reader preferences and pre-knowledge (if known).

We present some tools that allow us to parse all or part of \AllTeX\ source files and process suitable information. For example, we can use them to extract some metadata of a document. These tools have been developed in the Scheme functional programming language. Using them only requires basic knowledge of functional programming and Scheme. Besides, these tools could be easily implemented using a strongly typed functional programming language, such as Standard \acro{ML} or Haskell.

No abstract

I will present an ongoing project with Hans Hagen with the challenging goal of improving the quality of mathematical typesetting, and of making both the input and output of math cleaner and more structured. Among the many enhancements, we mention here the introduction of new atom classes, which has given better control over the details, and the unboxing of fenced material, which together with improved line breaking and more flexible multiline display math has created a coherent way to produce formulas that split across lines.

In this talk I recount some practical experiences with spot colors I gained while working on the third edition of \booktitle{The \LaTeX\ Companion}. I describe what spot colors are, how to use them for text and (\TikZ) graphics, how to mix them properly, and some of the pitfalls we found and how we worked around them.

\LaTeXe\ introduced class and package options in the optional arguments to \cs{documentclass} and \cs{usepackage}. To date, these were designed to handle simple keyword-based options. Over time, packages have extended the mechanism to accept key--value (keyval) arguments. Recent work by the team brings keyval handling into the kernel. This brings the added benefit of allowing repeated package loading without option clashes. Here, I will look briefly at the background, then explore how to use the new mechanism in package development.
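A brief sketch of what this looks like in a package, assuming the kernel's \cs{DeclareKeys}\slash\cs{ProcessKeyOptions} interface (the package name and keys shown are illustrative; see the kernel documentation for the full set of key properties):

\begin{verbatim}
% mypkg.sty -- a minimal sketch of key-value package options
% (hypothetical package name; interface per recent LaTeX kernels).
\ProvidesPackage{mypkg}[2022/07/24 v0.1 keyval option demo]
\DeclareKeys{
  draft .if    = @mypkg@draft ,   % boolean switch \if@mypkg@draft
  mode  .store = \mypkg@mode  ,   % store the given value
}
\ProcessKeyOptions
% Usage in a document: \usepackage[draft, mode=fast]{mypkg}
\end{verbatim}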

\texttt{yex} is an implementation of the core \TeX\ system in pure Python. In this talk I shall give an overview of its development, the challenges faced, and possible future directions for the project.

No abstract

We present a machine translation system, the PolyMath Translator, for \LaTeX\ documents containing mathematical text. The system combines a \LaTeX\ parser, tokenization of math and labels, a deep learning Transformer model trained on mathematical and other text, and the Google Translate \API\ with a custom glossary. Ablation testing shows that math tokenization and the Transformer model each significantly improve translation quality, while Google Translate is used as a backup when the Transformer does not have confidence in its translation. For \LaTeX\ parsing, we have used the pandoc document converter, while our latest development version instead uses the TexSoup package. We will describe the system, show examples, and discuss future directions.

The Chafee Amendment \url{https://www.loc.gov/nls/about/organization/laws-regulations/copyright-law-amendment-1996-pl-104-197/} to \acro{US} copyright law “allows authorized entities to reproduce or distribute copies or phonorecords of previously published literary or musical works in accessible formats exclusively for use by print-disabled persons.” This wonderful legal exemption to copyright nicely illustrates the relation between access (here to print works) and accessibility (here production of phonorecords, i.e., audiobooks). Here's another illustration. Jonathan Godfrey, a blind Senior Lecturer in Statistics in New Zealand, wrote to the Blind Math list: “I used to use \TeX4ht as my main tool for getting \HTML\ from \LaTeX\ source. This was and probably still is, an excellent tool. How much traction does it get though? Not much. Why? I don't know, but my current theory is that tools that aren't right under people's noses or automatically applied in the background just don't get as much traction.” (\url{https://nfbnet.org/pipermail/blindmath_nfbnet.org/2021-January/009641.html}) Jonathan Godfrey also wrote to the BlindMath list: “Something has to change in the very way people use \LaTeX\ if we are ever to get truly accessible pdf documents. I've laboured the point that we need access to information much more than we need access to a specific file format, and I'll keep doing so. [\ldots] I do think a fundamental shift in thinking about how we get access to information is required across most \acro{STEM} disciplines.” (\url{https://nfbnet.org/pipermail/blindmath_nfbnet.org/2021-March/009778.html}) This talk looks at the experience of visually impaired \acro{STEM} students and professionals, from both the point of view of easy access to suitable inputs and tools and also the generation of accessible outputs, as pioneered and enabled by the Chafee Amendment.

\TeX\ and \LaTeX\ have been used for offline documentation of software packages and are supported by several auto-documenting systems including \code{doxygen}, \code{sphinx} and \code{f2py}. Often, documentation markup languages like Re\acro{ST} or Markdown will support a subset of \TeX\ commands for various output formats (e.g., MathJax\slash KaTex for \HTML). With the rise of virtual machines for continuous integration, along with a renewed focus on documenting code, the time taken for compiling offline documentation (typically \PDF\ files) from \TeX\ sources has become a bottleneck, and some projects (e.g., SciPy) have discontinued the generation of \PDF\ files altogether. Alternatives have been suggested, e.g., offline \HTML\ and web-\PDF{}s; these will be covered briefly. In this talk, the main challenges and their mitigation strategies will be discussed, including Sphinx \LaTeX\ generation, styling, methods to reduce documentation size, and automated file-splitting, with the aim of preventing more projects from moving away from \TeX-based \PDF{}s. The focus will be on the NumPy \TeX\ \acro{CI} documentation workflow, but the discussion will be generally applicable to most Python projects.

No abstract

No abstract

Computer History Museum senior curator Dag Spicer takes us on a walk through computing history, from the Antikythera Mechanism to the first Google server. Bio: Dag Spicer is an electrical engineer and historian of science and technology. He began working at the Museum in 1996 and has built the Museum's permanent collection into the largest archive of computers, software, media, oral histories, and ephemera in the world. Dag has given hundreds of interviews on computer history and related topics to major news outlets such as \booktitle{The Economist}, \booktitle{The New York Times}, \acro{NPR}, \acro{CBS}, \acro{VOA}, and has appeared on numerous television programs including \booktitle{Mysteries at the Museum} and \booktitle{\acro{CBS} Sunday Morning}.

We will explain the typesetting of a musical composition using the \LaTeX\ markup.

The typographer's goal is to provide the best possible reading experience for the reader. Thirty years of disruptive technologies have made this a greater challenge despite the overwhelming number of type designs available to us. Steve Matteson will give several historical and contemporary examples where fonts have been adapted or designed to meet constantly changing technological demands.

Conference closing

