Almost all technology companies, large and small, are developing or offering intelligent assistants (Apple, Google, Microsoft, Amazon, Meta, ...). These now typically handle both text and speech input and output. Their main components are speech recognition, language understanding and generation (now typically based on large language models), and text-to-speech synthesis. These components are sometimes available only as part of an integrated system, but are often also offered individually, making it possible to assemble a custom solution from them in a building-block fashion. However, widely known and accepted testing and evaluation methods for such systems are still lacking in practice. In addition to the systems of international players, the project will also test domestic solutions by developing demo systems. Participants will gain insight into the parameters that determine real-world usability (cloud vs. local deployment, number of supported languages, acoustic environment, resource requirements, etc.).
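The building-block composition described above can be sketched as a simple pipeline: audio goes through speech recognition, the transcript through a language model, and the reply through text-to-speech. The sketch below is illustrative only; all class and method names (`SpeechRecognizer`, `transcribe`, `respond`, etc.) are hypothetical interfaces, and the echo stubs stand in for real cloud or local components.

```python
from dataclasses import dataclass
from typing import Protocol


class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...


class LanguageModel(Protocol):
    def respond(self, text: str) -> str: ...


class SpeechSynthesizer(Protocol):
    def synthesize(self, text: str) -> bytes: ...


@dataclass
class AssistantPipeline:
    """Chains the three components; each slot can hold any conforming implementation."""
    asr: SpeechRecognizer
    llm: LanguageModel
    tts: SpeechSynthesizer

    def handle_voice_query(self, audio: bytes) -> bytes:
        text = self.asr.transcribe(audio)      # speech -> text
        reply = self.llm.respond(text)         # text -> text (LLM)
        return self.tts.synthesize(reply)      # text -> speech


# Stub components (hypothetical) standing in for real ASR/LLM/TTS services.
class EchoASR:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")


class EchoLLM:
    def respond(self, text: str) -> str:
        return f"You said: {text}"


class EchoTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")


pipeline = AssistantPipeline(asr=EchoASR(), llm=EchoLLM(), tts=EchoTTS())
print(pipeline.handle_voice_query(b"hello").decode())  # → You said: hello
```

Swapping in a different vendor's recogniser or a local model only requires providing an object with the same method, which is exactly the kind of interchangeability the evaluation work would exercise.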