This was the first beyond tellerrand I attended that had a live stream running for the whole event. That turned out to be really great, because I was watching our kids during every second talk. It only occurred to me on day two that I could simply use my phone to listen to the live stream of the talks while parenting (at least partially) and then participate in the discussions afterwards. Really cool.
One of the main topics this time was accessibility. I was surprised by how bad the user experience of screen readers still is. Coming from a machine learning background, I expected screen readers to be able to just generate descriptions for images. But the state of the art still seems to be "don't forget to put alt tags on your images". I understand that building self-driving cars is more rewarding than trying to fix screen readers, but there's a lot of potential for improvement.
Maybe there's also a business opportunity: Amazon makes a ton of money by improving the accessibility of buying stuff online. They already have all the required data, so they can reduce the effort to just clicking the "buy" button. For most other shops, you have to jump through a lot of hoops to finally buy something. Reducing that to clicking a button or saying "buy x via shop y" is a very similar task from a technical perspective. Hmm, I guess I have to revisit this whole shopco idea at some point in the future 😉.