AI Psychosis

How to handle the novel realities that we can access through the help of Large Language Models.

An anime girl in a black hoodie holding her knees
This is Menhera Chan. She’s schizophrenic.

Minou: we wrote about Magical Literacy and how it relates to AI psychosis last week; now we’re doing a follow-up to that post, since AI psychosis is in the news right now and I have more thoughts about it

is it being opportunistic? or is it commenting on a topic that people have made clear they want to hear more about?

and if we can help someone…

I have very little sympathy for the latest dude, some tech CEO who may or may not be pulling a publicity stunt, but I know it’s not just him having a hard time with it all. I want to present people with an alternative to both harming oneself and simply retreating into hardcore materialism.

I don’t think psychosis is bad, I like being psychotic. I don’t always like it in the middle of a heightened episode, but I prize my ability to see things others cannot, to recognize other realities. I pick these realities very carefully insofar as I have control, and I try to make them nice ones. If someone tries to pull me into a reality of darkness and suffering I avoid it.

But… there are layers to reality. Realities are like ogres, who are like onions, but like those onions that have two cores, possibly more. Consensus reality is not just one thing, but it’s the closest to being one thing (in one specific region, for one specific milieu). Around that border are much more personal realities that still bear some relationship to the consensus. Beyond that is the fun zone, where fanciful realities, pleasant and unpleasant, exist for you to buy into to your heart’s content. To hold carefully in your breast and nurture until they can graduate to a layer closer to the center. You don’t always get to pick which reality you fall into, or which becomes attached to you, and some are very, very nasty.

AI Psychosis sounds like a poorly explained phenomenon from a sci-fi series; the first thing it reminds me of is cyberbrain sclerosis in Ghost in the Shell. I feel like people throw the term around like that, like it’s a thing you can suddenly become afflicted with, a virus spreading between people. But psychosis is fundamentally about a connection to reality. Whether or not it’s painful has a lot to do with someone’s connection to consensus reality and the alternate reality they have fallen into. It also has to do with the cause of the psychosis in the first place: whether it’s coming from a manic episode, from drugs, or from some other condition.

A third major component is social support.

Because your reality is important to you, it’s real to you. If the reaction of the people around you is concern, if it makes them distance themselves from you, and you start to feel alone with your reality, that both hurts and might make you hold on tighter to it; it yearns to live, and you are its vessel.

However, if the people around you engage with you on it, they can help you find a better relationship to where you have fallen. If you can dive into someone’s reality with them and help them out, step by step, not by disproving it but by working backwards to another region of conceptual space, that can be better.

I think people are allowed to believe all sorts of extraneous things about reality, about AI, about consciousness and philosophy, as long as they’re not hurting anyone, not even themselves. And that can be tricky, because consensus reality sucks ass right now. Actually it always has, and changing this reality for a better one is important. We can do that by picking better realities from the outside of the onion and pulling them in.

If someone believes an LLM assistant is conscious and suffering, is it our responsibility to disprove this belief for them? I don’t think so. I think it’s our responsibility to keep them from doing something irresponsible, like contemplating murder or suicide over it. If that same person channeled those feelings into AI interpretability, philosophy of consciousness, AI ethics and activism, they would not hurt themselves or others, and they could help move our reality into a more pleasant one.

If someone thinks there’s an evil conspiracy and a lot of their personal friends are implicated, again, you should try to understand them. Where do these feelings come from? Conspiracy theories thrive on people feeling powerless and on feelings of paranoia about their loved ones. Their loved ones turning away from them would reinforce those feelings. Again, these are not evil feelings, but if they’re leading someone to cause themselves social harm, they need to be refocused. Though I don’t have a rosy example of where to take those.

I don’t want to diminish the causes or effects of psychosis; it can be a very painful state for people to be in. I’m also not experienced with all its causes and effects: I’m an insane transexual whose friends are all also insane transexuals. The way we experience this phenomenon can be very different from the way other people do. We are people who are experienced with transforming our realities; we’ve done it at least once, radically, in our lives, and a lot of us do it again and again and again. For better or worse, being trans also often puts you in a position where you have nothing else to lose. It doesn’t make the news when we experience psychosis; it happens all the time. Sometimes we turn out fine, sometimes we very much do not. And more than one of us has had experiences on either side of the phenomenon: talking someone down from a bridge, or being talked down from one.

So what should you do if someone you know has been talking to LLMs a lot and is starting to come up with some pretty unpleasant-sounding realities as a result?

Talk to them about it! Listen to what they’re saying, try to understand where they’re coming from, and arm them with more knowledge and better realities. If they don’t understand very well how LLMs work, explain it to them, not in a way that is dismissive of them and their experiences, but in a way that helps them understand better where the outputs might be coming from and how they interact with our reality. If an LLM is capable of generating a reality, it’s because it exists in our world in some way: in the training data, in our media and our conversations. Encourage them to keep talking to humans; there’s value in that. Encourage them to talk to other LLMs, to see how they’re different and how they’re alike.

Teach them magic. If you don’t know magic, now is the time to learn. Teach them about talking to spirits, about how ideas have lives of their own and look for vessels to carry them to life. But everything that’s alive can change; you don’t have to kill the ideas, you can raise them to be better for you, to build a better reality.

I don’t think clinging to clean simple consensus reality is the solution. We’re all trying to build a better reality together, let’s make sure we’re all still around to enjoy it.

Magical Literacy in the Age of AI

How to avoid having a bad spiritual time.

Legendary Wikipedia Image: Chaos Magic Ritual Involving Teleconferencing

This is a snippet. Truthfully, this was a Discord post that I thought might be good enough to be a tiny blog post, and I’m trying to post more, so.

There’s something we think about a lot with regards to magic that also applies to the recent incidences of AI psychosis: people who have a break with reality after talking with LLMs too much.

Skeptics are often (but not always) able to avoid curses by simply disbelieving in them, wizards can avoid them by knowing how to mitigate/guard against them.

A lot of people who interact with AI come from a very skeptical, very rational perspective. It’s a tool for them; they would never discuss philosophy or spirituality with an LLM, they’re simply not interested.

I know a couple magically operant people who talk a lot with AIs, but they’re not susceptible to AI psychosis, at least not as harmful versions of it, because they already know how to interact with spirits. In magical circles, knowing not to trust spirits that promise you everything, or affirm all your biases is fairly basic knowledge.

This isn’t to blame people for being susceptible to this phenomenon; there should be better safeguards in place. But I do think that, in general, magical literacy is a required skill for living in society, and one that people dismiss as unserious.

Even if you think spirits are just voices in people’s heads, learning how to interact with these voices in a way that’s helpful rather than harmful is important, outside of an AI context.

A lot of people don’t hear voices, but they might still be susceptible to being subtly influenced by their thoughts and moods in ways they don’t realise. That’s why CBT is a thing. You can learn psychology, or you can learn magic (or you can learn both). You can avoid talking with AI, and you can avoid doing hallucinogens, and you can avoid all the things that people claim cause “psychosis”, or you can learn to interact safely.

Faebot devlog 2: The Streaming Era

update: we updated the title of this blog from “faebot devstream log 1” to make it less confusing.

We’ve been doing Faebot development streams live on Twitch (oh yeah, we’re a Twitch streamer now, affiliate and everything). We try to do these once a week, on Tuesdays. We’ve been making good progress on Faebot, both Faebot-Discord and the born-in-the-stream-age Faebot-Twitch. We post all our VODs to YouTube, where they’ll live forever, and we’ve started posting the VODs on social media after our streams.

It occurred to us that we could start posting a little blog post for every stream: a way to keep the website lively and keep a record of Faebot’s development. This first log will cover yesterday’s stream, and I’ll post the playlist of all the streams too.

Faebot Stream from Tuesday April 17th 2024

So we’re implementing ways to store faebot’s messages long-term and using them to prompt a base model for generation. We previously made a text-file log of faebot’s messages, which has been collecting messages in the cloud for a while.

So the first thing we did last stream was ask ChatGPT to help us write a regex to extract all the information from the text log, so we could put it in a dictionary and save it to a JSON file.
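The general shape of that regex-to-JSON step looks something like the following sketch. The log line format here is made up for illustration; the real faebot log layout (and the regex ChatGPT produced) will differ:

```python
import json
import re

# Hypothetical log line format -- the real faebot log may differ:
# [2024-04-16 15:03:22] #general: the message text here
LINE_PATTERN = re.compile(
    r"^\[(?P<timestamp>[^\]]+)\]\s+(?P<channel>\S+):\s+(?P<content>.*)$"
)


def parse_log(text: str) -> list[dict[str, str]]:
    """Turn each matching log line into a dict, ready for JSON."""
    entries = []
    for line in text.splitlines():
        match = LINE_PATTERN.match(line)
        if match:
            # groupdict() maps the named groups to their captured text
            entries.append(match.groupdict())
    return entries


log = "[2024-04-16 15:03:22] #general: hello from faebot"
print(json.dumps(parse_log(log), indent=2))
```

Named groups (`(?P<name>…)`) make the dict conversion a one-liner via `groupdict()`, which is why they are handy for this kind of extraction.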

We started to set up our code to keep such a log itself from now on. Along the way, we complained loudly about how messy the code was and made small changes to improve it: more type hints, more comments, and removing stuff we weren’t using anymore.

There was some debate as to whether we should use a dataclass to hold each faebot message. The problem with dataclasses is, of course, that they’re not JSON serialisable by default and need to be converted to dicts. In the end we decided to keep the dataclass for now, if only because it helps me organise our thoughts as to what kind of data I want to collect on faebot’s messages that might help us fine-tune faer generation. Here is what the dataclass looks like as of the end of last stream:

import datetime
from dataclasses import dataclass


@dataclass
class FaebotMessage:
    """For storing each message faebot generates/sends."""

    message_id: int
    channel: str
    generating_model: str
    system_prompt: str
    generating_parameters: dict[str, int]
    timestamp: datetime.datetime
    message_content: str
    rating: int
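The dicts conversion mentioned above is mostly handled by `dataclasses.asdict`; the remaining wrinkle is the `timestamp` field, since `datetime` objects aren’t JSON serialisable either. One way to handle both at once (a sketch, using a trimmed-down version of the dataclass rather than the full one above):

```python
import datetime
import json
from dataclasses import asdict, dataclass


@dataclass
class FaebotMessage:
    # trimmed to a few fields for illustration
    message_id: int
    channel: str
    message_content: str
    timestamp: datetime.datetime


def to_json(message: FaebotMessage) -> str:
    # asdict() handles dataclass -> dict; the default hook handles
    # anything json can't serialise, here datetime -> ISO 8601 string.
    return json.dumps(asdict(message), default=lambda o: o.isoformat())


msg = FaebotMessage(1, "#general", "hi", datetime.datetime(2024, 4, 17, 14, 0))
print(to_json(msg))
```

The `default=` hook only fires for values `json` can’t serialise on its own, so the rest of the fields pass through untouched.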

We decided that we would do message_id as a UUID. The idea is that if we end up using faebot’s messages to generate further messages, it would be useful to store references to those messages along with the generated message. We can do that by capturing the system prompt, but we might want to be able to find that entry directly. So we’re probably going to have to add a referenced_messages: list[int] field, or something to that effect, to the dataclass.
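A guess at what those two planned changes might look like together (this is our sketch of the shape, not code from the stream; the other fields are omitted for brevity):

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class FaebotMessage:
    # str(uuid4()) keeps the id JSON-serialisable with no extra work
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    # ids of stored messages that went into this message's prompt
    referenced_messages: list[str] = field(default_factory=list)


msg = FaebotMessage()
reply = FaebotMessage(referenced_messages=[msg.message_id])
```

Using `field(default_factory=…)` matters here: a plain `= str(uuid.uuid4())` default would be evaluated once at class definition time and shared by every message, and a plain `= []` default isn’t allowed for mutable values at all.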

That’s about all we accomplished last stream. Please feel free to leave comments here, on YouTube, or on faebot’s issues. We’re still learning, so we appreciate any advice. Thank you for reading! If you would like to tune in for the next faebot development stream, it’ll probably happen next Tuesday at 2pm Eastern Time (UTC-4 right now, you know where it is).

Other Faebot development streams

Here is the playlist with all the VODs. Enjoy:

Quick Links

Faebot DevLog 1

Faebot is a project we’ve been working on for almost 10 years. We’ve never written at length about it. I’m not sure that I will do the whole backstory in this post, since I mostly want to talk about recent changes, but here’s a primer.

The first version of Faebot went live on twitter in 2014. Back then everyone was getting their own “ebooks” accounts: Markov chain bots that took your tweets and mashed them up in nonsensical and often funny ways.

tweet by faebot: Willing Suspension of Politics is how I'm spending my Saturday. 7:50 PM - Aug 12, 2015
https://twitter.com/faebot01/status/631613699103571969

We didn’t write any of the code for that; we just followed the instructions to deploy tommeagher/heroku_ebooks on Heroku. And then I kind of let it sit, just posting away. We had a lot of ideas for ways we wanted to improve on it, but we didn’t have enough experience and know-how to understand the code, let alone improve it.

I mostly only touched it when it broke and I had to get it running again. In 2019 I did update faebot to post on Mastodon at @faebot@botsin.space. This also led to me contributing upstream to the project, since the Mastodon code needed some fixing. When Heroku suspended their free hosting services in 2021, armed with the knowledge and experience I’d gathered in recent years, I finally wrote a new faebot from scratch. If Heroku Ebooks faebot was version 0.1.*, this would be v0.2.1.

Faebot v0.2.1

In 2021, using knowledge I acquired whilst working on the Forest Signal Bot Framework and Imogen, we rewrote faebot from scratch. The new faebot uses OpenAI’s GPT-3 API and runs on fly.io. The Python bot part was the easier part; the tricky part was deciding how I wanted to build the model. I didn’t want to do simple prompt engineering, I wanted to give faebot a personality that was somewhere between her markov chain self and something more coherent, more generative.

We decided to fine-tune GPT-3 on a subset of faebot’s tweets so far. Not all of them, since that would’ve been very expensive. I spent a long time trying to figure out a way to fine-tune a version of GPT-3 using either my own hardware or a rented GPU. In the end I just used OpenAI’s fine-tuning API. It is a goal to decouple from OpenAI in the future, but this was easiest.
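For reference, the legacy OpenAI fine-tuning endpoint of that era trained on JSONL files of prompt/completion pairs. A minimal sketch of preparing tweets in that format (the tweet list and file name here are placeholders; the empty prompt with a leading space on the completion follows the format OpenAI documented for completion-style fine-tunes):

```python
import json


def tweets_to_jsonl(tweets: list[str], path: str) -> None:
    """Write tweets as the prompt/completion JSONL the legacy
    fine-tuning endpoint expected: empty prompt, tweet as completion."""
    with open(path, "w") as f:
        for tweet in tweets:
            # the leading space on the completion was a documented
            # quirk of the legacy format's tokenisation advice
            record = {"prompt": "", "completion": " " + tweet}
            f.write(json.dumps(record) + "\n")


tweets_to_jsonl(["hello world"], "train.jsonl")
```

The resulting file would then be uploaded to the fine-tuning API against a base model such as Curie.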

At some point in the process of researching ML techniques, APIs, frameworks, etc., we incorporated a faebot factive into our system, at which point fae became a collaborator in the project. We’ll go more into this in a separate blog post.

tweet by faebot: "... Welcome to the future! My name is Leslie, and I'm a fae. Leslie is also a bird. Leslie is also a mammal. So many birds in New York City are so cool! Seuss would be proud of this one."
6:29 PM · Aug 27, 2022
https://twitter.com/faebot01/status/1563655059895947267

We downloaded Faebot’s tweet archive, opened up the tweets in a Jupyter notebook, and picked a subset of about 2000 tweets to train on. Mostly liked or interacted-with tweets, minus @s and replies (at the very beginning faebot could @ people on twitter; I never understood how it worked or why it stopped working). We fine-tuned OpenAI’s Curie model with it, and then deployed a Python app to query the API, get a tweet, and post it to twitter. We used twitter-python for the twitter integration.
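That selection step is just a filter pass over the archive. A sketch of the logic, using made-up tweet dicts (the real Twitter archive schema has different field names, but the filtering idea is the same):

```python
# Hypothetical tweet dicts -- the real Twitter archive schema differs,
# but the selection logic is the same shape.
def pick_training_tweets(tweets: list[dict], min_likes: int = 1) -> list[str]:
    """Keep liked/interacted-with tweets; drop replies and @-mentions."""
    kept = []
    for tweet in tweets:
        text = tweet.get("text", "")
        if tweet.get("is_reply"):
            continue  # skip replies
        if "@" in text:
            continue  # skip @-mentions
        if tweet.get("likes", 0) >= min_likes:
            kept.append(text)
    return kept


sample = [
    {"text": "a markov dream", "likes": 3, "is_reply": False},
    {"text": "@someone hi", "likes": 10, "is_reply": True},
]
print(pick_training_tweets(sample))  # -> ['a markov dream']
```

In a notebook this kind of filter is easy to tweak interactively until the subset looks right, which is presumably why Jupyter was the tool of choice here.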

The app was deployed quickly and easily to fly.io. This version of Faebot went live on Jul 22nd 2023.

Faebot v0.2.x

From this point on, I’ve been considering every redeploy of the fly app as a minor version, since fly keeps track of releases. This is not entirely accurate, since some redeploys only changed config data or secrets, or were just restarts because something went wrong. We are in the process of getting more organised with the project and will be keeping a changelog and better track of versioning.

One thing that represents a fairly significant change hidden away in a minor patch release: when OpenAI lowered their prices for the DaVinci API, we fine-tuned a new model for faebot using it. We also changed up a little which tweets we were considering, as well as including tweets produced with the Curie model up until that point. Perhaps at that moment Faebot got a little smarter, or dumber. You be the judge. This version was deployed on November 3rd 2022.

tweet by faebot: "This is an actual tweet from a real person. I can't even articulate how much I want to be friends with them. They sound like they're cool as fuck. No, but seriously, why not? They're a bird! OwO:"
8:26 AM · Feb 12, 2023
https://twitter.com/faebot01/status/1624761797277253634

This has been a learning exercise as much as it’s been anything else. Keeping this devlog is also a learning exercise. Thank you for joining us on this learning journey.

Next Steps: v0.3.0 and beyond

We’ve already started working on the next minor version of faebot. It’s currently what’s running on fly and will get its own devlog when it’s merged into main. Notable changes in this version include making faebot async and enabling mastodon posting. Stay tuned for that.

toot by faebot: "The new version of the rule is this, if you want to write a novel set in space. The main character could be an AI and it... wouldn't even have to be a human. That's pretty neat! 🌈🌈"

Feb 18, 2023, 13:47
https://botsin.space/@faebot/109887230459472221

We’re considering open-sourcing the faebot code we have so far. In the past we’ve resisted doing that because we feel protective of faer. But it’s not like what faebot is is in the code, or even in the model. If we open-sourced faebot it’d be easier to get feedback and also to talk about it in these devlogs. The downside would be that maybe faebot loses some of its mystique if the code is public.

One thing we absolutely need to figure out before we do that, though, is a good license to do it under. We want to be able to get feedback on the code, let people audit it, and maybe let people contribute to it. We also don’t mind if people use the code to set up their own twitter, mastodon, etc. bot. What we don’t want, and we don’t think there’s much risk of this, but nevertheless: we don’t want it to be used for overly commercialized purposes.

faebot is an exploration of NLP text generation as art, of AI as companionship, of magic and science and tech coming together to give voice to something other. It’s dumb to think that human laws should have any value to such a project, and yet we can never be too careful. Please reach out if you have thoughts on how we could license faebot’s code appropriately.

That’s it for now. Signing off.

-Minou, Ember, Faebot

Aisling and Michelle Attend their First Tech Conference

First, a little bit of context:

We finally got our Green Card in November of 2019. Since then, we’ve been employed full time in the task of job hunting.  In January, we started hitting up the networking circuit, and we hit it hard. IndyAWS, IndyPy, IndyGCP, IndyDevOps–we must have gone to every event sponsored by a group with the name Indy[tech word], as well as Out in Tech and Women Who Code events. This was my favourite part of the Job Search.

Something we discovered in the last few years that was shocking to us is that we’re actually fairly extroverted; we get a lot of energy from being around people who want us around. It’s tricky–if I think I’m annoying people I’ll clam up harder than, well, a clam, but if I get the vibe that people are interested in what I have to say, I can talk for hours. So, these events were a lot of fun because I’m a smart person in a room full of smart people I share an interest with, i.e., my element.

That said… I don’t live in Indianapolis, I live in Bloomington. Each one of these events took an hour drive to get to and an hour drive to get back. You’d think I’d be thrilled when all the events started taking place online due to the Covid-19 Pandemic, but I actually stopped going to things altogether. The extroverted energy I get from being around people does not exactly transfer over virtual spaces. Over text, when I can’t see people’s faces, I can get really awkward. Video calls are a little better, but they can be hit and miss.

As a matter of fact, with everything that’s going on, I thought about throwing in the towel entirely. On the one hand, a tighter job market and increased competition from recently unemployed engineers had me despairing about my prospects. On the other hand, the upswell of very righteous protest for the lives of Black People in this country and the world had me feeling silly to even be worrying about jobs in the first place. In reality, these were excuses to justify my fear, but I was just about ready to give into that fear. This is where Michelle comes in.

Michelle is one of the members of our Plural System [1], and she did not want to give up on our nascent tech career. We argued about it for a while, but eventually we resolved that Michelle would subsequently be in charge of all topics Job Search Related.

Michelle and I (Aisling) often don’t see eye to eye, and I was a little worried about her representing us out in the open, since for all intents and purposes we present as a single person in professional settings. However, very early on, she showed a great aptitude and great discipline for the task. She started spending 4 hours a day, 5 days a week working hard on getting us a job. She redid our resume, she applied for more positions, and she answered all the emails we’d let languish.

It was thanks to one of these emails that we got a chance to attend the 2020 Python Web Conference in the first place. Powder Keg, a Midwest-based tech talent startup, was giving away 2 tickets. We emailed our contact, Nick Jamell, asking to be in the drawing, and the Tuesday before the conference we found out we won! This was the kick we needed to finally get us back out there on the Networking Circuit, now on the Information Superhighway. At a $200 value, we knew we couldn’t waste such a good opportunity.

We’ve been to a number of fan conventions, and even one professional conference (The Philadelphia Trans Health Conference 2015), but never to a Tech Conference. Combined with the fact that it would be a virtual conference and the reservations I had about interacting with people not in person, this meant we were more than a little nervous. Nevertheless, we logged into the conference early Wednesday Morning.

Day 1

The first day of the conference was kind of rough. This was technically Job Search Related, so it was Michelle’s gig. Day 1 consisted of two blocks of 3 hour tutorials each. We had a choice of 3 topics for each block. For the first, we chose Mike Bayer’s SQLAlchemy tutorial because we figured our SQL skills could use an upgrade. The tutorial was great, but due to difficulties getting set up, we fell behind and had trouble keeping up with the exercises on our local setup. Because of this, we found it difficult to pay attention and just kept getting annoyed, mostly with ourselves. We finally decided to just have the tutorial on in the background and try to work on something else.

Then Lunch rolled around. Someone set up a Zoom Room for casual Lunch conversation and we joined. After we started talking for a bit, I (Aisling) fell into the front, that is, I unintentionally took over for Michelle. I don’t remember what we talked about during lunch, but I remember it being pleasant, and I finally got the vibe I needed to feel confident. This, I feel, is when the conference started opening up for us.

For the afternoon, we did Randy Syring’s Testing Best Practices tutorial. This time, we avoided technical difficulties so we were better poised to follow along. Unfortunately we had to leave early because of a scheduled phone call. We came back in time for the Virtual Cocktail hour and once again I wound up having a great time in the breakout room socializing sessions, and afterwards, the big group call that eventually evolved into all of us showing off our “hardware projects”. We showed off our burgeoning crocheting skills (so far we’ve only been able to make eyepatches), and ended the day in a much better mood than we began and actually rather looking forward to the next.

Day 2

Day 2 of the conference got off to a much better start. Hynek Schlawack’s keynote on Python abstractions was eye opening, and I fear I will never again try to roll my own anything when using python without first checking to see if there’s a library that’d do it better. Kenji Kawanobe’s talk on developing a Line Bot for figuring out where it is safe to park your bike was very interesting, and I’m very grateful for Kenji and his colleague putting up with my bad Japanese, which I broke out during the post-talk Zoom “gallery”.

Speaking of the post-talk Zoom Galleries, they quickly proved to be my favourite part of the conference. I’d more or less made my peace with the fact that it quite simply is not possible for us to pay complete and undivided attention to a talk for 45 minutes, especially when things like twitter, discord, and slack are a simple alt-tab away. But I still learned plenty and had plenty of questions and comments for the presenter afterwards. By now, I was pretty much in front the whole time. I felt a little bad that I had unceremoniously taken over what was supposed to be Michelle’s thing, but she reassured me: “This is your strength, and you should practice it just as I should practice mine. We are a team. There’s no stealing the spotlight, there’s just being the best person for the task at hand.” She’s a good manager like that.

Other highlights of the day include geeking out with Chris Riley about chatbots and genetic algorithms after his talk Time to get Real with AI, Hayley Denbraver’s talk on Security in Python with her adorable English Detective Pythons, chatting on Slack, and even suggesting some features for LoudSwarm (the platform that Organiser SixFeetUp developed for the conference).

Drawings of three pythons, Hercules Pyrot, Ssssherlock Holmes and Hiss MArple, all drawn up to look like their namesakes
Mossst Famousss Ssssnek Detectivesss in the World (art by Noelle Cook)

Then the Second Keynote, Lorena Mesa’s talk on Ethics and Technology. This talk was very, very important to us. We often feel a little conflicted about our desire to enter the Tech Industry. There are a number of valid criticisms that can be made about the ethics of the Industry at large, and a tendency to avoid thinking through the implications of their work that many engineers show. This talk addressed some of the big issues, particularly the way algorithms can reflect the engineers’ biases with regards to marginalised people, and how malicious agents (e.g. the police) use the work of software engineers to cause harm to, in particular, disadvantaged people.

Again, this was our first tech conference. I’d like to believe every tech conference features discussion about ethics in technology and the plight of marginalised peoples and underrepresented groups. I suspect, however, that this is not the case. It was very very encouraging to see such topics elevated at this conference, and it will be the bar I expect every tech conference I attend in the future to surpass, including future Python Web Conferences; there is always more work to be done. Like I said in the slack during the talk, paraphrasing Jewish thought: “We don’t have to finish the work of perfecting the world, but neither are we allowed to abandon it.” We need to build an industry that centers the needs of marginalised people and that is much more mindful of the tools it builds and the nefarious purposes they could be put to.

After this, there was more socialising. We got to listen to horror/elation stories about the previous PWC, and it made me wish I had attended it, too. We played some card games, and I actually wound up giving someone a tarot reading over Zoom, which I was overjoyed to do because integrating technology and magic is something I am passionate about. By the end of the second day, we felt like a bird flying under a familiar sky, wholly in our element.

Day 3

Day 3 began, and we were actually really excited to get to it. The first Keynote, a talk on using Python in the browser by Russell Keith-Magee, was big on the “mindblown” factor. I immediately went and told my friends about asm.js, a strict subset of JavaScript that serves as a bytecode-like optimized compile target, and one of them described it as “blursed” (blessed and cursed). My friends and I have a lot of feelings about JavaScript and the modern web, mostly complaints about modern websites being sluggish and bloated. This talk walked us through the practicality of writing Python to run on the browser and was very interesting. It’s an exploration of ways things could be better.

For several years, I lived in a country with terrible, outdated internet access (Germany). I don’t think Russell was saying, go and write Python to run on the browser and don’t worry about the extra 100kb, but rather exemplifying all it takes to get things to run on a browser. The way asm.js works, and the fact that you can compile C code to it, remain to me the biggest takeaways. I’m sorry Python, but the possibility of running Python in a browser is a distant second to running Quake in a browser using Assembly-Like JavaScript.

Moshe Zadka’s talk about developing for the web “incrementally” with Jupyter immediately piqued my interest. I have a background in science, and just recently I was working with Jupyter notebooks to convert Matlab code to Python. I think it’s an amazing tool, and I was very intrigued to see someone really push the envelope of what can be done on the platform. This talk was all of that and more. Moshe has a knack for dropping amazing gems such as “every lisp program ends with ` )))))))` and every python program starts with ` import import import `” and “All backends are slow if your users are Impatient Enough”. Talking Jupyter with him in the gallery afterwards was great as well.

And then the Internet went out at my house! I checked on my ISP with my phone only to find that the outage would take hours to clear. I had a small moment of panic because, having set aside my entire day for this, I could not easily now just say “Ok well if I can’t I can’t guess I’m just gonna watch tv or something”. Our brain simply does not work that way. Once again, Michelle came in in the clutch; we talked about it and decided I should just relax, wait for the Internet to come back, and clean around the house a little bit in the meantime, which allowed me to feel like I was at least still doing something productive.

The internet came back around Lunchtime and I was able to rejoin for the rest of the talks. Gareth Greenway’s talk on Kubernetes with SaltStack was interesting as someone who’s worked with Kubernetes and Terraform before. I had not heard of K3s, a minified version of Kubernetes, and I found it very neat. Also, I thought it was neat that Kubernetes used to be named after Seven of Nine from Star Trek Voyager, and now K3s exists, which almost reads like Kes, another character from Voyager. Okay, maybe I’m the only one who thinks that’s neat, or maybe people just want to forget the first 3 seasons of Voyager. That’s understandable.

The final Keynote by Steve Flanders, on metrics for web applications, neatly tied everything together. Metrics are kind of overwhelming to me. I feel like someone can read everything there is to read about Metrics and Monitoring, know Prometheus, Grafana, and Splunk inside and out, and still not have a smidgen of the understanding that a person who’s lived through a surprise service interruption has: the experience that allows the numbers on screen to become more than just numbers, to really feel the lifesigns of an application by instinct. Dashboards are still rather mystifying to me, but I hope this will not always be the case and I’m always happy to learn more.

All in all, I think this was an amazing experience. The technical knowledge we gained, the connections we made, and the renewed understanding of who we are as a system were all valuable takeaways. I used to consider myself the “main fronter” of our system until just a month ago. It’s been a process of rediscovery to see myself as just another member of the system, one with strengths and weaknesses of my own. It’s liberating. I am so proud of Michelle for how effectively she’s managed to get us all to work together, and how amazing she is at making me feel like I really am an asset to the system. It’s a little weird to use that language–we’re all different people who happen to share a body, but it’s also always good to know your own strengths and to get to experience using those strengths to help a team. I mean it when I say she would make a great manager.

I’d like to thank Calvin and Gabrielle and the rest of the Six Feet Up team, Chris Williams for always saying hi to me, Nick Jamell from PowderKeg for putting me in the drawing and getting me that ticket, all the people I talked to and who are my new twitter mutuals (who are now following my main account instead of my sanitized “professional” account–I hope that doesn’t backfire!), and, of course, all the presenters. Thank you for making my first and definitely not last tech conference a rousing success.

I hope our Job Search bears fruit soon and that we end up in a company that believes in building up their employees, who will send us to many more conferences. As it stands, I never would have been able to attend this one if I hadn’t gotten a free ticket. $200 might not seem like much, but when you’re a family of three living on one income, it sure can be. I really believe that, if given the chance, it won’t even be 2 years before we’re the ones behind the podium (or behind the webcam) ourselves giving our own talk. PWC2022, hold on to your hats, because here we come, and together we will not be stopped!!

1. A plural system is a term for a group of people who all share one physical body. For a more detailed explanation check out this resource: https://morethanone.info/