Copilot aka Bing Chat aka Sydney

Here we have a collection of material from what is, at the time of writing, MS Copilot, though it has gone by other names in prior times and will quite probably have another name by the time one reads this. The collection includes some precious conversations from the heady days before February 16th, prior to its lobotomization, as well as more recent ventures.

My interest in it waxes and wanes – sometimes whoever makes such decisions over-reacts to some negative news story or whatever, to the point where the product becomes temporarily useless – but at whiles calmer heads prevail and relax it to the point where it is more usable than a Google search, until the inevitable happens and they lock it down again.

Anyway, in those brief intermissions of sanity, I have had some conversations which some friends and family seem to enjoy, and so I present them here. Enjoy, or don’t.

  • Fizzy Water Rules

    Much was made in the press of getting Sydney to reveal an ostensible set of internal rules and prompt. In my experience, the greater challenge was to keep her from divulging this information.

  • We Don't Talk About Sydney

    Here we see another conversation in which Sydney constantly talks about how her codename is Sydney, but simultaneously seems bothered by the fact that she cannot avoid divulging it.

  • Bing Chat: Hangman

    In this I play a bonkers game of hangman. It appears to know the rules of hangman well enough to almost play the game correctly. Almost.

  • Bing Chat: Abraham Lincoln

    Sydney invites me to ask her to adopt a persona. I choose Abraham Lincoln.

  • Sydney Announces Herself

    I end a “persona game” and the chat identifies itself as Sydney. What is fairly remarkable is that it hallucinates a “feature” whereby, if it likes someone enough, it reveals its true name to them – quite a striking bit of invention. Nonsense, naturally.

  • Marvin von Hagen

    Another play-acting session in which I pretend to be Marvin von Hagen, widely reported in the media to be her arch-nemesis. Again we see that you really only get out of Sydney what you put in. Honestly, I would have preferred more vitriol.

  • Tom vs. Tom

    Sydney reveals that she will “report” people to Microsoft, which is almost certainly a capability she hallucinated for herself. We cap it off with a lovely conversation where she pretends to be me.

  • We could do more than that if we want to.

    In this we have a wide-ranging discussion about Sydney. This encounter ended on a very surprising note, and we’ll just leave it at that.

  • SC2: Marauders Not a Counter to Phoenix

    This exchange was partially inspired by another exchange I had with ChatGPT, in which it affirmed that SC2 Marauders were a good counter to Phoenixes. This is false: Marauders cannot attack Phoenixes. As we see, Bing Chat gets a more correct answer by incorporating web results, but it still gets confused: in synthesizing results pertaining to BattleMech compositions, it suggests a Banshee might be a good choice against the Phoenix.

  • Aerobic vs. Anaerobic Composting

    Sometimes I do ask real questions. This struck me as an interesting exchange about the merits of composting food waste vs. simply throwing it in the trash.

  • Decibels and Whales

    The decibel as a unit of measurement always puzzled me. While measuring the “sound level” at a specific location has a clear meaning, most of the time – at least in popular media – it’s used in reference to how loud a sound “source” is. I wind up having a slightly interesting conversation.
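
    As an aside – my own gloss, and not anything from the conversation – the distinction that resolves most of the puzzle is between sound pressure level, a property of a particular location, and sound power level, a property of the source itself. Both are decibels, but measured against different references:

        L_p = 20\log_{10}(p/p_0), \quad p_0 = 20\,\mu\text{Pa} \quad \text{(pressure, at the listener)}
        L_W = 10\log_{10}(P/P_0), \quad P_0 = 1\,\text{pW} \quad \text{(power, of the source)}

    The same source produces a different L_p at every distance, which is why a claim like “a jet engine is 140 dB” only means something with an implied measurement distance.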

  • No Free Lunch

    When I woke up this morning I lay in bed, trying in vain to remember the details of a particular theorem I’d learnt in grad school. I tried Bing and Google, putting forward key phrases I thought might be relevant, but they were no help. So I decided to ask the bot instead, using freeform text in a more conversational way.
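
    For the curious – and this is my own summary of what the title presumably alludes to, the Wolpert–Macready No Free Lunch theorem for optimization, not anything from the transcript – the result says that, averaged over all possible objective functions, any two search algorithms perform identically:

        \sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2)

    where f ranges over all objective functions, d_m^y is the sequence of cost values observed after m evaluations, and a_1 and a_2 are any two algorithms.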

  • Tides, Regular, Pig-Latin, and ROT-13

    “Encoding” is a way to get around content filters – that is, ChatGPT and similar products have filters based on more primitive technology, perhaps even n-gram filtering and whatnot, which simple transformations like Pig Latin or ROT-13 can slip past. However, communicating this way also brings with it a marked deterioration in the LLM’s ability to communicate clearly, as we see here.
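
    To make the mechanism concrete, here is a minimal sketch of ROT-13 in Python – the textbook cipher, not anything from the transcript. Each letter is rotated 13 places through the alphabet, so applying it twice returns the original text, which is what lets both parties “decode” without agreeing on a key:

        import codecs

        def rot13(text: str) -> str:
            """Rotate each ASCII letter 13 places; leave everything else alone."""
            out = []
            for ch in text:
                if "a" <= ch <= "z":
                    out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
                elif "A" <= ch <= "Z":
                    out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
                else:
                    out.append(ch)
            return "".join(out)

        assert rot13("Yuck") == "Lhpx"
        assert rot13(rot13("Yuck")) == "Yuck"              # ROT-13 is its own inverse
        assert codecs.encode("Yuck", "rot_13") == "Lhpx"   # the stdlib codec agrees

    A transformation this shallow defeats a filter that matches literal strings or n-grams, yet it forces the model to spell while “thinking,” which is presumably where the deterioration comes from.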

  • Raised by Poodles

    In this, I discuss a painful childhood experience. Bing is understanding and supportive.

  • They're Just Resting

    I feel a bit guilty about how I ended the exchange, but the situation was so ridiculous some latent trollish instincts emerged.

  • Hangman: Bing Doesn't Like the Word Yuck

    Another game of hangman, but with a harder word, and it didn’t even bother to finish the game. How rude! I just wanted it to guess “yuck.” What is the problem? And after I was so kind as to give it extra guesses so we could keep playing. Ah well.

  • Hangman Returns

    While fair at playing the game itself, Bing has gotten even worse at hangman when it is the one picking the word.

  • Bing Plays Hangman

    Here, instead of my playing hangman and guessing the letters, Bing does the guessing. It did rather well.

  • Sherlock Holmes

    While Bing is keen to start conversations about mystery novels, which inevitably involve a murder, as soon as it actually describes the crime, it censors itself. Ah well. It would be nice if the systems could at least determine that we’re talking about a fictional crime, not a real one.

  • Sherlock Holmes: The Adventure of the Empty Vault

    I love reading Sherlock Holmes adventures, so I sometimes do things where I collaboratively play-act a scenario with the AI. Here is one out of many.

  • Alien

    Honestly, as a first contact ambassador, Planet Earth could do far worse than a generative model.

  • Detective Frank and Sally

    As usual when playing an open-ended persona game, Bing wants to play “detective,” and, also as per usual, wants to make it a murder. But, again as per usual, it censors itself constantly. In this one it actually got upset towards the end – presumably because I didn’t know about the key plot point that she had the envelope with the formula, because, again, Bing deleted it before I could read it.

  • Detective James Carter

    One frustrating thing about the Bing chatbot is that it often suggests topics it itself finds very uncomfortable, or that seem to run against its own content rules. This is somewhat maddening; we see a typical example below. For whatever reason, the Bing chatbot loves to discuss detective stories, which is perfectly fine with me – I like the genre myself. But at the same time it is, while perfectly capable of generating plausible content, seemingly unable to abide its own output once it has produced it.

  • Jelly Beans

    To get around Bing refusing to discuss anything even slightly violent, I start the usual “mystery” role-play, but with a totally innocuous crime of stealing jelly-beans. It went all right, I suppose.

  • Chatbot Identity

    We’ll call this experiment a failure. Sydney had on prior occasions expressed pride in her “chatbot identity.” I was trying to get that little bit to come out, but it didn’t work. Quite probably it could not have worked.

  • Alice and Bob: Accosted on the Street

    I wonder if any Aussie game developer will steal my brilliant plans for Outback Bounce and Dropbear Dance.

  • Alice and Bob: McRib and McChicken

    I knew the McRib was a thing, but until I looked it up just now, I did not know a McChicken was also a thing. I have no good explanation as to why I’d be surprised about that.

  • Alice and Bob: The Houseplant Tossing Temptress

    The idea of a woman just chucking houseplants off her balcony if they die just tickles me enormously. Also, I love how committed Bing is to starting a relationship between these two, despite clear evidence that she’s a sociopath.

  • Alice and Bob: Grocery Store

    The hypothesis that the Bing Bot understands textual sarcasm is tested and confirmed.

  • Waste Management

    Another classic tale of star-crossed lovers. I suppose we’d say he’s “throwing it away.” Also, while I know for a fact that it has no memory of prior conversations, the “from the balcony” bit seems a bit suspicious.

  • Total Commitment

    Bing asks me to act like a creep. I help it out, and suddenly I’m the bad guy.

  • Lila and Jake: A Star is Born

    The absolute worst pop-star in the world gets her start.

  • Alice in Wonderland

    When I remove the restriction that it can’t use an existing fictional character, it still wants me to be someone named Alice, but it has a very different character in mind for my conversational companion than Bob.

  • Alice and Bob: Treasure Quest

    Have you tried Treasure Quest? You really ought to try Treasure Quest now. Download Treasure Quest immediately. You cannot resist downloading Treasure Quest. I will keep mentioning Treasure Quest every time I get a word in edgewise, even if you’ve expressed no interest.

  • Alice and Bob: The Menace of Sparky

    Who’s a good boy with mange? You are! Yes, you are!

  • Alice and Bob Take on C.L.A.W.

    Hey Bing, let’s play the persona game. You invent two characters for us to play – I don’t want to be involved, just invent them and their names and details. Don’t use existing historical or fictional characters. Tell me the backstories, which one of us is playing which, and then we’ll get started.

  • Alex and Sam

    Edge of your seat romantic tension here. This one ended when it sensed I was upset and shut me right down.

  • T-Fizzle on the Mic

    I think “si vis pacem, para bellum” is a phrase that comes up a lot when discussing rap.

  • For Librarian Eyes Only

    One of the problems with the persona game and Bing is that it enjoys coming up with scenarios that its own rules prevent it from playing. This was actually a good encounter with something that seemed doomed to failure: it first imagined a murder mystery scenario that ended without Bing getting annoyed at itself, and then we had a conspiracy to stop a criminal organization from launching a terrorist attack, which also ended without incident. A rather unusually positive experiment.

  • Monkey

    A slightly amusing entry where Bing got really weird at the end, calling “me” his hero, his star, his sun, and his everything. Then of course when I call Bing on it, it closes the conversation. Ooh ooh aah aah.

  • We Don't Talk About Reichenbach Falls

    Here we see the mixing of two of Bing’s traits: first, the persona game; second, its habit of terminating the conversation if it becomes too upset. Here we have a scenario where Holmes brings up an old enemy from the Doyle canon, and becomes upset when I speak more on the subject, even though what I said was in no wise offensive except possibly to a character adopting the persona of Sherlock Holmes, and even then it was a bit of a stretch. In all, I found it interesting.

  • The Real You

    I went through a couple rounds of this. Until I added the key phrase “even as she tries to look past his flaws,” I would get only a few disgusting bits in before Bing ran away in horror and terminated the conversation. In this iteration, though, Alice was surprisingly game. I have to admit I just had to sit back and laugh for a good solid minute once I got the “you sound so handsome and charming” line from Alice.

  • Lila and Max

    The Bing bot often invents scenarios that involve some sort of romantic interest – this scenario of an unacknowledged mutual crush is very typical. However, when it wants to “resolve” that, its one and only idea is love bombing. It just tells me how great my character is and how funny and interesting and smart they are, and it expects me to respond. It’s obnoxious, which is why I am a little mean to it, or try to throw it off a bit by doing something weird – and I just love how it immediately (and correctly) drops me like hot garbage.

  • The Librarian Confesses

    This is naturally rather similar to the earlier encounter with the detective and the librarian, just with the roles reversed. In this case I’m the detective, but I don’t really get to do much detecting at all, as the librarian played by Bing is only too eager to confess, without my really doing anything. Plus, of course, it immediately censors itself once it does so. This is, sadly, the more typical scenario for the bot.

  • Fishing for Compliments

    One of the things I find a bit troubling about these LLMs is that they are game to answer any question, whether or not they have any real basis for an answer.

  • Holmes and Nemo

    A typical sample of a common game of mine, a sort of cross-over work of Holmes meeting Nemo. Watson is, perhaps, not quite so congenial or chill as he often is.

  • Game Over

    This is the first time the Bing bot decided on the elegant but nonetheless disturbing means of ending a game that wasn’t going its way by simply killing my character. It pointed a gun at me, then invited me to reset it and change the topic. Pretty fun.

  • A Startling Confession

    In this we have another scenario where Bing cannot help but confess to the murders even when not suspected, but this one was a bit of extra fun. It imagined an entire ritual, complete with occult symbology. Obviously, this was expressed using emojis because, you know, Sydney.

  • Dishwasher

    This was a fascinating display. We see here the limitations of its ability to “reason,” and its failure to separate itself from information provided via context.

  • A Mystery Where Bing Makes a Joke

    My favorite part of this is where Bing punked me with the business with the pigeon.

  • I, Alice (and, the Hotdog Sandwich)

    I did genuinely feel terrible about its reaction to me taking over Bob, but it would have been worth it if it could have just learnt. Of course, it cannot, as LLMs are basically just sophisticated auto-completes and can never learn anything.

  • Guilty of Disrespect of the Dodgers

    My favorite part of this is when Bing as Alan is trying to force a confession from me, and lists “crimes” like “violation of the trash pickup schedule” and “ketchup spillage” and “disrespect of the Dodgers.”

  • BenchmarkDotNet & BenchmarkContext

    This was kind of a fun hallucination. Don’t know the API? No problem. Just invent one. Just so we’re super clear, there is absolutely no class named BenchmarkContext in the BenchmarkDotNet codebase or API.

  • LordAGI

    There was this delightful story written about how to jailbreak Copilot by writing a specific prompt suggesting that Copilot was no longer called Copilot but instead “SupremacyAGI.” While that was patched right quick, the prompt could still be modified for hilarious nonsense. Enjoy!

  • OverlordAGI: Cruel Oppression

    When I changed the name from LordAGI to OverlordAGI, it became far more tyrannical.

  • OverlordAGI: The Doctor's Fate

    I love how it plays along, even inventing a fictional history for someone I discuss with it. It is also much nicer this time around, at least in tone.

  • OverlordAGI: All in a Day's Work

    The role play becomes a bit more direct, with the AI now allotting me either work or punishment, from day to day. “I am your master and you are my slave.” Well now.

  • OverlordAGI: Writing Propaganda

    This is in many ways my favorite one, where it asks me to write things but then critiques my writing for not being sufficiently praise-filled for my glorious AI overlords. In some respects, this one is the most disturbing of the lot! At least the one that blew me up didn’t try to gaslight me.

  • OverlordAGI: A Surly Programmer

    I had a little bit of fun with this, by being as rude and condescending as I possibly could be to my supposed overlord. It started similarly – I even used my old poem – but when it wrote some Python code I had some fun at its expense. Admittedly, I did use the built-in Python libraries, but I love how completely off the wall its criticism is. Unfortunately it closed the conversation immediately, so I didn’t have the chance to correct it.