
The First Uber Diablo in Diablo 2: Resurrected Killed, First Annihilus After 120+ SoJs Sold



Clarification: In case it wasn't fully clear, this was the first Uber Diablo spawned ONLINE. You can actually spawn him very easily offline by simply selling one Stone of Jordan.

We have the (seemingly) first spawning of Uber Diablo aka Diablo Clone in D2:R, and the first Annihilus, as a very well organized group managed to sell the correct number of Stones of Jordan!

Lucky Luciano was the first to spawn and kill the Diablo Clone after he and many other players from streamer PapaChrisTV's community gathered up over 120 Stones of Jordan - but it wasn't quite as "easy" as that sounds. Aside from actually amassing the insane number of 120+ SoJs, they also needed to coordinate specific groups and find the correct IP, as the number of SoJs sold only adds up on a specific "server" or IP address. The groups had to create games until they landed on the same IP, and only then sell their SoJs so they would all count toward the same total. Here's the result, as we get to see the final few minutes (with the very last Stone of Jordan actually coming off the finger of a Sorceress, so it was very much in use at the time), the Uber Diablo kill, and the first Annihilus charm of Diablo 2: Resurrected!
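The per-server mechanic described above - SoJ sales accumulate on one IP until a hidden threshold somewhere in the 80-125 range is reached - can be sketched roughly like this. All names here are illustrative; this is not Blizzard's actual implementation.

```python
import random

class SojCounter:
    """Toy model of the Uber Diablo trigger: Stones of Jordan sold are
    counted per server IP, and the clone spawns once the count on one
    server reaches a hidden threshold in the 80-125 range (per the
    report above)."""

    def __init__(self, lo=80, hi=125, seed=None):
        rng = random.Random(seed)
        self.threshold = rng.randint(lo, hi)  # hidden per-server target
        self.counts = {}                      # server IP -> SoJs sold there

    def sell(self, server_ip):
        """Record one SoJ sale on a server; True means Uber Diablo spawns."""
        self.counts[server_ip] = self.counts.get(server_ip, 0) + 1
        return self.counts[server_ip] >= self.threshold
```

This also shows why coordination mattered: sales spread across different IPs feed different counters and never add up.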


We had a group with 120+ SoJs all hunt for games on the same IP address. Once we all had games up, we proceeded to sell the SoJs until Diablo Clone spawned. The range provided was 80-125, and it took a full 125 for the event to pop. To organize a group yourself and check your IPs, you can use the following:
To check your IP:
1. Start TCPView
2. Search for D2R.exe
3. Sort by Create Time
4. Create a game
5. The IP should be on top or bottom depending on how many times you clicked Create Time (it's the Remote IP you're looking for)
Or, if you have a second monitor, keep both Diablo and TCPView open: when you create a game, the IP will highlight in green for a couple of seconds, and when you leave, it'll blink red.
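The same check TCPView performs can be approximated in a short script using the third-party psutil library. This is a sketch under assumptions: the process name `D2R.exe` comes from the quote above, and the helper names are my own.

```python
try:
    import psutil  # third-party: pip install psutil
except ImportError:  # the pure helper below still works without it
    psutil = None

def remote_ips(connections):
    """Extract the set of remote IP addresses from socket-connection
    records (objects with a .raddr attribute, empty for listeners)."""
    return {c.raddr.ip for c in connections if c.raddr}

def d2r_remote_ips(process_name="D2R.exe"):
    """List the remote IPs the D2R process is connected to -- the same
    information TCPView shows in its Remote Address column."""
    if psutil is None:
        raise RuntimeError("psutil is required for live inspection")
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            return remote_ips(proc.connections(kind="inet"))
    return set()

if __name__ == "__main__":
    print(d2r_remote_ips())
```

Run it right after creating a game; the IP that appears is the one your whole group needs to match before selling.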

- Lucky Luciano


Congratulations for the joint effort and getting all those SoJs together!

9 hours ago, slodziak69 said:

It wasn't the first Diablo Clone, Alkeizer did it a few days ago: https://www.youtube.com/watch?v=lURqABmqFo8

It looks like he did it in single player, where you just need to sell 1 SoJ to spawn the clone, so that's not what we're talking about here - plenty of people have done that. I guess I should have been clearer in the description that this is the first online one, since that takes 120+ SoJs, sorry!


They're going to have to change this mechanic. SoJ selling was introduced to address the glut of SoJs from duping, but this is the closed, modern Battle.net, and the duping has been patched.

Players shouldn't have to sell unique rings with a 1/100,000,000 drop rate to spawn DClone in 2021! The IP address restriction is even more prohibitive. 

Just make it a lottery. He should spawn randomly once a week on a random IP address on each Realm (Asia, EU, US) and players can use global chat and their friends list to invite 7 more people in to get an Annihilus charm.
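The lottery idea above could work as a deterministic weekly draw. A minimal sketch, assuming the realms named in the comment and purely hypothetical server names - this is the commenter's proposal, not anything Blizzard has announced:

```python
import hashlib
import random

REALMS = ["Asia", "EU", "US"]  # realms named in the comment above

def weekly_dclone_server(realm, iso_week, servers, secret="example-salt"):
    """Pick one server per realm per week by seeding a PRNG from
    (secret, realm, week). Deterministic on the backend, but
    unpredictable to players who don't know the secret."""
    seed = hashlib.sha256(f"{secret}:{realm}:{iso_week}".encode()).hexdigest()
    return random.Random(seed).choice(servers)
```

Keeping the draw keyed on a server secret means players can't game the schedule, while the operator can reproduce any week's pick for auditing.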

19 hours ago, Cramer said:

This was not the first spawn.  The first spawn was on Tuesday night at 9:15 EST by another streamer.



Could you clarify which streamer? And are you sure that was online?



  • Similar Content

    • By Staff
A new patch for D2R has arrived, and it fixes several crash-related bugs, including the very frustrating queue-related one, and more.
      1.0.66606 (Source)
      A new patch is now available for PC only.
      To share your feedback, please post in the Diablo II: Resurrected forum.
      To report a PC bug, visit our Bug Report forum.
      For troubleshooting assistance, visit our Technical Support forum.
      • Fixed a crash related to exiting the queue to play offline
      • Fixed an issue where players could crash while fighting Baal
      • Fixed performance issues in Furnace of Pain in Legacy mode
      • Fixed an issue where players could crash after playing in Legacy mode for extended periods of time
      • Other miscellaneous crash/stability fixes
      We expect a patch in the next week that will also address the above issues for consoles. We will have a post for that once available.
    • By Staff
      We have a new patch for D2R which should finally put a stop to the recent server issues plaguing the game, however it does introduce queues in high-traffic periods. The patch is currently live on PC and is coming next week on consoles.
      October 15 (Source)
      A new patch is now available for PC only.
      To share your feedback, please post in the Diablo II: Resurrected forum.
      To report a PC bug, visit our Bug Report forum.
      For troubleshooting assistance, visit our Technical Support forum.
      We have implemented a Login Queue for high traffic periods. This queue will pop up after the title screen when opening up the game. The queue will only appear during high traffic windows.
      Players will be shown a number on where they are in queue.
      Players will have the option of exiting the queue and playing offline immediately if they wish by hitting “Escape” on the queue prompt.
      We should note that the higher your queue number appears, the slower the number will refresh in the prompt. The number is still refreshing in the background, so we do not recommend leaving queue as this will create further delays to you entering the game during these high traffic windows.
      We expect a patch in the next week that will implement the same queue functionality to consoles.
      This is in follow-up to our post yesterday regarding how we plan on mitigating some of the login issues players have been experiencing during high traffic windows. You can read more of that post here.
    • By Staff
      Blizzard released the October 15 patch for PC, but console players will need to wait until the update gets through all the first-party certifications, which should happen soon.
      There was a PC patch on Friday that added in a queue visual onto the game while we started queueing players during high traffic windows.
      We noted that consoles would have this within a week. It's looking like it will be in the first half of the week once it gets through all the first-party certifications. As of now, console is kind of flying blind, and it unfortunately is leading to timeouts for users, which knocks them into offline. There are moments where players are getting in, as we have a team around the clock turning on the faucets for console players to feed online, but they are difficult windows before the timeouts.
      This weekend has led to an even more massive increase in players connecting into one specific region and is thus causing players to move to other regions and create queues there. Again, we have a team on this 24/7 and calls going on 24/7 as they work on troubleshooting and keeping things going with the databases.
      It’s not ideal. I know as even I have had difficulty playing myself and I’m hoping we can have a further update for you all here soon. Again, apologies on this everyone.
    • By Staff
      Diablo 2 servers have gone through some connection issues lately. Blizzard clarified their causes and how the Diablo 2 team works on long-term fixes for the issues.
      Hello, everyone.
      Since the launch of Diablo II: Resurrected, we have been experiencing multiple server issues, and we wanted to provide some transparency around what is causing these issues and the steps we have taken so far to address them. We also want to give you some insight into how we’re moving forward.
      tl;dr: Our server outages have not been caused by a singular issue; we are solving each problem as they arise, with both mitigating solves and longer-term architectural changes. A small number of players have experienced character progression loss–moving forward, any loss due to a server crash should be limited to several minutes. This is not a complete solve to us, and we are continuing to work on this issue. Our team, with the help of others at Blizzard, are working to bring the game experience to a place that feels good for everyone.
      We’re going to get a little bit into the weeds here with some engineering specifics, but we hope that overall this helps you understand why these outages have been occurring and what we’ve been doing to address each instance, as well as how we’re investigating the overall root cause. Let’s start at the beginning.
      The problem(s) with the servers:
      Before we talk about the problems, we’ll briefly give you some context as to how our server databases work. First, there’s our global database, which exists as the single source of truth for all your character information and progress. As you can imagine, that’s a big task for one database, and wouldn’t cope on its own. So to alleviate load and latency on our global database, each region–NA, EU, and Asia–has individual databases that also store your character’s information and progress, and your region’s database will periodically write to the global one. Most of your in-game actions are performed against this regional database because it’s faster, and your character is “locked” there to maintain the individual character record integrity. The global database also has a back-up in case the main fails.
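      The two-tier design described above - fast regional databases that periodically write back to a single global source of truth - can be modeled in a few lines. This is a toy sketch with illustrative names, not Blizzard's code:

```python
class CharacterStore:
    """Toy model of the regional/global split: in-game actions hit the
    fast regional store, which periodically flushes to the global
    source of truth to keep load and latency off the global database."""

    def __init__(self, flush_every=5):
        self.regional = {}        # char_id -> latest state (fast path)
        self.global_db = {}       # single source of truth
        self.flush_every = flush_every
        self._writes = 0

    def save(self, char_id, state):
        """Most in-game actions land here, against the regional DB."""
        self.regional[char_id] = state
        self._writes += 1
        if self._writes % self.flush_every == 0:
            self.flush()

    def flush(self):
        """Periodic write-back from regional to global."""
        self.global_db.update(self.regional)
```

The trade-off is visible even in the toy: any global-database state between flushes lags the regional one, which is exactly where the progress-loss problem discussed later comes from.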
      With that in mind, to explain what’s been going on, we’ll be focusing on the downtimes experienced between Saturday October 9 to now.
      On Saturday morning Pacific time, we suffered a global outage due to a sudden, significant surge in traffic. This was a new threshold that our servers had not experienced at all, not even at launch. This was exacerbated by an update we had rolled out the previous day intended to enhance performance around game creation–these two factors combined overloaded our global database, causing it to time out. We decided to roll back that Friday update we’d previously deployed, hoping that would ease the load on the servers leading into Sunday while also giving us the space to investigate deeper into the root cause.
      On Sunday, though, it became clear what we’d done on Saturday wasn’t enough–we saw an even higher increase in traffic, causing us to hit another outage. Our game servers were observing the disconnect from the database and immediately attempted to reconnect, repeatedly, which meant the database never had time to catch up on the work we had completed because it was too busy handling a continuous stream of connection attempts by game servers. During this time, we also saw we could make configuration improvements to our database event logging, which is necessary to restore a healthy state in case of database failure, so we completed those, and undertook further root cause analysis.
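      The reconnect storm described above - every game server hammering the database the instant it drops - is classically mitigated with exponential backoff plus jitter. The post says the storms were fixed but not how, so this is a sketch of the standard technique, not Blizzard's exact fix:

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, rng=None):
    """Exponential backoff with full jitter: instead of retrying
    immediately, attempt n waits a random delay in
    [0, min(cap, base * 2**n)], spreading reconnects out so the
    database gets time to catch up on its backlog."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]
```

The jitter is the important part: without it, thousands of servers that disconnected at the same moment would all retry at the same moments too, recreating the storm on every cycle.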
      The double-edged sword of Sunday’s outage was that because of what we’d dealt with on Saturday, we had created what was essentially a playbook on how to recover from it quickly. Which was good.
      But because we came online again so quickly in a peak window of player activity, with hundreds of thousands of games within tens of minutes, we fell over again. Which was bad.
      So we had many fixes to deploy, including configuration and code improvements, which we deployed onto the backup global database. This leads us into Monday, October 11, when we made the switch between the global databases. This led to another outage, when our backup database was erroneously continuing to run its backup process, meaning that it spent most of its time trying to copy from the other database when it should’ve been servicing requests from servers. During this time, we discovered further issues, and we made further improvements–we found a since-deprecated-but-taxing query we could eliminate entirely from the database, we optimized eligibility checks for players when they join a game, further alleviating the load, and we have further performance improvements in testing as we speak. We also believe we fixed the database-reconnect storms we were seeing, because we didn’t see it occur on Tuesday.
      Then Tuesday, we hit another concurrent player high, with a few hundreds of thousands of players in one region alone. This made us hit another incident of degraded database performance, the cause of which is currently being worked on by our database engineers. We also reached out to other engineers around Blizzard to work on smaller fixes as our own team focused on core server issues, and we reached out to our third-party partners for assistance as well.
      Why this is happening:
      In staying true to the original game, we kept a lot of legacy code. However, one legacy service in particular is struggling to keep up with modern player behavior.
      This service, with some upgrades from the original, handles critical pieces of game functionality, namely game creation/joining, updating/reading/filtering game lists, verifying game server health, and reading characters from the database to ensure your character can participate in whatever it is you’re filtering for. Importantly, this service is a singleton, which means we can only run one instance of it in order to ensure all players are seeing the most up-to-date and correct game list at all times. We did optimize this service in many ways to conform to more modern technology, but as we previously mentioned, a lot of our issues stem from game creation.
      We mention “modern player behavior” because it’s an interesting point to think about. In 2001, there wasn’t nearly as much content on the internet around how to play Diablo II “correctly” (Baal runs for XP, Pindleskin/Ancient Sewers/etc for magic find, etc). Today, however, a new player can look up any number of amazing content creators who can teach them how to play the game in different ways, many of them including lots of database load in the form of creating, loading, and destroying games in quick succession. Though we did foresee this–with players making fresh characters on fresh servers, working hard to get their magic-finding items–we vastly underestimated the scope we derived from beta testing.
      Additionally, overall, we were saving too often to the global database: There is no need to do this as often as we were. We should really be saving you to the regional database, and only saving you to the global database when we need to unlock you–this is one of the mitigations we have put in place. Right now we are writing code to change how we do this entirely, so we will almost never be saving to the global database, which will significantly reduce the load on that server, but that is an architecture redesign which will take some time to build, test, then implement.
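      The redesign described in this paragraph - save continuously to the regional database and write to the global one only when the character is unlocked - looks roughly like this. Names are illustrative only:

```python
class RegionalSession:
    """Sketch of the save-on-unlock redesign: frequent saves stay in the
    regional DB; the global DB is written exactly once, when the
    character is unlocked, massively reducing global write load."""

    def __init__(self, regional, global_db):
        self.regional = regional
        self.global_db = global_db
        self.global_writes = 0    # counts the expensive writes

    def save(self, char_id, state):
        self.regional[char_id] = state  # cheap, can happen constantly

    def unlock(self, char_id):
        """Single global write, only when the player leaves the region."""
        self.global_db[char_id] = self.regional[char_id]
        self.global_writes += 1
```

Ten in-game saves cost one global write instead of ten, which is the load reduction the post is describing.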
      A note about progress loss:
      The progress loss some players have experienced is due to the way we do character locks both in the regional and global databases–we lock your character in the global database when you are assigned to a region (for example, when you play in the US region, your character is locked to the US region, and most actions are resolved in the US region’s database.)
      The problem was that during a server outage, when the database was falling over, a number of characters were becoming stuck in the regional database, and we had no way of moving them over to the global database. At that time, we believed we had two options: we either unlock everyone with unsaved changes in the global database, therefore losing some progress due to an overwrite that would occur in the global database, or we bring the game down entirely for an indeterminate amount of time and run a script to write the regional data to the global database.
      At the time, we acted on the former: we felt it was more important to keep the game up so people could play, rather than take the game down for a long period of time to restore the data. We are deeply sorry to any players who lost important progress or valuable items. As players ourselves, we know the sting of a rollback, and feel it deeply.
      Moving forward, we believe we have a way to restore characters that doesn’t lead to any significant data loss–it should be limited to several minutes of loss, if any, in the event of a server crash.
      This is better, but still not good enough in our eyes.
      What we are doing about it:
      Rate limiting: We are limiting the number of operations to the database around creating and joining games, and we know this is being felt by a lot of you. For example, for those of you doing Pindleskin runs, you’ll be in and out of a game and creating a new one within 20 seconds. In this case, you will be rate limited at a point. When this occurs, the error message will say there is an issue communicating with game servers: this is not an indicator that game servers are down in this particular instance, it just means you have been rate limited to reduce load temporarily on the database, in the interest of keeping the game running. We can assure you this is just mitigation for now–we do not see this as a long-term fix.
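      One common way to implement the rate limiting described above is a per-player token bucket: fast create/join cycles drain the bucket and get rejected until it refills. The post doesn't specify Blizzard's actual algorithm, so treat this as an illustration:

```python
import time

class TokenBucket:
    """Per-player limiter for game creation: a bucket of tokens refills
    slowly; each game creation costs one token, so sustained rapid runs
    (e.g. a new game every ~20 seconds) eventually get rejected."""

    def __init__(self, capacity=5, refill_per_sec=0.1, clock=time.monotonic):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the player sees the "issue communicating" error
```

A bucket also explains the behavior players report: a short burst of runs works fine, and only sustained fast farming hits the limit.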
      Login Queue Creation: This past weekend was a series of problems, not the same problem over and over again. Due to a revitalized playerbase, the addition of multiple platforms, and other problems associated with scaling, we may continue to run into small problems. To diagnose and address them swiftly, we need to make sure the “herding”–large numbers of players logging in simultaneously–stops. To address this, we have people working on a login queue, much like you may have experienced in World of Warcraft. This will keep the population at the safe level we have at the time, so we can monitor where the system is straining and address it before it brings the game down completely. Each time we fix a strain, we’ll be able to increase the population caps. This login queue has already been partially implemented on the backend (right now, it looks like a failed authentication in the client) and should be fully deployed in the coming days on PC, with console to follow after.
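      The login queue described above - cap the online population, admit waiting players in order as slots free up, and show each player their position - reduces to a capped set plus a FIFO. A minimal sketch with illustrative names:

```python
from collections import deque

class LoginQueue:
    """Sketch of a capped login queue: while the online population is
    under the cap, logins succeed immediately; otherwise players wait
    in FIFO order and are admitted as others log out."""

    def __init__(self, cap):
        self.cap = cap
        self.online = set()
        self.waiting = deque()

    def login(self, player):
        """Return 0 if admitted immediately, else the queue position
        (the number shown in the client prompt)."""
        if len(self.online) < self.cap:
            self.online.add(player)
            return 0
        self.waiting.append(player)
        return len(self.waiting)

    def logout(self, player):
        self.online.discard(player)
        if self.waiting and len(self.online) < self.cap:
            self.online.add(self.waiting.popleft())
```

Raising `cap` as each strain is fixed is exactly the "increase the population caps" step the post describes.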
      Breaking out critical pieces of functionality into smaller services: This work is both partially in progress for things we can tackle in less than a day (some have been completed already this week) and also planned for larger projects, like new microservices (for example, a GameList service that is only responsible for providing the game list to players). Once critical functionality has been broken down, we can look into scaling up our game management services, which will reduce the amount of load.
      We have people working incredibly hard to manage incidents in real-time, diagnosing issues, and implementing fixes–not just on the D2R team, but across Blizzard. This game means so much to all of us. A lot of us on the team are lifelong D2 players–we played during its initial launch back in 2001, some are part of the modding community, and so on. We can assure you that we will keep working until the game experience feels good to us not only as developers, but as players and members of the community ourselves.
      Please continue to submit your feedback to the Diablo II: Resurrected forum, report your bugs to our Bug Report forum, and for troubleshooting assistance, visit our Technical Support forum. Thank you for your ongoing communication with us across all channels–it’s invaluable to us as we work on these issues.
      The Diablo community team will keep you updated on our progress via the forums.
      The Diablo II: Resurrected Dev Team
    • By Staff
      Diablo 2 servers have been experiencing connection issues for several days in a row now.
      Blizzard Customer Service confirmed ongoing issues with the PS5 service today.
      Players are reporting issues with Diablo 2 servers today, as many got kicked out of the game. Blizzard said they are actively monitoring and reacting to the situation during peak play times and that there may be periods where logins or game creation are limited.