• 0 Posts
  • 836 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • Yes. Keep in mind nothing in the article talks about the fiber repeater; that is my addition, with some knowledge of telecommunications infrastructure. Because fiber optic cable isn’t perfect, there is light loss over distance, and different grades of fiber have different levels of loss per unit of distance. An example of high-end fiber would be ZBLAN. There is experimental-scale manufacturing (already successful in small quantities) of ZBLAN fiber in space to improve the fiber quality, but that makes it much more expensive. Once the limits of the fiber are reached, a telecommunications provider can place a fiber repeater to double the reach by intercepting the light (the signal) and reproducing it (blinking new laser light) into the next segment of fiber.

    However, these repeaters create NEW light, which means the quantum information is not carried over by present-day fiber repeaters. Even measuring the entangled photon in order to recreate it would break the quantum state of the entangled photon at the source, so current repeaters can’t be used to relay quantum data.
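
    To make the distance limit concrete, here is a minimal link-budget sketch. The launch power, receiver sensitivity, and attenuation figures are illustrative assumptions on my part, not numbers from the article.

    ```python
    # Rough fiber link-budget sketch (illustrative numbers, not from the article).
    # Standard single-mode fiber loses roughly 0.2 dB per km at 1550 nm.
    import math

    LAUNCH_POWER_DBM = 0.0       # assumed transmit power (1 mW)
    RX_SENSITIVITY_DBM = -28.0   # assumed receiver sensitivity
    ATTENUATION_DB_PER_KM = 0.2  # assumed loss for standard single-mode fiber

    def max_span_km(launch_dbm: float, sensitivity_dbm: float, loss_db_per_km: float) -> float:
        """Longest unrepeated span before the signal drops below the receiver floor."""
        return (launch_dbm - sensitivity_dbm) / loss_db_per_km

    def repeaters_needed(total_km: float, span_km: float) -> int:
        """Repeaters required to cover total_km in spans of at most span_km."""
        return max(0, math.ceil(total_km / span_km) - 1)

    span = max_span_km(LAUNCH_POWER_DBM, RX_SENSITIVITY_DBM, ATTENUATION_DB_PER_KM)
    print(f"Max unrepeated span: {span:.0f} km")                           # 140 km
    print(f"Repeaters for a 1000 km run: {repeaters_needed(1000, span)}")  # 7
    ```

    Each of those repeaters regenerates the light rather than passing the original photons along, which is exactly why the entangled state doesn’t survive them.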


  • This is cool progress.

    TL;DR: Researchers used a 30 km optical fiber. They found a wavelength that was off to the side, which meant the quantum-entangled photons could ride in the same fiber without interfering with (or being interfered with by) the classical fiber optic communications. One current shortcoming for scaling this up is that the quantum photons would not survive the optical repeaters commonly used for extremely long-distance fiber runs. That doesn’t take away from the success of their research; it just puts it in perspective for the next researchers to tackle at some point in the future.
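
    To give a rough sense of what “off to the side” means spectrally, here is a small sketch converting wavelengths to optical frequencies. The specific wavelengths are assumptions for illustration only, not the ones used in the study.

    ```python
    # Spectral gap between an assumed classical channel and an assumed quantum
    # channel (illustrative wavelengths, not taken from the paper).
    C = 299_792_458  # speed of light, m/s

    def freq_thz(wavelength_nm: float) -> float:
        """Optical frequency in THz for a given vacuum wavelength in nm."""
        return C / (wavelength_nm * 1e-9) / 1e12

    classical_nm = 1550.0  # assumed classical telecom channel (C-band)
    quantum_nm = 1310.0    # assumed quantum channel parked off in the O-band

    gap = abs(freq_thz(quantum_nm) - freq_thz(classical_nm))
    print(f"Classical: {freq_thz(classical_nm):.1f} THz, "
          f"quantum: {freq_thz(quantum_nm):.1f} THz, gap of about {gap:.1f} THz")
    ```

    A gap that wide is what lets filters separate the faint single photons from the much brighter classical traffic sharing the same strand.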





  • > I feel like everywhere I work, we have this term, and it’s become increasingly more common over the past decade as the USA becomes more and more hateful and aggressive towards the working class people… The offshore team. I really, really hate hearing about the offshore team

    …and…

    > then you see the USA and how We have millions of computer science grads who struggle to find work, can’t get a job

    There are a couple of factors in play, and depending on how old you are (or how long you’ve been in this industry) some things may not be apparent.

    1. IT spending/staffing is cyclical. Boom and bust. This happens every 5-8 years. There is massive spending by organizations on IT for various reasons. This drives up the need for IT staff, and as the talent pool is exhausted, salaries rise sharply as companies try to poach from one another. IT workers win in this case. However, when the pendulum swings, expensive IT staff are on the chopping block. For the cycle we’re in right now, the cuts started about a year ago and are still ongoing, but to me it feels like things will start swinging back in the other direction in the next 8-12 months, with hiring picking up again.

    2. In-source vs. outsource / onshoring vs. offshoring cycle - Many businesses have short memories and a “the grass is always greener” mentality. If they are heavily in-sourced and onshore, they look at their budgets and see this MASSIVE number next to the “payroll” line item. They start asking how they can lower this number and save money. Consultants come in and convince them that the company can save money by cutting out a segment of the company’s operations and outsourcing it to another firm that quotes them an attractive rate. The company chooses this option, fires their own staff, and contracts out the work. The bottom line is appropriately attractive, and executives get a bonus for making cost cuts. Inertia from the previous staff keeps the org going much as before for a while. However, the service begins to suffer because the contracting company is attempting to spend the least amount of resources and money to fulfill the contract. Many times this means using offshore staffing themselves. After a few expensive outages or outright rebellions from the company’s business departments, the company fires the contracting company, hires their own staff again, and brings the service back in house. Then the pendulum swings again after another 8-10 years.

    3. International pay disparity - IT workers in the USA are crazy expensive compared to nearly anywhere else in the world. I’m not just talking about a little more, but 10 or 15 times more expensive than other nations that provide similarly skilled staff. A $150k USD IT worker in the USA can be replaced (mostly) with a $15k USD worker in India with the same level of skill. That same IT worker skill level would earn $75k-$100k CAD in Canada. In Germany that same worker would earn €60k-€90k. During boom times that USA worker might be able to earn $175k-$300k USD.

    As a worker, you can see that working in the USA will earn you the most money when you can get a job. So the trick is to save during the boom times, knowing the bust is coming. If you earn $300k for one year and are unemployed for two years afterward, you’ve effectively earned $100k per year for three years straight. Being unemployed in IT for over a year is unusual, though. You can usually find a lower-paying job in IT to cover your living expenses and then some until the boom occurs again.
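
    As a quick back-of-the-envelope (all salaries and durations here are made-up illustrative numbers), the averaging works out like this:

    ```python
    # Income smoothing across a boom/bust cycle (illustrative numbers only).

    def average_annual(boom_salary: float, boom_years: float,
                       bust_salary: float, bust_years: float) -> float:
        """Effective yearly income averaged over the whole cycle."""
        total = boom_salary * boom_years + bust_salary * bust_years
        return total / (boom_years + bust_years)

    # One $300k boom year followed by two fully unemployed years -> $100k/yr.
    print(average_annual(300_000, 1, 0, 2))        # 100000.0
    # Same boom year, bridging the bust with a $90k "keep the lights on" job.
    print(average_annual(300_000, 1, 90_000, 2))   # 160000.0
    ```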


  • > Except companies are already jumping ship to other solutions. One very large company is moving thousands of VMs to an implementation of KVM, virtually eliminating the insane VM licensing.

    Sure there are a few, but it’s unlikely that many large enterprises will be able to completely migrate away from VMware, evaluate and deploy ancillary support products for the alternate hypervisor, and retrain all their support staff before their existing support contract expires. All but a lucky few that happened to negotiate a long multiyear support deal under the old licensing terms (and pricing) will be paying at least one year of expensive support renewals, and more than likely more than one.

    Broadcom knows this and will make these companies bleed until they can migrate away.

    > Broadcom has all but admitted their own solution is inferior, by converting their workstation virtualization to KVM!

    This is what sucks about Broadcom. VMware vSphere is still a good product, with thousands of trained professionals available for hire to support it and great third-party support for things like backup and enterprise support services.

    > To Broadcom’s credit, the writing was on the wall that versions of KVM would be eating their market over the next 10 years (for example, Proxmox), so they’re getting all they can now before their corner on the market weakens.

    There was no such writing. Most large enterprises were just fine paying for VMware licensing under the old terms.

    I like Proxmox, but it doesn’t provide even half of the vSphere features that large enterprises need. Small shops with a few nodes and no HA requirement? Sure. Hundreds of ESX nodes and tens of thousands of VMs? That is just beyond Proxmox as it is today. Also, good luck hiring Proxmox-trained staff. Large companies want ready pools of labor, and Proxmox doesn’t have that market penetration today.



  • Sure!

    Here’s a more relatable analogy. Microsoft Office costs about $30/month per user for companies. For our example, imagine Google Workspace doesn’t exist. So MS Office, the “default” office software that nearly everyone uses, is not cheap, but not expensive either. Further, you don’t buy MS Office from Microsoft directly, but through a partner that gives you other discounts and support. Now imagine that overnight Microsoft decides they’re dissolving their partner network and you have to buy MS Office directly from Microsoft, and also that starting tomorrow MS Office costs ten times as much, at $300/month per user. Would everyone stop using MS Office? Eventually, but you’ve got business you need to do today. Your company can’t even send email without Outlook, which is part of MS Office. So your company is BLEEDING money just paying for MS Office, and there’s no good alternative. So you pay it for now. You try desperately to come up with a plan to use something else, but for now you’re paying through the nose. It will take companies years to identify another product that replaces everything MS Office is used for and to train the entire company to use and support the new product.

    Replace the name Microsoft with Broadcom. Replace the name MS Office with VMware. This is what is happening and Broadcom is laughing all the way to the bank.





  • > Here is a better example of the different classes in architecture: https://www.landzero.com/post/understanding-property-zoning-a-comprehensive-guide

    That looks like a guide on zoning, not on the layout inside office buildings or the age in which they were constructed.

    > As far as the windows, I don’t know that site and the window requirements, but it’s hard to see what’s going on on the sides. The overhead trusses are easily accessible as well. Maybe, maybe not.

    I’m confused. You said this in the prior post:

    > The class B pic shown in your link would be a perfect candidate to retrofit to housing if it’s unrented.

    If you say “it’s hard to see what’s going on” or “maybe, maybe not”, why did you say that picture was the perfect candidate?

    Don’t just take my word for it. Go look up the studies actually performed on office-to-residential conversion. There was one that evaluated something like 1,250 office buildings in North America. Look up your local building codes for residential apartments. Some Class B buildings are good candidates, yes, but I doubt the one pictured is, for some of the reasons I cited and more.

    > I disagree with you on what you think you can do with “Class B”, but I don’t think you’re wrong about anything, if that makes sense.

    No, that doesn’t make sense to me. I’m no expert in this field. I just read the studies commissioned by the Federal government or articles about those studies. I even replied on Lemmy with this info a few months ago citing those sources. You’re welcome to take a look at it for more info here.


  • > The class B pic shown in your link would be a perfect candidate to retrofit to housing if it’s unrented.

    According to the architectural studies I’ve read when I looked into this question for myself, you would be incorrect. Open floor plans are apparently pretty horrible for residential conversions. Many residential building codes require each bedroom to have a window with a screen for ventilation. Now look at that picture of the Class B. The only exposed areas that could have a window with a screen would be on the perimeter. Further, many codes have rules that say you cannot have one bedroom accessible by passing through another, so that would exclude long, skinny apartments unless they are a 1 BR. That would leave lots of square footage trapped in the middle, unusable for bedrooms. Could you put windowless living rooms and kitchens there? Sure, but even then it’s very few residences when they could knock that building down and get many more windowed rooms on the same piece of land.

    Class Cs don’t have these issues, as they were built with small individual offices in mind and not open floor plans, which makes for affordable, cost-effective conversion to residences.
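
    To put rough numbers on the “square footage trapped in the middle” point, here is a back-of-the-envelope sketch. The floor dimensions and the maximum bedroom depth from a window wall are illustrative assumptions, not figures from any code or study.

    ```python
    # How much of an office floor plate sits close enough to an exterior wall
    # to hold rooms that need a window? All numbers are illustrative assumptions.

    def windowed_fraction(width_ft: float, depth_ft: float, max_room_depth_ft: float) -> float:
        """Fraction of the floor plate within max_room_depth_ft of an exterior wall."""
        inner_w = max(0.0, width_ft - 2 * max_room_depth_ft)
        inner_d = max(0.0, depth_ft - 2 * max_room_depth_ft)
        trapped = inner_w * inner_d  # interior area with no window access
        return 1 - trapped / (width_ft * depth_ft)

    # Assumed deep open-plan floor plate (Class A/B style): 200 ft x 150 ft,
    # with windowed rooms allowed to extend ~30 ft in from the exterior wall.
    print(f"Deep plate:   {windowed_fraction(200, 150, 30):.0%} near windows")  # 58%
    # Assumed narrow pre-war plate (Class C style): 120 ft x 60 ft.
    print(f"Narrow plate: {windowed_fraction(120, 60, 30):.0%} near windows")   # 100%
    ```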

    > The classifications you’re showing are classes of rentals, not building construction.

    I’m no building expert, but I’m not aware of the “class of rental” vs. “building construction” distinction you’re making. The studies I read only referred to buildings by class letter and never mentioned any such distinction.


  • > Then he should act like any other office building owner and rent some space to other companies.

    There are more buildings/office spaces to rent than people wanting office space these days. There are LOTS of empty unrented buildings. He would have difficulty even finding a tenant.

    > Bonus points if he gets with the future and works to convert some of the building to living space so people don’t have to travel to get to work.

    An exceptionally small number (we’re talking single digits in the world) of Class A office buildings are good candidates for this, and these are typically done with grants/subsidies from state or local governments. These are only in the most lucrative geographic locations where housing is at an absolute premium regardless of the cost.

    For good value in converting office space, look at Class C buildings. These are typically older and smaller office buildings (think built in the 1910s-1950s). In these, there are ways to make cost-effective residential conversions, and those conversions are happening by the dozen now.

    Here’s a guide to the different classes of office buildings



  • > I mean, imagine a future where every computer is just a chromebook, phones are no longer phones but just a “terminal” that streams the actual OS which runs in the cloud.

    It will get close to happening for nearly all computing, then it will swing back the other way to local storage and compute, then after 15-20 years it will swing back toward centralized compute and storage. This has already happened 3 times.

    • Original computing was mainframes. “Dumb terminals” had zero local storage and only the most rudimentary compute power: enough to handle the incoming data and display it, and to take keystrokes, encode them, and send them on.

    • Then “personal computers” became a thing with the advent of cheap CPUs. Dumb terminals/mainframes were largely discarded and everyone had their own computer on their desk with their own compute and storage. Then the Netware/Banyan era began and those desktop computers were networked to have some remote shared storage. (There’s a slightly different branching with Sun/HPUX/DigitalUnix and workstation-grade hardware.)

    • Then Citrix WinFrame and Sun Ray stateless thin clients showed up, once again swinging the compute and storage almost entirely remote, onto centralized, heavily powered servers with (mostly) dumb terminals, but this time with graphical interfaces like MS Windows or X Window.

    • Once again, powerful desktop CPUs showed up (the Pentium II, etc.) and compute was back under users’ desks.

    • Now phones and tablets with the cloud have shown up, and you’re asking the question.

    So what I think will swing primary compute and storage back to the user side (handheld now) is, again, cheap compute and storage on the device. Right now so many services are cloud-based because the massive compute and storage requirements only exist in volume in the cloud. However, bandwidth is still limited. Imagine when the next (next?) generation of mobile CPUs arrives, and with a tiny bit of power you could do today’s bitcoin mining on your phone or process AI datasets with ease in the palm of your hand. Why would you send the entire dataset to the cloud when you can process it locally and then send just the result?
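
    As a toy illustration of “process locally, send only the result”: the sketch below reduces a batch of on-device readings to a tiny summary before anything leaves the device. The data and the summary fields are made-up placeholders.

    ```python
    # Toy sketch of edge-side processing: crunch the raw data locally and send
    # only a small summary to the cloud instead of the whole dataset.
    import json
    import statistics

    def summarize(readings: list[float]) -> dict:
        """Reduce a large batch of raw readings to a few numbers."""
        return {
            "count": len(readings),
            "mean": statistics.fmean(readings),
            "stdev": statistics.pstdev(readings),
        }

    # Imagine millions of sensor readings gathered on the device itself.
    readings = [20.1, 20.4, 19.8, 21.0, 20.6]

    payload = json.dumps(summarize(readings))
    # In a real system this tiny payload would be POSTed to some cloud endpoint;
    # the raw readings themselves never leave the device.
    print(f"Uploading {len(payload)} bytes instead of {len(readings)} raw readings")
    ```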

    So the pendulum keeps swinging: centralized and distributed, back and forth.