Medicine

Is 'Amazon Care' a True Benefit Or Industrial Era-Style Healthcare? (computerworld.com) 155

Lucas123 writes: Like Apple and Intel, Amazon is piloting an in-house program for employees that, in addition to health insurance, gives workers access to telemedicine and at-home visits from a contracted provider. While growing in popularity, in-house healthcare programs -- which can even include corporate clinics -- are seen by some as an example of the growth of fragmented care, or as a throwback to the industrial era, when factories ran worksite clinics to get employees back to work faster. "[A corporate-based virtual healthcare program like Amazon's] is yet one more example of fragmented care," says Cynthia Burghard, a research director with IDC's Health Insights. "Back in the day, manufacturers had worksite clinics to take care of workers injured on the job, mostly so they could get back to work sooner. The difference with what Amazon is doing compared to what the [Deloitte] survey shows is that the Amazon offering is disconnected from other care providers rather than under the supervision of an employee's providers." [The Deloitte survey found that 66% of physicians said telemedicine improved patient care access and 52% said it boosted patient satisfaction.]

Vik Panda, who leads operations for the French sleep company Dreem, had this to say: "The news is that Jeff Bezos' company, and others like it, don't need anyone's permission to start building and paying for their own parallel healthcare systems, little by little. If Amazon replaces the existing health care system bit by bit, and employees of self-insured companies migrate to this new digital health system, do we all get to come along?" Amazon Care, Panda said, represents a wake-up call for providers, payers and employers because telehealth is not just about video chats with a doctor or wearable fitness trackers. "...It's a new operating system for health, and big technology companies are not going to wait for everyone else to figure it out."
Graphics

Ask Slashdot: Why Doesn't the Internet In 2019 Use More Interactive 3D? 153

dryriver writes: For the benefit of those who are not much into 3D technologies: as far back as the year 2000 and even earlier, there was excitement about "Web3D" -- interactive 3D content embedded in HTML webpages, using technologies like VRML and Shockwave 3D. 2D vector-based Flash animation was a big deal back then and very popular with internet users. The more powerful but less widely installed Shockwave browser plugin -- also made by Macromedia -- got a fairly capable DirectX 7/OpenGL-based realtime 3D engine, developed by Intel Labs around 2001, that could put 3D games, 3D product configurators and VR-style building/environment walkthroughs into an HTML page, and also go full-screen on demand. There were significant problems on the hardware side -- 20 years ago, not every PC or Mac connected to the internet had a decently capable 3D GPU, by a long shot. But the 3D software technology was there, it was promising even then, and somehow it died -- Shockwave 3D was neglected and killed off by Adobe shortly after they bought Macromedia, and VRML died pretty much on its own.

Now we are in 2019. Mobile devices like smartphones and tablets, PCs and Macs, and game consoles all have powerful 3D GPUs in them that could render great interactive 3D experiences in a web browser. The hardware is there, but 99% of the internet today is flat 2D. Why is this? Why do tens of millions of gamers spend hours in 3D game worlds every day, while even the websites that cater to this 3D-loving demographic use nothing but text, 2D JPEGs and 2D YouTube videos on their webpages? Everything 3D -- 3D software, 3D hardware, 3D programming and scripting languages -- is far more evolved than it was around 2000. And yet there appears to be very little interest in putting interactive 3D of any kind into webpages. What causes this? Do people want to go into the 2020s with a 2D-based internet? Is the future of the internet text, 2D images, and streaming 2D videos?
Earth

Silicon Valley is One of the Most Polluted Places in the Country (theatlantic.com) 71

Before Silicon Valley became the idea center of the internet, it was a group of factory towns, the blinking heart of "clean" manufacturing, the hallmark of the Information Age. A report adds: Silicon Valley was a major industrial center for much of the 20th century. Semiconductors and microprocessors rolled out of factories scattered all over the area (known on maps as Santa Clara County) from the 1950s to the early 1990s -- AMD, Apple, Atari, Fairchild, Hewlett-Packard, Intel, and Xerox, to name just a few. From the mid-1960s to the mid-1980s, Santa Clara County added 203,000 manufacturing jobs, 85 percent of them in tech. Beginning in the 1980s, as government contracts disappeared, Silicon Valley companies moved toward creating software, and beginning in the 1990s, companies there largely focused on internet-based applications. Now the area trades mostly in the rarefied and intangible realm of apps and software. It's hard to see that now, when glass-walled office buildings, corporate campuses, and strip malls along highways that bloom into concrete clovers dominate the landscape of this former industrial area. But all of that industrial history left something behind.

The Google Quad Campus looks way too nice to be contaminated with toxic waste: There are matching bikes, a pool with primary-colored umbrellas, and a contained universe that looks more like a college or a park than a satellite campus of one of the biggest companies in the world. But it turns out that this idyllic garden of corporate harmony sits on land that since 1989 has been a Superfund site, a designation the EPA gives some of the most contaminated or polluted land in the country. And while thousands of tons of contaminants have since been removed, it is still being cleaned up. For a few weeks at the end of 2012 and into 2013, toxic vapors got into two campus buildings, possibly exposing the office workers there to levels of chemicals above the legal limit set by the EPA. Santa Clara County has 23 active Superfund sites, more than any other county in the United States. All of them were designated as such in the mid to late 1980s, and most were contaminated by toxic chemicals involved in making computer parts. Completely cleaning up these chemicals may be impossible.

Linux

Linux 5.3 Released (kernelnewbies.org) 43

"Linux 5.3 has been released," writes diegocg: This release includes support for AMD Navi GPUs; support for the umwait x86 instructions that let processes wait for short amounts of time without spinning loops; a 'utilization clamping' mechanism that is used to boost interactivity on power-asymmetric CPUs used in phones; a new pidfd_open(2) system call that completes the work done to let users deal with the PID reuse problem; 16 millions of new IPv4 addresses in the 0.0.0.0/8 range are made available; support for Zhaoxin x86 CPUs; support Intel Speed Select for easier power selection in Xeon servers; and support for the lightweight hypervisor ACRN, built for embedded IoT devices. As always, many other new drivers and improvements can be found in the changelog.
Intel

Weakness In Intel Chips Lets Researchers Steal Encrypted SSH Keystrokes 78

An anonymous reader quotes a report from Ars Technica: In late 2011, Intel introduced a performance enhancement to its line of server processors that allowed network cards and other peripherals to connect directly to a CPU's last-level cache, rather than following the standard (and significantly longer) path through the server's main memory. By avoiding system memory, Intel's DDIO -- short for Data-Direct I/O -- increased input/output bandwidth and reduced latency and power consumption.

Now, researchers are warning that, in certain scenarios, attackers can abuse DDIO to obtain keystrokes and possibly other types of sensitive data that flow through the memory of vulnerable servers. The most serious form of attack can take place in data centers and cloud environments that have both DDIO and remote direct memory access (RDMA) enabled to allow servers to exchange data. A server leased by a malicious hacker could abuse the vulnerability to attack other customers. To prove their point, the researchers devised an attack that allows a server to steal keystrokes typed into a protected SSH (secure shell) session established between another server and an application server.
"The researchers have named their attack NetCAT, short for Network Cache ATtack," the report adds. "Their research is prompting an advisory for Intel that effectively recommends turning off either DDIO or RDMA in untrusted networks."

"The researchers say future attacks may be able to steal other types of data, possibly even when RDMA isn't enabled. They are also advising hardware makers do a better job of securing microarchitectural enhancements before putting them into billions of real-world servers." The researchers published their paper about NetCAT on Tuesday.
Supercomputing

University of Texas Announces Fastest Academic Supercomputer In the World (utexas.edu) 31

On Tuesday the University of Texas at Austin launched the fastest supercomputer at any academic facility in the world.

The computer -- named "Frontera" -- is also the fifth most-powerful supercomputer on earth. Slashdot reader aarondubrow quotes their announcement: The Texas Advanced Computing Center (TACC) at The University of Texas is also home to Stampede2, the second fastest supercomputer at any American university. The launch of Frontera solidifies UT Austin among the world's academic leaders in this realm...

Joined by representatives from the National Science Foundation (NSF) -- which funded the system with a $60 million award -- UT Austin, and technology partners Dell Technologies, Intel, Mellanox Technologies, DataDirect Networks, NVIDIA, IBM, CoolIT and Green Revolution Cooling, TACC inaugurated a new era of academic supercomputing with a resource that will help the nation's top researchers explore science at the largest scale and make the next generation of discoveries.

"Scientific challenges demand computing and data at the largest and most complex scales possible. That's what Frontera is all about," said Jim Kurose, assistant director for Computer and Information Science and Engineering at NSF. "Frontera's leadership-class computing capability will support the most computationally challenging science applications that U.S. scientists are working on today."

Frontera has been supporting science applications since June and has already enabled more than three dozen teams to conduct research on a range of topics from black hole physics to climate modeling to drug design, employing simulation, data analysis, and artificial intelligence at a scale not previously possible.

Here are more technical details from the announcement about just how fast this supercomputer really is.
Intel

Intel Is Suddenly Very Concerned With 'Real-World' Benchmarking (extremetech.com) 72

Dputiger writes: Intel is concerned that many of the benchmarks used in CPU reviews today are not properly capturing overall performance. In the process of raising these concerns, however, the company is drawing a false dichotomy between real-world and synthetic benchmarks that doesn't really exist. Whether a test is synthetic or not is ultimately less important than whether it accurately measures performance and produces results that can be generalized to a larger suite of applications.
IT

USB-IF To Continue Confusing Name Scheme With USB4 Gen 3x2 (techrepublic.com) 79

intensivevocoder writes: USB4 will be formally published at the USB Developer Days Seattle on September 17, and the USB Implementers Forum (USB-IF) is expected to continue the widely maligned naming scheme for USB speeds introduced in February for USB 3.2, an engineer familiar with the USB-IF's plans told TechRepublic. As a quick recap, USB 3.1 Gen 2 increased the lane speed to 10 Gbps. A second 10 Gbps lane was added in the USB 3.2 standard, which the USB-IF calls "USB 3.2 Gen 2x2." USB4 (which is not written as "USB 4.0") will reach speeds of 40 Gbps, doubling the speed again. USB4 was first previewed in March, when the USB Promoter Group announced that USB4 would be based on Intel's Thunderbolt 3 specification, though specific details are expected later this month.
AMD

New Stats Suggest Strong Sales For AMD (techspot.com) 32

Windows Central reports: AMD surpassed NVIDIA when it comes to total GPU shipments according to new data from Jon Peddie Research (via Tom's Hardware). This is the first time that AMD ranked above NVIDIA in total GPU shipments since Q3 of 2014. AMD now has a 17.2 percent market share compared to NVIDIA's 16 percent according to the most recent data. Jon Peddie Research also reports that "AMD's overall unit shipments increased 9.85% quarter-to-quarter."

AMD gained 2.4 percent market share over the last year while NVIDIA lost 1 percent. Much of AMD's growth came in the last quarter, in which AMD saw an increase of 1.5 percent compared to NVIDIA's 0.1 percent.

The Motley Fool points out that "NVIDIA doesn't sell CPUs, so this comparison isn't apples-to-apples."

But meanwhile, TechSpot reports: German hardware retailer Mindfactory has published their CPU sales and revenue figures, and they show that for the past year AMD had sold slightly more units than Intel -- until Ryzen 3000 arrived. When the new hardware launched in July, AMD's sales volume doubled and their revenue tripled, going from 68% to 79% volume market share and 52% to 75% revenue share -- this is for a single major PC hardware retailer in Germany -- but the breakdown is very interesting to watch nonetheless...

Full disclaimer: German markets have historically been more biased towards Ryzen than American ones, and AMD's sales will fall a bit before stabilizing, while Intel's appear to have already plateaued.

Programming

Should the Linux Kernel Accept Drivers Written In Rust? (lwn.net) 169

Packt's recent story about Rust had the headline "Rust is the future of systems programming, C is the new Assembly."

But there was an interesting discussion about the story on LWN.net. One reader suggested letting people write drivers for the Linux kernel in Rust. ("There's a good chance that encouraging people to submit their wacky drivers in Rust would improve the quality of the driver, partly because you can focus attention on the unsafe parts.")

And that comment drew an interesting follow-up:

"I spoke with Greg Kroah-Hartman, and he said he'd be willing to accept a framework in the kernel for writing drivers in Rust, as long as 1) for now it wasn't enabled by default (even if you did "make allyesconfig") so that people don't *need* Rust to build the kernel, and 2) it shows real benefits beyond writing C, such as safe wrappers for kernel APIs."
Programming

Intel Engineer Launches Working Group To Bring Rust 'Full Parity With C' (packtpub.com) 111

Someone from the Rust language governance team gave an interesting talk at this year's Open Source Technology Summit. Josh Triplett (who is also a principal engineer at Intel) discussed "what Intel is contributing to bring Rust to full parity with C" in a talk titled "Intel and Rust: the Future of Systems Programming."

An anonymous reader quotes Packt: Triplett believes that C is now becoming what Assembly was years ago. "C is the new Assembly," he concludes. Developers are looking for a high-level language that not only addresses the problems in C that can't be fixed, but also offers other compelling features. Such a language, to be compelling enough to pull developers away from C, should be memory safe, provide automatic memory management and security, and much more...
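A few lines are enough to illustrate the memory-management claim -- deterministic cleanup through ownership rather than a garbage collector, with use-after-move rejected at compile time. (A minimal illustration, not taken from the talk:)

```rust
// Rust frees memory deterministically via ownership (no GC), and the
// borrow checker rejects use-after-free/use-after-move at compile time
// rather than letting it fail at run time.
fn main() {
    let data = vec![1, 2, 3];          // heap allocation, owned by `data`
    let sum: i32 = data.iter().sum();
    println!("sum = {}", sum);

    let moved = data;                  // ownership moves to `moved`
    // println!("{:?}", data);         // compile error: use after move
    println!("{:?}", moved);
}                                      // `moved` goes out of scope: freed
                                       // here, with no free() to forget
                                       // and no double free possible
```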

"Achieving parity with C is exactly what got me involved in Rust," says Triplett. Triplett's first contribution to the Rust programming language was in the form of the 1444 RFC, which was started in 2015 and got accepted in 2016. This RFC proposed to bring native support for C-compatible unions in Rust that would be defined via a new "contextual keyword" union...

He is starting a working group that will focus on achieving full parity with C. Through this group, he aims to collaborate with both the Rust community and other Intel developers to develop specifications for the remaining features that need to be implemented in Rust for systems programming. The group will also focus on supporting systems programming with stable releases of Rust, not just experimental nightly builds of the compiler.

Last week Triplett posted that the FFI/C Parity working group "is in the process of being launched, and hasn't quite kicked off yet" -- but he promised to share updates when it does.
Businesses

Ask Slashdot: Who Are the 'Steve Wozniaks' of the 21st Century? 155

dryriver writes: There are some computer engineers -- working in software or hardware, or both -- who were true pioneers. Steve Wozniak needs no introduction. Neither do Alan Turing, Ada Lovelace or Charles Babbage. Gordon Moore and Robert Noyce started Intel decades ago. John Carmack of Doom is a legend in realtime 3D graphics coding. Alexey Pajitnov created Tetris. Akihiro Yokoi and Aki Maita invented the Tamagotchi. Jaron Lanier is the father of VR. Palmer Luckey hacked together the first Oculus Rift VR headset in his parents' garage in 2011. To the question: Who, in your opinion, are the 21st Century "Steve Wozniaks," working in either hardware or software, or both?
Intel

Microsoft Announces Surface Event On October 2nd, Could Launch New Dual-Screen Tablet/Laptop Hybrid (theverge.com) 16

Microsoft announced it will be holding a Surface hardware event in New York City on October 2nd, which could be where the company unveils its dual-screen Surface laptop / tablet hybrid that's been in development for more than two years. As The Verge reports, the new dual-screen device, codenamed "Centaurus," is "designed to be the hero device for a wave of new dual-screen tablet / laptop hybrids that we're expecting to see throughout 2020." From the report: Microsoft demonstrated this new device during an internal meeting earlier this year, signaling that work on the prototype has progressed to the point where it's nearing release. Still, it's not certain that Microsoft will show off this new hardware in October or even launch it. Microsoft CEO Satya Nadella famously killed off the Surface Mini just weeks before its scheduled unveiling. If Microsoft does plan to show this dual-screen Surface device, then it won't be ready to ship immediately. Sources familiar with Microsoft's plans tell The Verge that the company is currently targeting a 2020 release date for its dual-screen Surface.

Alongside Centaurus, Microsoft will likely refresh other Surface devices. The Surface Book is long overdue for an update, and Microsoft's Surface Laptop and Surface Pro hardware could finally see the addition of USB-C ports this year. Even Microsoft's Surface Go tablet is more than a year old now and could see a minor refresh.

Google

Google and Dell Team Up To Take on Microsoft with Chromebook Enterprise Laptops (theverge.com) 76

Google is launching new Chromebook Enterprise devices that it hopes will draw more businesses away from Windows-powered laptops. From a report: Microsoft has dominated enterprise computing for years, but as businesses increasingly look to modernize their fleet of devices, there's an opportunity for competitors to challenge Windows. Google is teaming up with one of Microsoft's biggest partners, Dell, to help push new Chromebook Enterprise laptops into businesses. Dell is launching Chrome OS on a pair of its popular business-focused Latitude laptops, offering both a regular clamshell design and a 2-in-1 option. While it might sound like just two existing Windows laptops repurposed for Chrome OS, Google and Dell have been working together for more than a year to ensure these new Chromebook Enterprise devices are ready for IT needs. That includes bundling a range of Dell's cloud-based support services that allow admins to have greater control over how these Chromebooks are rolled out inside businesses.

It means IT admins can more easily integrate these Chromebooks into existing Windows environments and manage them through tools like VMware Workspace One. Microsoft and its partners have offered a range of admin tools for years, making it easy to customize and control Windows-based devices. Google has also tweaked its Chrome Admin console to improve load times, add search on every page, and overhaul it with material design elements. Businesses will be able to choose from Dell's 14-inch Latitude 5400 ($699) or the 13-inch Latitude 5300 2-in-1 ($819). Both can be configured with up to Intel's 8th Gen Core i7 processors, up to 32GB of RAM, and even up to 1TB of SSD storage.

Security

Facebook Awards $100,000 Prize For New Code Isolation Technique (zdnet.com) 13

ZDNet reports: Facebook has awarded a $100,000 prize to a team of academics from Germany for developing a new code isolation technique that can be used to safeguard sensitive data while it's being processed inside a computer. The award is named the Internet Defense Prize, and is a $100,000 cash reward that Facebook has been giving out yearly since 2014 to the most innovative research presented at USENIX, a leading security conference that takes place every year in mid-August in the US.
An anonymous reader writes: The new technique is called ERIM and leverages Intel's memory protection keys (MPKs) and binary code inspection to achieve both hardware- and software-based in-process data isolation. The novelty of ERIM is that it has near-zero performance overhead (compared to other techniques that incur a big performance dip), can be applied with little effort to new and existing applications, doesn't require compiler changes, and can run on a stock Linux kernel.
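The MPK primitive ERIM builds on is exposed by glibc (2.27 and later) as the pkey_* functions: pages are tagged with a protection key once, and access rights are then flipped from user space with a single register write instead of an mprotect() syscall. Below is a rough sketch of that primitive, assuming Linux with glibc >= 2.27, MPK-capable hardware, and the Rust libc crate; ERIM's actual contribution -- the binary inspection that prevents untrusted code from executing the WRPKRU instruction itself -- is not shown here:

```rust
// Sketch of the Intel MPK primitive: tag a page with a protection key,
// then toggle access per-thread without entering the kernel. Fails at
// the asserts on machines without MPK support.
extern "C" {
    // Real glibc wrappers, available since glibc 2.27.
    fn pkey_alloc(flags: libc::c_uint, access_rights: libc::c_uint) -> libc::c_int;
    fn pkey_free(pkey: libc::c_int) -> libc::c_int;
    fn pkey_mprotect(addr: *mut libc::c_void, len: usize,
                     prot: libc::c_int, pkey: libc::c_int) -> libc::c_int;
    fn pkey_set(pkey: libc::c_int, access_rights: libc::c_uint) -> libc::c_int;
}

const PKEY_DISABLE_ACCESS: libc::c_uint = 0x1;

fn main() {
    unsafe {
        let len = 4096;
        let page = libc::mmap(
            std::ptr::null_mut(), len,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS, -1, 0,
        );
        assert_ne!(page, libc::MAP_FAILED);

        // Allocate a key with access initially disabled, and tag the page.
        let key = pkey_alloc(0, PKEY_DISABLE_ACCESS);
        assert!(key >= 0, "no MPK support on this machine?");
        assert_eq!(pkey_mprotect(page, len,
                                 libc::PROT_READ | libc::PROT_WRITE, key), 0);

        // Trusted section: enable access (one register write, no syscall),
        // touch the protected data, then disable access again.
        pkey_set(key, 0);
        *(page as *mut u8) = 0x2A;
        pkey_set(key, PKEY_DISABLE_ACCESS);
        // Any access to `page` here would now fault with SIGSEGV.

        pkey_free(key);
        libc::munmap(page, len);
    }
}
```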
AI

Amazon, Microsoft Are 'Putting World At Risk of Killer AI,' Says Study (ibtimes.com) 95

oxide7 shares a report from International Business Times: Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons. Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

Google, which last year published guiding principles eschewing AI for use in weapons systems, was among seven companies found to be engaging in "best practice" in the analysis that spanned 12 countries, as was Japan's Softbank, best known for its humanoid Pepper robot. Twenty-two companies were of "medium concern," while 21 fell into a "high concern" category, notably Amazon and Microsoft who are both bidding for a $10 billion Pentagon contract to provide the cloud infrastructure for the U.S. military. Others in the "high concern" group include Palantir, a company with roots in a CIA-backed venture capital organization that was awarded an $800 million contract to develop an AI system "that can help soldiers analyze a combat zone in real time." The report noted that Microsoft employees had also voiced their opposition to a U.S. Army contract for an augmented reality headset, HoloLens, that aims at "increasing lethality" on the battlefield.
Stuart Russell, a computer science professor at the University of California, Berkeley, argued it was essential to take the next step in the form of an international ban on lethal AI, which could be summarized as "machines that can decide to kill humans shall not be developed, deployed, or used."
Security

Intel, Google, Microsoft, and Others Launch Confidential Computing Consortium for Data Security (venturebeat.com) 44

Major tech companies including Alibaba, Arm, Baidu, IBM, Intel, Google Cloud, Microsoft, and Red Hat today announced intent to form the Confidential Computing Consortium to improve security for data in use. From a report: Established by the Linux Foundation, the organization plans to bring together hardware vendors, developers, open source experts, and others to promote the use of confidential computing, advance common open source standards, and better protect data. "Confidential computing focuses on securing data in use. Current approaches to securing data often address data at rest (storage) and in transit (network), but encrypting data in use is possibly the most challenging step to providing a fully encrypted lifecycle for sensitive data," the Linux Foundation said today in a joint statement. "Confidential computing will enable encrypted data to be processed in memory without exposing it to the rest of the system and reduce exposure for sensitive data and provide greater control and transparency for users."

The consortium also said the group was formed because confidential computing will become more important as more enterprise organizations move between different compute environments like the public cloud, on-premises servers, and the edge. To get things started, companies made a series of open source project contributions, including the Intel Software Guard Extensions (SGX) SDK for code protection at the hardware layer.

Intel

Intel's Line of Notebook CPUs Gets More Confusing With 14nm Comet Lake (arstechnica.com) 62

Intel today launched a new series of 14nm notebook CPUs code-named Comet Lake. Going by Intel's numbers, Comet Lake looks like a competent upgrade to its predecessor Whiskey Lake. The interesting question -- and one largely left unanswered by Intel -- is why the company has decided to launch a new line of 14nm notebook CPUs less than a month after launching Ice Lake, its first 10nm notebook CPUs. From a report: Both the Comet Lake and Ice Lake notebook CPU lines this month consist of a full range of i3, i5, and i7 mobile CPUs in both high-power (U-series) and low-power (Y-series) variants. This adds up to a total of 19 Intel notebook CPU models released in August, and we expect to see a lot of follow-on confusion. During the briefing call, Intel executives did not want to respond to questions about differentiation between the Comet Lake and Ice Lake lines based on either performance or price, but the technical specs lead us to believe that Ice Lake is likely the far more attractive product line for most users.

Intel's U-series CPUs for both Comet Lake and Ice Lake operate at a nominal 15W TDP. Both lines also support a "Config Up" 25W TDP, which can be enabled by OEMs who choose to provide the cooling and battery resources necessary to support it. Things get more interesting for the lower-powered Y-series -- Ice Lake offers 9W/12W configurable TDP, but Comet Lake undercuts that to 7W/9W. This is already a significant drop in power budget, which Comet Lake takes even further by offering a new Config Down TDP, which is either 4.5W or 5.5W, depending on which model you're looking at. Comet Lake's biggest and meanest i7, the i7-10710U, sports 6 cores and 12 threads at a slightly higher boost clock rate than Ice Lake's 4C/8T i7-1068G7. However, the Comet Lake parts are still using the older UHD graphics chipset -- they don't get access to Ice Lake's shiny new Iris+, which offers up to triple the onboard graphics performance. This sharply limits the appeal of the Comet Lake i7 CPUs in any OEM design that doesn't include a separate Nvidia or Radeon GPU -- which would in turn bump the real-world power consumption and heat generation of such a system significantly.

AI

Cerebras Systems Unveils a Record 1.2 Trillion Transistor Chip For AI (venturebeat.com) 67

An anonymous reader quotes a report from VentureBeat: New artificial intelligence company Cerebras Systems is unveiling the largest semiconductor chip ever built. The Cerebras Wafer Scale Engine has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first 4004 processor in 1971 had 2,300 transistors, and a recent Advanced Micro Devices processor has 32 billion transistors. Samsung has actually built a flash memory chip, the eUFS, with 2 trillion transistors. But the Cerebras chip is built for processing, and it boasts 400,000 cores on 42,225 square millimeters. It is 56.7 times larger than the largest Nvidia graphics processing unit, which measures 815 square millimeters and has 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth.
Businesses

Tech Companies Challenge 'Open Office' Trend With Pods (nbcnewyork.com) 112

Open floor plans create "a minefield of distractions," writes CNBC. But now they're being countered by a new trend that one office interior company's owner says "started with tech companies and the need for privacy."

They're called "office pods..." They provide a quiet space for employees to conduct important phone calls, focus on their work or take a quick break. "We are seeing a large trend, a shift to having independent, self-contained enclosures," said Caitlin Turner, a designer at the global design and urban planning firm HOK. She said the growing demand for pods is a direct result of employees expressing their need for privacy...

Prices can range anywhere from $3,495 for a single-user pod from ROOM to $15,995 for an executive suite from Zenbooth. Pod manufacturers are expanding rapidly. In addition to Zenbooth and ROOM, there are TalkBox, PoppinPod, Spaceworx and Framery. Pod sizes also vary, from individual booths designed for a single user to medium-sized pods for small gatherings of two or three people and larger executive spaces that can host four to six people.

Sam Johnson, the founder of Zenbooth, said the idea for pods came from his experience working in the tech industry, where he quickly became disillusioned by the open floor plan. It was an "unsolved problem" that prompted him to quit his job and found Zenbooth, a pod company based in the Bay Area, in 2016. He said the company is a "privacy solutions provider" that offers "psychological safety" via a peaceful space to work and think. "We've had customers say to us that we literally couldn't do our job without your product," Johnson said.

The company now counts Samsung, Intel, Capital One and Pandora, among others, as clients, as it works in tech hubs including Boston, the Bay Area, New York and Seattle. Its biggest customer, Lyft, has 35 to 40 booths at its facilities.

"In 2014, 70% of companies had an open floor plan, according to the International Facility Management Association," the article points out -- though one Queensland University of Technology study found 90% of employees in open floor plan offices actually experienced more stress and conflict, along with higher blood pressure and increased turnover.
