A major hospital chain has struck a partnership with Google to upload 32 million private patient medical records to the Google “Cloud.”
These records from HCA Healthcare, based out of Nashville, Tenn., will reportedly be used by Google to create algorithms that instruct doctors and healthcare workers about how to treat their patients. You read that correctly: Google is planning to become America’s physician.
HCA currently operates 186 hospitals and approximately 2,000 healthcare sites across the United States. The 32 million private patient medical records it is providing to Google will supposedly be anonymized and stripped of personally identifying information.
Artificial Intelligence (AI) from Google will then take over and create computer programs that “customize” care for patients without the need for human input. Machines, in other words, will soon be administering medicine to people who are hooked into the system.
Google’s “planetary-scale database” will supposedly improve treatment for patients by calculating which drugs and vaccines will “work” the best for their bodies. HCA will then use this information to advise its staff members about how to do their jobs.
“Our contract prohibits Google Cloud from the use of patient identifiable information,” an HCA spokesman told the DailyMail Online in a statement. “Furthermore, access to any data is prohibited without HCA Healthcare’s permission.”
Nothing is sacred in a crony capitalist society
Back in 2019, we reported that Google had launched its medical AI program, known as “Project Nightingale,” in an attempt to take over modern medicine.
Wanting to control everything there is to control on planet earth, Google was busily hatching technology that would make it easy to siphon private information and monetize it, even in the realm of healthcare.
Now, we are seeing the fruit of that labor with the HCA partnership, which The Federalist's Jordan Davison describes as a “privacy invasion” and “technology power grab.”
Another person on Twitter sarcastically wrote that there are no privacy issues here: “nope, not at all.”
While medical records are supposed to be protected under federal law, the rules allow for hospitals and other healthcare providers to share patient information with contractors, just so long as they abide by the same privacy protections.
“Privacy and security will be guiding principles throughout this partnership,” HCA insists.
“The access and use of patient data will be addressed through the implementation of Google Cloud’s infrastructure along with HCA Healthcare’s layers of security controls and processes.”
HCA already employed a similar technology during the Wuhan coronavirus (Covid-19) crisis to supposedly monitor patients who tested “positive.” The system notified caregivers as to which treatments should be administered to fight the Chinese Virus.
“Next-generation care demands data science-informed decision support so we can more sharply focus on safe, efficient and effective patient care,” HCA CEO Sam Hazen said in a statement.
“We view partnerships with leading organizations, like Google Cloud, that share our passion for innovation and continual improvement as foundational to our efforts.”
The partnership marks Google’s second known foray into healthcare. St. Louis-based Ascension also partnered with Google several years back for the same purpose, feeding more than 50 million private medical records into Google’s AI abyss.
“Two simple questions kept hounding me: Did patients know about the transfer of their data to the tech giant? Should they be informed and given a chance to opt in or out?” a whistleblower wrote in an essay for The Guardian.
“The answer to the first question quickly became apparent: no. The answer to the second I became increasingly convinced about: yes. Put the two together, and how could I say nothing?”
As TFTP reported earlier this month, Parler has been a haven for those who have been banned, deleted, or otherwise algorithmed into the memory hole by establishment media platforms. Users moved to Parler because the platform claimed not to censor their content and offered a safe space for MAGA folks. For over a year, Parler remained an open network where pro-Trump users posted largely uncensored. Until now. Though the Parler URL is still active, it is no longer a functioning social media site.
In the middle of the night on January 11, Amazon took Parler down from its web-hosting service. Amazon Web Services, or AWS, said Parler had violated its terms of service through inadequate content-moderation practices, failing to remove posts glorifying the recent riot at the U.S. Capitol. Google and Apple joined in as well, ensuring that Parler is deplatformed indefinitely.
This move came on the heels of a massive purge of tens of thousands of pro-Trump folks who allegedly espoused ridiculous QAnon theories. It was bad enough that these folks were duped into following the psyop known as Q. Now, however, watching these ideas get debunked in the public arena is no longer an option. Instead, they will grasp onto these wacky ideas all the more tightly, as the massive monopolistic power of tech behemoths imposing their neoliberal will on them gives them justification for doing so.
In the meantime, however, the social media giants Twitter and Facebook remain unfazed as they keep their insidious relationships with the US government thriving. While banning Trump for his speech during the riots at the Capitol, Twitter is alleged in a lawsuit to have victimized a child by knowingly allowing a video of him to go viral.
The boy and his mother are now suing the platform alleging that it benefitted financially by failing to remove the video featuring the child and another minor — which was retweeted thousands of times and garnered nearly 200,000 views.
To be clear, this was not a case of moderation simply failing to pick up on the nature of the content. The boy and his mother, according to the lawsuit, repeatedly contacted Twitter about the video, but the social media giant allegedly didn’t suspend the accounts distributing it until a federal agent from the Department of Homeland Security (DHS) intervened.
In fact, according to the lawsuit, Twitter even responded to the boy and his mother via email and said the child porn did not violate its policies. According to the suit, an email shows Twitter telling John Doe on Jan. 28, 2020, that it “reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time.”
“What do you mean you don’t see a problem?” the minor asks in a response that same day. “We both are minors right now and were minors at the time these videos were taken. We both were 13 years of age.”
A subsequent screenshot shows that the video accumulated 167,000 views within a day and received more than 2,200 retweets and 6,640 likes.
The video made its way to Twitter after the boy was tricked into sharing the content with a fake account on Snapchat. The account belonged to child traffickers posing as a 16-year-old girl, and they blackmailed the boy into sending the video.
“Plaintiff John Doe was solicited and recruited for sex trafficking as a minor,” reads the lawsuit brought in part by the National Center on Sexual Exploitation (NCOSE). “After John Doe escaped from the manipulation, child sexual abuse material depicting John Doe was disseminated on Twitter. When Twitter was first alerted to this fact and John Doe’s age, Twitter refused to remove the illegal material and instead continued to promote and profit from the sexual abuse of this child.”
Twitter has not confirmed any details about the incident. But the attorney representing the family says the video going viral — despite the heavy-handed censorship on the platform — shows that they are more concerned with censoring political speech than protecting children.
“We found it very interesting that Twitter, over the last few months, has really shown the world what kind of policing of their platform they are capable of, what the technology is they have at their fingertips, and what they are able to do,” Lisa Haba, partner at the Haba Law Firm, told Fox News in an interview.
She then added that “you would think that amongst everything they are able to police that there would be a premium priority on the protection of children. They literally have policies stating that they’ll do that but their practices say another word.”
As FOX reported, John Doe is seeking damages under the federal Trafficking Victims’ Protection Reauthorization Act, and claiming the platform was a significant cause of his distress. As the suit noted, recent legislation has clarified that Section 230 doesn’t apply to platforms that knowingly facilitate sex trafficking.
For years, TFTP has reported on this phenomenon of Facebook attacking political speech while child exploitation goes unchecked. In 2018, Facebook and Twitter — without warning or justification — deleted the pages of Free Thought Project and Police the Police which had over 5 million followers.
During this purge, they also removed hundreds of other pages, including massive police accountability groups, antiwar activists, alternative media, and libertarian news outlets. Facebook claimed to remove these pages in the name of fighting disinformation online and creating a safer user experience. But this was a farce. Illustrating just how ostentatious a sham it was, just weeks after claiming to keep their community safe, a child was openly sold on their platform.
An auction was held on Facebook in which a child bride was put up for sale in a public post. People openly bid on Facebook for a 16-year-old girl’s hand in marriage.
Facebook claims they removed the post, but this wasn’t until weeks after the auction had ended and the girl had been sold. Had the post had something about Qanon on it, however, rest assured, it would have been removed immediately.
This was no isolated incident either. The Guardian reported a study in 2020 that suggested Facebook is not fully enforcing its own standards banning content that exploits or endangers children.
The study examined at least 366 cases between January 2013 and December 2019, according to a report from the not-for-profit investigative group Tech Transparency Project (TTP) analyzing Department of Justice news releases.
Of the 366 cases of child sex abuse on Facebook, the social media giant reported just 9% of them to authorities. Investigations initiated by authorities discovered the other 91% of the cases — not Facebook.
It’s not just Facebook either. Twitter is in the same boat. In 2020, TFTP reported on Twitter allowing the promotion of child molestation on their platform.
Since we reported on the rebranding of pedophiles as Minor Attracted Persons several years ago, the terminology has become so popular that it has morphed into multiple categories and abbreviations. There are now NOMAPs, which are apparently the “best kind” of MAP because the “NO” means they don’t want to have sex with children. That’s where the pro-c MAPs come in. The “pro-c” denotes pro-contact, as in the belief that children can consent to having physical contact and sex with an adult. Children cannot consent to sex with an adult.
Though our report led to the deletion of multiple accounts that openly advocated for sex with children, these pro-pedophile tags on Twitter still openly trend, promoting this content.
With growing concerns over online privacy and securing personal data, more people than ever are considering alternatives to Google products. After all, Google’s business model essentially revolves around data collection and advertisements, both of which infringe on your privacy. More data means better (targeted) ads and more revenue. The company pulled in over $116 billion in ad revenue last year alone – and that number continues to grow.
But the word is getting out. A growing number of people are seeking alternatives to Google products that respect their privacy and data. This guide aims to be the most exhaustive resource available for documenting alternatives to Google products. So let’s get started (in no particular order or preference)…
Google search alternatives
When it comes to privacy, using Google search is not a good idea. When you use their search engine, Google is recording your IP address, search terms, user agent, and often a unique identifier, which is stored in cookies.
Here are a few alternatives to Google search:
StartPage – StartPage gives you Google search results, but without the tracking (based in the Netherlands).
Searx – A privacy-friendly and versatile metasearch engine that’s also open source.
MetaGer – An open source metasearch engine with good features, based in Germany.
Google Chrome alternatives
Google Chrome is a popular browser, but it’s also a data collection tool – and many people are taking notice. Just a few days ago, the Washington Post asserted that “Google’s web browser has become spy software,” with 11,000 tracker cookies observed in a single week. Here are seven alternatives for more privacy:
Firefox browser – Firefox is a very customizable, open-source browser that is popular in privacy circles. There are also many different Firefox modifications and tweaks that will give you more privacy and security. (Also check out Firefox Focus, a privacy-focused version for mobile users.)
Google Docs alternatives
There are many solid Google Docs alternatives available. The largest offline document editing suite is, of course, Microsoft Office. As most people know, however, Microsoft is not the best company for privacy. Nonetheless, there are a few other good Google Docs alternatives:
CryptPad – CryptPad is a privacy-focused alternative with strong encryption, and it’s free.
Etherpad – A self-hosted collaborative online editor that’s also open source.
Zoho Docs – This is another good Google Docs alternative with a clean interface and good functionality, although it may not be the best for privacy.
OnlyOffice – OnlyOffice feels a bit more restricted than some of the other options in terms of features.
Cryptee – This is a privacy-focused platform for photo and document storage and editing. It’s open source and based in Estonia.
LibreOffice (offline) – You can use LibreOffice which is free and open source.
YouTube alternative
Tip: Invidio.us is a great YouTube proxy that allows you to watch any YouTube video without logging in, even if the video is somehow restricted. To do this, simply replace [www.youtube.com] with [invidio.us] in the URL you want to view.
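The host swap in that tip is simple enough to automate. Below is a minimal sketch; the `to_invidious` helper is hypothetical (not part of any library), and it simply rewrites the hostname while preserving the path and the ?v= video ID:

```python
from urllib.parse import urlparse, urlunparse

def to_invidious(url: str, mirror: str = "invidio.us") -> str:
    """Rewrite a YouTube URL to point at an Invidious mirror.

    Only the host is swapped; the path and query string (including
    the ?v= video ID) are kept exactly as-is.
    """
    parts = urlparse(url)
    if parts.netloc not in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        raise ValueError("not a YouTube URL")
    return urlunparse(parts._replace(netloc=mirror))

print(to_invidious("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
# https://invidio.us/watch?v=dQw4w9WgXcQ
```

The same pattern works for any mirror by passing a different `mirror` argument.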
Google Translate alternative
Here are a few Google translate alternatives I have come across:
DeepL – DeepL is a solid Google Translate alternative that seems to give great results. Like Google Translate, DeepL allows you to post up to 5,000 characters at a time (but the pro version is unlimited). The user interface is good and there is also a built-in dictionary feature.
Linguee – Linguee does not allow you to post large blocks of text like DeepL. However, Linguee will give you very accurate translations for single words or phrases, along with context examples.
dict.cc – This Google Translate alternative seems to do a decent job on single-word lookups, but it also feels a bit outdated.
If you want to translate blocks of text, check out DeepL. If you want in-depth translations for single words or phrases, then Linguee is a good choice.
Google Analytics alternative
For website admins, there are many reasons to use an alternative to Google Analytics. Aside from privacy concerns, there are also faster and more user-friendly alternatives that also respect your visitors’ privacy.
Clicky is a great alternative to Google Analytics that truncates and anonymizes visitor IP addresses by default. It is lightweight, user-friendly, and fully compliant with GDPR regulations, while also being certified by Privacy Shield.
Matomo (formerly Piwik) is an open-source analytics platform that respects the privacy of visitors by anonymizing and truncating visitor IP addresses (if enabled by the website admin). It is also certified to respect user privacy.
AT Internet is a France-based analytics provider that is fully GDPR compliant, with all data stored on French servers, and a good track record going back to 1996.
Many websites host Google Analytics because they run Google Adsense campaigns. Without Google Analytics, tracking performance of these campaigns would be difficult. Nonetheless, there are still better options for privacy.
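For reference, the “truncate and anonymize” behavior that tools like Clicky and Matomo describe can be sketched in a few lines. This is an illustration of the general technique, not the exact implementation either tool uses; the number of masked bytes here is an assumption (Matomo, for instance, lets the admin configure it):

```python
import ipaddress

def anonymize_ip(addr: str, masked_bytes: int = 1) -> str:
    """Zero out the trailing bytes of an IP address before logging.

    The full address is never stored, so individual visitors cannot
    be singled out, but the surviving network prefix still supports
    rough geolocation. masked_bytes is scaled up for IPv6 so a
    comparable amount of identifying detail is removed.
    """
    ip = ipaddress.ip_address(addr)
    packed = bytearray(ip.packed)
    n = masked_bytes if ip.version == 4 else masked_bytes * 4
    for i in range(1, n + 1):
        packed[-i] = 0
    return str(ipaddress.ip_address(bytes(packed)))

print(anonymize_ip("203.0.113.42"))  # 203.0.113.0
```

Done at ingestion time, this means the analytics database never contains a full visitor address in the first place, which is the property GDPR auditors look for.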
Google Maps alternatives
Here WeGo provides good mapping solutions for both PCs and mobile devices with their app.
MapHub is also based on OpenStreetMap data and does not collect locations or user IP addresses.
Note: Waze is not an “alternative” as it is owned by Google.
Google Play Store alternative
Currently the best Google Play Store alternative is to use F-Droid and then go through the Yalp store. As explained on the official site, F-Droid is an installable catalog of FOSS (Free and Open Source Software) applications for the Android platform. After you have installed F-Droid, you can then download the Yalp store APK, which allows you to download apps from the Google Play Store directly as APK files.
Chrome OS alternatives
Want to ditch the Chromebook and Chrome OS? Here are a few alternatives:
Linux – Of course, Linux is arguably the best alternative, being a free, open-source operating system with lots of different flavors. With some adjustments, Linux Ubuntu can be run on Chromebooks.
Tails – Tails is a free, privacy-focused operating system based on Linux that routes all traffic through the Tor network.
QubesOS – Recommended by Snowden, free, and also open source.
Of course, the other two big operating-system alternatives are Windows and Apple’s macOS. Windows, particularly Windows 10, is a very bad option for privacy. While slightly better, Apple also collects user data and has partnered with the NSA for surveillance.
Android OS alternatives
The biggest alternative to Android is iOS from Apple. But we’ll skip over that for reasons already mentioned. Here are a few Android OS alternatives:
LineageOS – A free and open-source operating system for phones and tablets based on Android.
Ubuntu Touch – A mobile version of the Ubuntu operating system.
Plasma Mobile – An open source, Linux-based operating system with active development.
Sailfish OS – Another open source, Linux-based mobile OS.
Replicant – A fully free Android distribution with an emphasis on freedom, privacy, and security.
/e/ – This is another open source project with a focus on privacy and security.
Purism is also working on a privacy-focused mobile phone called the Librem 5. It is in production, but not yet available (estimated Q3 2019).
Google Hangouts alternatives
Here are some alternatives to Google Hangouts:
Wire – A great all-around secure messenger, video, and chat app, but somewhat limited on the number of people who can chat together in a group conversation via voice or video.
Signal – A good secure messenger platform from Open Whisper Systems.
Telegram – A longtime secure messenger app, formerly based in Russia, now in Dubai.
Riot – A privacy-focused encrypted chat service that is also open source.
Google Domains alternative
Google Domains is a domain registration service. Here are a few alternatives:
Namecheap – I like Namecheap because all domain purchases now come with free WhoisGuard protection for life, which protects your contact information from third parties. Namecheap also accepts Bitcoin and offers domain registration, hosting, email, SSL certs, and a variety of other products.
Njalla – Njalla is a privacy-focused domain registration service based in Nevis. They offer hosting options, too, and also accept cryptocurrency payments.
OrangeWebsite – OrangeWebsite offers anonymous domain registration services and also accepts cryptocurrency payments, based in Iceland.
Other Google alternatives
Here are more alternatives for various Google products:
Google Forms alternative – JotForm is a free online form builder.
Google Keep alternative – Below are a few different Google Keep alternatives:
Standard Notes is a great alternative for a note-taking service. It is secure, encrypted, and free with apps for Windows, Mac, Linux, iOS, and Android (web-based also available).
Joplin is another great option that is open source and works on Windows, Mac, Linux, iOS, and Android.
Zoho Notebook from Zoho, with apps for desktop and mobile devices.
QOwnNotes is an open source file editor with Nextcloud integration.
Google Fonts alternative – Many websites load Google fonts through Google APIs, but that’s not necessary. One alternative is to use Font Squirrel, which has a large selection of both Google and non-Google fonts that are free to download and use.
Google Voice alternative – JMP.chat (both free and paid)
G Suite alternative – Zoho is probably the best option
Google Firebase alternative – Kuzzle (free and open source)
Apple and Google last week announced a joint contact tracing effort that would use Bluetooth technology to help alert people who have been in close proximity to someone who tested positive for COVID-19. Similar proposals have been put forward by an MIT-associated effort called PACT as well as by multiple European groups.
These proposals differ from the traditional public health technique of “contact tracing” to try to stop the spread of a disease. In place of human interviewers, they would use location or proximity data generated by mobile phones to contact people who may have been exposed.
While some of these systems could offer public health benefits, they may also cause significant risks to privacy, civil rights, and civil liberties. If such systems are to work, there must be widespread, free, and quick testing available. The systems must also be widely adopted, but that will not happen if people do not trust them. For there to be trust, the tool must protect privacy, be voluntary, and store data on an individual’s device rather than in a centralized repository.
A well-designed tool would give people actionable medical information while also protecting privacy and giving users control, but a poorly designed one could pose unnecessary and significant risks to privacy, civil rights, and civil liberties. To help distinguish between the two, the ACLU is publishing a set of technology principles against which developers, the public, and policymakers can judge any contact tracing apps and protocols.
Technology principles that embed privacy by design are one important type of protection. There still need to be strict policies to mitigate against overreach and abuse. These policies, at a minimum, should include:
Voluntariness — Whenever possible, a person testing positive must consent to any data sharing by the app. The decision to use a tracking app should be voluntary and uncoerced. Installation, use, or reporting must not be a precondition for returning to work or school, for example.
Use Limitations — The data should not be used for purposes other than public health — not for advertising and especially not for any punitive or law enforcement purposes.
Minimization — Policies must be in place to ensure that only necessary information is collected and to prohibit any data sharing with anyone outside of the public health effort.
Data Destruction — Both the technology and related policies and procedures should ensure deletion of data when there is no longer a need to hold it.
Transparency — If the government obtains any data, it must be fully transparent about what data it is acquiring, from where, and how it is using that data.
No Mission Creep — Policies must be in place to ensure tracking does not outlive the effort against COVID-19.
These policies, at a minimum, must be in place to ensure that any tracking app will be effective and will accord with civil liberties and human rights.
The Apple/Google proposal, for instance, offers a strong start when measured against these technology principles. Rather than track sensitive location histories, the Apple/Google protocol aims to use Bluetooth technology to record one phone’s proximity to another. Then, if a person tests positive, those logs can be used to notify people who were within Bluetooth range and refer them for testing, recommend self-isolation, or encourage treatment if any exists. Like the similar proposals, it relies on Bluetooth because the location data our cell phones generate is not accurate enough for contact tracing.
Like location histories, however, proximity records can be highly revealing because they expose who we spend time with. To their credit, the Apple/Google developers have considered that privacy problem. Rather than identify the people who own the phones, apps based on the protocol would use identifiers that cannot easily be traced back to phone owners.
As of this writing, the Apple/Google protocol could better address certain important privacy-related questions, however. For example, how does the tool define an epidemiologically relevant “contact”? The public needs to know if it is a good technological approximation of what public health professionals believe is a concern. Otherwise, the tool could be collecting far more personal information than is warranted by the crisis or could cause too many false alarms. And if there is indeed a plan to terminate the program at the conclusion of the pandemic, what criteria are the companies using to indicate when to press the built-in self-destruct button?
Another issue is whether phone users control when to submit their proximity logs for publication to the exposure database. These decisions should be made by the phone user. There may be good reasons why people do not want to upload all their data. User control can help to reduce false positives, for example if a user knows that identified contacts during that time were inaccurate (because they were in a car or wearing protective gear). It would also encourage people whose records include particularly sensitive contact information to at least volunteer the non-sensitive part of their records rather than fail to participate completely.
Also, when users share their proximity logs, what will they reveal? Right now, under the Apple/Google proposal, an infected user publicly shares a set of keys. Each key provides 24 hours of linkable data — a length of time that threatens the promised anonymity of the system. It is too easy to re-identify someone from 24 hours of data and the current proposal makes it impossible for the user to redact selected times during the day. There are other options that would ensure that identifiers published in the exposure database are as difficult as possible to connect to a person’s name or identity.
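The linkability problem described above can be made concrete with a toy model. The sketch below is a simplified illustration, not the actual Apple/Google specification (which derives keys with HKDF and AES rather than plain HMAC); all function names and parameters here are illustrative assumptions. The core idea is the same: each phone holds a secret daily key and broadcasts short-lived identifiers derived from it, so publishing the daily key after a positive test lets any listener link all of that day’s identifiers together.

```python
import hashlib
import hmac
import secrets

def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive the Bluetooth identifier broadcast during one
    ~10-minute interval from the phone's secret daily key.
    (Toy derivation via HMAC-SHA256; the real spec uses HKDF/AES.)"""
    return hmac.new(daily_key, interval.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def identifiers_for_day(daily_key: bytes) -> list:
    """All 144 ten-minute identifiers derivable from one daily key --
    this is exactly why publishing a 24-hour key links a full day."""
    return [rolling_identifier(daily_key, i) for i in range(144)]

# Phone A broadcasts; phone B passively logs what it hears nearby.
key_a = secrets.token_bytes(16)
heard_by_b = {rolling_identifier(key_a, 42), rolling_identifier(key_a, 43)}

# If A later tests positive and publishes key_a to the exposure
# database, B re-derives all of A's identifiers for that day and
# checks its own log for overlap.
exposed = bool(heard_by_b & set(identifiers_for_day(key_a)))
print(exposed)  # True
```

Because one published key regenerates every identifier for the day, anyone who logged those broadcasts alongside times and places can stitch the day back together, which is why shorter key periods or user-selectable redaction would strengthen the anonymity promise.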
Voluntariness is particularly important. A critical mass of people will need to use a contact tracing app for it to be an effective public health mechanism, but some proposals to obtain that level of adoption have been coercive and scary. This is the wrong approach. When people feel that their phones are antagonistic rather than helpful, they will just turn location functions off or turn their phones off entirely. Others could simply leave their phones at home, or acquire and register a second, dummy phone to carry in place of their primary device. Good public health measures will leverage people’s own incentives to report disease, respond to warnings, and help stop the virus’s spread.
In the coming weeks and months, we are going to see a push to reopen the economy — an effort that will rely heavily on public health measures that include contact tracing. Bluetooth proximity tracking may be tried as a part of such efforts, though we don’t know how practical it will prove in real-world deployments. But privacy-by-design principles and the policy safeguards outlined here must be core to that effort if we are to benefit from a proximity tracking tool that can give people actionable medical information while also protecting privacy and giving users control.