Gene Therapy, ‘Medical Chips’ Top Advances Predicted for 2018


This will be the year of the “medical chip.” That’s the prediction of leading health expert Dr. Michael Roizen, chief wellness officer at the prestigious Cleveland Clinic and a regular Newsmax Health contributor.

“We will be using more computer chip technology or silicon chips to improve health whether internally to communicate with an insulin pump or externally like the new sleep apnea device,” he says.

“We will be seeing vast improvement in efficiency in emergency rooms and clinics all over the world, using chips to monitor patients’ vital signs instead of poking and prodding them. A team of physicians in one central monitoring area, like a ‘command central,’ will be able to evaluate hundreds of patients at a time in hospitals — even around the world — using chip technology.”

Chip advances are just one of several up-and-coming technologies and advances in the medical field predicted by the Cleveland Clinic for 2018. Each year, the clinic convenes a panel of physicians and scientists — led by Roizen — to identify the next big things in medicine.

Here are some of the panel’s top picks for 2018, along with other medical innovations we can expect.

Gene therapy: This year, the U.S. Food and Drug Administration is expected to approve a new gene therapy that targets cells in the body through viral “vectors” to restore some visual function in patients with certain forms of retinitis pigmentosa and Leber congenital amaurosis.

The advance marks the latest in a series of breakthroughs in gene therapy that have emerged in recent years, from new medications that target gene defects to diagnostic tests for heart disease, cancer, and Alzheimer’s disease. Genetics are expected to be a major focus of medical research in the year ahead.

Immunotherapy for cancer: Several critical recent developments in using immunotherapy techniques that enlist the body’s own natural defenses to fight cancer are expected to make greater strides in 2018.

“Treatments are being developed using antibodies to disrupt the tumor’s shield, so that your own immune system can attack,” says Dr. Joanne Weidhaas, director, Division of Molecular and Cellular Oncology at UCLA’s David Geffen School of Medicine.

“We are also working on treatments where we take cells out of your immune system, reengineer them and return them to fight the cancer.”

Hybrid insulin delivery system: Hailed as the first artificial pancreas, the hybrid closed-loop insulin delivery system helps make Type 1 diabetes more manageable. It was approved by the FDA in 2016, and the product is expected to launch widely in 2018 as more patients demand the technology.

The new technology replaces the “open loop” system that requires diabetics to use the information from their continuous glucose monitor to determine how much insulin to inject. A chip allows direct communication between the glucose monitor and the insulin pump, keeping blood glucose stable at an unprecedented level, says Roizen.
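The control idea behind closing that loop can be sketched in a few lines. This is a deliberately naive, hypothetical illustration — a bare proportional controller with made-up target and gain values — and not the algorithm any approved device actually uses:

```python
# Hypothetical sketch of one step of a closed-loop insulin controller.
# TARGET_MG_DL and GAIN are illustrative numbers, not clinical values.

TARGET_MG_DL = 120.0   # assumed glucose target (mg/dL)
GAIN = 0.01            # assumed proportional gain (units per mg/dL of error)

def insulin_dose(glucose_mg_dl: float) -> float:
    """Return a basal insulin adjustment for one glucose reading.

    Readings at or below target produce zero, since delivered
    insulin cannot be withdrawn.
    """
    error = glucose_mg_dl - TARGET_MG_DL
    return max(0.0, GAIN * error)

# In a closed loop, the monitor pushes each reading to the pump
# automatically -- no patient arithmetic in between.
readings = [180.0, 150.0, 120.0, 100.0]
doses = [insulin_dose(g) for g in readings]
```

Real hybrid closed-loop systems layer glucose prediction, safety limits, and manual meal boluses on top of anything this simple; the point is only that the monitor-to-pump link removes the patient from the calculation.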

Wearable patch for apnea: Sleep apnea, the most common sleep disturbance in the country, affects 21 million Americans and can lead to high blood pressure, heart disease, and stroke. While continuous positive airway pressure (CPAP) devices are the gold standard of treatment, it is estimated that almost half of sleep apnea patients refuse to wear them.

Companies are now marketing an implant that delivers direct stimulation to open key airways during sleep. It is controlled by a wearable patch that works like a pacemaker and has had positive results in clinical testing.

“These neuromodulation systems are predicted to help deliver a better night’s sleep to more patients and spouses nationwide,” says Roizen.

Telemedicine: Roizen heralds the emergence of telemedicine technology as one of the greatest life-saving advances.

“Removing geographic barriers to health care can result in timelier, more efficient and more optimal outcomes as well as significant cost savings,” he says.

Telemedicine, also known as “distant health technology,” can enable care for both the physically challenged and those most vulnerable to infection. It’s predicted that over 7 million patients will use telemedicine technologies in 2018 — a 19-fold increase from 2013.

Pointer study for Alzheimer’s disease: The Alzheimer’s Association will launch a $20 million, two-year U.S. clinical trial to test the ability of a multidimensional lifestyle intervention to prevent cognitive decline and dementia in 2,500 older adults at risk for cognitive decline.

This important study comes in the wake of the 2017 Alzheimer’s Association International Conference, which pinpointed the importance of lifestyle changes in preventing and delaying Alzheimer’s disease.

Surgical advance: In 2018, the MasSpec pen will undergo clinical trials that may allow surgeons to analyze tissue during surgery to determine on the spot whether tissue is healthy or cancerous, increasing the success of such operations.

Checkmate humanity: In four hours, a robot taught itself chess, then beat a grandmaster with moves never devised in the game’s 1,500-year history — and the implications are terrifying

 

  • Robot taught itself chess in just four hours and learned moves never seen before
  • Oxford academic: AI could go rogue and become too complex for engineers
  • AlphaZero surpassed years of human knowledge in just a few hours of chess 


Will robots one day destroy us? It’s a question that increasingly preoccupies many of our most brilliant scientists and tech entrepreneurs.

For developments in artificial intelligence (AI) — machines programmed to perform tasks that normally require human intelligence — are poised to reshape our workplace and leisure time dramatically.

This year, a leading Oxford academic, Professor Michael Wooldridge, warned MPs that AI could go ‘rogue’, that machines might become so complex that the engineers who create them will no longer understand them or be able to predict how they function.

Yes, it’s a concern, but a ‘historic’ new development makes unpredictable decisions by AI machines the least of our worries. And it all started with a game of chess.

AlphaZero, an AI computer program, this month proved itself to be the world’s greatest ever chess champion, thrashing a previous title-holder, another AI system called Stockfish 8, in a 100-game marathon.

So far, so nerdy, and possibly something only chess devotees or computer geeks might get excited about.

But what’s so frighteningly clever about AlphaZero is that it taught itself chess in just four hours. It was simply given the rules and — crucially — instructed to learn how to win by playing against itself.

In doing so, it assimilated hundreds of years of chess knowledge and tactics — but then went on to surpass all previous human invention in the game.

In those 240 minutes of practice, the program not only taught itself how to play but developed tactics that are unbeatably innovative — and revealed its startling ability to trounce human intelligence. Some of its winning moves had never been recorded in the 1,500 years that human brains have pitted wits across the chequered board.

Employing your King as an attacking piece? Unprecedented. But AlphaZero wielded it with merciless self-taught logic.

Garry Kasparov, the grandmaster who was famously defeated by IBM’s supercomputer Deep Blue in 1997 when it was pre-programmed with the best moves, said: ‘The ability of a machine to surpass centuries of human knowledge . . . is a world-changing tool.’

Simon Williams, the English grandmaster, claimed this was ‘one for the history books’ and joked: ‘On December 6, 2017, AlphaZero took over the chess world . . . eventually solving the game and finally enslaving the human race as pets.’

The wider implications are indeed chilling, as I will explain.

AlphaZero was born in London, the brainchild of a UK company called DeepMind, which develops computer programs that learn for themselves. It was bought by Google for £400 million in 2014.

The complex piece of programming that created AlphaZero can be more simply described as an algorithm — a set of mathematical instructions or rules that can work out answers to problems.

The other term for it is a ‘deep machine learning’ tool. The more data that an AI such as AlphaZero processes, the more it teaches itself — by reprogramming itself with the new knowledge.

In this way, its problem-solving powers become stronger all the time, multiplying its intelligence at speeds and scales far beyond the abilities of a human brain. As a result it is unconstrained by the limits of human thinking, as its success in chess proved.
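The shape of that self-play loop can be shown with a toy. The sketch below is emphatically not DeepMind’s algorithm — AlphaZero combines deep neural networks with Monte Carlo tree search — but a tiny tabular learner that teaches itself a simple game (Nim: take one or two stones, last stone wins) purely by playing against itself, the same learn-from-your-own-games principle in miniature:

```python
import random

def train(pile=10, episodes=5000, seed=0):
    """Learn position values for 1-2 Nim purely by self-play."""
    rng = random.Random(seed)
    # value[n] = estimated chance that the player to move wins with n stones
    value = [0.5] * (pile + 1)
    value[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < 0.2:  # occasional exploration
                m = rng.choice(moves)
            else:                   # exploit: leave the opponent the worst position
                m = min(moves, key=lambda mv: value[n - mv])
            history.append(n)
            n -= m
        # The player who made the last move won; credit states alternately.
        for i, state in enumerate(reversed(history)):
            won = (i % 2 == 0)
            value[state] += 0.1 * ((1.0 if won else 0.0) - value[state])
    return value

def best_move(value, n):
    """Pick the move leaving the opponent the lowest-valued position."""
    return min((m for m in (1, 2) if m <= n), key=lambda mv: value[n - mv])

values = train()
```

After a few thousand self-play games, the table rediscovers the classical theory of this Nim variant: piles that are multiples of three are lost for the player to move, so from four stones the learned policy takes one, and from five it takes two. AlphaZero does the analogous thing at vastly greater scale, with no table large enough to hold chess and a neural network standing in for it.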

But the real purpose of such artificial intelligence goes far beyond playing board games against other boxes of silicon chips. It is already starting to make life-or-death decisions in the high-tech world of cancer diagnosis.

It is being trialled at NHS hospitals in London, including University College London Hospital (UCLH) and Moorfields Eye Hospital.

At UCLH, a system is being developed in which an AI developed by DeepMind will analyse scans of patients with cancers of the head and neck, which afflict more than 11,000 people a year in the UK.

Google experts say the AI should be able to teach itself to read these scans ever more quickly and accurately than any human, so radiation can be more precisely targeted at tumours while minimising damage to healthy tissue in the brain and neck. What currently takes doctors and radiologists four hours could be done in less than an hour.

Meanwhile, at Moorfields, a DeepMind AI will analyse the 3,000 or so high-tech eye scans carried out each week. Currently, only a handful of experts can interpret the results, which may cause delays in treatment. It is believed that AI will be able to identify problem scans faster.

On the surface, it looks like a win-win for patients and the NHS. But there are major issues. The first is privacy — the London hospital trials have involved handing over the scans of more than a million NHS patients to Google.

This is causing alarm among privacy campaigners and academics. Dr Julia Powles, who works on technology law and policy at Cambridge University, says ‘Google is getting a free pass for swift and broad access into the NHS, on the back of unproven promises of efficiency and innovation’.

Dr Powles adds: ‘We do not know — and have no power to find out — what Google and DeepMind are really doing with NHS patient data.’

Google has tried to address the criticisms of its project by declaring that all data access will be subject to NHS monitoring, but this is an organisation that has long had to contend with allegations of prying into people’s data for commercial advantage.

It faces court action in the UK over claims it unlawfully harvested information from 5.4 million UK users by bypassing privacy settings on their iPhones. The group taking action, called Google You Owe Us, alleges Google placed ‘cookies’ (used to collect information from devices to deliver tailored adverts) on users’ phones without their knowledge or permission.

Google has responded: ‘This is not new. We don’t believe it has any merit and we will contest it.’

But the insertion of a super-intelligent AI into NHS decision-making procedures brings an infinitely more worrying concern.

It is an open secret that the NHS effectively rations access to care — through waiting lists, bed numbers and limiting availability of drugs and treatments — as it will never have enough funds to give everyone the service they need.

The harsh reality is that some deserving people lose out.

The harsher alternative is to be coldly rational by deciding who and who not to treat. It would be most cost-effective to exterminate terminally ill or even chronically ill patients, or sickly children. Those funds would be better spent on patients who might be returned to health — and to productive, tax-paying lives.

This is, of course, an approach too repugnant for civilised societies to contemplate. But decision-making AIs such as AlphaZero don’t use compassionate human logic because it gets in the way. (The ‘Zero’ in that program’s name indicates it needs no human input.)

The same sort of computer mind that can conjure up new chess moves might easily decide that the most efficient way to streamline the health service would be to get rid of the vulnerable and needy.

How we keep control of deep learning machines that will soon be employed in every area of our lives is a challenge that may well prove insurmountable. Already top IT experts warn that deep-learning algorithms can run riotously out of control because we don’t know what they’re teaching themselves.

And the programs can develop distinctly worrying ideas. A system developed in America for probation services to predict the risk of parole-seekers reoffending was recently discovered to have quickly become unfairly racially biased.

DeepMind certainly acknowledges the potential for problems. In October it launched a research team to investigate the ethics of AI decision-making. The team has eight full-time staff at the moment, but DeepMind wants to have around 25 in a year’s time.

But, one wonders, are 25 human minds enough to take on the super-intelligent, constantly learning and strategising powers of a monstrously developed AI?

The genie is out of the bottle. In building a machine that may revolutionise healthcare, we have created a system that can out-think us in a trice. It’s a marvel of human ingenuity. But we must somehow ensure that we stay in charge — or it may be checkmate for humanity.

 

 

New FDA-approved “trackable” pill transmits information — it will tattle on you if you don’t take your meds


(Natural News) The U.S. Food and Drug Administration (FDA) recently approved a digital pill embedded with a sensor designed to inform physicians whether their patients are taking their medications. The federal approval marks a growing trend towards addressing drug non-adherence among patients, according to a New York Times report.

The pill, called Abilify MyCite, is a modified version of Otsuka Pharmaceutical’s drug Abilify that is used in the treatment of schizophrenia, bipolar disorder, and depression. It is equipped with a small tracking device developed by Proteus Digital Health. The new tracking pill works by transmitting a message from the sensor to a wearable patch, which then sends data to a mobile app to enable patients to monitor drug ingestion on their smartphone.

Patients who agree to take the tracking pill can sign consent forms that allow their health care providers, and up to four other people including family members, to receive information about the date and time the drugs are ingested. The technology is currently not approved for patients suffering from dementia-related psychosis.

“The FDA supports the development and use of new technology in prescription drugs and is committed to working with companies to understand how technology might benefit patients and prescribers,” says Mitchell Mathis of the FDA’s Center for Drug Evaluation and Research.

A 2014 report by the World Health Organization (WHO) reveals that as much as 50 percent of patients on prescription medications fail to take their drugs as instructed. In fact, psychiatric medicine practitioners note that taking medications between 70 and 80 percent of the time is already considered ‘good’ adherence. Experts add that noncompliance costs as much as $100 billion annually as patients only get sicker and spend more on additional treatments and hospitalizations.

FDA approval may exacerbate paranoia in patients, experts warn

The latest FDA approval has been met with ethical concerns, especially among psychiatrists. The American Psychiatric Association has stressed the importance of balance between psychiatric care and patient privacy. Likewise, experts have cautioned that the new tracking pill may boost drug adherence but could also backfire, due in part to trust issues. Dr. Peter Kramer, a psychiatrist and the author of “Listening to Prozac,” warns that the new technology, though technically ethical, seems coercive.

“Psychotic disorders are often characterized by some degree of paranoia, often reaching delusional proportions, in which patients may believe that outside forces are trying to monitor and control them, including controlling minds or bodies or harm them in some way. The idea that we’re giving this group of patients a pill that, in fact, transmits info about them from inside their body to the people that are involved in their treatment almost seems like a confirmation of the worst paranoias of the worst patients,” says Dr. Paul Appelbaum, director of law, ethics and psychiatry at Columbia University’s psychiatry department.

Huge A.I. Conference Wonders How “Ethical Conscience” Can Be Engineered Into The Living Image Of The Beast – ARTIFICIAL INTELLIGENCE SEEKS AN ETHICAL CONSCIENCE


Leading artificial-intelligence researchers gathered this week for the prestigious Neural Information Processing Systems conference have a new topic on their agenda. Alongside the usual cutting-edge research, panel discussions, and socializing: concern about AI’s power.

The issue was crystallized in a keynote from Microsoft researcher Kate Crawford Tuesday. The conference, which drew nearly 8,000 researchers to Long Beach, California, is deeply technical, swirling in dense clouds of math and algorithms. Crawford’s good-humored talk featured nary an equation and took the form of an ethical wake-up call. She urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations. “Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said.

One such problem occurred in 2015, when Google’s photo service labeled some black people as gorillas. More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said. In addition to her Microsoft role, Crawford is also a cofounder of the AI Now Institute at NYU, which studies social implications of artificial intelligence.

Concern about the potential downsides of more powerful AI is apparent elsewhere at the conference. A tutorial session hosted by Cornell and Berkeley professors in the cavernous main hall Monday focused on building fairness into machine-learning systems, a particular issue as governments increasingly tap AI software. It included a reminder for researchers of legal barriers, such as the Civil Rights and Genetic Information Nondiscrimination acts. One concern is that even when machine-learning systems are programmed to be blind to race or gender, for example, they may use other signals in data, such as the location of a person’s home, as a proxy for them.
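A small synthetic example makes the proxy problem concrete. Everything below is fabricated for illustration: a “model” that never sees the protected group attribute still reproduces a group disparity, because a district code in the historical data correlates perfectly with group membership:

```python
# Illustrative, fabricated example of a proxy variable leaking bias.
# Records are (district, group, approved); the "model" is group-blind.

def rate_by_district(records):
    """Estimate the historical approval rate per district."""
    totals, approvals = {}, {}
    for district, _group, approved in records:
        totals[district] = totals.get(district, 0) + 1
        approvals[district] = approvals.get(district, 0) + int(approved)
    return {d: approvals[d] / totals[d] for d in totals}

# Historical decisions where group drove outcomes, and district happens
# to align exactly with group: group A lives north, group B lives south.
history = (
    [("north", "A", True)] * 90 + [("north", "A", False)] * 10 +
    [("south", "B", True)] * 40 + [("south", "B", False)] * 60
)

rates = rate_by_district(history)
# A group-blind scorer trained on these rates would still favor group A,
# because "district" carries the group signal.
```

Dropping the protected column changed nothing here, which is exactly the tutorial’s concern: fairness has to be checked on outcomes, not merely on which inputs the model is shown.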

Some researchers are presenting techniques that could constrain or audit AI software. On Thursday, Victoria Krakovna, a researcher from Alphabet’s DeepMind research group, is scheduled to give a talk on “AI safety,” a relatively new strand of work concerned with preventing software developing undesirable or surprising behaviors, such as trying to avoid being switched off. Oxford University researchers planned to host an AI-safety themed lunch discussion earlier in the day.

Krakovna’s talk is part of a one-day workshop dedicated to techniques for peering inside machine-learning systems to understand how they work—making them “interpretable,” in the jargon of the field. Many machine-learning systems are now essentially black boxes; their creators know they work, but can’t explain exactly why they make particular decisions. That will present more problems as startups and large companies such as Google apply machine learning in areas such as hiring and healthcare. “In domains like medicine we can’t have these models just be a black box where something goes in and you get something out but don’t know why,” says Maithra Raghu, a machine-learning researcher at Google. On Monday, she presented open-source software developed with colleagues that can reveal what a machine-learning program is paying attention to in data. It may ultimately allow a doctor to see what part of a scan or patient history led an AI assistant to make a particular diagnosis.

Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.

Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups,” she says — meaning code that calls black people gorillas would be less likely to reach the public. Wallach also points to behavioral research showing that diverse teams consider a broader range of ideas when solving problems.

Ultimately, AI researchers alone can’t and shouldn’t decide how society puts their ideas to use. “A lot of decisions about the future of this field cannot be made in the disciplines in which it began,” says Terah Lyons, executive director of Partnership on AI, a nonprofit launched last year by tech companies to mull the societal impacts of AI. (The organization held a board meeting on the sidelines of NIPS this week.) She says companies, civic-society groups, citizens, and governments all need to engage with the issue.

Yet as the army of corporate recruiters at NIPS from companies ranging from Audi to Target shows, AI researchers’ importance in so many spheres gives them unusual power. Towards the end of her talk Tuesday, Crawford suggested civil disobedience could shape the uses of AI. She talked of French engineer Rene Carmille, who sabotaged tabulating machines used by the Nazis to track French Jews. And she told today’s AI engineers to consider the lines they don’t want their technology to cross. “Are there some things we just shouldn’t build?” she asked.

Will THIS Integrate With The Beast’s System? New Tattoos “Made Of Living Cells” – The Ink In This Tattoo Is Made Out Of Living Cells

A.K.A. bacteria.


Tattoos are all the rage at MIT these days: researchers at the university have recently produced prototypes that point to the future of biotech, from electronic tattoos that serve as interfaces to tattoos that change color based on body chemistry.

The latest? A tattoo made of living ink–genetically programmed cells that activate when exposed to different types of stimuli. While right now that means they light up when they come in contact with particular molecular compounds, there are exciting potential applications: the tattoos could be designed so that they respond to environmental pollutants or changes in temperature. That means that sometime in the future, we could all be walking around with living, responsive tattoos that tell us when it’s not safe to go outside because air pollution levels are dangerous, or even just act as a temperature gauge right on your body.

MIT researchers in mechanical engineering and bioengineering recently published a paper on their work in Advanced Materials, where they demonstrate a method to 3D print living cells–combined with a gelatinous material called hydrogel, which keeps the bacteria alive–on top of each other. This layering allowed them to build up the cell “ink” into patterns. Their prototype tattoo looks like a branching tree graph, where different parts of the graphic respond to different types of external stimuli. The next step is to create more patches designed to light up when they come in contact with particular chemicals.

The researchers also programmed some of the living bacterial cells to communicate, so that they light up in response to messages from other cells. “This is very future work, but we expect to be able to print living computational platforms that could be wearable,” graduate student and co-author Hyunwoo Yuk tells MIT News. Using this technique, scientists might be able to build a “living computer,” where layers of cells talk to each other like transistors do in electronics today.

TRACK THIS! OLYMPIANS NOW FACE IMPLANTED CHIPS


The head of an association of Olympic athletes wants to require anyone who participates in the Summer or Winter Games to be implanted with a tracking chip to prevent the use of performance-enhancing drugs.

Mike Miller, CEO of the World Olympians Association, remarked recently at an anti-doping forum in London that athletes should accept digital implants or be barred from Olympic-level competition, according to the Guardian of London.

“Some people say it’s an invasion of privacy,” Miller said. “Well, sport is a club and people don’t have to join the club if they don’t want to, if they can’t follow the rules.”

Consumer privacy expert Liz McIntyre, co-author of “Spychips: How Major Corporations and Government Plan to Track Your Every Purchase and Watch Your Every Move,” called Miller’s proposal “outrageous,” insisting “no human being should ever be forced to accept a tracking implant to fully participate in society.”

“When someone in Miller’s position has the audacity to suggest that RFID dog tracking chips are an acceptable prerequisite for participation in any endeavor, it’s time for action,” she said.

Miller didn’t specify what kind of microchip technology he was considering. McIntyre pointed out there are RFID chips with sensors that could detect the health status of a host or substances in the blood.

“A chip/biosensor combo could theoretically monitor an athlete’s blood 24/7 and report aberrations when queried by a nearby reader device – perhaps a phone with a built-in reader, for example,” she said.

And chipping athletes likely would be just a first step, she warned.

“We need legislation that guarantees citizens the right to reject tracking implants without fear of losing the right to work or enjoy other pursuits,” said McIntyre.

She said Miller’s “outrageous recommendation made me realize that we are running out of time.”

She said it’s the reason she formed Citizens Against Marketing, Chipping and Tracking, or CAMCAT.

“Lawmakers need to act now to protect their constituents,” she said. “CAMCAT will work to make that happen.”

McIntyre is co-author with Katherine Albrecht of “Spychips.” She works as a consultant for StartPage.com and StartMail.com, privacy-based services to help protect consumers against surveillance.

Miller’s organization works with the 48 national Olympians associations and 100,000 living Olympians, although he said he was not speaking on behalf of the organization.

“I’m gauging reaction from people, but we do need to think of new ways to protect clean sport. I’m no Steve Jobs, but we need to spend the money and use the latest technology,” he said.

WND reported earlier this year Nevada was heading toward becoming the fifth state to pass a law banning the implanting of RFID chips in people without their permission.

At least four other states – Wisconsin, Oklahoma, California and North Dakota – had previously passed laws against involuntary chipping of human beings.

McIntyre said at that time there had already been incidents in Florida in which nursing home staff tried to forcibly chip Alzheimer’s patients, but relatives caught wind of it and stopped it.

In “Spychips,” McIntyre and Albrecht included a whole chapter on the use of chips in hospitals and the health-care industry.




IMPLANTED MICROCHIP TO REPLACE CREDIT CARDS, CAR KEYS

Swedes already using biometric chip instead of train tickets


A microchip embedded under the skin will replace credit cards and keys, according to Stephen Ray, who has already overseen a program for Sweden’s largest state-owned train operator that allows customers to scan their chips instead of using tickets.

BBC News showcased the system in which Swedes are able to have their embedded chip scanned by a conductor who uses an app to match up their chip membership number with a purchased ticket.
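At its core, the matching step the conductor’s app performs is a lookup from the chip’s membership number to a purchased ticket. A hypothetical sketch — all identifiers and data here are invented for illustration:

```python
# Hypothetical sketch of matching a scanned chip ID to a purchased ticket.
# Membership numbers and ticket references are fabricated examples.

tickets = {
    "SJ-1001": "Stockholm-Gothenburg 08:15",
    "SJ-1002": "Gothenburg-Malmo 11:30",
}

def check_chip(member_id: str):
    """Return the ticket bought under this membership number, or None."""
    return tickets.get(member_id)
```

A real deployment would authenticate the reader and query a ticketing backend rather than a local table; the chip itself carries only the membership number, which is why a mismatched lookup (as with the LinkedIn mix-up reported below) surfaces the wrong record rather than a forged ticket.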

Around 3,000 people in Sweden have already had a chip embedded in their hand in order to access secure areas of buildings.

SJ, the first travel company in the world to implement the system, is northern Europe’s largest train operator. The company initially expects around 200 people to join the program.

Despite Ray dismissing concerns about privacy, when the program was launched some customers complained that their LinkedIn profiles were appearing instead of their train tickets when conductors scanned their biometric chip.

“You could use the microchip implant to replace a lot of stuff: your credit cards, the keys to your house, the keys to your car,” Ray told the BBC.

His sentiments echo the tone of an NBC News report last year which asserted the microchipping of children will happen “sooner rather than later” and that Americans will eventually accept the process as something just as normal as the barcode.

“It’s not a matter of if it will happen, but when,” electronics expert Stuart Lipoff told the network.

Concerns about the embedded microchip representing the “mark of the beast” mentioned in the bible have been expressed by many on the Christian right for over two decades.

Revelation 13:16-17 talks about every man receiving “a mark in their right hand, or in their forehead,” without which they are not able to “buy or sell”.

Strathspey Crown LLC Announces Issuance of US Patent of the First Implantable Intraocular Lens (IOL) with a Video Camera and Wireless Transmission Capability


07/12/2017 | 09:01am EDT

 

NEWPORT BEACH, Calif., July 12, 2017 /PRNewswire/ — Strathspey Crown LLC, a lifestyle healthcare company focused in ophthalmology, medical aesthetic and elective technologies and procedures, today announced that the United States Patent and Trademark Office has issued U.S. Patent No. 9,662,199 covering an implantable intraocular lens with an optic (including accommodating, multifocal and phakic configurations), a camera and an LED display, and a communications module that wirelessly transmits and receives information from an external device (e.g., a PDA).

Robert Edward Grant, Founder and Chairman of Strathspey Crown LLC commented, “Video cameras are now a standard feature of smart phone technology and wearable cameras have become popularized by companies like Google and Snap in recent years. This patent represents a significant step forward in the rapidly growing sector of human cyborg technology. The eye, as a transparent medium for light, is ideal for advanced and rechargeable implantables that enable video capture of all of life’s experiences. Our broader vision is to develop ground-breaking medical-grade ocular smart implantables that integrate cellular, WIFI and 802.11 transmissions in an elegant cognitive interface that we believe will enhance human intelligence, augment perceived reality, and digitally capture experiences and individual memories. We look forward to several continuations and expansions on this important intellectual property portfolio.”

Grant further commented, “Although Samsung, Sony and Google have all recently filed patent applications related to the same field, Strathspey Crown is thus far the only company to hold an issued patent in this promising ocular smart implant category. Our first camera-integrated acrylic IOLs will be completed in 2018, upon which we plan to pursue an FDA Investigational Device Exemption (IDE) and subsequent Pre-Market Approval (PMA) and related clinical trial.”

The Equifax Breach – Sinister Implications


Could the Equifax breach be the precursor for something far more sinister?
The recent Equifax revelation that the personal information of 143 million Americans has been stolen by hackers is a catastrophic event.
In perspective, that is 44 percent of U.S. citizens. (Ponder that for a moment.)
Right now, the Social Security numbers, birthdates, addresses, and full credit histories of almost half the country are in the hands of criminals.
In my household, my husband and my teenage son are among that number. My child will face the threat of identity theft for the rest of his life… unless some fundamental change is made to how identity and credit are verified and controlled, that is.
What better rationale and logical reason to put forth than to “protect” people from identity theft?  Simply have this invisible tattoo or chip on your hand… then you can buy and sell in complete safety.
The implant technology is there; it simply faces resistance from the public in what a recent Fortune article called the “Ick Factor.” What is more likely is some sort of biometric security mechanism that faces none of the public angst that implants do. After all, how many iPhone users scan their thumbprint dozens of times a day?
Imagine with me for a moment, a world which is thrown into financial chaos with the disappearance of millions of people.  Identities have already been compromised and with millions gone, there is no conceivable way to know if the person standing before the bank teller is the real Mary Smith or if Mary Smith is one of the millions who has disappeared.
It just doesn’t even seem far-fetched any longer.