• CRYPTO-GRAM, January 15, 2024

    From Sean Rima@21:1/229 to All on Monday, April 15, 2024 12:04:26
    Crypto-Gram, January 15, 2024

    A monthly newsletter about cybersecurity and related topics.

    Crypto-Gram
    January 15, 2024

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************

    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    A Robot the Size of the World

    Police Get Medical Records without a Warrant

    OpenAI Is Not Training on Your Dropbox Documents -- Today

    GCHQ Christmas Codebreaking Challenge

    Cyberattack on Ukraine’s Kyivstar Seems to Be Russian Hacktivists

    Data Exfiltration Using Indirect Prompt Injection

    Ben Rothke’s Review of A Hacker’s Mind

    Google Stops Collecting Location Data from Maps

    New iPhone Security Features to Protect Stolen Devices

    AI and Lossy Bottlenecks

    AI Is Scarily Good at Guessing the Location of Random Photos

    TikTok Editorial Analysis

    Facial Recognition Systems in the US

    New iPhone Exploit Uses Four Zero-Days

    Improving Shor’s Algorithm

    Second Interdisciplinary Workshop on Reimagining Democracy

    PIN-Stealing Android Malware

    Facial Scanning by Burger King in Brazil

    Pharmacies Giving Patient Records to Police without Warrants

    On IoT Devices and Software Liability

    Upcoming Speaking Engagements

    ** *** ***** ******* *********** *************

    A Robot the Size of the World

    [2023.12.15] In 2016, I wrote about an Internet that affected the world in a direct, physical manner. It was connected to your smartphone. It had sensors like cameras and thermostats. It had actuators: Drones, autonomous cars. And it had smarts in the middle, using sensor data to figure out what to do and then actually do it. This was the Internet of Things (IoT).

    The classical definition of a robot is something that senses, thinks, and acts -- that’s today’s Internet. We’ve been building a world-sized robot without even realizing it.

    In 2023, we upgraded the “thinking” part with large-language models (LLMs) like GPT. ChatGPT both surprised and amazed the world with its ability to understand human language and generate credible, on-topic, humanlike responses. But what these are really good at is interacting with systems formerly designed for humans. Their accuracy will get better, and they will be used to replace actual humans.

    In 2024, we’re going to start connecting those LLMs and other AI systems to both sensors and actuators. In other words, they will be connected to the larger world, through APIs. They will receive direct inputs from our environment, in all the forms I thought about in 2016. And they will increasingly control our environment, through IoT devices and beyond.

    It will start small: Summarizing emails and writing limited responses. Arguing with customer service -- on chat -- for service changes and refunds. Making travel reservations.

    But these AIs will interact with the physical world as well, first controlling robots and then having those robots as part of them. Your AI-driven thermostat will turn the heat and air conditioning on based also on who’s in what room, their preferences, and where they are likely to go next. It will negotiate with the power company for the cheapest rates by scheduling usage of high-energy appliances or car recharging.

    This is the easy stuff. The real changes will happen when these AIs group together in a larger intelligence: A vast network of power generation and power consumption with each building just a node, like an ant colony or a human army.

    Future industrial-control systems will include traditional factory robots, as well as AI systems to schedule their operation. It will automatically order supplies, as well as coordinate final product shipping. The AI will manage its own finances, interacting with other systems in the banking world. It will call on humans as needed: to repair individual subsystems or to do things too specialized for the robots.

    Consider driverless cars. Individual vehicles have sensors, of course, but they also make use of sensors embedded in the roads and on poles. The real processing is done in the cloud, by a centralized system that is piloting all the vehicles. This allows individual cars to coordinate their movement for more efficiency: braking in synchronization, for example.

    These are robots, but not the sort familiar from movies and television. We think of robots as discrete metal objects, with sensors and actuators on their surface, and processing logic inside. But our new robots are different. Their sensors and actuators are distributed in the environment. Their processing is somewhere else. They’re a network of individual units that become a robot only in aggregate.

    This turns our notion of security on its head. If massive, decentralized AIs run everything, then who controls those AIs matters a lot. It’s as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.

    This future requires us to see ourselves less as individuals, and more as parts of larger systems. It’s AI as nature, as Gaia -- everything as one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (And also with science-fiction dystopias, like Skynet from the Terminator movies.) It will require a rethinking of much of our assumptions about governance and economy. That’s not going to happen soon, but in 2024 we will see the first steps along that path.

    This essay previously appeared in Wired.

    ** *** ***** ******* *********** *************

    Police Get Medical Records without a Warrant

    [2023.12.18] More unconstrained surveillance:

    Lawmakers noted the pharmacies’ policies for releasing medical records in a letter dated Tuesday to the Department of Health and Human Services (HHS) Secretary Xavier Becerra. The letter -- signed by Sen. Ron Wyden (D-Ore.), Rep. Pramila Jayapal (D-Wash.), and Rep. Sara Jacobs (D-Calif.) -- said their investigation pulled information from briefings with eight big prescription drug suppliers.

    They include the seven largest pharmacy chains in the country: CVS Health, Walgreens Boots Alliance, Cigna, Optum Rx, Walmart Stores, Inc., The Kroger Company, and Rite Aid Corporation. The lawmakers also spoke with Amazon Pharmacy.

    All eight of the pharmacies said they do not require law enforcement to have a warrant prior to sharing private and sensitive medical records, which can include the prescription drugs a person used or uses and their medical conditions. Instead, all the pharmacies hand over such information with nothing more than a subpoena, which can be issued by government agencies and does not require review or approval by a judge.

    Three pharmacies -- CVS Health, The Kroger Company, and Rite Aid Corporation -- told lawmakers they didn’t even require their pharmacy staff to consult legal professionals before responding to law enforcement requests at pharmacy counters. According to the lawmakers, CVS, Kroger, and Rite Aid said that “their pharmacy staff face extreme pressure to immediately respond to law enforcement demands and, as such, the companies instruct their staff to process those requests in store.”

    The rest of the pharmacies -- Amazon, Cigna, Optum Rx, Walmart, and Walgreens Boots Alliance -- at least require that law enforcement requests be reviewed by legal professionals before pharmacists respond. But, only Amazon said it had a policy of notifying customers of law enforcement demands for pharmacy records unless there were legal prohibitions to doing so, such as a gag order.

    ** *** ***** ******* *********** *************

    OpenAI Is Not Training on Your Dropbox Documents -- Today

    [2023.12.19] There’s a rumor flying around the Internet that OpenAI is training foundation models on your Dropbox documents.

    Here’s CNBC. Here’s Boing Boing. Some articles are more nuanced, but there’s still a lot of confusion.

    It seems not to be true. Dropbox isn’t sharing all of your documents with OpenAI. But here’s the problem: we don’t trust OpenAI. We don’t trust tech corporations. And -- to be fair -- corporations in general. We have no reason to.

    Simon Willison nails it in a tweet:

    “OpenAI are training on every piece of data they see, even when they say they aren’t” is the new “Facebook are showing you ads based on overhearing everything you say through your phone’s microphone.”

    Willison expands this in a blog post, which I strongly recommend reading in its entirety. His point is that these companies have lost our trust:

    Trust is really important. Companies lying about what they do with your privacy is a very serious allegation.

    A society where big companies tell blatant lies about how they are handling our data -- and get away with it without consequences -- is a very unhealthy society.

    A key role of government is to prevent this from happening. If OpenAI are training on data that they said they wouldn’t train on, or if Facebook are spying on us through our phone’s microphones, they should be hauled in front of regulators and/or sued into the ground.

    If we believe that they are doing this without consequence, and have been getting away with it for years, our intolerance for corporate misbehavior becomes a victim as well. We risk letting companies get away with real misconduct because we incorrectly believed in conspiracy theories.

    Privacy is important, and very easily misunderstood. People both overestimate and underestimate what companies are doing, and what’s possible. This isn’t helped by the fact that AI technology means the scope of what’s possible is changing at a rate that’s hard to appreciate even if you’re deeply aware of the space.

    If we want to protect our privacy, we need to understand what’s going on. More importantly, we need to be able to trust companies to honestly and clearly explain what they are doing with our data.

    On a personal level we risk losing out on useful tools. How many people cancelled their Dropbox accounts in the last 48 hours? How many more turned off that AI toggle, ruling out ever evaluating if those features were useful for them or not?

    And while Dropbox is not sending your data to OpenAI today, it could do so tomorrow with a simple change of its terms of service. So could your bank, or credit card company, your phone company, or any other company that owns your data. Any of the tens of thousands of data brokers could be sending your data to train AI models right now, without your knowledge or consent. (At least, in the US. Hooray for the EU and GDPR.)

    Or, as Thomas Claburn wrote:

    “Your info won’t be harvested for training” is the new “Your private chatter won’t be used for ads.”

    These foundation models want our data. The corporations that have our data want the money. It’s only a matter of time, unless we get serious government privacy regulation.

    ** *** ***** ******* *********** *************

    GCHQ Christmas Codebreaking Challenge

    [2023.12.20] Looks like fun.

    Details here.

    ** *** ***** ******* *********** *************

    Cyberattack on Ukraine’s Kyivstar Seems to Be Russian Hacktivists

    [2023.12.21] The Solntsepek group has taken credit for the attack. They’re linked to the Russian military, so it’s unclear whether the attack was government directed or freelance.

    This is one of the most significant cyberattacks since Russia invaded in February 2022.

    ** *** ***** ******* *********** *************

    Data Exfiltration Using Indirect Prompt Injection

    [2023.12.22] Interesting attack on a LLM:

    In Writer, users can enter a ChatGPT-like session to edit or create their documents. In this chat session, the LLM can retrieve information from sources on the web to assist users in creation of their documents. We show that attackers can prepare websites that, when a user adds them as a source, manipulate the LLM into sending private information to the attacker or perform other malicious activities.

    The data theft can include documents the user has uploaded, their chat history or potentially specific private information the chat model can convince the user to divulge at the attacker’s behest.

    ** *** ***** ******* *********** *************

    Ben Rothke’s Review of A Hacker’s Mind

    [2023.12.22] Ben Rothke chose A Hacker’s Mind as “the best information security book of 2023.”

    ** *** ***** ******* *********** *************

    Google Stops Collecting Location Data from Maps

    [2023.12.26] Google Maps now stores location data locally on your device, meaning that Google no longer has that data to turn over to the police.

    ** *** ***** ******* *********** *************

    New iPhone Security Features to Protect Stolen Devices

    [2023.12.27] Apple is rolling out a new “Stolen Device Protection” feature that seems well thought out:

    When Stolen Device Protection is turned on, Face ID or Touch ID authentication is required for additional actions, including viewing passwords or passkeys stored in iCloud Keychain, applying for a new Apple Card, turning off Lost Mode, erasing all content and settings, using payment methods saved in Safari, and more. No passcode fallback is available in the event that the user is unable to complete Face ID or Touch ID authentication.

    For especially sensitive actions, including changing the password of the Apple ID account associated with the iPhone, the feature adds a security delay on top of biometric authentication. In these cases, the user must authenticate with Face ID or Touch ID, wait one hour, and authenticate with Face ID or Touch ID again. However, Apple said there will be no delay when the iPhone is in familiar locations, such as at home or work.

    More details at the link.

    ** *** ***** ******* *********** *************

    AI and Lossy Bottlenecks

    [2023.12.28] Artificial intelligence is poised to upend much of society, removing human limitations inherent in many systems. One such limitation is information and logistical bottlenecks in decision-making.

    Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. Artificial intelligence has the potential to remove that limitation. And it has the potential to drastically change how democracy functions.

    AI researcher Tantum Collins and I, a public-interest technology scholar, call this AI overcoming “lossy bottlenecks.” Lossy is a term from information theory that refers to imperfect communications channels -- that is, channels that lose information.

    Multiple-choice practicality

    Imagine your next sit-down dinner and being able to have a long conversation with a chef about your meal. You could end up with a bespoke dinner based on your desires, the chef’s abilities and the available ingredients. This is possible if you are cooking at home or hosted by accommodating friends.

    But it is infeasible at your average restaurant: The limitations of the kitchen, the way supplies have to be ordered and the realities of restaurant cooking make this kind of rich interaction between diner and chef impossible. You get a menu of a few dozen standardized options, with the possibility of some modifications around the edges.

    That’s a lossy bottleneck. Your wants and desires are rich and multifaceted. The array of culinary outcomes are equally rich and multifaceted. But there’s no scalable way to connect the two. People are forced to use multiple-choice systems like menus to simplify decision-making, and they lose so much information in the process.

    People are so used to these bottlenecks that we don’t even notice them. And when we do, we tend to assume they are the inevitable cost of scale and efficiency. And they are. Or, at least, they were.

    The possibilities

    Artificial intelligence has the potential to overcome this limitation. By storing rich representations of people’s preferences and histories on the demand side, along with equally rich representations of capabilities, costs and creative possibilities on the supply side, AI systems enable complex customization at scale and low cost. Imagine walking into a restaurant and knowing that the kitchen has already started work on a meal optimized for your tastes, or being presented with a personalized list of choices.

    There have been some early attempts at this. People have used ChatGPT to design meals based on dietary restrictions and what they have in the fridge. It’s still early days for these technologies, but once they get working, the possibilities are nearly endless. Lossy bottlenecks are everywhere.

    Take labor markets. Employers look to grades, diplomas and certifications to gauge candidates’ suitability for roles. These are a very coarse representation of a job candidate’s abilities. An AI system with access to, for example, a student’s coursework, exams and teacher feedback as well as detailed information about possible jobs could provide much richer assessments of which employment matches do and don’t make sense.

    Or apparel. People with money for tailors and time for fittings can get clothes made from scratch, but most of us are limited to mass-produced options. AI could hugely reduce the costs of customization by learning your style, taking measurements based on photos, generating designs that match your taste and using available materials. It would then convert your selections into a series of production instructions and place an order to an AI-enabled robotic production line.

    Or software. Today’s computer programs typically use one-size-fits-all interfaces, with only minor room for modification, but individuals have widely varying needs and working styles. AI systems that observe each user’s interaction styles and know what that person wants out of a given piece of software could take this personalization far deeper, completely redesigning interfaces to suit individual needs.

    Removing democracy’s bottleneck

    These examples are all transformative, but the lossy bottleneck that has the largest effect on society is in politics. It’s the same problem as the restaurant. As a complicated citizen, your policy positions are probably nuanced, trading off between different options and their effects. You care about some issues more than others and some implementations more than others.

    If you had the knowledge and time, you could engage in the deliberative process and help create better laws than exist today. But you don’t. And, anyway, society can’t hold policy debates involving hundreds of millions of people. So you go to the ballot box and choose between two -- or if you are lucky, four or five -- individual representatives or political parties.

    Imagine a system where AI removes this lossy bottleneck. Instead of trying to cram your preferences to fit into the available options, imagine conveying your political preferences in detail to an AI system that would directly advocate for specific policies on your behalf. This could revolutionize democracy.

    One way is by enhancing voter representation. By capturing the nuances of each individual’s political preferences in a way that traditional voting systems can’t, this system could lead to policies that better reflect the desires of the electorate. For example, you could have an AI device in your pocket -- your future phone, for instance -- that knows your views and wishes and continually votes in your name on an otherwise overwhelming number of issues large and small.

    Combined with AI systems that personalize political education, it could encourage more people to participate in the democratic process and increase political engagement. And it could eliminate the problems stemming from elected representatives who reflect only the views of the majority that elected them -- and sometimes not even them.

    On the other hand, the privacy concerns resulting from allowing an AI such intimate access to personal data are considerable. And it’s important to avoid the pitfall of just allowing the AIs to figure out what to do: Human deliberation is crucial to a functioning democracy.

    Also, there is no clear transition path from the representative democracies of today to these AI-enhanced direct democracies of tomorrow. And, of course, this is still science fiction.

    First steps

    These technologies are likely to be used first in other, less politically charged, domains. Recommendation systems for digital media have steadily reduced their reliance on traditional intermediaries. Radio stations are like menu items: Regardless of how nuanced your taste in music is, you have to pick from a handful of options. Early digital platforms were only a little better: “This person likes jazz, so we’ll suggest more jazz.”

    Today’s streaming platforms use listener histories and a broad set of features describing each track to provide each user with personalized music recommendations. Similar systems suggest academic papers with far greater granularity than a subscription to a given journal, and movies based on more nuanced analysis than simply deferring to genres.

    A world without artificial bottlenecks comes with risks -- loss of jobs in the bottlenecks, for example -- but it also has the potential to free people from the straitjackets that have long constrained large-scale human decision-making. In some cases -- restaurants, for example -- the impact on most people might be minor. But in others, like politics and hiring, the effects could be profound.

    This essay originally appeared in The Conversation.

    ** *** ***** ******* *********** *************

    AI Is Scarily Good at Guessing the Location of Random Photos

    [2023.12.29] Wow:

    To test PIGEON’s performance, I gave it five personal photos from a trip I took across America years ago, none of which have been published online. Some photos were snapped in cities, but a few were taken in places nowhere near roads or other easily recognizable landmarks.

    That didn’t seem to matter much.

    It guessed a campsite in Yellowstone to within around 35 miles of the actual location. The program placed another photo, taken on a street in San Francisco, to within a few city blocks.

    Not every photo was an easy match: The program mistakenly linked one photo taken on the front range of Wyoming to a spot along the front range of Colorado, more than a hundred miles away. And it guessed that a picture of the Snake River Canyon in Idaho was of the Kawarau Gorge in New Zealand (in fairness, the two landscapes look remarkably similar).

    This kind of thing will likely get better. And even if it is not perfect, it has some pretty profound privacy implications (but so did geolocation in the EXIF data that accompanies digital photos).

    ** *** ***** ******* *********** *************

    TikTok Editorial Analysis

    [2024.01.02] TikTok seems to be skewing things in the interests of the Chinese Communist Party. (This is a serious analysis, and the methodology looks sound.)

    Conclusion: Substantial Differences in Hashtag Ratios Raise

    Concerns about TikTok’s Impartiality

    Given the research above, we assess a strong possibility that content on TikTok is either amplified or suppressed based on its alignment with the interests of the Chinese Government. Future research should aim towards a more comprehensive analysis to determine the potential influence of TikTok on popular public narratives. This research should determine if and how TikTok might be utilized for furthering national/regional or international objectives of the Chinese Government.

    EDITED TO ADD (1/13): Blog readers have complaints about the methodology.

    ** *** ***** ******* *********** *************

    Facial Recognition Systems in the US

    [2024.01.03] A helpful summary of which US retail stores are using facial recognition, thinking about using it, or currently not planning on using it. (This, of course, can all change without notice.)

    Three years ago, I wrote that campaigns to ban facial recognition are too narrow. The problem here is identification, correlation, and then discrimination. There’s no difference whether the identification technology is facial recognition, the MAC address of our phones, gait recognition, license plate recognition, or anything else. Facial recognition is just the easiest technology right now.

    ** *** ***** ******* *********** *************

    New iPhone Exploit Uses Four Zero-Days

    [2024.01.04] Kaspersky researchers are detailing “an attack that over four years backdoored dozens if not thousands of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky.” It’s a zero-click exploit that makes use of four iPhone zero-days.

    The most intriguing new detail is the targeting of the heretofore-unknown hardware feature, which proved to be pivotal to the Operation Triangulation campaign. A zero-day in the feature allowed the attackers to bypass advanced hardware-based memory protections designed to safeguard device system integrity even after an attacker gained the ability to tamper with memory of the underlying kernel. On most other platforms, once attackers successfully exploit a kernel vulnerability they have full control of the compromised system.

    On Apple devices equipped with these protections, such attackers are still unable to perform key post-exploitation techniques such as injecting malicious code into other processes, or modifying kernel code or sensitive kernel data. This powerful protection was bypassed by exploiting a vulnerability in the secret function. The protection, which has rarely been defeated in exploits found to date, is also present in Apple’s M1 and M2 CPUs.

    The details are staggering:

    Here is a quick rundown of this 0-click iMessage attack, which used four zero-days and was designed to work on iOS versions up to iOS 16.2.

    Attackers send a malicious iMessage attachment, which the application processes without showing any signs to the user.

    This attachment exploits the remote code execution vulnerability CVE-2023-41990 in the undocumented, Apple-only ADJUST TrueType font instruction. This instruction had existed since the early nineties before a patch removed it.

    It uses return/jump oriented programming and multiple stages written in the NSExpression/NSPredicate query language, patching the JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.

    This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still, it has around 11,000 lines of code, which are mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.

    It exploits the JavaScriptCore debugging feature DollarVM ($vm) to gain the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.

    It was designed to support both old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of recent models.

    It uses the integer overflow vulnerability CVE-2023-32434 in XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to obtain read/write access to the entire physical memory of the device at user level.

    It uses hardware memory-mapped I/O (MMIO) registers to bypass the Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.

    After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device including running spyware, but the attackers chose to: (a) launch the IMAgent process and inject a payload that clears the exploitation artefacts from the device; (b) run a Safari process in invisible mode and forward it to a web page with the next stage.

    The web page has a script that verifies the victim and, if the checks pass, receives the next stage: the Safari exploit.

    The Safari exploit uses CVE-2023-32435 to execute a shellcode.

    The shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities: CVE-2023-32434 and CVE-2023-38606. It is also massive in terms of size and functionality, but completely different from the kernel exploit written in JavaScript. Certain parts related to exploitation of the above-mentioned vulnerabilities are all that the two share. Still, most of its code is also dedicated to parsing and manipulation of the kernel memory. It contains various post-exploitation utilities, which are mostly unused.

    The exploit obtains root privileges and proceeds to execute other stages, which load spyware. We covered these stages in our previous posts.

    This is nation-state stuff, absolutely crazy in its sophistication. Kaspersky discovered it, so there’s no speculation as to the attacker.

    ** *** ***** ******* *********** *************

    Improving Shor’s Algorithm

    [2024.01.05] We don’t have a useful quantum computer yet, but we do have quantum algorithms. Shor’s algorithm has the potential to factor large numbers faster than otherwise possible, which -- if the run times are actually feasible -- could break both the RSA and Diffie-Hellman public-key algorithms.

    Now, computer scientist Oded Regev has a significant speed-up to Shor’s algorithm, at the cost of more storage.

    Details are in this article. Here’s the result:

    The improvement was profound. The number of elementary logical steps in the quantum part of Regev’s algorithm is proportional to n1.5 when factoring an n-bit number, rather than n2 as in Shor’s algorithm. The algorithm repeats that quantum part a few dozen times and combines the results to map out a high-dimensional lattice, from which it can deduce the period and factor the number. So the algorithm as a whole may not run faster, but speeding up the quantum part by reducing the number of required steps could make it easier to put it into practice.

    Of course, the time it takes to run a quantum algorithm is just one of several considerations. Equally important is the number of qubits required, which is analogous to the memory required to store intermediate values during an ordinary classical computation. The number of qubits that Shor’s algorithm requires to factor an n-bit number is proportional to n, while Regev’s algorithm in its original form requires a number of qubits proportional to n1.5 -- a big difference for 2,048-bit numbers.

    Again, this is all still theoretical. But now it’s theoretically faster.

    Oded Regev’s paper.

    This is me from 2018 on the potential for quantum cryptanalysis. I still believe now what I wrote then.

    ** *** ***** ******* *********** *************

    Second Interdisciplinary Workshop on Reimagining Democracy

    [2024.01.08] Last month, I convened the Second Interdisciplinary Workshop on Reimagining Democracy (IWORD 2023) at the Harvard Kennedy School Ash Center. As with IWORD 2022, the goal was to bring together a diverse set of thinkers and practitioners to talk about how democracy might be reimagined for the twenty-first century.

    My thinking is very broad here. Modern democracy was invented in the mid-eighteenth century, using mid-eighteenth-century technology. Were democracy to be invented from scratch today, with today’s technologies, it would look very different. Representation would look different. Adjudication would look different. Resource allocation and reallocation would look different. Everything would look different, because we would have much more powerful technology to build on and no legacy systems to worry about.

    Such speculation is not realistic, of course, but it’s still valuable. Everyone seems to be talking about ways to reform our existing systems. That’s critically important, but it’s also myopic. It represents a hill-climbing strategy of continuous improvements. We also need to think about discontinuous changes that you can’t easily get to from here; otherwise, we’ll be forever stuck at local maxima.

    I wrote about the philosophy more in this essay about IWORD 2022. IWORD 2023 was equally fantastic, easily the most intellectually stimulating two days of my year. The event is like that; the format results in a firehose of interesting.

    Summaries of all the talks are in the first set of comments below. (You can read a similar summary of IWORD 2022 here.) Thank you to the Ash Center and the Belfer Center at Harvard Kennedy School, and the Knight Foundation, for the funding to make this possible.

    Next year, I hope to take the workshop out of Harvard and somewhere else. I would like it to live on for as long as it is valuable.

    Now, I really want to explain the format in detail, because it works so well.

    I used a workshop format I and others invented for another interdisciplinary workshop: Security and Human Behavior, or SHB. It’s a two-day event. Each day has four ninety-minute panels. Each panel has six speakers, each of whom presents for ten minutes. Then there are thirty minutes of questions and comments from the audience. Breaks and meals round out the day.

    The workshop is limited to forty-eight attendees, which means that everyone is on a panel. This is important: every attendee is a speaker. And attendees commit to being there for the whole workshop; no giving your talk and then leaving. This makes for a very collaborative environment. The short presentations means that no one can get too deep into details or jargon. This is important for an interdisciplinary event. Everyone is interesting for ten minutes.

    The final piece of the workshop is the social events. We have a night-before opening reception, a conference dinner after the first day, and a final closing reception after the second day. Good food is essential.

    Honestly, it’s great but it’s also it’s exhausting. Everybody is interesting for ten minutes. There’s no down time to zone out or check email. And even though a shorter event would be easier to deal with, the numbers all fit together in a way that’s hard to change. A one-day event means only twenty-four attendees/speakers, and that’s not a critical mass. More people per panel doesn’t work. Not everyone speaking creates a speaker/audience hierarchy, which I want to avoid. And a three-day, slower-paced event is too long. I’ve thought about it long and hard; the format I’m using is optimal.

    ** *** ***** ******* *********** *************

    PIN-Stealing Android Malware

    [2024.01.09] This is an old piece of malware -- the Chameleon Android banking Trojan -- that now disables biometric authentication in order to steal the PIN:

    The second notable new feature is the ability to interrupt biometric operations on the device, like fingerprint and face unlock, by using the Accessibility service to force a fallback to PIN or password authentication.

    The malware captures any PINs and passwords the victim enters to unlock their device and can later use them to unlock the device at will to perform malicious activities hidden from view.

    ** *** ***** ******* *********** *************

    Facial Scanning by Burger King in Brazil

    [2024.01.10] In 2000, I wrote: “If McDonald’s offered three free Big Macs for a DNA sample, there would be lines around the block.”

    Burger King in Brazil is almost there, offering discounts in exchange for a facial scan. From a marketing video:

    “At the end of the year, it’s Friday every day, and the hangover kicks in,” a vaguely robotic voice says as images of cheeseburgers glitch in and out over fake computer code. “BK presents Hangover Whopper, a technology that scans your hangover level and offers a discount on the ideal combo to help combat it.” The stunt runs until January 2nd.

    ** *** ***** ******* *********** *************

    Pharmacies Giving Patient Records to Police without Warrants

    [2024.01.11] Add pharmacies to the list of industries that are giving private data to the police without a warrant.

    ** *** ***** ******* *********** *************

    On IoT Devices and Software Liability

    [2024.01.12] New law journal article:

    Smart Device Manufacturer Liability and Redress for Third-Party Cyberattack Victims

    Abstract: Smart devices are used to facilitate cyberattacks against both their users and third parties. While users are generally able to seek redress following a cyberattack via data protection legislation, there is no equivalent pathway available to third-party victims who suffer harm at the hands of a cyberattacker. Given how these cyberattacks are usually conducted by exploiting a publicly known and yet un-remediated bug in the smart device’s code, this lacuna is unreasonable. This paper scrutinises recent judgments from both the Supreme Court of the United Kingdom and the Supreme Court of the Republic of Ireland to ascertain whether these rulings pave the way for third-party victims to pursue negligence claims against the manufacturers of smart devices. From this analysis, a narrow pathway, which outlines how given a limited set of circumstances, a duty of care can be established between the third-party victim and the manufacturer of the smart device is proposed.

    ** *** ***** ******* *********** *************

    Upcoming Speaking Engagements

    [2024.01.14] This is a current list of where and when I am scheduled to speak:

    I’m speaking at the International PolCampaigns Expo (IPE24) in Cape Town, South Africa, January 25-26, 2024.

    The list is maintained on this page.

    ** *** ***** ******* *********** *************

    Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram's web page.

    You can also read these articles on my blog, Schneier on Security.

    Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

    Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books -- including his latest, A Hacker’s Mind -- as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

    Copyright © 2024 by Bruce Schneier.

    ** *** ***** ******* *********** *************

    Mailing list hosting graciously provided by MailChimp. Sent without web bugs or link tracking.

    ---
    * Origin: High Portable Tosser at my node (21:1/229)