Vg Wort Elektronische Dissertation Sample

When can we help you?

Running late on the deadline for your work? Then we are your reliable paper help assistant.

Get ready to ask for our assistance when you need essays, research papers, coursework, reports, case studies, etc. Our experts have seen it all and are ready to start working on your assignment right away. Go for it!

Why trust us?

With over 6 years of experience in custom writing, our team of support agents, managers, editors and writers knows everything you may require. Here's what you get for sure when cooperating with us:

  • Professional paper writer. Our native speakers are true masters at creating unique and custom works per the most detailed and even the vaguest instructions you have.
  • Quality writing. We check all works with several detectors. To make sure your work is 100% unique, add a Plagiarism Report to your order.
  • Convenience of your personal control panel. After signing in, you can contact your expert through direct messages, place orders, and track their statuses.

Everyone needs some paper help from time to time, because we are only human.

What's included in the total cost?

Our prices start at $10 per page for works completed from scratch, and at only $6 per page for works you need edited and proofread.

What factors influence the cost of our paper writing services? There are 5 of them:

  • Type of work
  • Deadline
  • Academic level
  • Subject
  • Number of pages
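
Put as arithmetic, a quote built from these five factors could look like the sketch below. Only the base rates ($10 from scratch, $6 for editing/proofreading) come from this page; every multiplier and category name is an invented placeholder for illustration, not the service's actual price table.

```python
# Rough sketch of how the five pricing factors could combine into a
# quote. The $10 (from scratch) and $6 (editing) base rates are stated
# above; all multipliers below are hypothetical placeholders.
BASE_RATE = {"from_scratch": 10.00, "editing": 6.00}          # type of work
LEVEL_MULT = {"high_school": 1.0, "undergraduate": 1.2, "masters": 1.5}
DEADLINE_MULT = {"14_days": 1.0, "7_days": 1.2, "24_hours": 2.0}
SUBJECT_MULT = {"general": 1.0, "technical": 1.3}

def estimate(work_type, level, deadline, subject, pages):
    """Per-page rate scaled by the remaining factors."""
    rate = (BASE_RATE[work_type] * LEVEL_MULT[level]
            * DEADLINE_MULT[deadline] * SUBJECT_MULT[subject])
    return round(rate * pages, 2)

print(estimate("from_scratch", "undergraduate", "7_days", "general", 5))
# With these placeholder multipliers: 10 * 1.2 * 1.2 * 1.0 * 5 = 72.0
```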

You're a lucky client! Why? Because you never pay for everything: lots of freebies come with every single assignment. They are:

  • title page
  • reference page (bibliography)
  • formatting
  • 3 free revisions
  • scan for originality
  • email delivery

When you ask for our paper writing help, you don't only pay us; we also pay you! You can receive up to 15% back in bonuses and even earn money through our referral program.

Your full security

We understand that sometimes you may want your deeds to go unknown. That is why we guarantee your complete privacy and security with our paper help writing service. After registration, you receive a unique ID; that ID, along with your instructions, is the only thing visible to our experts. Only our support team sees the other details you provide, so they can contact you if any questions arise and send you a happy-birthday discount on your special day.

Our custom writing service is completely ethical and provides busy students with great resources for their assignments. In the modern world, where we need to do a lot of things at the same time, it's nice to know you can count on someone for backup. We are always here to create the sample you need, perfect your work through editing and proofreading, or explain the solutions to any problems you may have. Find out how much more free time you can get with our writing help.


Hi everybody, I'm Joi Ito, director of the MIT Media Lab. Welcome to the Media Lab. I'm going to start by quoting Kate Crawford, who quoted Pedro Domingos, who said: people worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world. I think this reflects the sentiments of Kate, who is one of the co-founders of this event, but also of the Media Lab. I think there's an outside chance of a superintelligence, but that's fairly unlikely. What's more likely is what we call extended intelligence. If you think about society and government, we have a very complex system that's arguably smarter, or at least more complex, than any of its parts, and pieces of it are already becoming automated, whether you're talking about risk scores in the judiciary, diagnostics with doctors, or self-driving vehicles.

Let me try an image. Imagine a bunch of people doing some funky folk dance. It involves them being tied together with thousands of years of history, running around in a complex motion, headed in roughly some direction. Then imagine some weird people going and putting jetpacks on them without really talking to them, and then imagine the jetpacks starting to fire randomly. What would happen would be kind of unpredictable: they'd probably head roughly in the direction they were already headed, but things would go mostly badly, though they could randomly go well. The jetpacks are machines that are not necessarily getting smarter but are getting more powerful; the people are us; and the people putting the jetpacks on are the computer scientists, who sometimes try to understand what the people want to do by looking at their data.

This has been going on for a while now, and what a number of us, and a lot of the people you'll hear from today, realize is that we can't just leave it up to the jetpack engineers to figure out where the jetpacks are put and where they're pointed. The other thing is that we have to get our ship in order before these jetpacks go off, because if we're headed in the wrong direction, we're going to be headed in the wrong direction a lot faster and in a much more complex way. The key thing right now is for the jetpack engineers to be talking to the people who are actually having these jetpacks put on them, the people who understand this complex dance, so that when the jetpacks do fire, society has some chance of surviving, or maybe even getting better. I think it's a huge opportunity, but only if we have this conversation among all the people involved in what is a complex system.

When we think about the word design, we think about designing things like mice. But when you design a complex system like an ecosystem, the environment, society, or government, you don't design it the way you design a thing; you design it the way you'd design a society. It requires a lot of people coming together, having a conversation, and building things. The two co-founders of this initiative, Kate and Meredith, organized this conference last June (we recently launched an AI and ethics fund, and we're happy that theirs is one of the first groups we're supporting, which we announced today). It was so great that we asked them to do it here. With that, I'll hand it off to Kate and Meredith.

Thank you, Joi, and welcome, everybody, to the second annual AI Now symposium. It is fantastic to see you all here. My name is Kate Crawford, and I'm going to be your co-chair for tonight. We have put together a packed program of incredible speakers, and we're also going to make a small announcement of our own,
but more on that later. The guiding question for tonight is how powerful artificial intelligence is becoming part of our lives. This is not the stuff of the far-off future; it's already happening. AI is being embedded in banal back-end systems and incorporated into our core social institutions. It's influencing everything from the news you read to who gets released from jail, and frankly, these effects just aren't very well understood yet.

So what do we even mean when we say artificial intelligence? AI has a long history, and the definition has changed every few years. If we go back to 1956, a small group of scholars got together at Dartmouth and said: let's do a summer project, let's create intelligent machines. That was ambitious. Here we are, 60 years later, and we're still trying. The field has had some extraordinary leaps and bounds, but it's also had some very real dead ends. Three things have caused it to accelerate in just the last decade: huge amounts of computational power, lots of data, and better algorithms. So these days, when people talk about AI, they're sometimes talking about this film, but often they're talking about a grab-bag of techniques: machine vision, neural networks, natural language processing. All of these approaches learn about the world by ingesting large amounts of data. If you did see this film, you might remember that the AI system learns about the personality of its owner by reading all of his email. Now imagine an AI system learning by ingesting all of the Facebook trolling, or all of that stop-and-frisk data, with all of those skews and biases intact. The next decade of AI development is going to present challenges that go far beyond the technical; it is going to implicate our core legal, economic, and social fabrics. So there's a lot at stake tonight.

To set the scene, for the next ten minutes my co-chair Meredith and I are going to give you an incredibly high-speed tour through what's happening in AI and why tonight's topics, that is, bias, governance gaps under Trump, and finally rights and liberties, are so important right now. To give you a sense of how rapidly these social impacts are being felt, I'm going to restrict myself to examples from the last year.

The first topic tonight is bias in AI, and personally, as a researcher working in this domain, I've been thrilled to see the progress made in just the last year. We've had some important papers show significant gender biases embedded in the models that do natural language processing: an NLP model might associate, for example, "woman" with "nurse" and "man" with "doctor." But while we're starting to see more computer scientists get interested in fairness and bias, which is fantastic, there are very real disagreements about what we can do about it and how we might address the problem. How we respond is going to matter, because these models can have serious unintended impacts.

One paper in particular got a lot of negative press: it claimed to have created an automatic criminality detector that could tell whether you were going to be a criminal based on nothing more than your headshot. The researchers said that any resemblance to the phrenology or eugenics of the 19th century was purely accidental. Totally accidental, because machine learning is neutral. Well, hmm. We're going to hear a lot of skepticism about that particular claim from the people on the panels tonight, because even the earliest pioneers of artificial intelligence were concerned about this myth of neutrality and about bias. This is Joseph Weizenbaum. He invented ELIZA, the first ever chatbot, right here at MIT in 1964. The chatbot was a huge hit, and maybe because of that, Joseph Weizenbaum was deeply worried about what he
described as, in his words, essentially deeply powerful delusional thinking: the way we are prepared to simply accept whatever an AI system tells us it has decided. This phenomenon now has a name, automation bias: people accept a decision from an automated system far more readily than from a human, because they assume it is somehow more neutral or objective. This has been evidenced in intensive care units and nuclear power plants, and, per an important study this year, it could also be affecting the judicial system through algorithmic scoring. Another example of automation bias in action came through a report from RAND, which spent a year studying the predictive policing system in Chicago. After a very in-depth study, they showed that the system had zero impact on reducing violent crime. It did have one achievement, though: it managed to massively increase the harassment of the people on its "heat list." So, just as Joseph Weizenbaum feared, we are starting to rely on these systems even as they fail us, and we need to do a lot better. Also on the topic of predictive policing and bias, here is one of my favorite interventions of the year: the white-collar crime app made by The New Inquiry, and it is just as good as it sounds. It basically reverses who is typically visible in predictive policing data by focusing only on the rich and powerful. What they did was map all of the financial crime data from FINRA against all of the neighborhoods of the US.
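
The mapping exercise just described boils down to simple spatial aggregation: count recorded incidents per neighborhood and rank. A minimal sketch of that ranking step (the incident records and neighborhood names below are entirely hypothetical; FINRA does not publish data in this form, and real predictive-policing inputs are far richer):

```python
from collections import Counter

# Hypothetical incident records: (neighborhood, offense) pairs.
incidents = [
    ("Financial District", "securities fraud"),
    ("Financial District", "insider trading"),
    ("Financial District", "wire fraud"),
    ("Roxbury", "reported theft"),
    ("Cambridge", "reported theft"),
]

def hotspot_ranking(records):
    """Rank neighborhoods by raw incident count.

    Whatever biases exist in *which* incidents get recorded flow
    straight through to the ranking: the aggregation cannot tell
    'more crime' apart from 'more reporting'.
    """
    counts = Counter(area for area, _ in records)
    return [area for area, _ in counts.most_common()]

print(hotspot_ranking(incidents))
# The Financial District tops the list simply because white-collar
# filings dominate this (made-up) data set.
```

The same one-liner aggregation, fed conventional street-crime reports instead, would produce the usual heat map; the point of the app is that only the input data changed.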
You can see here that I put in Boston, and right now, up in Cambridge, we're doing okay; there are a few red spots. But anyone here going to the downtown financial district, you are going into a hotbed of crime. Check it out; that's something you should be worrying about. Finally, the AI field is starting to confront the sticky question of its own bias, and I've been really heartened to see new initiatives by people like Fei-Fei Li and AI4ALL, which directly address inclusion for women and people of color in AI development. This is something our panel will be addressing tonight, with the acknowledgement that we have a very long way to go.

So now let's quickly turn to the second topic tonight, governance gaps under Trump, and this is a moment for some real talk from us. Around this time last year, Meredith and I hosted the first AI Now symposium in collaboration with Obama's White House and the Office of Science and Technology Policy, as part of a big initiative by the administration to develop cutting-edge policy around AI. That process has stalled. OSTP is no longer actively engaged in AI policy, nor much of anything if you believe their website, and the other parts of the administration, let's be honest, are not picking up the slack: the Treasury Secretary recently said that the impact of AI and automation on labor is not even on his radar. So we have a very different political scene to deal with this year, and it provides quite a stark backdrop for some of the topics we'll be discussing tonight. But the lack of a reality-based agenda for AI doesn't mean that AI isn't impacting politics. Cambridge Analytica, whom several of you have probably heard about, is a very controversial data firm that offers to manipulate audience behavior, and they claim to have a massive set of individual profiles on 220 million Americans, so basically all of us. Depending on whom you believe, they may have played
a role in both the Trump election and Brexit. So we have cause for concern, but now the calls for accountability are coming from inside the house. At the ACM Turing Award ceremony, which is a little bit like the Oscars but for computer science, Ben Shneiderman made a call for a national algorithmic safety board, which would monitor and assess AI's impacts on our social systems. And these impacts are going to be complex. At a time of rising wealth inequality, researchers are noticing a new tension in geopolitical power: the global north is rapidly becoming the AI haves, while the global south is becoming the AI have-nots. This is going to create a very serious imbalance that we will have to think about as we start framing governance and policy, which our panel of senior leaders is going to be discussing with you tonight. On that thought, I'm going to pass to my co-chair, Meredith Whittaker, to fill you in on what to expect from the final panel of the night.

Thank you, Kate, and thank you all. Great to set the scene for rights and liberties, which is the topic that will take us out tonight. Cast your mind back to just a month ago, when Dubai's first robot police officer reported for duty, complete with facial recognition to ID anyone it sees. This is the first of many: if all goes according to plan, by 2030, 25% of Dubai's police force will be robots. And US law enforcement is not far behind. We're seeing a marked increase in the use of AI technologies like computer vision, mobile sensing, and machine learning. The Department of Homeland Security is offering prize money to anyone who can help improve the algorithm the TSA uses to detect threats under clothing; that's, hashtag, an interesting training data set. At the same time, the Director of National Intelligence, who oversees the nation's spy agencies, has a contest of its own, looking for the most accurate facial recognition algorithm.

But I don't want to give the impression that this is all contests and aspirations. These systems are already being built into the core of our government. Palantir, whom I'm sure many of you are familiar with, is building a massive machine learning platform for ICE. It will allow 10,000 agents at a time to access millions of people's sensitive information, including where they live, who they work for, who their friends are, and their biometric data. This could be a powerful engine for mass deportation in the US. Meanwhile, at a local level, Taser, the company that makes stun guns and police surveillance equipment, recently renamed itself Axon and rebranded as an AI company. They're busy adding facial recognition to police body cameras that will ID anyone who comes in contact with a cop. And this is at a time when more than half of the US adult population already has their image in a law enforcement database, many of them people who have never committed a crime. So what rights to due process will people have when facing systems that can pull up their record before they've even been considered a suspect?

Offering a bit of hope here, the judicial system is recognizing the need for accountability. Just this May, a judge in Texas ruled that teachers do have a due process right to contest performance reviews made by algorithms. This is going to be a big case to watch, especially as it intersects with tricky policy and research questions around AI explainability and bias. Now, of course, labor and automation also have significant impacts on rights and liberties. We've all heard the stories about robots replacing human workers, say, the one about Amazon, which is scrambling right now to automate everything from forklifts to delivery workers to truckers, trucking being one of the most common jobs in the US. And we need to note that this is coming at a time when low-wage workers are organizing for better working conditions and higher wages, in campaigns like the Fight for $15. So this raises really tough questions about the future of labor rights. Of course, the story of AI and labor
is more complicated than find-and-replace, human for robot. AI is also augmenting workers, and judging them: it is being used to decide who to hire, who to fire, and who gets promoted. Recent research indicates that within five years, a full eighty percent of US companies will be using AI for performance reviews and hiring. So pause for a moment and think about the implications of bias embedded there. In addition, AI systems are changing the way we work, often without our really knowing it. This is what was happening at Uber, which was using its vast data troves, combined with behavioral economic models, to nudge workers into working longer hours. Here you see a very crisp example of the power a centralized platform can have when it can see and control worker data down to the individual level. So for our final panel of the night, we have leaders from industry, academia, and civil society who will discuss the rights and liberties implications of AI as it is woven through our social and economic institutions.

Okay, we just presented you with a rapid series of AI-related changes, and remember, all of these stories happened in the last year alone. So what's to be done? Speaking personally for a moment: my background is in large-scale measurement, designing measurement and analysis platforms to better understand complex systems. So when I look at these problems, I see a measurement challenge. AI is weaving its way through everything, and yet we know so little about its impact, and of course, before we can know, we need to measure. I'm not talking about the kind of measurement where you instrument a server to collect another variable, although that may be useful here as well; I'm talking about drawing on diverse methods from across disciplines to create a shared understanding of AI's powerful effects.

Which leads me to our big announcement. Kate Crawford is not only my co-chair; she is my co-founder, and we are launching the AI Now initiative. This is a new research center, based in New York, that will be dedicated to empirical research across four key domains: bias and machine learning; labor, change, and automation; the effects of AI on our critical infrastructures; and, of course, how AI is impacting our basic rights and liberties. We'll be inviting academics, researchers, AI developers, and advocates to join us in addressing these, and we are delighted to let you know that the ACLU is coming on board as our first partner. They are committed to mapping the effects of advanced computation on civil rights. [Applause] Thank you. So watch this space, and please join us. It's this community here tonight that is doing this essential and urgent work; we want to support it, and we want to join you to build a field together that can understand and map the social impacts of AI. And with that, I am honored to introduce Solon Barocas from Cornell University, who researches bias and machine learning. He will be chairing our first panel tonight. [Applause]

Welcome, hi everyone. It is my distinct privilege and honor to introduce you to the first panel of the evening. I want to introduce the august panelists I have for you today. I'll start here to my right: this is Cathy O'Neil, mathematician, author, blogger, columnist. To her right is Deirdre Mulligan, associate professor at the University of California, Berkeley. Moving down the panel, we have John Wilbanks, chief commons officer at Sage Bionetworks, and finally Arvind Narayanan, assistant professor of computer science at Princeton University.

I'd like to begin very quickly by saying that I think there has been a change in the past year: a growing recognition that these systems, which we had hoped would be a mechanism to combat long-standing issues of prejudice and bias, a way to really advance civil rights, are actually vulnerable to inheriting a lot of the exact biases we thought they would help us overcome. It has been interesting to observe initial hopes about
machine learning and artificial intelligence being an unquestionable force for good now being complicated by the recognition that these things are actually very, very difficult: they depend crucially on data produced by humans, evidence that reflects human behavior and culture, and as a consequence there are real, serious opportunities to replicate, reproduce, and even possibly exacerbate long-standing issues of bias and inequality. So I thought what I would do, just to start, is to ask the panelists to reflect on recent documented cases of bias in AI in the wild that can give us a concrete sense of what this might look like. Let's maybe start with you, Cathy.

I'm going to do not quite what you just asked, because I'm not going to give a documented case but a thought experiment, one which I think is playing out all over the world right now in machine learning. And I'll start with why it's difficult to document this: it's secret. We can't get our hands on it; even the people being targeted by these algorithms don't know they're being targeted. So here is the thought experiment I want to do with everyone in the audience. It comes from the world of Fox News. We all know that Roger Ailes was kicked out of Fox News after 20 years of harassing women, and I think it's fair to say, for the purposes of the thought experiment, that it was an environment where women were not encouraged to succeed: they would come and leave early because they were being harassed, and they wouldn't be promoted appropriately or given raises. The thought experiment is this: imagine that Fox News decided to turn over a new leaf and replace its hiring process with a machine learning algorithm. Now, machine learning algorithms are touted as objective: they follow the numbers, they're inherently fair. But I want to do the thought experiment with you because it's going to show us not only that that's not true, but that a well-meaning, professional data scientist, doing their job as well as they can, will unintentionally propagate bias.

Here's what a machine learning algorithm is: it takes historical data, looks for patterns of success, where success is defined by the person building the algorithm, and then it basically makes the assumption that the patterns that led to success in the past will repeat in the future. That might sound complicated, but it's not. If I were a data scientist hired by Fox News to build a hiring algorithm, I would look for the most relevant historical data, which would of course be the last 21 years of people applying to Fox News, and I would train an algorithm to find patterns in that historical data for who was successful at Fox News. Of course, I would have to define success, and a reasonable definition for any company would be, say, somebody who stayed at least four years and was promoted at least once, or was given a raise, something along those lines. A reasonable definition of success, a reasonable historical data set; I would train my algorithm, and then, now that it's trained (it's a professional machine learning algorithm, right?), I would apply it to a current pool of applicants. The question for the audience is: what would happen when I do that? I set it up (it's Fox News, not just any company) so that you'll see that the women in the current pool of applicants would be filtered out by that machine learning algorithm, because they do not look like the people who were successful in the
past. Remember, all machine learning algorithms do is recognize patterns. So when the algorithm looks at a qualified woman applicant, it says: statistically speaking, people like her, when they were hired, did not become successful. That is propagating the bias of the past. And not only are you propagating it; as Solon mentioned, it sometimes exacerbates past practices. In this case, I think you could argue that if we blindly trust machine learning algorithms to be fair when they are actually propagating old bias, then what we fail to do is scrutinize that new hiring process. We're actually doubling down on it: we're saying, yes, it happens to be as biased as the past practices were, but we trust it anyway, and we don't scrutinize it. That's where the real problem with machine learning comes in: not that it creates problems and bias, but that we don't know about that bias; we tend to think of it as just the way things are, and we don't question it. And that's what we need to start doing.

Thanks, Cathy. Sure: an example that has gotten a fair amount of media play is the Northpointe COMPAS system, which, despite being a different kind of system than the one Cathy just described, suffered from many of the same issues of perpetuating historical biases in policing practices and sentencing. The thing that was so interesting about it, from my perspective, is that it is a system being acquired by government to do its work. When we think about government decision-making, we are really cognizant of the sort of power that it wields, at a different level than a company's, and we also think about issues of transparency and accountability, because we want to make sure the government is actually doing what it seeks to do. When Cathy described the procurement of this kind of system, she described it in a way that suggested the person who had historically been responsible for thinking about the data used in making hiring decisions, hopefully in a more thoughtful and curated way that didn't just say whatever we did in the past is going to predict the future, was read out of the equation. One of the things we have to be really thoughtful about, as government uses technology to assist it in making decisions, or potentially to make decisions on its behalf, is what level of insight it has into not just the data but also the biases that may be built into those systems through the choice of algorithms, or the biases that are part of the production process: who is making them? We are displacing one set of professionals (in the context Cathy described, people trained as HR professionals) with a different set of professionals who come out of a different culture, an engineering culture, that may bring a different level of sensibility, sensitivity, and commitment to the work. We see transfers of this sort happening as we automate, and if we're not careful, some of the sensitivities that different kinds of professionals bring to their analysis of data get lost in that kind of outsourcing, or in the procurement of technology to help us do those tasks.

That's great. I think there's an interesting point here already: even well-intentioned people building these models, left to their own devices or without proper guidance, could easily end up reproducing or propagating these issues, and many of the traditions we have in the professions might not carry over once these decisions are delegated to other folks. John, I wonder if you could think about this in the context of health and medicine.

Yes. So the example I was thinking about is
the Amazon same-day delivery algorithms which we're in Boston it's a very clear hole in where you can get same-day delivery in Boston it's completely encircled by same-day delivery but it happens to be tied to a region that is not as more not as able to have purchasing power right it's primarily in African American region Roxbury more-or-less and so I work in health and so I'm interested in the way that the social determinants of health don't really come through and a lot of the data that we capture and it strikes me that the social determinants of purchasing have some of the same contours as the social determinants of health and there's very little in health that is is definitive about your future as your zip code your genome is way less predictive than your zip code of your health and it doesn't capture a lot of the the kind of data that we capture out of the health system in electronic health records doesn't capture how long it took you to get to health care or the choices that you had to make about going to work versus getting health care or whether or not you wanted to get food or have your prescriptions filled and so I think that the same structures that the AI found in order to predict who should get same-day delivery are likely to find and create holes structural holes in in the way that we predict how healthcare resources ought to be allocated and priced so that's the one that I look at the most because it's very predictable when you look at healthcare usage all right there's a straight line to the economics of the individual and their usage of the healthcare system and their capacity use the healthcare system and so it's it's already a system that's opaque if you've tried to get your medical records you know this so if you take a system that's already opaque that you have very little right to look at your data come compared to what I can look at my bank data I can't see anything in my health record if you have an opaque data system that's sort of vending 
the records, secretly and quietly, to AI systems, and by moving to the technology you actually strip out the requirement to consult with ethicists, genetic counselors, health counselors, community systems, and support systems. It's almost a perfect recipe for the exacerbation that Cathy was talking about.

Thanks, John. Arvind, can I ask you for an example?

We've heard, I think, several examples of bias in AI when you put it in a position of power over people, in a decision-making context, whether it's jobs, criminal justice, or health: obviously really important, really great examples. What I've been looking at, with some colleagues at Princeton, is a different situation, perhaps a more subtle kind of bias: bias in AI's perception of the world. I think this is important as well, because AI increasingly mediates our own interaction with the world. It affects how we perceive the world: through search, through natural language translation, through automated computer vision systems that label things for us, through voice systems on the phone, yada yada yada. So what happens when there are biases in these contexts? To understand the difference, let's go back to human bias. Once upon a time, maybe we would have naively believed that once we had equality for everyone in the eyes of the law, then equality of opportunity would automatically result. Well, not really; today we know how strong our implicit biases can be, biases that we are not even necessarily aware of but which affect our actions sometimes. So we started to look at implicit biases in AI. We came up with a version of the implicit association test for the machine, and specifically we looked at this in the context of a popular natural language processing technology called word embeddings. What are word embeddings? It's a simple concept: you train these machine learning models using text on the web, and they build up an understanding of the relationships between words; that's what it's about. Now, when we talk about implicit biases
in people, we know from 20 years of research in psychology and cognitive science that many people, perhaps most people, subconsciously associate women, for example, with the arts and homemaking and families, and men with science and math and careers and so on. We found a way to interrogate the machine for exactly those biases, and, surprise surprise, we found that it behaves exactly in the way that humans are documented to behave when faced with these pairs of concepts. So that was one. Another we found was racial bias: these word embedding techniques, just as humans do, considered European American names on average to be more pleasant than African American names. What can be some of the consequences of this? After our paper came out, Rob Speer at ConceptNet, I believe, looked at training a sentiment analysis system on a corpus of restaurant reviews, just to show the effects of this. What he found was that the system picked up, just based on text on the web, that the word "Mexican" and the phrase "illegal immigrant" often occur in proximity to each other, so it picked up that the word "Mexican" is somehow related to "illegal" and therefore must have a negative connotation, and as a result it was ranking Mexican restaurants lower. This is an example of a very subtle kind of propagation of bias in AI, through several steps, and it originates as just AI's perception of the world in the models that we build. So that was one example we did that was really instructive for me in thinking about this; our paper appeared this year in the April issue of Science, if you're interested in looking that up. I'll just end with one more related example, and this is something you can go online and look up right now in Google Translate, Bing Translator, something like that. Again, these AI systems are replicating patterns; as Cathy said, they're very good at picking up patterns, not just in text but, through text, patterns in our world, disparities that reflect our history of injustice, inequality, disparity, and so on.
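The implicit-association-style test for word embeddings described above can be sketched roughly as follows. This is a toy illustration, not the actual method from the Science paper: the tiny hand-made 3-dimensional vectors and the attribute word lists are invented for the example, whereas the real experiments used trained embeddings such as GloVe and a permutation-test statistic.

```python
import math

def cosine(u, v):
    # Cosine similarity: how close two word vectors point in the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attr_a, attr_b):
    # How much closer is this word, on average, to attribute set A than to set B?
    # Positive means it leans toward A; negative means it leans toward B.
    return (sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
            - sum(cosine(word_vec, b) for b in attr_b) / len(attr_b))

# Hypothetical vectors standing in for "career" words and "family" words.
career = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
family = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]
he = [0.85, 0.15, 0.05]
she = [0.15, 0.85, 0.05]

print(association(he, career, family))   # positive: "he" leans toward career terms
print(association(she, career, family))  # negative: "she" leans toward family terms
```

With trained embeddings, the same comparison run over real target and attribute word lists is what surfaces the gender and name-pleasantness associations the panelist describes.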
So here's an example. The Turkish language, and many languages like it, doesn't have gendered pronouns, so a sentence like "o bir doktor" could be either "he is a doctor" or "she is a doctor." But when you translate that to English, it's going to come out "he is a doctor" every time. Try it with "nurse," and it's going to come out "she is a nurse" every time. And these stereotypically gendered translations that the machine automatically produces almost perfectly reproduce the labor statistics that we have for occupations in our society, in our country. So: very complex pathways. These patterns affect AI's perception of the world, and in turn they affect a variety of applications that we're building using natural language technologies and vision technologies. That's my provocation for you.

Thank you, Arvind. So I want to now turn to the question of what can or should we do about this. We've already heard about the problems of not even being able to necessarily recognize the bias; as Cathy was mentioning, there are certain variables that are deeply important to be able to recognize, for instance the social determinants of health, that might be absent. And I think Arvind did a great job of explaining that when we rely on cultural products like the language on the web to teach machines how to understand the world, they're very likely to inherit these kinds of stereotypical associations, and it's unsurprising then that the criminal justice systems Deirdre was mentioning might suffer from similar problems. So I'd just like to turn now to this question of what is already happening to address these problems; what are our possibilities here? Maybe we can switch the order; does anyone want to start?

OK, so, right. I wanted to jump off of what you were saying, Arvind, that implicit bias is everywhere; literally every time we look for it, we find it. And that's why, when I told you it's a thought
experiment, I used Fox News as an extreme example, but the truth is that the same basic idea happens everywhere. Any time you automate something that used to be a human system, you're going to pick up all the implicit bias that the human system always had. So how do we circumvent that? I'm going to give you a really wonderful story about what actually happened with orchestra auditions. Some of you might have heard this. They noticed that orchestra auditions were really nepotistic; this is a bias. The conductors would basically choose the students of their friends, things like that, or their own students. So they decided to do better, and what they did was put a curtain between the listeners and the auditioner. By the way, at the beginning they noticed that the curtain was showing feet, so they lowered the curtain more; they didn't want to know whether it was a man or a woman. And then they even noticed that they could hear high heels in the hallway, so they put a rug in the audition space. Not only did they get rid of nepotism, but the number of women went up by a factor of five. I want to say: huge success. But what did they do differently from what I was telling you about with the machine learning algorithm? It's actually the opposite of a machine learning algorithm. A machine learning algorithm basically says: throw the data at a wall, look for patterns, assume the data is perfect. In other words: the data is perfect, just find the patterns in the data and replicate them. The orchestra audition was the opposite. It was saying: what do we care about? Musical sound. That's it. That was the first thing they did: what do we care about, what are our values? The second thing was: don't tell me anything else, because other things could be a distraction. And we see that with implicit bias studies: if you have equally qualified people but you also have information about their class, their gender, or their race, it will be held against them. So only look at their
qualifications; decide what those are, and forget about everything else.

I'll just jump off of that. I think one of the lessons is that this is not a problem that was arrived at simply, and it's not likely to be a problem that we have a simple solution to. It's not just that they blinded it; they were iterative. So starting with the idea that you're going to have to correct the bias iteratively, that it's probably going to be an intersectional kind of problem that needs technical elements, policy elements, governance elements, and then adjustment: that's a really good mindset to be in. Because if they had started with "the curtain solves everything," they wouldn't have gotten the results that they did. That, I think, is the mindset: that we're going to iterate all of these elements, and they're going to be designed to play together, to help create something that trends towards fairness. That is as important as any of the first steps that you take: knowing that you're not done; you're never done.

I just want to add, though, that making yourself blind to characteristics isn't necessarily the right way to make sure that you're not building in bias, because bias can be encoded in things that might otherwise look quote-unquote neutral, as you were describing, and it's only by having those attributes that we can check. We all understand that we collect data about race and gender so that we can police discrimination, to limit discrimination. So by blinding ourselves to those characteristics, in the example you just provided, you would lose data that was necessary to police and make sure that people were being treated in ways that were fair under some definition. One has to be careful when we think about the data that's necessary to look at in order to protect against certain kinds of biases; it's not always blindness that we should be seeking. In thinking about strategies more broadly, though, I think there's a bunch of really important things that we ought
to be thinking about. One is holding people accountable for the tools that they use. In the example I was talking about, with the COMPAS software: the idea that a government can be procuring a system whose code is proprietary, where there are limits on how they test it, where they're not sure, for example, how an important attribute such as gender is being used. Is it being used as a factor, a data element, or is it being used to norm the results? That's a really important difference. They could procure a system that had been trained on a national sample and never examine whether it stood up to local application. That's just bad data science, but it also has enormous implications for the quote-unquote fairness of the results that are produced. Certainly those who are using tools, even when they're math, even when they're algorithmic, have to be held accountable for the tools that they choose, and it can't just be "oh well, it's a black box." We have to think about ways in which we ensure that those black boxes are tested for the values that we want to carry forward, and that's going to require different kinds of professionals, different kinds of review, and different kinds of technical approaches. We're going to need, and I think Arvind will probably talk about this, algorithms that help us police algorithms, and other sorts of approaches that help us ensure that our systems are producing the values we want, not just the results we see.

Thank you, Deirdre; you set it up perfectly for me, in terms of what specifically machine learning researchers can do. Here's one recommendation: I want to suggest that we should get away from what I call the accuracy fetish. What do I mean by this? Some of you may know this better than I do, but some of you may not, so you might find this interesting. Here's a major way in which progress in machine learning research happens today: the
community tends to agree on the important data sets, benchmarks, and particular machine learning tasks that we want to make progress on, the baseline scores to beat, and that kind of thing, and then each year there's a competition where hundreds of teams from all over the world try their hand at beating those scores, at beating whatever is the current best score. There are widely known benchmarks and data sets for this kind of thing; a classic one in language processing is called the Penn Treebank, and a more recent one in computer vision is called ImageNet. Some of these might be familiar to you. This works pretty well; it has a lot of advantages. It allows you to know at any moment in time which group or algorithm is performing really well, and it allows you to quantify progress: it's great. On the other hand, the downside is that when you've got almost the whole community one-dimensionally focusing on these competitions, structurally it becomes very hard to address bias, because if you're going to focus on factors other than the one accuracy metric, you're not going to win next year's competition. And it's worse than that: a lot of the time, even these training datasets, these benchmarks that have been created for this purpose, themselves incorporate and embed our historical biases and prejudices, and so doing well on those benchmarks necessarily means reproducing that bias. That's kind of the situation we're in today with the process by which machine learning algorithm and model development happens, and I want to try to encourage us to get away from that a little bit and think about a more multi-dimensional way of evaluating how well we're doing with our algorithms.

Great, thank you. I also would like to maybe try to dig a little deeper and talk about some of the technical and policy proposals that have already been floated around these ideas. We've talked so far at a higher level about what we might do generally speaking.
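As a minimal illustration of the point about one-dimensional evaluation, here is a sketch of how a single leaderboard-style accuracy number can hide a large gap between groups. The groups, labels, and predictions below are entirely made up for the example; a real audit would break out metrics by actual demographic attributes in the evaluation data.

```python
def accuracy(pairs):
    # Fraction of (y_true, y_pred) pairs where the prediction is correct.
    return sum(1 for y_true, y_pred in pairs if y_true == y_pred) / len(pairs)

# (group, y_true, y_pred) — hypothetical classifier outputs
results = [
    ("a", 1, 1), ("a", 0, 0), ("a", 1, 1), ("a", 0, 0),  # group a: 4/4 correct
    ("b", 1, 0), ("b", 0, 0), ("b", 1, 1), ("b", 1, 0),  # group b: 2/4 correct
]

overall = accuracy([(t, p) for _, t, p in results])
by_group = {
    g: accuracy([(t, p) for gg, t, p in results if gg == g])
    for g in {"a", "b"}
}
print(overall)   # 0.75 — looks fine as a single leaderboard number
print(by_group)  # {'a': 1.0, 'b': 0.5} — the single number hides the gap
```

A benchmark that only ranks the overall number rewards the first printout and never surfaces the second; a more multi-dimensional evaluation would report both.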
But I'd like to reflect, because there have been, I think, really interesting developments, and in particular it might be worth reflecting on the challenges of adopting and using these new approaches: what needs to be overcome in order to adopt them? So I wonder if I could ask anyone on the panel to maybe offer the audience some sense of the work that's happened in the past year, for instance, in this area.

I think one of the initial responses to concerns around quote-unquote black boxes was this statement: we need transparency; we need to be able to look at the code. And quickly that has kind of fallen apart; people are like, well, any talented programmer can hide an awful lot in the number of lines of code that are in that machine, and the way in which one algorithm interacts with another algorithm might not be obvious even after some careful examination. So we've seen the conversation turn to issues of explainability and interpretability, your own work looking at the concept of inscrutability, and there are lots of different levels at which we can look at those concepts. I had mentioned the fact that many of these systems are proprietary, and when we think about a piece of code that's being used, I can talk about this in kinds of systems that are not about AI, like people using technical systems to vote: the idea that there isn't an ability to scrutinize that code, whether it's literally looking at it, or doing different kinds of testing, or requiring that it be built using languages that we can actually develop formal proofs of. There are different ways in which we can begin to both constrain and interpret and test, and there's been a whole range of work coming out of the technical community on all of those issues. I would say the policy community is somewhat behind, in that there's a pretty robust conversation now about the question of what it
means to explain to people the logic of an AI system: how is it thinking, and what does that require? And if we want people, not just in the context of bias, to interact safely with machines that are learning about us all the time, how do we learn about them so that we can interact with them? That requires us to think about the assumptions of the models, the limits of the models, the biases that might be built in intentionally, the biases that might be a product of the data, the limits of the data, the collection processes of the data. I think one of the areas that is right now underutilized, that we might think about, is the whole body of work around reproducible research, which has struggled with a similar set of ideas about how we can actually understand results. So I think there are some disparate bodies of research that we need to start to knit together.

I'm going to jump in from a different angle, but building on that. Another example I talk about in the book I wrote about this stuff, Weapons of Math Destruction (a little plug there), is personality tests, also about getting a job. There was this one personality test which we suspect embeds an illegal aspect. What is it? It's actually a mental health exam, called the five-factor model, also called the OCEAN score, embedded in this Kronos personality test. We suspect, we deeply suspect it. Basically, that's the jurisdiction of the EEOC, and the EEOC is a regulatory body; well, they need to know how to prove this, how to build evidence that this actually constitutes a mental health exam. And I should mention that seventy percent of people in this country, when they apply for a job, have to take a personality test before they get an interview. This is a huge deal; if these personality tests have illegal elements in them, we need to figure that out. But I'm just saying, even
though it kind of looks pretty obvious, it's still a big hurdle for the regulatory agency in charge to make that case legally, and it's just this technological-divide problem, which will be helped once we have better tools and better ways to understand AI, but right now it's difficult.

Yeah. One of the things we're seeing in health is that there are a couple of different factors. One is that there's been a generic movement in the patient population towards rights to your data, activation; there's been a long, slow policy slog towards having it be illegal to block your data flowing back to you. And the NIH has also put together a multi-million dollar (hundreds of millions of dollars) project over ten years, trying to create a million-person dataset that is seventy-five percent people who've been underrepresented in biomedical research historically, and then to make that resource shareable, so it's available as a training set in a democratized way. Verily, which is part of Google, is part of the project, and they're going to be part of a collective governance structure. So it's going to be really interesting to see. And we know the limits of the inclusion in the dataset, because it doesn't capture the social determinants of health, but just beginning to have an AI system that's in a collective governance structure, that includes representatives of the cohort at a significant level, that's overseen by an independent review board and a resource access community: that's a pretty new way to do this. And that's in the last year or so; that's probably the biggest thing that we've seen in health, beyond the generic technocratic application of AI to things like image detection. This is the first time that we're dealing with it in a cultural context.

Arvind, to close us out, I wonder if you can talk with us a little bit about the difficulties, the kinds of dilemmas,
involved in intervening: what makes this actually challenging, and even possibly unclear, as to what the right thing to do is?

If I'm going to close this out, I want to do it on a more positive note, so let me change your question slightly. I think one way to make progress would be to appreciate that bias is not only a danger for AI but also an opportunity, because one of the things that machine learning technologies can help us do is not necessarily replace human decision-making but instead shine a light on biases in human decision-making. Let me give you an example of this, one of my favorite examples of human biases that are really subtle, that you'd never expect: a study out of LSU that looked at judges' decisions depending on whether or not their college football team won or lost the previous night. Crazy, right? It's not a thing you would ever think of until somebody looked at the statistics, looked at the data, and a lot of this research is being done using machine learning. The paper showed that judges are harsher in their sentences if their favorite team lost the previous night. And so the answer, of course, is not to replace judges with AI, but instead to use machine learning to at least understand the way that humans make decisions, and perhaps have automated augmentation of the way we make decisions, automated input into human decision-making, and human oversight over machine decision-making. These can be paths forward.

Thanks, Arvind. I would hope that everyone will join me in thanking the panelists; it's been a great conversation. [Applause]

Thank you, Solon, and thank you to everyone on the bias panel. Our next panel turns to the dramatic changes in science and technology policy which were previewed earlier: how will these affect AI and AI governance? I am delighted to welcome Julie Brill, former FTC Commissioner, to lead our panel through a discussion of where we are and what we do about it.

Thank you. I am so excited to be
here; this is such an interesting conversation and such a great place to be having it. And I am especially excited because I have the ability to introduce to you a powerhouse panel of incredible thought leaders on this issue. Immediately to my right is current Federal Trade Commissioner Terrell McSweeny, and just to make sure that you all know what the FTC does (my former agency, Terrell's current agency): the FTC is our nation's premier consumer protection agency, which focuses on stopping unfair and deceptive acts and unfair methods of competition, and spends a great deal of time thinking about data privacy and other issues. To Terrell's right is Nicole Wong, my dear friend, who was the deputy chief technology officer of the United States and previously served as vice president and deputy general counsel at Google and legal director of products at Twitter. And then to Nicole's right is Vanita Gupta, the former head of the Department of Justice's Civil Rights Division; she is now the president and CEO of the Leadership Conference on Civil and Human Rights. You can see it's just an unbelievable group of people.

OK, so here's what I thought we would spend just a nanosecond doing. We've heard a lot about concerns about AI, and I think it's incredibly important to be thinking about those concerns as you're developing governance structures. But as everybody who's involved in policy issues knows, you have to balance concerns against benefits, risks and benefits, and the truth is that artificial intelligence does have a lot of benefits to society, and I think it's important to at least mention those, at a very high level and very briefly. So, for instance, enhancing efficiency: whether it's in transportation systems or other systems, AI will obviously play a critical role there. Increasing safety: whether it's through autonomous vehicles or in the medical sphere, AI will clearly play a very important role going forward in enhancing safety. Improving accuracy. Improving security: I
know cybersecurity experts who are dying to get their hands on more AI to help them deal with cybersecurity threats. OK, those are just some of the many, many benefits that we see coming down the road. The risks of AI: you heard a lot about bias on the last panel; there's also opaque decision-making; security and safety vulnerabilities (despite the fact that AI can be used to help in those ways, sometimes there are going to be significant problems with respect to security and safety); upending labor markets, AI displacing certain jobs, certain entire sectors of workers. AI and machine learning also challenge traditional notions of privacy and data protection, including things like individual control, transparency, access, and data minimization. And on content and social platforms they can lead to narrowcasting, discrimination, filter bubbles. So we need to figure out ways to balance these tremendous opportunities with some of these risks, and that's what these brilliant people are going to tell us how to do.

OK, let's open it up. First question: governance. In many ways, when you think about how we govern AI, that question really ought to be: to whom is AI answerable? Who is responsible for AI, for the inputs, for the outputs, and even for the black box? Before we get to that big question, let's talk about the current governance gaps. Nicole, I'll start with you. Tell me, with all your experience, both within companies and especially at the White House, what do you see as the current gaps in governance with respect to AI?

Thank you, Julie. Let me start by thanking Kate and Meredith for putting this together for a second time. It was brilliant the first time; it is still unbelievably highly produced and fabulous the second time around, so I want to thank them for all the work they've done here. So here's the thing: I am no longer in government, and I don't work for a single company, which means I get to
talk freely, without having my talk vetted by anyone else. So let me be real about what we're doing in artificial intelligence under the current administration, because my experience was obviously in the past, under Obama. It may be a little early to be judging this administration and where it will go with artificial intelligence. I will tell you that I have friends who are still there, serving in many areas of government, including the U.S. Digital Service, and they're getting very positive signals about using technology to make government more efficient in the delivery of services, so we should take that with as much hopefulness as we can. I will also say, though, that there are a bunch of signals that are not fabulous. There is a dismissiveness shown in this administration around regulation; there is a dismissiveness of ethical guidance. There's also, and this feeds partly into some of the frameworks that the prior panel talked about, the fact that you have to be conscious of, and want to really interrogate, the bias, the racism, the sexism in our existing systems and datasets in order to build meaningful policy around artificial intelligence, and I don't see that desire in the current administration. I think we need to focus really hard and push our policymakers in that space. So those are gaps in this current administration that concern me. In artificial intelligence in particular, I feel it's less that we've retrenched anything, because it's early, but that we may be missing opportunities, and there are three areas where I think we really need to focus. The first is to figure out the principles by which we measure the success of artificial intelligence. I don't think we've agreed on that: what do we want the goals of artificial intelligence to be in any given sector? Until we have those principles, we have nothing to benchmark against. That goes to a second layer: I see companies really struggling to figure out who's
supposed to make these decisions. Is it the business leader? Is it the technologist? Is it some outside group that's more independent, that says your responsibility as a good corporate citizen is to have AI deliver certain types of results? I think that responsibility and accountability don't exist currently in our structures, nor do companies have anything to mark to in terms of principles. And then there's a third, really operational, level, which I think we see in the privacy field: what's the checklist, what's the toolkit that I go into any institution with and say, did you check all these boxes to make sure we're doing this right? How do I build a process and tools to ensure the quality of these systems? I don't think we have that yet.

Great. Terrell, what do you think? Governance gaps: what are you seeing?

Thank you to everybody for organizing this terrific conference, and thank you to you, Julie, for introducing us. Like Nicole, I don't work for a company; I do work for the US Federal Trade Commission, though, but if I say the following disclaimer I can give you my independent view, just like Nicole. So I'll just say I'm not representing the official views of the FTC, nor those necessarily shared by my colleague, acting chairman Ohlhausen. And I'll launch right in, because we're talking about governance gaps under the Trump administration, to say that I think Nicole's right that we have a big opportunity to try to shape the debate here around governance and ethical concerns around AI, and that it will be a shame if the administration doesn't continue to use its platform as a convener to try to facilitate that conversation. Certainly at the Federal Trade Commission we're looking carefully at these issues, and I think we have something to offer over time. As you know well, Julie, from your terrific work on privacy and security at the FTC, the FTC has been very focused on privacy by design and security by design: concepts that really take transparency, notice, and
choice, but also a process-based approach, much like you're talking about, to these notions of privacy and security that are so complex and so dynamic. What I think we can do with these frameworks is adapt them as the technology gets smarter. I think we can come up with governance by design; I think we can come up with ethics by design, ultimately, but it's going to take a lot of conversation about what those key components are. I'd argue probably for explainability, data quality, testing, some sort of mitigation. But first and foremost, I think Nicole's making the right point, which is that we need to understand organizationally who is in charge of these decisions, and, relatedly, as a law enforcement official I need to understand which humans I'm going to be holding accountable for machine decision-making as well, which is a big challenge, and I'm not sure we've fully thought it through yet.

Vanita, what do you see as the gaps that we're currently struggling, or should be struggling, to fill?

I am probably taking the most pessimistic view here on stage, and part of that is related to watching a Sessions Justice Department undo a lot of the civil rights work that we were doing, particularly in justice reform and policing, where there's been such a significant retreat, forget AI, from even understanding and diagnosing and remedying problems of discrimination in human decision-making in these sectors. So I really do worry. When I was at the Civil Rights Division, we were at the cusp of having some serious conversations with our state, local, and federal law enforcement partners about AI, about predictive policing, about pretrial risk assessment instruments that were using AI. And this is a law-and-order administration that has done such a retreat on all kinds of aspects of reform run by humans that even thinking that there's going to be any kind of pressure brought to bear by the administration on investigating, in a really serious way,
the ways in which law enforcement (and I can speak about law enforcement) is going to be looking at AI: I think there's a huge gap, a huge gap, there. What I'm excited about is that the AI Now initiative and the private sector really do need to step up in this space. I mean, I think there's already been significant leadership in the private space, but I can't wait for the government at this point to be putting that pressure on in the way that we might have in an administration that cared about facts and science and fighting discrimination. Because the reality is that when there was a Justice Department willing at least to begin to investigate some of these questions, you had funding streams that could fund research through the Office of Justice Programs, through NIJ, through any number of avenues; you had a White House that was working with AI Now on a convening last year and really beginning to ask, across agencies, how do you deal with these questions in health and labor and criminal justice and the like. For most of us, we aren't feeling that kind of pressure brought to bear now, and so right now I think there really is a very serious set of questions about what kind of transparency there is with vendors that are using AI. And I'm going to speak about the criminal justice and policing context specifically, where in criminal justice systems at large there has been so much racial discrimination and racial bias, some of it structural, some of it implicit bias, that we really have to contend with being able to understand and get through the black box of what vendors are using as their algorithms to produce this stuff. I was just looking at PredPol online, a predictive policing company that's a little bit further along in doing this research, and on their website it says it only uses three data points in making predictions: past type of crime, place of crime, and time of crime. It uses no personal
information about individuals or groups of individuals, eliminating any personal liberties or profiling concerns. That's on their website. Really? I mean, part of the problem here is that if the private sector isn't even willing to admit that there are serious concerns and questions about this stuff in a real way, then, you know, without pressure from government and without those kinds of gaps being filled, we have a political will problem among some of the companies that are propagating this stuff, we have a research issue, and then we have to have an ability to really advance and engage these questions in a real way. And I believe that civil rights groups and civil liberties groups need to be at the table with the researchers and with the companies that are doing this stuff in a very real way. The Leadership Conference just a few years ago, with the ACLU and other groups, created some principles about this stuff, principles of inquiry really, very basic, as a way of helping provide some guidance for the ways in which AI is being used, particularly with regards to communities of color, in this very imbalanced power structure in criminal justice where the consumers are also the victims of racially biased or biased AI. And so, you know, I do think the private sector has to fill this gap, because the conversation, at least in the criminal justice environment but also in others, around issues of discrimination has changed so wildly. Wow, terrific. So, you know, let's talk a little bit about what are the institutions that we can bring to bear to deal with AI. Vanita, you just talked about consumer groups and civil liberties groups as well as the private sector. Terrell, you mentioned the FTC has a role to play and can do that. But what I'd love you each to do is a little bit of a thought experiment. You know, if you could create an agency, or, actually, let's make it more real-world: if
you could pick an entity that exists now, where do you think that this conversation should be happening? Who is best suited, or what is best suited, what agency, what entity is best suited to deal with some of these issues? And let me tee this up. I mean, we talked a little bit about some of the federal institutions, right: the FTC, the White House, DOJ, pretty important institutions. Obviously now we have some issues with whether there is a will to go forward in some of those institutions, not necessarily the FTC but the others. There's the state attorneys general, who have their hands full dealing with lots of other issues; maybe AI is not gonna be first on their mind. We've got the European regulatory scheme, and there's a lot happening in Europe right now, including a law that is going to be coming online in May of 2018 which actually addresses some profiling issues and some automated processing issues, in a very process-oriented way, actually, and not so much in an outcome-oriented way. And then we have standard-setting organizations, we have the potential for ethical boards, we have the potential for international organizations. My personal favorite is the OECD. Nicole, you and I talked about this: the Organisation for Economic Co-operation and Development, which was the entity that created the Fair Information Principles used by privacy professionals, which have been in use for decades at this point. So maybe they should be developing principles for AI use. But I'm interested in what you all think, so let's do it: first, ideal governance structure. Nicole, you want to give it a whirl? I'll give it a whirl, and I'm kind of going to elide through it, like, not quite answering the question, because it matters what area of AI you want to regulate, right? Because there are different things that we want to think about. One is, and I'm gonna go back to the principles problem, we have a real principles problem. Like, can we agree, and agree not just here in the United States or with our Western
partners but globally, on a set of principles around the development of AI? I think we should not discount the rapid progression that China is making in the area of AI, and bear in mind, right, the norms that they have around individualism versus collectivism, about openness versus closedness, about privacy versus censorship. They will not be the same, the training data will not be the same. So who develops the frameworks around these things and gets them deployed first will matter, and so global principles are gonna have a really big part in how we set our overall frameworks, and it's gonna matter who goes out first. And so if we can have an entity like the OECD, and I don't know if that's the right one, but it's got a lot of the countries in it, right, to develop something equivalent to the Fair Information privacy principles, that might be a really good start. So that's one. But there are other components of AI for which I think that would be a terrible place to do it. Like, who makes a determination about whether or not AI deployed in a financial institution, in a healthcare institution, in a commercial enterprise is fit for purpose? What's the right regulator for that? And I love the FTC, and I love a lot of the models that they've done for regulation, but I also feel like the generalist position for those regulators means they're not going to have enough understanding of the landscape for particular sectors to say, you know what, your data is not right, you know what, you didn't include X. I just don't know that they're gonna be qualified for that, and so I think that might be a different set of regulators. And then there's the question of, like, automation and the displacement of labor. If we want to solve that problem, whose problem is that, right? And my gut says we're either creating a new entity or we've got to get a bunch of existing agencies together to work on that problem, or focus on income transfer, yeah, generally speaking, which is not necessarily gonna be
any agency other than the IRS or the WTO or something along those lines. Terrell, what do you think? I'm definitely not gonna take on the labor question, but I am gonna agree with what Nicole just said, 'cause I think she said a couple of important things. One, a generalized consumer protection enforcement agency like the FTC is awesome. I love it, I work there. It is also not necessarily able to do everything for everybody, and in fact we are speaking in incredibly general terms about technology that is very, very different depending on what it is, right? So I think we need to move the policy conversation into something more specific, and what we need to do is really work with other government agencies that are expert regulators to evolve their understanding of the technology, convene, and try to work together on solutions. That's actually going to take a really important innovation in government in the U.S. that we started in the Obama administration, and I'm worried about whether we're continuing it, which is including technologists front and center within government agencies, with policy makers, so that we have people who understand the technology and how it's working helping inform the decisions that are being made about what's appropriate here. That's a really big, really important innovation, and we really need to continue it in order to get the right solutions, I think, and present them to people. Now, I would argue the White House, actually, the office that you used to work in, Nicole, which apparently isn't well staffed at the moment, is a really good office for convening all the different parts of government that have equities in the debates about what we're doing about autonomous cars or medical devices or any of these other kinds of technologies we're talking about, in order to try to get some of the right solutions to the table. As has been pointed out, they don't appear to be actively working on these issues right now, so that's unfortunate. I do think it also has to be a global
conversation as well. These are really complicated questions, and just like we have in the privacy space, we're going to have different norms that we have to balance. In the privacy space this is a real challenge. In the U.S. we do have some really nice analog-world, brick-and-mortar-world red lines that we've held on to pretty well for the last thirty or forty years: the civil rights laws, right, equal opportunity laws, the Fair Credit Reporting Act, one of my favorites because we enforce it at the FTC. We have areas in the brick-and-mortar world in which we have decided to make sure that there are special protections for people when they're accessing credit or housing or jobs, in order to protect those choices. So I think some of these laws are really good frameworks that we've already agreed around, where we can really say, look, we have this norm, we need to make sure we're importing it into the digital world. What do you think, Vanita? I've heard all of this, so, I mean, I think that both of my co-panelists have raised really good suggestions. And, you know, back in the day we would have had, through the White House, the ability to have high-level principals engaged from every federal agency really thinking through some of the questions on big data, AI, and bias and inclusion and civil rights and civil liberties. You know, one way that some of us have been thinking about this is that once you have kind of the political level engaged, ultimately one notion would be that you ingrain and embed technologists, but also advocates, in the offices of civil rights that exist in every federal agency, really tasked with thinking through these issues of AI and bias and civil rights enforcement, that are enforcing the whole slew of civil rights statutes that exist in the country. Now again, sorry to paint the bleak picture, but the civil rights machinery in the federal government right now is being pretty harshly taxed or eroded in various parts, and so imagining how that could
actually happen right now and be institutionalized is difficult. But I think that there could be real value, just long term, in institutionalizing in career staff who have, day in and day out, the responsibility of really thinking through some of these cutting-edge questions, and having that kind of task force working through these things. I right now would put my hope more in having some kind of independent agency that is operating, and it may just be that it's a private-sector, really effective coalition of thinkers on this stuff, but it's a mix, as I said, of the right people around the table that are coming up with some guiding principles and breaking it down. AI is so many different things in different sectors, and so understanding the way in which AI is important and can be used, as the last speaker on the last panel said, to actually potentially point out where bias is coming into human decision-making, but also to ensure that there is an ability to evaluate and study where bias may be entering into AI. I just think there should be some kind of body that is doing that on a regular basis, broken down by the different uses of AI in different sectors. But ultimately the long-term game, I would hope, is more of an institutionalization of this very set of inquiries in every federal agency, because of the amount of funding and work that they are propagating in every community around the country. We also have a threshold problem right now, which is we need to understand more about what is actually happening, and in order to do that we need organizations like this one, we need civil society, we need research. And one of the things we could be doing at the federal level is sorting through some of the laws that actually are barriers to performing that kind of research in the first place, which is also a conversation that we're not having, and I think it's a significant one, because we need to make sure that we can continue to have a
way in which we can conduct some of the testing and work and research that is going to help us understand what's happening. Absolutely. Let's go back to Vanita's point for a moment about market forces and the private sector, because one of the things I've spent a lot of time thinking about, and Terrell I know you have, and actually all of you have in different ways, is how do you empower the individual in these contexts, in these ecosystems. And, you know, on some level, look, we've clearly over-leveraged notice and choice, and I think probably 99% of the people in the audience would say, how could you possibly engage in transparency and notice and choice in an AI system? But I wonder about that. I mean, is there some usefulness to having that market force, so that consumers are voting with their feet, or voting with their fingers on their keyboards, right, and moving away from entities that are mistreating them? And, concomitantly, are there companies that are going to be saying, you know, we're not gonna wait for the federal government to create an independent agency, we're gonna set up a standard-setting organization, we're gonna set up a partnership on AI, which we know already exists? You know, are there going to be forces out there that will leverage the market to get us to a better place with respect to AI, or to a place where we can at least begin to trust it and understand it? For a bunch of reasons, really. One is, you only ask hard questions. So, to try to answer the question: there's one layer, which is just the transparency of when does the company let you know that they're using AI to make a decision, right? And again, we haven't decided or agreed on what's the right trigger, 'cause it can't be all of them, 'cause I get enough notifications and I don't read the ones I get, so, like, it can't be all of them. And so it should be the important ones, and then we have to get to some agreement on what's important, right? I also think, what my
engineering friends tell me, right, is: how am I supposed to get your consent, to give you notice, about how your data is gonna get used for something I don't know yet, but I'm pretty sure one day in the future will be really important? I don't know. So how does that consent work? Because at the FTC, and in Europe, right, specific consent requires enough to let a person know what that use is, what the boundary of that use will be. And I think that big data analytics and machine learning and AI are in real tension with that notion, and that's a really big challenge, which from a regulatory perspective means: does that mean you clamp down on uses, to make it not harmful to people to have the secondary uses, right? I mean, the issue for me as I'm thinking through that question is, I don't know what that looks like. What does notice and consent even mean when AI is being used in the criminal justice context, or by police departments in encounters with African-American men on the streets in certain kinds of neighborhoods in Chicago, just for instance, or LA, where they're using PredPol? In that particular context the question becomes much more charged, where the power dynamic between who is using predictive policing and who the subjects are is already so skewed. So notice and consent, certainly in the criminal justice context, I don't think is an answer to that question, because residents in Chicago or Los Angeles may very well know that their police department is using predictive policing and may not understand the first thing about what the algorithm is or what it actually does. You know, there's a lack of transparency still about the uses of that, and so it becomes a little bit tricky just to rely on market forces, so to speak. When I say that I think it's important for the private sector to come together, I think there's a lot of potential in having kind of a not necessarily government-sponsored independent agency, but a group of, as I said,
kind of people who are very deeply engaged in this, that are setting standards and asking the right questions in different contexts about the use of AI. I mean, I think we're pushing a lot of things together in, you know, a very complicated conversation, and we only have, like, 30 seconds left. I guess I would say, just as a Federal Trade Commissioner, I'm not prepared to get rid of the concepts of notice and choice in the privacy context, so nobody misinterpret what I'm about to say, right? And I think that we still need to hang on to these concepts when we're thinking about providing people with information so that they can make informed choices about what is happening when they exchange data for services and other things. I think that's incredibly important, and I think in many cases what's happening is people's data is potentially being undervalued in that transaction, or they're not valuing it enough. And so what I would love is for everybody to be an incredibly informed consumer, and for all of us to understand how all of this technology works all the time, but that's simply not possible, as Nicole pointed out, nor is it even really that useful in our day-to-day lives necessarily. But I think there's such a tremendous information asymmetry that I don't see that there's a market force here that corrects it if we are simply taking humans and consumers out of the equation entirely, absent some sort of regulation, right? So that tees up: well, what is the regulation? And there are some really foundational questions that we haven't answered yet, which is, when are we going to say, okay, that's a choice that humans are going to make, and that's a choice the machines are going to make? And we don't know all of the answers to those questions. Now, I've suggested I think we have some guideposts from the brick-and-mortar world that we can bring in here, where we've already established when we want to know how and why someone's making a decision about us. I want to know
what my credit score is, I want to see what's on my credit report, so I can understand why I'm getting the certain kinds of offers that I'm getting. And so I think we need to continue to be able to have ways to engage with this material and with the ways in which the machines are making decisions about us. I'm not sure exactly what format that takes; it's gonna be different than the FIPs privacy policies, right? But I'm not prepared to give up the notion of consent as a human. Sorry, that's why I asked the question. I think it raises an incredibly important issue. And transparency, you know, there's a sunshine effect to the fact that if a company knows it's going to have to make a disclosure, or a police department knows it's going to have to disclose its practices over the past year, that could make it start thinking about, well, gosh, you know, we actually need to make sure that we're saying and doing good things with respect to AI, and that we can stand up to what we're doing. All right, well, our time is definitely up, so would you please join me in thanking our panel on governance. [Applause] [Music] [Applause] Thank you so much to Julie and all of our governance panelists. Tonight we are up to the final panel of the night, and it's gonna be on rights and liberties, and it's my honor to introduce a pioneer in thinking about technological due process: it's Professor Danielle Citron. She has been such an important scholarly leader here, and has influenced my research, speaking personally, for many years, that it is a particular privilege to have her here tonight. And she's going to be guiding a discussion with leaders in four different domains: economics, technology, anthropology and civil rights. But we also really want to hear from you on this panel, so as you listen, if you start to get questions, which I hope many of you will have, start tweeting them to hashtag AI Now 2017, and we're actually going to get those to the panelists, because there's so many of us tonight we're gonna
have to do it via the old Twitter way, but please send through your questions and we're gonna get them to the panel tonight. On that, please welcome Danielle Citron. There's nothing like Kate saying that she admires your work to give you, like, great joy, as she's our North Star in AI, so thank you so much. So, it's true, we have an interdisciplinary dream team, and we're gonna start off with Blaise, and I'm gonna get this wrong, right, again: Blaise Agüera y Arcas, who is the head of machine learning at Google. He was stolen from Microsoft and is bringing, I think surreptitiously, social justice to this endeavor, or maybe explicitly so. So welcome to the stage. Thank you so much, thank you so much. And I'm so not the head of machine learning at Google; there are quite a few heads, it's like a Hydra. But very honored to be here. We have in our group a lot of concerns that intersect with things that are being discussed here tonight, but the one in particular that I wanted to talk about, since I have only a short time to talk, is, sort of in the spirit of solutions journalism, a tool or a technique that we've developed over the last couple of years that I think might have something useful to offer to tonight's topic. So what you're seeing on the screen is an animation of a little technology, a very small technology, that is showing up in Android, in the newest release of Android, called Smart Select. And the idea is that when you press on a piece of text, there is a little neural net that tries to take a guess as to what you're selecting, because that does a better job of guessing what you're selecting than just a regular expression or some rule. It's a little neural net, a classic example of AI, albeit a very, very small kind of AI. But this is also an example of a sort of AI that you really want to be on the device and not to be implemented as a service that Google runs. Selecting text on your phone is not googling
something. So it's a piece of AI, if you like, that one would like the company that makes the phone to embed in the device rather than to run as a service. Now, the challenge is that if you're an AI researcher, you know that the usual story with deep learning, with machine learning in general, is that the use of the service is what produces the logs, the training data, that allow you to make the next generation of the service better, and this is what prompts the sort of "big data is the new oil" kind of narrative. And in this case, you know, the importance of putting this algorithm on the device and not sending the training data to Google is that, of course, you want to preserve the user's privacy, but then how do you make this thing better? So we developed a technique in the group called federated learning that involves the device remembering all of the corrections. In other words, if you try to select text and you change the beginning or ending caret, then that correction is remembered by the phone itself, and it dreams at night; it does the same thing, probably, that we do when we sleep, when we consolidate memories. That's why, by the way, if you don't get enough sleep, you can't learn a new skill. And so it trains its own copy of the neural net. But the really interesting part of this technique is that those learnings, those changes in the neural net weights, are then compressed and encrypted and sent back up to the cloud and combined with everybody else's compressed and encrypted changes. In that way, the learning can aggregate across all of these devices, and in that way it's possible to separate deep learning from big data, which we see as a very important step in moving toward, you know, all the benefits that come from learning at very, very large scale without having to have this compromise with privacy. And this is the sort of infographic that we tried to make to convey this idea to
muggles. I'm not sure if it worked or not. I was very happy to hear my boss, John Giannandrea, say at Google's I/O conference: you want to do machine learning on the device as much as you possibly can; it's lower latency, it's closer to the user, it's distributed. And this obviously has a lot of interesting implications, the more we can do it, in terms of rights and liberties. And I guess I should close by saying that when we think about the future of AI, I very much hope that we're not talking about a future in which there is some singular giant AI that somehow is embedded in all of us, which, you know, I've sometimes described as being a sort of Borg-like future. It's more interesting, I think, to talk about a more X-Men-like future, in which the AIs that companies and researchers and so on build can augment us as individuals, and I think that it's fundamental to think about AI that way in order for us to be more, as opposed to less. The metaphor: this is a still (I don't think I got permission, actually) from the not very good movie made out of Philip Pullman's wonderful His Dark Materials trilogy, which imagines sort of witches' familiars as being like a part of every human consciousness manifested in something physical. And this idea of a familiar, of an extension of yourself that isn't quite you but isn't quite not you, is, I think, the sort of future that we should be aiming for. And I guess I'll stop there. Ben, you're up next. [Applause] Thank you very much. I apologize for ruining the aesthetics of the event by having my notes in my hand, but I wasn't expecting the podium to be imaginary. I want to start by thanking Kate and Meredith for their leadership, for their friendship, and for inviting the ACLU to be a founding partner in the AI Now Initiative. We are so excited to be your fellow travelers, in every sense of the word, on this journey. It seems to me this is an auspicious time for us to
get together and to ask these questions, for two reasons at least. First, as many of the speakers have said today, there's still time to do something about the provocations and the questions that are being raised today. It isn't too late for us to have an impact on the legal and policy and technology debates that are taking place. And the second reason is Donald Trump. The democratic stress test of Donald Trump's presidency has gotten our attention. It's much harder to believe, as Eric Schmidt once told us, that technology holds the answers to all of the world's problems, and many technologists who were once fond of saying that they had no interest in politics have come to realize, I think, that politics is very interested in them. By contrast, consider how over the last two decades the Internet came to become the engine of a surveillance economy. Silicon Valley's apostles of innovation managed to exempt the Internet economy from the basic consumer protection rules that govern most industrialized democracies by arguing that it was too early for regulations, they would stifle innovation, and in almost the same breath they told us that it was also too late for regulations, because they would break the Internet. And by the time significant numbers of us came to recognize that maybe we hadn't gotten such a good deal, the dominant business model had become so entrenched that to change it will now require a Herculean political effort. So when we place innovation within, or atop, a normative hierarchy, we end up with a world that reflects private interests rather than public values. So if we shouldn't just trust the technologists, trust innovation, trust the corporations and the governments that employ most of the technologists, what should be our North Star in this conversation? As a civil libertarian, I would offer that liberty, equality and fairness are the defining values of a constitutional democracy, and each of those values can be threatened by advances in automation that are
unconstrained by strong legal protections. Liberty will be threatened when the architecture of surveillance that we've already constructed is trained, or trains itself, to track us comprehensively and to draw conclusions based on our public behavior patterns. Equality is threatened, as you've heard tonight, when automated decision-making mirrors biases that already exist in our society, replicating them under the cloak of technological impartiality. And basic fairness, what we lawyers call due process, is threatened when enormously consequential decisions that affect our lives, whether we'll be released from prison, offered a home loan, offered a job, are generated by proprietary systems that don't allow us to scrutinize their inputs or methodologies and meaningfully push back against their outcomes. Since my own work focuses mostly on surveillance, I'm going to devote my limited time to that. When we think about the interplay between automated technologies and the surveillance society, what are the kinds of harms to core values that we should be most worried about? Let me mention just a few. When we program our surveillance systems to identify suspicious behaviors, what will be our metrics for suspicious? These are the eight signs of terrorism. I found this, I swear, in a rest area in upstate New York, upstate New York, which surely has hordes of terrorists roaming around. My favorite, I think, is number seven: putting people into position and moving them around without actually committing a terrorist act. How smart can our cameras be if the humans programming them are this dumb? And of course this means that many people, particularly the usual suspects, are going to be logged into systems that will in turn subject them to additional coercive state interventions. But we shouldn't just be worried about false positives. If we worry only about how error-prone these systems are, then more accurate surveillance systems will be seen as the solution to that problem, and I'm at least as worried
about a world in which all of my public movements are tracked, stored and analyzed accurately. Bruce Schneier, who is here, likes to say: think about how you feel when a police car is driving right alongside you, then think about having that feeling at all hours of every day. Another danger: in our eagerness to make the world quantifiable, we may find ourselves offering the wrong answers to the wrong questions. The wrong answers, because extremely remote events like terrorism don't map accurately onto hard categories like these, and the wrong question, because it doesn't even matter what color we choose on this chart. Once we've adopted this framework, we say that terrorism is an issue of paramount national importance, even though that is a highly questionable proposition. The question becomes how alarmed should we be, not should we be alarmed at all. And once we're stuck in this framework, the only remaining question will be how accurate and effective our surveillance machinery is, not whether we should be constructing and deploying it in the first place. One final observation about the interplay between rights and liberties and technological progress: if we're serious about protecting liberty, equality and fairness, we have to recognize that in some contexts inefficiencies can be a feature, not a bug. Look at these words, written over 200 years ago. This is an anti-efficiency manifesto. It was created to add friction to the exercise of state power. The Fourth Amendment: they can't search or seize without a warrant supported by probable cause of wrongdoing. The Fifth Amendment: the government can't force people to be witnesses against themselves; they don't get two bites at the apple; they can't take our freedom or our property without due process. The Sixth Amendment: everyone gets a lawyer and a public trial by jury to confront the evidence against them. The Eighth Amendment: they can't beat evidence out of us. This document reflects a very deep mistrust of aggregated power, and if we want to preserve our fundamental
liberties in a world of aggregated computing power, I would suggest that mistrust should be one of our touchstones. Thank you. So, one quick story before I introduce Sendhil: something we confirmed today, which I thought was apocryphal but turns out to be true. Ben is Edward Snowden's primary lawyer, and when Snowden first got in touch with Ben, before they ever talked on the phone, there was a very important case that the ACLU had lost before the Supreme Court, in which the court found that individuals under surveillance, lawyers representing people in human rights cases, lacked standing because they couldn't prove they were under persistent and total surveillance. Why couldn't they? Because all these surveillance systems were secret. So apparently, and it's true, when Snowden called Ben to see if he would be his counsel, he said, you know what, Ben, do we have standing now? I love that. So yes, thank you so much, Ben, for doing that; he's our civil libertarian watchdog. So now we get to welcome to the stage Sendhil Mullainathan, who is a professor of economics at Harvard, a MacArthur genius grant recipient (I feel like I'm a mother, I'm excited to brag), and also someone who has given really deep thought to discrimination in the workplace, as well as working, I like this, with the CFPB, the Consumer Financial Protection Bureau. So thank you so much. I've had to follow a lot of things, but following the Declaration of Independence has got to be pretty hard. So I'm just gonna give you a little bit about my background; I think it's a little different here. I'll start with a couple of things. The first thing I'll start with is, I have done a lot of work in behavioral science, so part of what I'm going to talk about here is a little bit the contrast of artificial intelligence with human intelligence; that's going to be in the background. And the second thing is, because I've done a lot of work on policy, I kind of come to this with
the view that there are a lot of pretty intractable policy problems, so part of what I want to think about is how these things can play a role there. So I want to start with a probably very intractable policy problem. In the U.S., there are about 12 million arrests every year, and, I don't know if you guys knew this, before I started working on this I didn't realize it, shortly after arrest something happens: you go before a judge, and the judge has to make a decision fairly quickly about you. It's not about whether you're guilty of this crime; it's not about whether there's evidence. It's just: while waiting for trial, will you wait here in our very comfortable jail, or will you be sent home? This is an incredibly consequential decision. It's consequential financially: there are about 750,000 people in jail every year, so if you just took a pure dollars-and-cents point of view, that's a lot of money. If you took a human point of view, a typical jail stay is about two to three months; in some jurisdictions it's nine to twelve months. That's not a person who's been found guilty of anything; they're simply in limbo, waiting, and that's insane. And we have both types of errors: if you look at the number of crimes committed by the people who were released while waiting for trial, that's also a shocking number. So why is this problem so relevant? It's at the epicenter of a lot of what we think about in crime. For example, people who look at mass incarceration: depending on how you count it, a large fraction of mass incarceration is jail, not incarceration for crimes committed but incarceration for waiting. Now, it's relevant for another reason: it's actually quite relevant for the artificial intelligence question. I think you've heard a lot about prediction. You know what the judge is asked to do here? By law, they're asked to make a prediction: will this person flee, will this person commit a crime? Weirdly, the judge is carrying out the standard machine learning problem 12 million times
every year in the United States. So, I will skip the pictorial, but the judge is like a little algorithm, taking a defendant's history and outputting a prediction of crime. This isn't Minority Report; it's actually what we do. So you could ask the question: given all of the data, if a judge is executing this piece of code using the human brain, maybe an algorithm could do the exact same thing. Now, what's weird about all of this is that these are all quantified. What is the judge predicting? Failure to appear. And we know failure to appear, because a person either appeared or they didn't. So what happens, and how well does this work? Well, here's some data: here's the predicted risk, and here's the release rate of judges. This is from New York; this is about 750,000 cases. So this is our risk prediction on the x-axis, by the algorithm, and this is what the judge does to everybody in that bin. There are two areas that I find interesting here. The first is this area, where the judge and the algorithm agree: risk is increasing, and release rates are dropping. Okay, so there's a large set of agreement there. This area is kind of shocking. Let's just look at what this is: this is an area of very high release, about 50 percent, but the algorithm is saying these are people who are going to commit crimes at around a 60 percent rate. So these are extremely high-risk individuals being released, and I think that's the first sign that something is slightly askew. So if you go back and say, what if we were to re-rank people by predicted risk and decide who should be released based on that? That might seem a little uncomfortable, but let's just see what happens; it's just data, I can implement it. Well, something weird happens, which I think is part of what I want to come to in my first theme, which is optimism. Here's what would happen if the algorithm released nobody; that's on the left. The algorithm released everybody; that's on the right,
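The re-ranking exercise being described, scoring everyone by predicted risk and simulating release policies against observed outcomes, can be sketched as a toy simulation. Everything below is synthetic and illustrative, not the New York data cited in the talk; among other things, the published version of this analysis (Kleinberg et al., "Human Decisions and Machine Predictions") has to handle the selective-labels problem, since crime outcomes are only observed for people judges actually released, which this sketch simply assumes away. The "judge" here is also just a crude stand-in: a noisy version of the true risk score.

```python
import random

random.seed(0)
N = 100_000
RELEASE_RATE = 0.737  # the judges' release rate quoted in the talk

# Synthetic defendants: a risk score in [0, 1], and an outcome drawn so
# that the true probability of pretrial crime rises with the score.
defendants = []
for _ in range(N):
    risk = random.random()
    crime = random.random() < 0.25 * risk
    defendants.append((risk, crime))

# "Judge": releases the same share of people, but ranks them by a noisy
# version of the score (a crude stand-in for imperfect human judgment).
noisy = sorted(defendants, key=lambda d: 0.5 * d[0] + 0.5 * random.random())
judge_released = noisy[: int(RELEASE_RATE * N)]
judge_crime_rate = sum(c for _, c in judge_released) / len(judge_released)

# "Algorithm": releases the lowest-risk defendants at the same rate.
by_risk = sorted(defendants, key=lambda d: d[0])
algo_released = by_risk[: int(RELEASE_RATE * N)]
algo_crime_rate = sum(c for _, c in algo_released) / len(algo_released)

# Flip the question: holding the judge's crime rate fixed, how far down
# the risk ranking could we release before matching it?
crimes = 0
matched_release_rate = 0.0
for i, (_, c) in enumerate(by_risk, start=1):
    crimes += c
    if crimes / i <= judge_crime_rate:
        matched_release_rate = i / N

print(f"judge: release {RELEASE_RATE:.1%}, crime rate {judge_crime_rate:.1%}")
print(f"algo : release {RELEASE_RATE:.1%}, crime rate {algo_crime_rate:.1%}")
print(f"algo at judge's crime rate: release {matched_release_rate:.1%}")
```

Even in this crude setup, the qualitative pattern from the talk appears: at the judge's release rate the ranked policy yields a lower crime rate, and at the judge's crime rate it can release a larger share of people.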
and this is the crime rate that would be realized. So how is the algorithm doing? Here's a point I find useful. Judges released 73.7 percent of people. The algorithm, at that release rate, produces about an 8.5 percent crime rate; the judges produced about an 11.3 percent crime rate. But you can also go horizontally: if we're okay as a society with an 11.3 percent crime rate, why don't we pick this point over here? And that point over there is an 84.6 percent release rate. So we could empty the jail population by about 41 percent and not change the crime rate. I'm raising this because I think we have every reason to be very concerned about these algorithms, but we also have every reason to feel that, if used correctly, there's an enormous amount of optimistic potential in them. If used correctly. Why is this happening? Because if I put aside everything I know about machine learning and pull in everything I know about human intelligence: human intelligence is enormously biased. We've talked about the bias of machines; humans are crazy biased. This is a hard problem. We've had 40 years of statistical research on human decision-making, and these are the kinds of problems we do badly at. What happens? Someone comes in and you say, do you see that guy, the way he was looking at me? That guy's going to commit a crime. The way he was looking at you shouldn't play any role. So let's go further, and you'll see the nature of human bias. Let's go back to racial bias here, and by the way, these effects are even bigger; I think machine bias is one of the most important areas. Let's go back and see what happens in this data. How big is the machine bias? Well, let's start with a benchmark: amongst the people who come before the judge, 48.7 percent are African American. Just take that. That could be very wrong; there are lots of embedded problems in the criminal justice system. But fine, let's just start with that number. Judges actually jail at a 57 percent rate, so there's
a situation where African Americans face a much higher jailing rate. But what the algorithm actually does is jail at roughly the base rate, so the algorithm is extremely good for African Americans. And in fact, if you want, you can turn up the dial: we can get down to about a 40 percent jailing rate for African Americans and not really affect the crime rate. Why is this happening? Maybe the algorithm is biased; in this case it doesn't happen to be. But you know the one algorithm for which we have an astounding amount of evidence of bias: the one in your head. And so, to the extent that we think there's potential in these algorithms, and again, this really pivots on doing it correctly, I think we have to weigh these machines as a potential source of bias against the enormous amount of bias within ourselves. So I want to close here on the rights and liberties aspect. I want to talk about this as a different kind of rights problem that we should also pay attention to, and I'm going to do it by giving one example. The example is from community college; my time is almost up, so I'll be brief. One of the biggest issues we have with economic mobility in the United States actually shows up in community college. A lot of people show up, they take classes, they drop out, they fail; it's very hard to get ahead. One of the reasons this happens: imagine no one in your family has ever been to college. You get there, and it turns out you're given a quiz on the first day. The quiz on the first day is: what class are you going to take? It's actually a really hard quiz. Do you take the advanced math class? The regular math class? The remedial math class? You may not realize it, but a lot rides on this decision. Take the regular math class and do badly, and what do you start saying to yourself? Maybe I don't belong here. Why am I raising this problem? Because that student facing
that quiz can go home that night and have the world's best data scientists help them answer the following question: how will I spend the next two hours of my life, which movie should I watch? So that "right" is covered. But in deciding how they're going to spend the next six months of their lives, there's a guidance counselor on the fifth floor. So to me, the biggest problem we have in this space right now is that these things are being used for purposes that very much serve someone else's desires. I think there's an enormous potential for these things if they can be turned to serving the people who typically don't get served. That's what I see in the jail example; that's what I see in the recommender-system-for-courses example; and there are endless amounts of such applications. I think we really have the potential to do something fairly useful here. Okay, let me stop. Thank you. And now we have Genevieve Bell, who is our anthropologist for the evening; somebody's going to help us understand AI from the "who are you, AI?" perspective, right. She's a senior fellow at Intel, now a professor at the Australian National University, and a wonderful speaker, so come on up. There's absolutely no pressure being the evening's anthropologist. So how do you follow all of that, right? And how do you think about AI in this context of rights and liberties? I really wanted to move this conversation in a slightly different direction. Think of this as the kind of meta moment before drinks, which I'm assured are outside, and there's an open bar, and you should never stand between Australians and an open bar. So how do you start, right, in thinking about AI in the context of rights and liberties? Well, I think you have to start at first principles, which is: how do you define it? You heard Kate and Meredith stand right on this stage here and say the thing about AI is it's a complex of technologies that includes everything from machine learning to computer vision to natural language processing. You've also heard them stand on this stage,
and indeed on it a year ago, and say: but the other thing about AI is that it's also a constellation of cultural and social practices. And that turns out to be hugely important when you want to talk about it in the context of rights and liberties, because those are also intensely cultural and social practices. So how, as an anthropologist, would you want to define AI? It's one thing to say it's social and cultural, but how do you give that a little bit more specificity? How would you put a little bit more attention on that system? Well, one way to do it would be to go classic anthropologist: go get a bit of Claude Lévi-Strauss and say, okay, if artificial intelligence is on one side, what's on the other side? Organic emotion? I don't know what the oppositional point would be, but you kind of go "the raw and the cooked," you know, AI and the other thing. You could certainly subject it to a kind of classic Spradley ethnographic interview, and ask AI to describe itself descriptively, structurally, with a contrast question, and you'd get something sort of interesting. I also think there's another, slightly different, critical-theory lens you could take on this, which is to think about AI as having marked and unmarked categories. So what do I mean by that? Well, you had a couple of examples about that earlier. In these kinds of conversations, let's pick, for instance, "scientist." We frequently add the descriptor "female scientist," because the understanding is that scientists writ large are male, and female scientists are the exception. They're the marked category. The unmarked category doesn't need a descriptor, right? It's just taken as read. When you put that marked category in front of the word, you open up its meaning. So I want to do that to AI, and just suggest four words you could put in front of AI that might illuminate its unmarked categories by attempting to mark it. Because I've recently moved home, I thought I'd start by asking: well, is AI
Australian? And you're all laughing, because you know the answer is: no way. And how do we know that? Well, firstly, we know that because our by-now-somewhat-distressed colleagues at Volvo, of Sweden I hasten to add, brought their autonomous car to Australia and discovered a really critical thing: caribou are not at all like kangaroos. They may both be animals that lurk by the side of the road, but they behave completely differently. Kangaroos bounce in the air, and apparently the bouncing-in-the-air bit makes it very hard to determine distance, which means those cars run into them. Problem. So the AI there had a country, and the country wasn't mine. The country also turns out not to be yours, because deer and caribou aren't the same either. But when you put a country in front of AI, you already start to ask the question of whose AI this is. When we talk about rights and liberties, whose rights and liberties do we imagine we are enshrining? I love the idea that the Bill of Rights is a document of inefficiency, but it's only one country's document, and what would it mean to think about everyone else's? So if AI has a country, where is that country, and does it matter? And of course the answer is yes, but how would you unpack that and start to suggest other countries? You could also, in the context of rights and liberties, ask the question of whether AI has equity in it. Are we talking about an equity AI? What would that look like? Would that be the AI that managed the Fox News sexual harassment problem? Would that be the AI that ensured that women's pay wasn't calibrated by historic pay data? Would that be an AI that was interventionist, and if it was, whose intervention would it be? How would you determine what equity was, who would make that determination, and how would it be read into the system? And frankly, putting that word in front of it also starts to suggest: what does it mean to say that the data isn't enough anymore, and that training an algorithm on a piece of data may
only get you what the world has been, not what the world needs to be or should be. So the second question you might ask here is not just where is AI's country, but where is AI's context? In some ways it's always and already retrospective, and so how you would make it prospective is actually a really interesting question. The third thing you could put in front of it, just for fun, would be: Buddhist AI, question mark. Now, there are a couple of things about that that are important. One is that artificial intelligence does sound incredibly, well, agnostic, or atheist, just as a starting point. It also, however, has embedded in it some ideas that come out of arguably not just a neoliberal tradition but possibly a Christian one. There's a notion inside AI about systems being autonomous; there are some lurking notions about free will; there are certainly ideas about systems becoming sentient and self-aware. All of those are interesting ideas that aren't just cultural; they are cultural and religious. What would it mean to talk about a Buddhist AI? One of my favorite Japanese roboticists wrote a book nearly 30 years ago in which he argued, I would say slightly tongue-in-cheek but mostly not, that robots, i.e. the thing that goes around AI, would be better Buddhists than human beings, because they were capable of infinite patience and infinite grace. You might also ask: if you were to talk about a Buddhist AI, could you talk about an artificial intelligence that was co-emergent with us, where there is no us and them, there is a co-emergent set of properties? And what would that look like? Frankly, you could put any other religion in front of this. What would it mean to talk about a Lutheran AI, an AI of submission, not autonomy, in the Lutheran sense? And then last, but by no means least, what might it mean to talk about an emancipated AI? We spend a lot of time talking about autonomous systems, about what it will mean to be safe with autonomous systems, how AI
will be safe for us. There is a question about whether we would get to a point where a system, and a society, was judged by how it treated its artificial objects, not the other way around. What might it be to imagine that autonomous systems aren't autonomous but emancipated? What does that start to look like? Right, so what are the things I'm suggesting here about markedness and unmarkedness? One is that every time we say AI, there is a set of implicit, tacit cultural assumptions that are unvoiced, but you can get at them by starting to push on the system. I could just as easily have put up here queer-theory AI, loved-up AI, democratic AI, totalitarian AI, because all that does is help us ask the question of what AI really is, what we might want from it, and how we have that conversation inside the broader context of rights and liberties. So thank you. [Applause] See, I told you, it's a dream team. Here we go. Okay, so what I know we're going to do is: I get the moderator's prerogative to ask a question or two of all of us, but then, apparently, we're live-streamed, so we have an audience sending in questions cold. So I know it's going to be a real roll of the dice... no, no, we appreciate you, please send in your questions, we're very excited. Don't ask anything too crazy. Okay. So, in that prerogative: as we think about the themes of the night for rights and liberties, two things struck me that I want us, if we can, to drill down on. It's the what and the how, right? The what question is: well, what civil rights and civil liberties are we talking about, and what values? Genevieve, you talked about how it's all really culturally contingent, right? You told us that equality, fairness, and liberty are at least the guideposts for us; we didn't talk specifically about which rights and liberties, but at least those were ones we had some consensus around. But in a culturally contingent world, right, where we're
thinking we're not sure what it is we all share, do we then ultimately come down to the private actor that one worries about? That if indeed we can't come up with a set of shared values and norms, and we're agnostic because we know it's all very culturally contingent, then the problem is we leave it in the hands of private parties, and it's not always going to be run by Blaise. I know, I can't say he runs things, but it's not always going to be Blaise, right? Which would make me feel a whole lot better if it were you, but it's not always going to be. So can we as a panel think about: are we okay with fairness, equality, and autonomy? Are there other values we want to add to the mix? And as we think about what that is: are there pressing problems? I know today one of the questions was, what is the set of problems we most urgently need to address, whether it's using scoring to figure out sentencing, or the allocation of surveillance practices; are there others? We have such a great audience of people who are researchers thinking about this, and policymakers, and watchdog groups. Do we want to create an agenda for them? Are there three things we want to call out as the "what" that we want everyone focused on? And then, and maybe you guys will help me work through some of this, the how, right? How do we get there? In Europe we have this imperative that we have to explain automated decisions in some way. How do we do that, though, when even the designers can't explain it? I'm thinking, Blaise, of the device that knows us: can it give us an explanation? Are there ways we can make these machines accountable for their decisions even though we can't quite explain them? Are there other possibilities? And for the surveillance state, where we don't know we're under surveillance all the time, and we don't know we're being scored by these systems in a meaningful way, how do we
litigate, right? How do we come up with the sets of problems for this agenda? And I love the optimistic "let's think about how we can use them for good," right? That we can use automation not to take away rights and liberties but to enhance rights and liberties. How might we imagine that? I loved it; I think it's such a hopeful note for us to end on. So: the what and the how. Sorry, I didn't ask anyone a specific question, but it's an invitation, if you're interested, to drill down a little bit more. Genevieve, if it's okay, I'm going to start with you... or we can start with Blaise and go back... right, we're a playful bunch. So listen, I think one of the challenges here is, and I don't mean to suggest that it all devolves to nothing, but I think what does become clear is that what constitutes a conversation about rights and liberties feels different in different countries, because they're attending to different challenges. There's been an interesting set of conversations in Australia recently about how you might use data and technology systems to think about domestic violence, a thing that actually has a considerably greater impact in Australia than terrorism. There's been a really interesting set of conversations about how you would use predictive policing and things like that to identify victims of domestic violence. Now, that's meant there's been a conversation about how we think about human rights and whose rights sit inside that, because those rights were often not necessarily the rights of women. It's an interesting argument that says: how do you then think about rights and liberties when those are experienced differently, not only by race but also by gender and the other things you could choose to tackle? So I think I have to unapologetically defend a global human rights framework here. It may be that the Universal Declaration of Human Rights was written by people from a particular
perspective, and yet, nonetheless, I think it was a necessary effort: we have to agree on certain rights that are so fundamental that we're not going to let majorities override them. That means majorities here, and it means majorities anywhere. And I think that's a pretty good framework for thinking about what normative concerns we want to have as we move into this time of technological change. Can you speak to the "how" for a minute? Yeah. I think there's a theme that actually came up in all of your talks, which I really liked, summarized by the frictions argument: to the extent that we think of the Bill of Rights as a statement of frictions, that might be a good metaphor for a lot of these artificial intelligence applications. Maybe what we need is more friction in the application. So when you talk about asking what type of AI this is, that's asking us to reflect more, and that's a form of friction. I think part of what's happening is there's a lot of mechanical application: we're just going to turn the crank. There's a lot of "oh, now we can do surveillance, let's just turn the crank." Instead, let's reflect on a bunch of these questions. And how do we put in more of these frictions? How do we put in the friction that you need to process the data locally? We could almost ask one meta-question: what are the frictions that we want to put in place? Friction is kind of the right metaphor. From a regulatory point of view, I don't think we're far enough along to have bans and barriers; maybe in some places we are, but in nearly all the places, we must be ready to ask what the frictions are, what they should look like, and how we put them in place so that it does slow things down a bit. Which I think is antithetical to how the sector tends to want to go. Well, if anything, it runs counter to that wonderful logic that we've had in some ways for the last 20 years about things being
seamless. And I think the argument is that seamfulness is actually really important here, right? Seams actually matter, because they are the moments when you think about: is this what I say at work versus how I might say something at home? Is this the email that I want read by my partner versus my boss? Do I want that data on that system? There's a bit where, as humans, we've protected those seams in all kinds of ways, and the impulse to make everything flow everywhere is in some ways, I'd even say, an engineering impulse. It's been an aspiration in a way that isn't, in some ways, human. This idea of seams being sometimes beautiful, or necessary, or useful is also, I think, an acknowledgment of the fact that technologies are augmentative. This isn't new, what we're talking about. We've been using technology to modify ourselves, wittingly or not, since we speciated. We have short guts because we've used fire to cook, and we have a lack of fur because we clothe ourselves. These are technologies that have changed what we are, and that augment us both individually and as a species. What's interesting about the AI technologies is that they give us an instrument with which we can measure a lot of things about ourselves, which is why a lot of these fairness questions start to come up, in the context of finally being able to analyze them using those techniques. But once one has opened that Pandora's box and seen all of the ugly things inside, all the implicit biases and the problems, we're learning, you know, that judges are potentially racist, or might be affected by how long it's been since lunch, or what have you, then we have to actually act. And that opens a design challenge. So when we start talking about what the normative things are that we want, we're really asking some very difficult questions about what the design is that works for everybody, what
is the design that works for groups, and how we think about that entire spectrum and that entire space in an articulated way, such that there are things that perhaps are universal, or presumably minimal ones, and other things that are adaptive and specific. And are there areas where we think the speed bumps, the friction, should be particularly tough? That is: when we're using AI to give people services, to enhance their lives and not take something away, maybe we need fewer speed bumps, right? God bless the FISC, it can handle itself; look at what Uncle Sam's pocketbook looks like. But when we're taking away someone's rights and liberties... is there a hierarchy of concerns for us, where we'd say the speed bump should be darn high, we should really slow down to five miles per hour, we should really think hard, because ultimately you're putting someone in jail? Are there sets of problems where we think AI really needs a huge speed bump, and sets of problems where perhaps it doesn't have to be as enormous, where we don't have to slow down? Maybe that just comes back to the question of which rights and liberties. I think you put your finger on something else, okay, which is: think about something like police heat maps that are based on predictions about crime. We feel one way about that, because at least as to some of the people the police are going to interact with, it's going to be a coercive interaction that is going to deprive liberty. But if you use the same kinds of technologies, and you've heard foundations a few years ago talking about million-dollar blocks, places where these kinds of problems were expensive for society, and that targeting was used to target additional resources, not to target additional coercion, we might feel very differently about that. And that actually works in a constitutional framework: when we talk about depriving people of liberty or property, we say you need
due process. You've written about this; Kate has written about this; Jason Schultz, who's here. It's not so easy in a world that's very proprietary, but we need to find a way. I think another place for a speed bump is a little bit meta: it's less about the outcome variable per se, whether it's coercion, which I agree with, versus services; it's about the nature of the thing. So think about clinical trials and drugs. If tomorrow I introduced not a drug but, let's say, a device technology for diagnosing cancer, we have an incredible regulatory machinery for what I need to show to demonstrate this is an effective diagnostic tool. But if I add an algorithm that simply helps analyze it for the doctor, that's not a diagnostic tool in the sense of taking blood from you, yet it's serving the same purpose. So one of the places where we need a lot of friction is when we actually come up to the point where something is being rolled out en masse. In most sectors there are some procedures in place. If we had a new law as to how we were going to make bail decisions, that would be debated; we have systems in place, hopefully, and maybe they have problems, but at least they exist. These things almost find a way in through the back door of getting to scale without ever having triggered any of those things. And so it seems like that last bit, not the research, not the piloting, not proving the proof of concept, but where all of a sudden it goes to scale without any friction, that seems like where a lot of crazy stuff could happen. Well, in some ways it's an interesting fracture of what's been the logic of Silicon Valley, too, which is: build it in beta, throw it out, get people to test it for you, go "oh, that didn't really work as well as we expected, we should fix that." Whereas you think about some of the consequences here: would you want an algorithm for sentencing to be sent out in beta? "Well, it didn't really work, so we'll try that again." You're like,
there's a bit that says the logic that has driven a certain sort of innovation over the last arguably 15 or 20 years, particularly on the software side, has been a notion of iterating in real time with a user base who are, coincidentally, also the subject base. And there's something that says that may not work as well here. So the metaphor of drug testing, but also, man, I keep thinking about domesticated animals too, you know, from the land of rabbits and camels and frogs, right, that we brought into Australia: "it'll all be fine," and then they went feral and it wasn't, and we're still trying to fix it. So the bit that says, how do you imagine building systems where you don't... what does the fence look like, in that sense, you know, the friction? And are there ways to think about... we talked about human bias being bad, right, but one bad judge isn't replicable in exactly the same way as the next awful judge. That might not be true about this. But parts of that judge are replicable: in much of the country, that judge is elected by voters, and so that judge is not going to be thinking about accuracy; that judge is going to be thinking about the risk of a particular kind of failure, right? So that's a problem that might account for some of the disparities, the positive disparities, that you saw in that presentation. Yeah. And is there a way to think about how we can combat, so let's just assume that human bias is as bad a problem as we think it is, right, are there ways to combat automation bias besides just telling judges or hearing officers or police officers, don't rely too much on the computer, don't over-rely on it, it's not always perfect? Are there other strategies for automation bias, or interventions, that we're overlooking? Part of the problem with this is that it's not usually as simple as measuring a variable and agreeing that this should always be 50/50, and that that's how it goes. A lot of these things are embedded inside feedback
loops. This is also what makes designing the right things ahead of time, or regulating these matters, so difficult. There's this really interesting statistic that NPR made a beautiful plot about, I think a year or so ago, about the number of female CS majors and how that began to crap out around 1984 or so, which, coincidentally, is when computers started to get marketed as gaming machines to boys. So in this case, something that happened in mass media created a set of associations on the part of children and their parents, which in turn colored what kind of background in computer science a lot of kids were getting, which in turn had these giant knock-on effects, impossible to predict at the time. And the difficulty now is... I don't think we can put that genie back in the bottle. A lot of our perceptions and our understanding of the world are mediated through our little screens. Those things are partial; they involve ranking; they involve all sorts of things that in turn color our priors, which in turn affect our behaviors. How one engineers in such a feedback loop is a really hard problem, and not one that strikes me as easy to think about from a legislative point of view. I think something you said earlier helped me a lot, which is this point that the algorithm itself helps us quantify; one of the things these things can be very helpful for is almost as a mirror of sorts. We did this study a long time ago where we sent out a bunch of resumes with African-American names and white names, just randomized, and we found a huge gap in callback rates. And I remember at some point, I think it was the Human Resources Association or something, some big group, wanted me to present the results, and I thought, oh, it's going to be a nightmare. So I presented it, and what happened afterwards surprised me.
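The study being described is the Bertrand and Mullainathan resume audit ("Are Emily and Greg More Employable Than Lakisha and Jamal?"): identical resumes, names randomly assigned, callback rates compared. The headline analysis reduces to a two-proportion test. The counts below only approximate the published figures (roughly 9.7% versus 6.5% callbacks on about 2,400 resumes per group), so treat them as illustrative:

```python
import math

# Callback counts approximating the published Bertrand-Mullainathan (2004)
# figures: ~9.7% callbacks for white-sounding names vs ~6.5% for
# African-American-sounding names, ~2,435 resumes per group.
n_white, calls_white = 2435, 235   # 235/2435 ~ 9.7%
n_black, calls_black = 2435, 157   # 157/2435 ~ 6.4%

p_white = calls_white / n_white
p_black = calls_black / n_black

# Two-proportion z-test: under the null, both groups share one callback rate.
p_pool = (calls_white + calls_black) / (n_white + n_black)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_white + 1 / n_black))
z = (p_white - p_black) / se

# Two-sided p-value from the normal tail, via the complementary error function.
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"gap: {p_white - p_black:.3f}, z = {z:.2f}, p = {p_value:.2e}")
```

Because names were randomized onto otherwise comparable resumes, the only systematic difference between the groups is the name, which is what licenses reading the callback gap as discrimination rather than as a correlation with anything else on the resume.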
All of them said, "Wow, we didn't realize we were doing this. How do we stop it?" You can say they were being disingenuous, but I'd say a big fraction of them were not. Most human resource managers are pretty decent people. There is genuine, knowing prejudice, but there's also a lot of stuff you do without realizing it. So I do think one place where these algorithms can be helpful is in letting us turn the lens on ourselves a little bit and ask questions like: who are the African Americans I jailed who actually posed no risk? What did they look like? What were those moments?

You're talking about essentially using these tools to shine a light on the gap between our explicit and implicit lives, using them as a mirror. And I think a lot of the people in these positions of power making these decisions want to do better. Not all of them, and I think we need lots of safeguards, but some of them definitely want to. So there's an optimistic story, that we can harness this for good, and we've seen it in practice. Cathy O'Neil talked about seeing it in practice with orchestra auditions, where candidates played behind a screen and even removed their shoes so the sound wouldn't give away their sex. So maybe there is a real upside to all of this that we're underestimating as we get very depressed talking about these technologies.

I see the clock telling me we're running out of time, so let's turn to Q&A from Twitter. But let's be clear, there was human editing of these questions; there's no full automation here. You can look back to see whether a human being was responsive.

Even if we solve the bias and governance challenges, how do we avoid the surveillance externalities of AI, the sense that we're always under watch, all of our personal data collected?

Well, that was sort of the implicit theme behind what I was talking about. I think that the fact that whenever we talk about AI we immediately go to the idea of a centralized service is actually really problematic. If you have a company that makes shovels, you don't say, "oh, there is a power dynamic there; all of the holes in the world are decided by the shovel company." No: people dig holes with shovels. The fact that there is that level of indirection, that once I have the shovel it's effectively a part of my body that I can use to dig the hole, is fundamental. In a similar sense, I worry about that slippage. If I go on Amazon, I can get a hardback or a paperback or a Kindle edition, but the Kindle edition is actually not the same as the other two, right? With the Kindle edition I'm using a service, as opposed to having just bought a book.

Yes, and that's a much creepier surveillance problem, right? When your book can disappear, especially if the book that disappears is 1984, or when how long I dwell on each page becomes part of a big data set.

Right. So suppose we get rid of some of the honey pots, the huge centralized databases, and we just carry our data around on our own devices. Does the device keep all the data, and is it therefore at least in theory hackable? Though it would take a very determined person to get through a lot of devices.

Well, there's a big difference, in my opinion, between mass surveillance and being able to hack somebody's device, and this is something I'm sure Ben can speak to much more intelligently than I can. The idea that there is one place where everything has been gathered is very different from having to mount a targeted attack. It's the same with physical things: somebody could break into my house and discover all sorts of things, but it's not as if there's a skeleton key to everybody's house sitting somewhere.

What I'm going to say has nothing to do with AI, really. I think the only way to get at the surveillance problem is for us to be less afraid of terrorism. [Applause] So long as we accept, as a matter of what we call national security, that terrorism is an existential threat on par with prior, genuine existential threats, we're not going to make a lot of headway arguing that we shouldn't deploy the best technologies to catch terrorists. There is more talk of resilience rather than fear in some circles, but our politics are really upside down here, and I think this is a little bit outside the scope of this conference.

What ability do we have to opt out of AI? And instead of asking what abilities we have, because maybe we don't think we have any: should we have that ability, and at what point? Is it even a realistic question? Can we really think about opting out meaningfully?

I feel like your talk touched on this. When we ask questions like that, we're casting such a big net over the subject. The big cognitive traps in this area come when we think of AI as a single thing, when it's really an insanely differentiated set of things, each with its own structure.

That's a nice summary, and it's the right caveat. What would it mean to opt out of AI? Are you going to opt out of being in a data set that is subject to machine learning? Are you going to be a thing the computer-vision Roomba doesn't recognize? It's a difficult question to unpack. And frankly, for those of you who were lucky enough to be here for most of today, we already know there are all kinds of categories of things that do not end up inside data sets, so they are opted out, which is not the same as opting out, I grant you; there are different loci of agency there. One of the real challenges, and I think it's where Ben is rightly pointing the focus, is to ask who the agents are in this, who has agency and who are the subjects and objects, and from what you would choose to opt out, and under what circumstances. And by the way, in some circumstances where you opt out, you will still be in the data set. I'm famously not on Facebook; I don't have an account. But lots of people put me on Facebook all the time, so I can choose not to participate and it doesn't make any difference. There's also the point that the nature of your network, of where you live, of who you are with, of being an immigrant in this country, means that I'm opted into a whole series of things even when I would choose to structurally opt out. So I wish this were a system that made sense, but I don't think it does.

Along those lines, on the question of how much real autonomy we have here: federated learning might be a step forward, but Google still owns the insights derived from the data, right? So are there still privacy implications, is the question.

Yes. It's only a technique that adds to the palette of possible things. But this speaks to your previous question too. What was the phrase, AI is the new electricity? You can opt out of electricity by living the way the Amish do, and I think it will be a similar sort of question with AI in a few years, if it isn't already.

But also, it's not singular, it's not one thing. These are relational questions, and the two themes we've been imagining are exactly that: relational, specific, and contextual. So perhaps we can't be so sweeping in these questions. And I
think this is connected to it: how do we make AI that works for the many if big companies have to make money? How do we monetize this without disadvantaging people, when sorting people into boxes of risky and least risky inherently works to the advantage of private actors? Can we still make money and use AI while being fair?

I think ultimately that is the first-order question for most companies. I hesitate to say all, because if your company is making criminal judgments, COMPAS or something of that type, then I don't know exactly what the motives are. But for most of the big companies deploying this in some way or other, the goal is to make it useful and make it work for as many people as possible. I don't want to be too glib about it, but in many cases, if it's failing to be fair, that means it's failing to work for some fraction of people, which may not even be a minority: when you put together all of the people who are not male and not white, you're not exactly looking at a minority, right? So failures of that sort are economic failures as much as they are failures of justice, and the market itself may well correct them.

In other sectors, there's a very crude three-part way you can think about this. First, there are parts of the economy where profit and customer interests align: if a firm isn't selling you a product you like, you won't go to it. That's a big part of the economy, and it's why you get tomatoes that are fine. Then there's a second category where something goes wrong, where the customer can lose while the firm makes money. That happens in financial services all the time, and it's one of the biggest problems; there are big sectors where you can make a lot of money giving retirement savings advice without helping anybody. One of the big dangers is that because there's so much money there, it's very easy for technology to start creeping into that space, so that's one set of areas I'd keep an eye on. Then the third area, which I think is the biggest problem by far, is companies selling to governments, where you have an incredible asymmetry. Right now I think we're on the verge of a major disaster: city governments, and lots of other governments, going out and procuring these technologies basically at the whim of whatever random companies are providing them, and not coming particularly armed with a lot of knowledge. That third category, procurement, is a place for the worst pathologies: the buyers have no idea what they're getting, there's no transparency, the sellers say "you can't really see inside it or how good it is," and the buyers say "well, I bought it." As with health care, one has neither the advantages of capitalism nor of central planning.

Okay, so our last question. It's about due process: even if jailing by algorithm could be more efficient and fair, what due process do people have a right to? Ben, do you want to take that, just in terms of substance?

Sure. On due process in general, we should be worried about coercive government actions that can't be meaningfully challenged. What we're seeing in some cases now is the companies who design these systems going into court and saying this is a trade secret: "we can't show the defense expert the inside of this box because it will hurt our business, and we won't even do it under a protective order that conceals it from the public." That's really a problem. I think the ideal use of these technologies and these systems would involve much more transparency, and then judges would be able to use them as tools that don't replace their discretion one hundred percent.

Right. The due-process failure here happens at such a basic level: we just don't have due process in what tools are being used. It's insane.

So thank you, this was wonderful. Thank you so much for that extraordinary panel, and thank you to all of the speakers and chairs we've had tonight for covering such a wide spectrum of important issues. We are wrapping up the evening now, but we hope this is not the end: this is designed to give you material for ongoing conversations, for research trajectories, for policy agendas, and possibly also for ways to make AI better. With any event of this scale, there are a number of important people and institutions to thank. First, I want to thank the AI Ethics and Governance Fund and the John D. and Catherine T. MacArthur Foundation for their generous support. Of course, I want to thank the Media Lab for welcoming us into their home. I'd also like to thank our research team, Alex Campolo, Madeline Sanfilippo, Andrew Selbst, and Solon Barocas, and our partners in event coordination, especially Jess at MIT and Emily and her team at Good Sense. Then I'd like to pause and give a huge, warm thank-you to our producer, Mariah Peebles, without whom none of this would have happened. I'd also like to thank our friends and family who came up here and spent their time and energy helping make this evening special. And one last thing: I want to thank my collaborator, friend, and co-founder Kate Crawford. It is such an honor to do this together.

Too much, too much, my friend. Right back at you. [Applause] I'm very emotional. I thought that was going to be the last thing, but I have one final one, which is to thank all of you for being part of an incredibly important conversation. As you've heard, the AI Now Initiative is going to be doing a lot more of this work, and we would love to hear from you if you want to be a part of it, if you're researchers who want to join in those empirical efforts to figure out how these systems are working and to make them better. Please get in touch. As you've heard tonight, there's a lot of work to be done, but there are also a lot of drinks to be drunk and, as we know Australians are prone to doing this, they are waiting for you outside. We hope you will all come and join us for a drink and talk about these issues further. Thank you and good night. [Applause]
