(Figure: gender (left) and race (right) of the ultimate delivery audience; the labels refer to the race/gender of the person in the ad image, if any. The jobs are ordered by the average fraction of men or white users in the audience.)
Despite the same bidding strategy, the same target audience, and being run at the same time, we observe significant skew along both racial and gender lines due to the content of the ad alone.

Targeted advertising: this information can also be used to discriminate.
https://mislove.org/publications/ecommerce-imc.pdf

Facebook / Cambridge Analytica (2016)
Context: CA obtained 50M records from Facebook in 2013 through a "survey" app that leaked friends' information as well as the information of the users answering the survey. CA created a system that could target voters based on their psychological profile; it was used to target US voters in the 2016 elections and UK voters in the Brexit referendum.

Consequences: after the Brexit vote and the US presidential elections, two leading democracies find themselves internally polarized. The Information Commissioner's Office (the UK's independent body set up to uphold information rights) fined Facebook 500,000 GBP (this happened before the GDPR).
https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf
Attribute-based targeting
Each Facebook user has assigned attributes:
● computed by Facebook
● based on likes, third-party browsing (tracking via the "Like" button), etc.
● bought from "partner" companies (data brokers)
There are >1,200 well-defined attributes and >250k "loosely defined" attributes (from text processing). The advertiser selects attributes and Facebook serves relevant ads; Facebook doesn't reveal user identities.

Attribute-based targeting: PII-based
An advertiser can bulk-upload a database (bought from data brokers, etc.) to Facebook, which tells the advertiser how many of those users are present on the system and allows them to be targeted. Data broker record: john@gmail.com, alex@gmail.com, +1 666 555 44 33, john doe, boston, ... Facebook: "I have N of those users! Do you want to create an ad for them?" (e.g., for a list of alcoholics in the US). PII: personally identifiable information.

From such a list, Facebook also discovers "similar" people (that are not listed by data brokers) based on interests or browsing patterns: "From the profiles of those X users, I found Y users that are similar!" This can be tailored per region, sex, marital status, and other attributes.
Reference: Athanasios Andreou et al., "Measuring the Facebook Advertising Ecosystem", NDSS 2019.

2023: EU child sexual abuse (CSA) regulation debate
A very controversial proposal: Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A209%3AFIN

"To sway European public opinion, however, the European Commission went even further. X's transparency report shows that the European Commission also used 'microtargeting' to ensure that the ads did not appear to people who care about privacy (people interested in Julian Assange) and Eurosceptics (people interested in 'Nexit', 'Brexit' and 'Spanexit' or in Viktor Orbán, Nigel Farage, or the German political party AfD). [17] For unclear reasons, people interested in Christianity were also excluded. After excluding critical political and religious groups, X's algorithm was set to find people in the remaining population who were indeed interested in the ad message, resulting in an uncritical echo chamber. This microtargeting on political and religious beliefs violates X's advertising policy, [18] the Digital Services Act [19] – which the Commission itself has to oversee – and the General Data Protection Regulation."
https://dannymekic.com/202310/undermining-democracy-the-european-commissions-controversial-push-for-digital-surveillance

Privacy is essential for our society. More on privacy harms: https://www.bu.edu/bulawreview/files/2022/04/citron-solove.pdf

The privacy of whom?
Individuals:
● protection against profiling and manipulation
● protection against crime / identity theft
Thus, privacy is a security property: there is no security without privacy (and technically vice versa; to enforce privacy you need security).
Companies: protection of trade secrets, business strategy, internal operations, ...
Governments / military: protection of national secrets, confidentiality of law enforcement investigations, diplomatic activities, political negotiations.

Privacy must be for all: denying privacy to some is denying privacy to all.

Security vs. privacy: a common misconception
The misconception: we need to trade off privacy for security ("more surveillance → more security → safer world"). But:
● Surveillance may not be effective: smart adversaries evade it. Criminals have long used Telegram, Tor, and Signal, but average citizens do not!
● Surveillance tools can be abused, given a lack of transparency and safeguards. Snowden revelations: NSA spying on citizens, companies, ...
● Surveillance tools can be subverted for crime/terrorism. Greek Vodafone scandal (2006):
"someone" used the lawful interception functionality (backdoors) to monitor 106 key people: the Greek PM, ministers, senior military officers, diplomats, journalists...

What is privacy?
Privacy is subjective and hard to define. In this lecture: three ways of thinking about privacy, depending on what goal we are trying to achieve and on how we reason about harm prevention. Different mindsets lead to different technologies, and to different protection!

Privacy as confidentiality
● Goal: minimize data disclosure (which includes distributing trust)
● PETs (privacy enhancing technologies): encryption, privacy-preserving computation, obfuscation, decentralization; homomorphic encryption, secure multiparty computation, differential privacy; Tor, an anonymous communication network
● Mathematical privacy definitions and strong proofs of security
"The right to be let alone" (Warren & Brandeis, 1890): "The individual shall have full protection in person and in property."

Privacy as control
● Goal: let the user decide how data will be shared and used
● PETs: privacy settings, (automated) privacy policies, logging, secure logging, sticky policies
● Organizational compliance and fines (GDPR, US Fair Information Principles)
"The claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others" (Westin, 1970)

Privacy as practice
● Goal: improve user agency with respect to private information
● PETs: contextual feedback, privacy nudges
"The freedom from unreasonable constraints on the construction of one's own identity" (Agre, 1999)

And because privacy is a security property... there is an adversary.

Players / adversaries
● Adversaries from the social context: colleagues, family and friends, strangers
● Institutional adversaries: companies, advertisers, organizations we work for
● State-level / global adversaries: governments

Adversaries from the social context
Concerns: problems that arise from using technology
● "My parents discovered I'm gay"
● "My boss knows I am looking for another job"
● "My friends saw my embarrassing photos"
PET goals: do not surprise the user; support decision making. PETs: contextual feedback, privacy nudges, easy defaults.
Limitations: these only protect against other users; the service provider is trusted!
● Limited by the user's capability to understand policies
● Based on user expectations; what if they are off?
The common industry approach: make users comfortable.

Many systems can be (stealthily) used for surveillance: "We will create a new system to improve X"; "We have this data, why don't we use it for Y?"
Function creep: expansion of a process or system, where data collected for one specific purpose is subsequently used for another, unintended or unauthorized, purpose.

A recurrent function creep example: identity systems
https://privacyinternational.org/state-privacy/1002/state-privacy-india
Aadhaar, India's "optional" unique identification number scheme: a 12-digit identity number based on biometric information and demographic data, with >1 billion people stored in the database. Goal: "promoted as providing the poor with an identity". It became:
● mandatory for the benefits system (distribution of food rations and fuel subsidies)
● mandatory for buying a SIM card
● mandatory for opening a bank account
● required to pay taxes
● no education without a UID

Another recurrent function creep example: Eurodac, the fingerprint database for asylum seekers
https://www.euractiv.com/section/justice-home-affairs/news/eurodac-fingerprint-database-under-fire-by-human-rights-activists/
http://www.europarl.europa.eu/news/en/press-room/20180618IPR06025/asylum-deal-to-update-eu-fingerprinting-database
Goal: store fingerprints of all people who cross the border into a European country without permission (asylum seekers as well as irregular migrants) to help immigration and asylum authorities better control irregular immigration to the EU, detect secondary movements (migrants moving on from the country in which they first arrived to seek protection elsewhere), and facilitate their readmission and return to their countries of origin. It became: a database for police and public prosecutors, such as Europol. More data: in addition to fingerprints, the facial images and alphanumerical data (name, ID or passport number) of asylum seekers and irregular migrants will also be stored.
Institutional adversaries
Concerns (defined by legislation): data should not be collected without user consent or processed for illegitimate uses; data should be secured (correctness, integrity, deletion).
PET goals: compliance with data protection principles
● Informed consent: valid, freely given, specific, and active consent
● Purpose limitation: data can only be used for the purpose for which it was collected
● Data minimization: only collect data strictly necessary for the service (proportionality)
● Subject access rights: the user knows what information is stored/processed and how, and has the right to modification and deletion
● Preserving the security of data: auditability and accountability
PETs: access control, anonymization (?), logging.
Limitations
● Assumes that collection and processing by organizations are necessary and that organizations are (semi-)trusted and honest; relies on punishment; no mandated technique for protecting the data
● Focuses on limiting misuse, not collection: it is easy to circumvent minimization and collect in bulk, and auditing may require even more data!
● The danger of informed consent: if it is compliant, then it is OK!
https://www.iccl.ie/digital-data/europes-hidden-security-crisis/

Global adversary
Concern: how to evade/fool a global adversary?
PET goals:
● minimize the need to trust others
● minimize the amount of revealed information
PETs: anonymous communications (Tor, mixnets);
advanced crypto: private information retrieval, anonymous authentication, multiparty computation, blind signatures, cryptographic commitments; obfuscation: dummy actions, hiding, generalization, differential privacy; end-to-end encryption: Signal, PGP, OTR (Off-the-Record messaging).
Limitations:
● difficult to evolve and to combine/compose
● usability problems, both for developers and for users
● lack of incentives: industry loses the data (there is no "general-purpose" PET); governments want national security, fraud detection, ...

Building privacy-preserving systems
The privacy engineering process:
Step 1: define the "desired uses", i.e., the purpose of the application.
Step 2: identify the minimal data needed for this purpose.
Step 3: build a system that achieves the purpose while minimizing the possibilities for misuse of these minimal data; use privacy enhancing technologies! (Full protection may not always be possible.)
Step 4: evaluate the system against a strategic adversary.

Systematic privacy evaluation:
1) Model the privacy-preserving mechanism as a probabilistic transformation: what is the probability that, given an input, the privacy mechanism returns a given output?
2) Determine what the adversary will see. Threat model: who is the adversary? What are her "observations"? What is her prior knowledge?
3) "Invert" the mechanism, the way the adversary would. Always assume the adversary knows the mechanism and will try to undo its effect.
4) Evaluate the property after inversion: this is the real probability the adversary can compute.
5) Quantify the probability of success of the adversary. This is non-trivial! A worked sketch of steps 1-5 follows below.
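To make the five steps concrete, here is a minimal sketch under assumed parameters: the mechanism is randomized response (report a secret bit truthfully with probability p, otherwise report a uniformly random bit), which the slides do not prescribe; the prior and p are made up for illustration. The adversary "inverts" the mechanism with Bayes' rule, and success is quantified as the gap between prior and posterior.

```python
# Hypothetical example: evaluating randomized response the way an adversary would.
from fractions import Fraction

p = Fraction(3, 4)            # assumed mechanism parameter, known to the adversary
prior_true = Fraction(1, 10)  # adversary's assumed prior that the secret bit is 1

# Step 1: model the mechanism as Pr[output | input]
def pr_output_given_input(out_bit: int, in_bit: int) -> Fraction:
    keep = p + (1 - p) / 2    # report the true bit (directly, or by random luck)
    flip = (1 - p) / 2        # report the flipped bit
    return keep if out_bit == in_bit else flip

# Steps 2-4: the adversary observes output = 1 and inverts via Bayes' rule
evidence = (pr_output_given_input(1, 1) * prior_true
            + pr_output_given_input(1, 0) * (1 - prior_true))
posterior_true = pr_output_given_input(1, 1) * prior_true / evidence

# Step 5: quantify the adversary's success (posterior vs. prior)
print(f"prior: {float(prior_true):.3f}, posterior after seeing 1: {float(posterior_true):.3f}")
```

With these numbers the posterior rises from 0.100 to about 0.438: the mechanism leaks, and the evaluation quantifies by how much.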
Where can PETs help? Users, their data, the Internet, service providers, publishing: everywhere. Privacy matters at all layers. Next, an example of evaluation at the application layer (and of how hard it is to get it right).

PETs for data anonymization
Scenario: you have a dataset that contains personal data and you would like to anonymize it to:
● not be subject to data protection rules while processing it
● make it public for profit
● make it public for researchers
Goal: produce a dataset that preserves the utility of the original dataset without leaking information about individuals. This process is known as "database sanitization".

Privacy properties: anonymity
"Anonymity is the state of being not identifiable within a set of subjects, the anonymity set [...] the anonymity set is the set of all possible subjects who might cause an action."
Who is... the reader of a web page, the person accessing a service... the sender of an email, the writer of a text... the person to whom an entry in a database relates... the person present in a physical location?

Decoupling identity and action
To achieve anonymity we must decouple user identities from user attributes. Consider a medical dataset (QID = quasi-identifier, SA = sensitive attribute):
Medical data (QID: zipcode, age, sex; SA: disease):
pseudonym | zipcode | age | sex | disease
13241 | 47677 | 29 | f | ovarian cancer
542562 | 47602 | 22 | m | ovarian cancer
5377 | 47678 | 27 | f | prostate cancer
73563 | 47905 | 43 | f | flu
994356 | 47909 | 52 | m | heart disease
24562 | 47906 | 47 | f | heart disease

Voter registration data (public):
name | zipcode | age | sex
alice | 47677 | 29 | f
bob | 47983 | 65 | m
carol | 47677 | 22 | f
dan | 47532 | 23 | m
ellen | 46789 | 43 | f

The slide build walks through the obvious fixes, and why each one fails:
● Let's make users pseudonymous: the possible existence of other databases, such as the voter registration data above, allows re-identification by linking on zipcode, age, and sex.
● Let's remove identities entirely: some attributes are quasi-identifiers and still link to external data.
● Let's remove some attributes (e.g., replace sex with *): it is impossible to know in advance what will act as a quasi-identifier. Example rows of (race, HIV status, condition): caucasian HIV+ flu; asian HIV- flu; asian HIV+ herpes; caucasian HIV- acne; caucasian HIV- herpes; caucasian HIV- acne. "Bob is Caucasian and I heard he was admitted to hospital with flu..." The only matching row reveals that Bob is HIV-positive.
k-anonymity
● Obfuscate the QID such that the anonymity set of every user is at least k.
● Each person contained in the database cannot be distinguished from at least k-1 other individuals whose information also appears in the released database.
● k indicates the degree of anonymity.

k-anonymity example (key attribute/identifier: name; quasi-identifiers: age, zip, nationality; sensitive attribute: problem):
name | age | zip | nationality | problem
john | 28 | 13053 | russian | heart
zoey | 29 | 13068 | american | heart
nathan | 21 | 13068 | japanese | flu
lucas | 23 | 13053 | american | flu
sam | 50 | 14853 | indian | cancer
max | 55 | 14853 | russian | heart
mathias | 47 | 14850 | american | flu
sarah | 59 | 14850 | american | flu
chris | 31 | 13053 | american | cancer
karen | 37 | 13053 | indian | cancer
bob | 36 | 13068 | japanese | cancer
jane | 32 | 13068 | american | cancer

Anonymization with k = 4 (an equivalence class is a group of records that are indistinguishable by their QID):
age | zip | nationality | problem
<30 | 130** | * | heart
<30 | 130** | * | heart
<30 | 130** | * | flu
<30 | 130** | * | flu
>40 | 1485* | * | cancer
>40 | 1485* | * | heart
>40 | 1485* | * | flu
>40 | 1485* | * | flu
3* | 130** | * | cancer
3* | 130** | * | cancer
3* | 130** | * | cancer
3* | 130** | * | cancer

The three equivalence classes:
● 4 tuples, zip code = 130**, age < 30, average(age) ≈ 25: 2 heart and 2 flu
● 4 tuples, zip code = 1485*, age > 40, average(age) ≈ 53: 1 cancer, 1 heart, and 2 flu
● 4 tuples, zip code = 130**, 31 ≤ age ≤ 37, average(age) = 34: all cancer patients

k-anonymity: homogeneity attack
The adversary knows that bob (age 36, zip 13068) is in the released table. All records in his equivalence class have cancer, so bob has cancer.
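A minimal sketch (assuming pandas is available; the table is the anonymized one above) that computes k as the size of the smallest equivalence class and then flags homogeneous classes, which is exactly what the homogeneity attack exploits:

```python
import pandas as pd

qid = ["age", "zip", "nationality"]
df = pd.DataFrame({
    "age":         ["<30"] * 4 + [">40"] * 4 + ["3*"] * 4,
    "zip":         ["130**"] * 4 + ["1485*"] * 4 + ["130**"] * 4,
    "nationality": ["*"] * 12,
    "problem":     ["heart", "heart", "flu", "flu",
                    "cancer", "heart", "flu", "flu",
                    "cancer", "cancer", "cancer", "cancer"],
})

groups = df.groupby(qid)
k = groups.size().min()                      # smallest equivalence class
print(f"table is {k}-anonymous")             # -> 4

# Homogeneity check: classes with a single sensitive value leak it outright
for key, g in groups:
    if g["problem"].nunique() == 1:
        print(f"homogeneous class {key}: everyone has {g['problem'].iloc[0]}")
```

The class size says nothing about the diversity of the sensitive attribute, which motivates the next definition.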
l-diversity
Goal: preserve privacy by reducing the granularity of the data; an extension of k-anonymity.
● An equivalence class has ℓ-diversity if there are at least ℓ well-represented values for the sensitive attribute.
● A database has ℓ-diversity if every equivalence class has ℓ-diversity.

A 3-diverse version of the table (there are, at least in the repaired class, 3 different sensitive conditions per equivalence class):
age | zip | nationality | problem
<30 | 130** | * | heart
<30 | 130** | * | heart
<30 | 130** | * | flu
<30 | 130** | * | flu
>40 | 1485* | * | cancer
>40 | 1485* | * | heart
>40 | 1485* | * | flu
>40 | 1485* | * | flu
3* | 130** | * | heart
3* | 130** | * | flu
3* | 130** | * | cancer
3* | 130** | * | cancer
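A quick check of the simplest reading of "well-represented", distinct ℓ-diversity, in plain Python. Applied to the 4-anonymous table from the homogeneity attack it returns 1 (the all-cancer class); note that under this strict reading the first class of the slide's 3-diverse table still has only two distinct conditions, which is why the literature also defines entropy ℓ-diversity and recursive (c, ℓ)-diversity.

```python
def l_diversity(records, qid_cols, sa_col):
    """Smallest number of distinct sensitive values over all equivalence classes."""
    classes = {}
    for r in records:
        key = tuple(r[c] for c in qid_cols)
        classes.setdefault(key, set()).add(r[sa_col])
    return min(len(values) for values in classes.values())

table = (
    [{"age": "<30", "zip": "130**", "problem": p} for p in ["heart", "heart", "flu", "flu"]]
    + [{"age": ">40", "zip": "1485*", "problem": p} for p in ["cancer", "heart", "flu", "flu"]]
    + [{"age": "3*", "zip": "130**", "problem": p} for p in ["cancer"] * 4]
)
print(l_diversity(table, ["age", "zip"], "problem"))  # -> 1: the all-cancer class
```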
l-diversity: limitations
(Slide figures: an original salary/disease table and a 3-diverse version of it.) Even in the 3-diverse table, semantically similar sensitive values leak information: "Bob has a low income or is in his 20s, so he has a stomach-related disease."

t-closeness
● An equivalence class has t-closeness if the distance between the distribution of a sensitive attribute in this class and the distribution of the attribute in the whole table is no more than a threshold t.
● A table has t-closeness if all equivalence classes have t-closeness.
● The distance is usually measured as the Earth Mover's Distance (EMD), also known as the Wasserstein metric.
N. Li, T. Li, and S. Venkatasubramanian, "t-Closeness: Privacy beyond k-Anonymity and l-Diversity", 2007.
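A sketch of a t-closeness check for a categorical sensitive attribute. One simplifying assumption: with unit ground distance between any two categories, EMD reduces to total variation distance, which keeps the example dependency-free; ordered numeric attributes need the ordered-distance EMD from the Li et al. paper.

```python
from collections import Counter

def distribution(values):
    counts = Counter(values)
    return {v: c / len(values) for v, c in counts.items()}

def emd_categorical(p, q):
    """EMD with unit ground distance = total variation distance."""
    support = set(p) | set(q)
    return sum(abs(p.get(v, 0) - q.get(v, 0)) for v in support) / 2

def t_closeness(classes, table_values):
    """Smallest t for which every class distribution is within t of the table's."""
    whole = distribution(table_values)
    return max(emd_categorical(distribution(c), whole) for c in classes)

# Equivalence classes of the 4-anonymous table from the earlier slide
classes = [
    ["heart", "heart", "flu", "flu"],
    ["cancer", "heart", "flu", "flu"],
    ["cancer", "cancer", "cancer", "cancer"],
]
table = [v for c in classes for v in c]
print(f"table is t-close only for t >= {t_closeness(classes, table):.3f}")  # ~0.583
```

The all-cancer class sits far (about 0.58) from the table-wide distribution, so the table fails any reasonably small t.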
Prior knowledge attack
race | zip | HIV | condition
caucasian | 787xx | HIV+ | flu
asian | 787xx | HIV- | flu
asian | 787xx | HIV+ | herpes
caucasian | 787xx | HIV- | acne
caucasian | 787xx | HIV- | herpes
caucasian | 787xx | HIV- | acne
This table is k-anonymous, l-diverse, and t-close... so secure, right? "Bob is Caucasian and I heard he was admitted to hospital with flu..." The only Caucasian row with flu is HIV-positive: background knowledge breaks the guarantee.

Takeaways
Anonymizing a dataset via generalization and suppression is extremely hard:
● the k-anonymity idea focuses on transforming the dataset, not its semantics
● achieving k-anonymity, l-diversity, and t-closeness is hard, and still does not guarantee privacy
● the adversary's background knowledge can be anything

The interactive scenario
Many times we do not want the data, we want statistics! The database receives queries and returns answers. Redefined goal for the interactive case: produce an answer that preserves the utility of the statistics without leaking information about individuals.
Query: what is the average salary of female professors at IC @ EPFL with Spanish nationality? Is there a privacy problem? Yes.
Idea: let's audit the queries; if a query would leak, deny it! Either answer truthfully or state that there will be no answer. (The database is assumed to contain numeric values.) But: not answering already reveals some information!
When denying fails: learning exact values
The variables di are real; privacy is breached if the adversary learns some di.
● "Give me sum(d1, d2, d3)." Answer: 15.
● "Give me max(d1, d2, d3)." "Denied."
Wait... there must be a reason why the second query was denied: the only possible reason for the denial is that d1 = d2 = d3 = 5.

When denying fails: learning intervals
Now di ∈ [0, 100]; privacy is breached if the adversary learns some di to within ±1.
● "Give me sum(d1, d2)." "Denied."
● "Give me sum(d2, d3)." Answer: 50.
The first denial ⇒ d1, d2 ∈ [0, 1] or d1, d2 ∈ [99, 100]. But d2 + d3 = 50, so d2 < 99; hence d1, d2 ∈ [0, 1] and d3 ∈ [49, 50].

Auditing has problems
● Privacy definition: privacy of values? of groups? only exact values?
● Algorithmic limitations: deniability implies algorithms that are computationally prohibitive; the focus is mostly on simple queries.
● Collusion: either high cost or no security.
● Utility: the number of denials may not be the best measure.

Differential privacy
Remember the goal for the interactive case: produce an answer that preserves the utility of the statistics without leaking information about individuals. To have any utility we must allow the leakage of some information, but we can set a bound on the extent of the leakage!
Differential privacy: the output is similar whether any single individual's record is included in the database or not.
Cynthia Dwork, "Differential Privacy: A Survey of Results", International Conference on Theory and Applications of Models of Computation, April 2008.

Basic philosophy: instead of returning the real answer to a query, add random noise to the output, such that for a small change in the database (someone joins or leaves), the distribution of the answer does not change much.
A new privacy goal: minimize the increased risk incurred by an individual when joining (or leaving) a given database.
Differential privacy is a privacy notion, not a mechanism → we use mechanisms to achieve differential privacy (e.g., adding random noise).

Informal definition: C's inclusion of her record in the computation does not make her significantly worse off. If there is already some risk of revealing a secret of C by combining auxiliary information with something learned from the database, then that risk is still there, but it is not significantly increased by C's participation in the database.

ε-differential privacy: formal definition
An algorithm A satisfies ε-differential privacy if
● for every pair of neighboring databases (D, D−r), differing only in row r, and
● for every subset S of possible output values taken by A:
Pr[A(D) ∈ S] ≤ e^ε · Pr[A(D−r) ∈ S]
Principle: the removal or addition of a single record in the database does not substantially affect the value of the computed function/statistic.
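The slides only say "add random noise"; one standard mechanism that achieves ε-DP for a counting query is the Laplace mechanism. A minimal sketch with made-up data: the sensitivity of a count is 1 (adding or removing one row changes it by at most 1), so Laplace noise with scale 1/ε suffices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_count(data, predicate, epsilon):
    """ε-DP count via the Laplace mechanism (sensitivity of a count is 1)."""
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

salaries = [52_000, 61_000, 45_000, 98_000, 70_000]   # toy database
print(dp_count(salaries, lambda s: s > 60_000, epsilon=0.5))
```

Smaller ε means more noise and stronger privacy; by the composability property below, asking the same question n times spends n·ε of the privacy budget.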
Properties of differential privacy
● Composability: if algorithms A1, A2, ..., Ak use independent randomness and each Ai satisfies εi-differential privacy, then the combination (A1, A2, ..., Ak) is (ε1 + ε2 + ... + εk)-differentially private.
● Post-processing security: if an algorithm A is ε-differentially private, then for every function g, g(A) is also ε-differentially private (you cannot undo privacy with more computation).

How to ensure differential privacy? (More on these algorithms and their variants in CS-523.)
● Input perturbation: add noise directly to the database (the perturbed dataset can then be published). + Independent of the algorithm and easy to reproduce. − Determining the amount of required noise is difficult.
● Output perturbation: add noise to the output of the function (statistic). + Easier to control privacy; better guarantees than input perturbation. − Results cannot be reproduced.
● Algorithm perturbation: add noise inside the algorithm itself. + The algorithm can be optimized together with the noise addition. − Difficult to generalize; depends on the inputs.

Differential privacy comes at a cost
● Impact on accuracy: we still add noise!
● The impact is disparate: we preserve the average signal, but not the outliers.
● Very hard to implement in practice: the sensitivity is not always obvious to compute, and independence is not always guaranteed.

Summary
● Privacy protection goes beyond data hiding.
● Privacy is a security property and must be considered in an adversarial environment.
● Privacy engineering is a systematic process (like security engineering!).
● Evaluating well is key: remember the strategic adversary!
● Anonymization is tremendously hard: we can only publish data securely in the interactive scenario, and at a cost in utility.

COM-402: Information Security and Privacy
0x3C ML Security
Mathias Payer (infosec.exchange/@gannimo)

Learning goals for today:
● basics of machine learning security
● stealing models and the privacy implications
● how the output of models can be altered
● pitfalls with machine learning setups

User data flows into machine learning services.

Machine learning (ML), definition (Wikipedia): machine learning [...] gives "computers the ability to learn without being explicitly programmed" [and] [...] explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.

Machine learning is becoming ubiquitous.
Machine learning taxonomy
Machine learning can be separated into three main categories: supervised, unsupervised, and reinforcement learning. Supervised learning uses labeled data and direct feedback to predict an outcome or future value (example: estimate a house price); it is the most used category and the one where security has been studied the most.

Supervised learning
Pipeline: feature extraction turns samples into (features, label) pairs; a model is trained on these pairs and then predicts labels for new samples. Example: from a patient file, extract features (cough, fatigue, chest discomfort, ...) and predict whether this patient will need to be hospitalized for longer than one week. (Are the features private?)

Adversary goals
● Steal the model: an intellectual property issue.
● Steal information: a privacy issue.
● Alter the output: fooling the user.

Today's agenda: model stealing, data stealing, output altering, biases.

Model stealing: protecting intellectual property in ML.

Machine learning under adversarial conditions
● Opaque-box attacks: model architecture and parameters unknown; the attacker can only interact blindly with the model.
● Grey-box attacks: model architecture known, parameters unknown; the attacker can only interact with the model, but has information about the type of model.
● Clear-box attacks: known architecture and parameters; the attacker can replicate the model and use the model's internal parameters in the attack.
(More attacker knowledge means higher success rates and a more threatening attack.)

Machine learning as a service
1. The cloud (e.g., Amazon ML or the Google Prediction API) (pre-)trains a classifier using its own secret training data.
2. This classifier is made available as a service for users to query.
3. The user makes a query: "given their profile (photos, posts, metadata), what pet does this Facebook user have?" → "cat".
Model stealing
The model itself is confidential (e.g., intellectual property). Good ML models require considerable investment: collecting data takes time and money, and training infrastructure is expensive (time and $$$).
Goal: "steal" the expensive model by observing its outputs, at a lower cost than obtaining the data and training it.
F. Tramèr et al., "Stealing Machine Learning Models via Prediction APIs", USENIX Security 2016. https://arxiv.org/abs/1609.02943 (video: https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/tramer)

Machine learning 101: linear models
When used for classification, a linear model employs a linear function (a separating hyperplane) to produce a decision, e.g., logistic regression or an SVM with a linear kernel: if f(x) > t, the output class is "1", otherwise it is "0".

Stealing a linear model
Assume the adversary knows the model is a linear architecture (grey-box model) and that x is two-dimensional. The adversary's goal: steal the parameters w, b. How many input-output pairs (x, f(x)) must the adversary observe to steal the model? What if x were d-dimensional?
Equation-solving attack
For a two-dimensional x, only 3 queries are needed! In general, if a linear model uses d features, the adversary needs d + 1 different queries to steal it, by solving the resulting linear system for w, b.

Retraining attack: stealing a non-linear model
Assume the adversary knows the model's architecture (grey-box model); the goal is to steal the parameters w. Observe many query pairs X = (x, f(x)) and fit a model on X as on any other training data! This takes many queries: for a neural network with 2k parameters, about 11k queries are needed to reach 99.9% similarity (Tramèr et al.). More recent work has reduced these numbers.

Nicholas Carlini et al., "Cryptanalytic Extraction of Neural Network Models", EUROCRYPT 2020. https://arxiv.org/abs/2003.04884 (video available at https://nicholas.carlini.com)
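A minimal sketch of the equation-solving attack, under the assumption (stronger than a label-only API) that the prediction API returns the raw score f(x) = w·x + b; the "secret" parameters here are made up. With d = 2 features, d + 1 = 3 queries pin down the model exactly.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 2
w_secret, b_secret = np.array([1.5, -2.0]), 0.7   # the victim's hidden parameters

def oracle(x):
    """The prediction API, assumed to expose the raw score."""
    return w_secret @ x + b_secret

X = rng.normal(size=(d + 1, d))                   # d+1 random queries
scores = np.array([oracle(x) for x in X])

# Solve [X | 1] . (w, b) = scores for the d+1 unknowns
A = np.hstack([X, np.ones((d + 1, 1))])
w_b = np.linalg.solve(A, scores)
print("recovered w:", w_b[:d], "b:", w_b[-1])     # matches the secret exactly
```

If the API only returns class labels or rounded probabilities, the attacker falls back to the retraining attack above, at the cost of many more queries.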
More on stealing neural networks
"Given (oracle) query access to a neural network, learned through stochastic gradient descent, can we extract a functionally equivalent model?" Achievement: extracted a 100,000-parameter neural network trained on the MNIST digit recognition task with 2^21.5 queries in under an hour → implications from cryptographic research for ML.

Implication of this attack against neural networks
The field of secure inference assumes that observing the output of a neural network does not reveal the weights. This assumption is false, and therefore the field of secure inference will need to develop new techniques to protect the secrecy of trained models. (Carlini et al., 2020.)

Preventing model stealing
First attack: 2016; first defenses: ~2017.
● Output perturbation [1]: add noise to the probabilities output by the model to hinder reconstruction, but not accuracy.
● Detect suspicious queries [2]: identify deviations from the expected distribution of successive queries from a client.
[1] Lee et al., "Defending against Neural Network Model Stealing Attacks Using Deceptive Perturbations", 2019.
[2] Juuti et al., "PRADA: Protecting against DNN Model Stealing Attacks", 2019.
These are very recent techniques; we don't yet know how robust they are.

Takeaways on model stealing
● Many models are susceptible to stealing.
● It is a complex topic with lots of ongoing research, and a crucial issue for ML as a service.
● It is a very young field; the situation will (probably) improve. Many problems, some solutions.
Data stealing: privacy challenges in ML (an active area; https://xkcd.com/2169/)

Privacy risks in machine learning
Information leaks about the data used for training, and the testing data (or the result) might be sensitive in ML as a service. (Are the features private?)

Privacy risks before the ML era: collection of sensitive personal data, re-identification, inference attacks (Shokri et al., 2017). With machine learning sitting between users' data and services, the questions become: do trained models leak sensitive data? Can we train good privacy-preserving models?

Typical task: classification (a training set of images labeled airplane, automobile, ..., ship, truck; query an image, get a prediction).

Membership inference
Was a target data record x in the "sensitive" dataset? Given a population, sensitive data (unknown), summary statistics (known), and reference data (known): is x more similar to the "reference" dataset or to the "summary statistics"? [Homer et al. (2008)], [Dwork et al. (2015)], [Backes et al. (2016)]

Membership inference against ML: assumptions
Try to understand which data was or was not used to train a model, with no knowledge of the model parameters (a grey-box attack).
Exploiting trained models
The target model exposes a training API and a prediction API. Inputs from the training set and inputs not from the training set both yield classifications; the attacker must learn to recognize the difference.

ML against ML
Train a model to recognize the difference: leverage public data for training, then compare, all without knowing the parameters of the actual model!

Assumptions 2.0: no knowledge of the model parameters, and no samples of the training data.

Shadow models
Train k shadow models on splits (train_1, test_1), ..., (train_k, test_k); each shadow model labels its own training inputs "in" (member) and its test inputs "out" (non-member). Then train the attack model on these labeled outputs to predict whether an input was a member of the training set (in) or a non-member (out). The shadow models create a 'shadow' of the actual model to understand how it behaves.

Membership inference attack: was this target record in the training set?
(Figure, Shokri et al. 2017: on the Purchase dataset (classify customers, 100 classes), the minimum attack accuracy is 0.8-0.9 on 75% of the classes.)
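A much-simplified membership-inference sketch (scikit-learn assumed; the dataset and threshold are made up). The full Shokri et al. attack trains shadow models to calibrate the attack model; the common shortcut below just thresholds the target model's confidence, which already separates members from non-members when the model is overfitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# Deliberately overfit: deep, unpruned trees memorize the training set
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_in, y_in)

def confidence(model, X):
    """Highest predicted class probability per sample."""
    return model.predict_proba(X).max(axis=1)

threshold = 0.9   # assumed; a real attack calibrates this on shadow models
members = confidence(model, X_in) > threshold
nonmembers = confidence(model, X_out) > threshold
print(f"flagged as member: {members.mean():.2f} of training data, "
      f"{nonmembers.mean():.2f} of held-out data")
```

Expect a noticeably higher flagged fraction on the training split: that gap is the membership leak, and it shrinks as the model generalizes better.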
Membership inference against ML: assumptions 3.0
No knowledge of the model parameters, no knowledge of the underlying distribution of the training data, and no samples of the training data.

Data synthesis for shadow models
Query the target model's prediction API with random data and slightly modify the inputs to climb the prediction confidence; then use the synthetic data to train the shadow models, compare, and iterate! (Figure, Shokri et al. 2017: on the Purchase dataset, shadow models trained on synthetic data perform close to those trained on real data.)

Why do these attacks work? Overfitting!
The training set is a small sample of the data universe, and the model is overfitted to it. Overfitting is the common enemy:
● Privacy: does the model leak information about the data in the training set?
● Learning: does the model generalize to data outside the training set?
For once, privacy and utility are not in conflict: overfitted models leak training data, and overfitted models lack predictive power. We need both generalizability and accuracy.

Takeaways on ML privacy issues
Many problems, some solutions; a very young field, the situation will improve; more research is needed.

Altering the output
Inputs that will make ML fail:
Adversarial examples: a panda image plus a carefully crafted adversarial perturbation (noise) is classified as a gibbon. Adversarial examples are inputs to a model that an attacker has designed to cause the model to make a mistake. (Goodfellow et al., "Explaining and Harnessing Adversarial Examples", 2014, https://arxiv.org/abs/1412.6572)

The independent and identically distributed (iid) assumptions no longer hold:
(1) identically distributed: inputs are intentionally manipulated to not belong to the training distribution;
(2) independence: inputs are no longer drawn independently; the attacker may sample from a single input repeatedly.

Machine learning 101: quick refresher
Objective: find model parameters w that minimize the empirical loss L(x, y; w) over the training data drawn from the data distribution. How do we "train" those weights? Often, the answer is a (simplified) flavor of gradient descent: take a learning step against the gradient of the loss, scaled by the learning rate η: w ← w − η·∇w L(x, y; w). (Image: Saugat Bhattarai.)

The adversarial example problem
Goal: find a perturbation δ (adversarial noise) that maximizes the loss L(x + δ, y; w), where (x, y) is the initial example, subject to a similarity relation that usually encodes an "imperceptible change".

How to define similarity?
The similarity relation is often represented as an adversarial cost constraint. If the goal is to be imperceptible, the common cost is a norm of the perturbation: cost = ‖δ‖; budget = the maximum allowed norm of the perturbation, e.g., below the inter-class distance. For the panda/gibbon example: the norm ("size") of the perturbation must be within ε.
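A minimal sketch of the fast gradient sign method (FGSM) from the cited Goodfellow et al. paper, on a toy logistic-regression model with made-up weights: one step of size ε in the sign of the input gradient, i.e., the direction that maximizes the loss under an L-infinity budget.

```python
import numpy as np

w, b = np.array([2.0, -1.0, 0.5]), 0.1      # toy model parameters (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

x, y = np.array([0.5, -0.5, 0.4]), 1.0      # a correctly classified example
epsilon = 0.6                                # large toy budget; see note below

x_adv = x + epsilon * np.sign(loss_grad_x(x, y))

print("clean score:", sigmoid(w @ x + b))    # ~0.86: confidently class 1
print("adv score:  ", sigmoid(w @ x_adv + b))  # ~0.43: the prediction flips
```

The budget looks large only because this toy input has 3 dimensions; in high-dimensional inputs like images, the per-pixel changes add up along the gradient, so a tiny ε suffices, which is exactly Goodfellow et al.'s observation.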
Is a norm a good constraint for imperceptible adversarial examples?
A constraint on the perturbation norm assumes that similar images are close in, e.g., Euclidean distance. This is not always true! [1] For example, similarity could also be defined via small affine transformations [2]. (Figure: an original image, an imperceptible perturbation of norm 24.3, and a semantic change of norm 23.2; the smaller-norm change is the visible one.)
[1] Jacobsen et al., "Exploiting Excessive Invariance Caused by Norm-Bounded Adversarial Robustness", 2019, https://arxiv.org/pdf/1903.10484.pdf
[2] Engstrom et al., "Exploring the Landscape of Spatial Robustness", 2019, https://arxiv.org/pdf/1712.02779.pdf

Other similarity relations
Attacks do not have to be imperceptible! Sharif et al., "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition", CCS 2016. https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf
Transferability property
"Adversarial examples have a transferability property: samples crafted to mislead a model A are likely to mislead a model B." [GSS15, LCL17] In the most extreme case, it is possible to construct a single (universal) perturbation that will fool a model when added to any image (banana, truck, cat, hammer, dog, football, ...). Attackers need minimal resources to attack the system!
[GSS15] Goodfellow et al., "Explaining and Harnessing Adversarial Examples"
[LCL17] Liu et al., "Delving into Transferable Adversarial Examples and Black-box Attacks"

Attack surface: recall the machine learning training objective.

Defending against adversarial examples?
Defending in general is very hard. One can only defend against a particular threat model (e.g., perturbations of norm up to ε), and normally with no guarantees. The standard way is adversarial training (based on robust optimization): training on simulated adversarial examples, e.g., with 20 random restarts and 100 attack steps per example.
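A minimal sketch of the adversarial training loop on the same toy logistic model: in each step, first craft a worst-case input for the current weights (the inner maximization, here a single FGSM step standing in for multi-step PGD with random restarts), then take a gradient step on that adversarial point (the outer minimization). Everything here, from data to hyperparameters, is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)  # toy labels

w, b = np.zeros(3), 0.0
lr, epsilon = 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    i = rng.integers(len(X))
    x, yi = X[i], y[i]
    # Inner maximization: perturb x to increase the loss (one FGSM step)
    p = sigmoid(w @ x + b)
    x_adv = x + epsilon * np.sign((p - yi) * w)
    # Outer minimization: SGD step on the adversarial point
    p_adv = sigmoid(w @ x_adv + b)
    w -= lr * (p_adv - yi) * x_adv
    b -= lr * (p_adv - yi)

print("robustly trained w:", w.round(2), "b:", round(b, 2))
```

The resulting model trades some clean accuracy for robustness inside the ε ball, and only against the threat model it was trained for.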
Preventing adversarial examples
● Certified defenses [1] ensure that no adversarial example can exist inside a ball of radius ε in the norm used for the perturbation.
● Detect suspicious queries [2]: identify deviations from the expected distribution of successive queries from a client.
→ It is unclear whether these are really effective.
[1] Lecuyer et al., "Certified Robustness to Adversarial Examples with Differential Privacy", 2019.
[2] Chen et al., "Stateful Detection of Black-Box Adversarial Attacks", 2019.

Attacks are not restricted to computer vision:
● Twitter bot detection [1]: detection tools can easily be fooled by tweaking the number of replies or retweets.
● Text classification [2].
[1] Kulynych et al., "Evading Classifiers in Discrete Domains with Provable Optimality Guarantees".
[2] Alzantot et al., "Generating Natural Language Adversarial Examples".

Takeaways on adversarial examples
Protecting ML against output alteration (by adversarial examples) is hard; if the adversary controls the inputs to deployed models, they always win! It is unclear whether there will ever be effective defenses: high-dimensional spaces are difficult to characterize. Adversarial training is the most promising protection technique. If adversarial examples can occur in an ML usage you are considering, be very careful, because this can create a severe vulnerability.

Biases and fallacies

Base rate fallacy
We assume an equal distribution of cases; if the cases are not equally distributed, we falsely interpret the results (e.g., tweets that are hate speech vs. tweets that are not).
Base rate fallacy / prosecutor's fallacy: a quick reminder of AI performance metrics, with hate speech detection as the example. Among tweets that are hate speech: true positives (predicted as hate speech) and false negatives (predicted as not hate speech). Among tweets that are not hate speech: true negatives (predicted as not hate speech) and false positives (predicted as hate speech). A worked numeric example follows below.
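To see why the base rate matters, a worked example with assumed numbers (not from the slides): a detector with a 95% true-positive rate and a 2% false-positive rate sounds excellent, but if only 1 in 1000 tweets is hate speech, most flagged tweets are false positives.

```python
# Worked base-rate example with assumed rates
tpr, fpr, base_rate = 0.95, 0.02, 0.001

p_flagged = tpr * base_rate + fpr * (1 - base_rate)
precision = tpr * base_rate / p_flagged   # Pr[hate speech | flagged]

print(f"Pr[flagged] = {p_flagged:.4f}")
print(f"Pr[hate speech | flagged] = {precision:.3f}")  # ~0.045: only ~4.5%
```

Interpreting the 95% detection rate as "95% of flagged tweets are hate speech" is exactly the prosecutor's fallacy.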
On recent research auditing commercial facial analysis technology:
https://medium.com/@bu64dcjrytwitb8/on-recent-research-auditing-commercial-facial-analysis-technology-19148bda1832
Response: racial and gender bias in Amazon Rekognition, a commercial AI system for analyzing faces:
https://medium.com/@joy.buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced

Distributional shift: classification task example: should we send a patient with bronchitis home? A model trained in one context can fail when deployed where the data distribution differs.

Transparency: correlation or causation?
● Rule-based learning: "if X, then Y"; human-readable rules, so causation is intrinsic.
● Machine learning: better accuracy, but it is not directly possible to understand why a decision is made.

Controversial ML research
"A facial recognition experiment that claims to be able to distinguish between gay and heterosexual people has sparked a row between its creators and two leading LGBT rights groups." (https://www.bbc.co.uk/news/amp/technology-41188560; see also https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998faf)
"When the algorithm was presented with two photos where one picture was definitely of a gay man and the other heterosexual, it was able to determine which was which 81% of the time. With women, the figure was 71%."
What did the algorithm learn? Faces, or stereotypical poses/gestures in the dating-site and Facebook pictures used for training? Would it work on other social networks? Does it work evenly for different races? And for different social groups?

A bigger problem: bias reinforcement
Predictive policing: distribute resources (police officers) according to needs (crime).
● Problem #1: the prediction may be biased; the training data consists of the available reports, so minority and poor neighborhoods are disproportionately affected.
● Problem #2: the system is conceived in a static context; under deployment, more police produces more reports, which attracts more police.
What is bias?
Arvind Narayanan, tutorial: "21 Fairness Definitions and Their Politics", https://www.youtube.com/watch?v=jIXIuYdnyyk
● Statistical bias: the difference between an estimator's expected value and the true value. Very limited! It says nothing about errors, nothing about distributional shift.
● Group fairness: the outcome should not differ between demographic groups. Predictive parity: the same prediction quality regardless of group (aka calibration); equal false positive rates; equal false negative rates; ...
● Individual fairness: similar individuals should be treated similarly (but what counts as "similar"?).

Takeaways on biases and fallacies
● Deploying machine learning is hard: reality differs far from lab conditions.
● The base rate matters: it is always hard to get good results on weak signals.
● Biases come in many flavors; we only saw a few examples, and they are hard to remove.

Summary
Machine learning exposes subtle security and privacy challenges: adversaries may steal models or data by simply querying them, models may leak private information about the original training set, and adversaries may trick the model into giving wrong results. Protections against these attacks are hard, and many pitfalls exist when deploying models.

COM-402: Information Security and Privacy
0x40 Summary
Mathias Payer (infosec.exchange/@gannimo)

Daily cyber attacks; information basics.

Crypto basics
Crypto (cryptography) protects data using keys:
● communications (HTTPS, SSH, GSM)
● files and disks (BitLocker, GPG, VeraCrypt)
Goals: confidentiality, integrity, authentication. Crypto can only be trusted if keys are distributed securely. (https://xkcd.com/538/)

Lesson 1: encryption is essential
Encryption protects data in transit and at rest. Use strong encryption standards to secure sensitive data and enforce end-to-end encryption wherever possible. Don't roll your own crypto; use established protocols.

Lesson 2: principle of least privilege (PoLP)
Limit access rights to the minimum necessary for users, systems, and processes. Implement role-based access control (RBAC) and review permissions regularly. Enforce two-factor authentication for sensitive information, but balance the cost of authentication against the cost of breaches.

Lesson 3: humans are the weakest link
Social engineering exploits human error more than technical vulnerabilities, and humans are terrible at randomness (e.g., when generating passwords). Conduct regular security awareness training: educate about phishing, inform about scams, teach best practices.

Data security
https://informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/

Lesson 4: data is a valuable asset
Data is the new currency and must be protected. Companies must classify and prioritize data based on sensitivity and business value. Individuals must consider whether the cost of giving up data is worth it, and develop agency over where to store their data.

PL security and compartments
Programming languages: language mechanisms (memory/type safety, namespaces); can we build a secure system without hardware support?
Compartmentalization: separate both data and software on processors into different compartments; memory protection, hardware enclaves.

OWASP and software security
https://www.horangi.com/blog/real-life-examples-of-web-vulnerabilities

Lesson 5a: software has bugs. Lesson 5b: many bugs are exploitable.

Lesson 6: software is complex
Google Chrome: 76 MLOC; GNOME: 9 MLOC; Xorg/Wayland: 1 MLOC; glibc: 2 MLOC; Linux kernel: 17 MLOC. Chrome plus the OS is ~100 MLOC; at 27 lines per page and 0.1 mm per page, that is a stack of paper roughly 370 m tall. (Photo: Margaret Hamilton with the code for the Apollo guidance computer, NASA, 1969.)

Applications (of security): automated testing/fuzzing; mobile security (https://source.android.com/docs/security/overview).

Network and operational security
Minimize the risk: segregation of networks and data; firewalls, VLANs; protect apps and data; web application firewalls. Prepare the response: have a plan, make backups, save logs.

Lesson 7: continuous monitoring and patching
The secure development life cycle requires that we continuously monitor our systems and source code to preempt security incidents. Attack vectors and threats constantly evolve: regularly monitor systems for vulnerabilities and apply updates and patches promptly. Integrate security into DevOps and give your developers time for security best practices.
TEEs and side channels
Properties of trusted hardware: attestation, sealing, isolation. Side channels: timing, power analysis, electromagnetic emanations.

Incident response preparedness
Breaches are inevitable; how one responds determines the impact. Develop, test, and refine an incident response plan; include steps for containment, investigation, communication, and recovery.

Lesson 8: defense in depth
Assume breach and prepare accordingly. No single security control can provide complete protection; use layers of defense such as firewalls, encryption, multi-factor authentication, and intrusion detection systems.

Privacy
https://www.businessinsider.com/strava-heatmap-most-revealing-images-2018-1?r=US&IR=T

Machine learning security and privacy
https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Lesson 9: privacy by design
Integrate privacy and security considerations into systems and processes from the start. Follow frameworks like the GDPR's "privacy by design" principle during development and operational changes.

The big picture: threats evolve
We want to protect information and computer systems, the real-world (physical) systems that depend on IT systems, and people ... against bad things happening. We need a method to understand, classify, and prevent what can go wrong. Stay informed about emerging threats like AI-driven attacks, supply-chain vulnerabilities, or ransomware.
This course provides a baseline; continue to explore!

To improve the course, your feedback is essential!
● Tell me the top three things you liked and disliked; for anything you disliked, let me know what would make it better.
● Were any topics not covered in enough detail?
● What would make the class more engaging?
● What was the best/worst topic? Why?
(Venn diagram: students that hated the class, students that liked the class, students that fill out surveys.)