For others: this isn't a very secure method. You should use PBKDF2, as specified in PKCS #5; erickson explained how to do this above. DarkSquid's method is vulnerable to password attacks, and it also doesn't work unless your plaintext's size is a multiple of AES's block size (128 bits), because it leaves out padding. It also doesn't specify the mode; read Wikipedia's article on block cipher modes of operation for the concerns.
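A minimal sketch of PBKDF2 key derivation via the standard JCE API (the class and algorithm names are real JCE APIs; the iteration count and hard-coded salt here are purely illustrative):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.spec.KeySpec;

public class Pbkdf2Demo {
    // Derives an AES key of the requested bit length from a passphrase and salt.
    public static byte[] deriveKey(char[] password, byte[] salt, int iterations, int keyBits)
            throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec spec = new PBEKeySpec(password, salt, iterations, keyBits);
        return factory.generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};   // use a random salt in practice
        byte[] key = deriveKey("secret".toCharArray(), salt, 65536, 256);
        System.out.println("derived key length: " + key.length + " bytes");
    }
}
```

The same password and salt always yield the same key, which is exactly what lets you re-derive the key at decryption time.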
Generating your own key from a byte array is easy: CODESNIPPET_JAVA1 . But creating a 256-bit key isn't enough. If the key generator cannot generate 256-bit keys for you, then the URL_http://java.sun.com/javase/6/docs/api/javax/crypto/Cipher.html [CODETERM1] class probably doesn't support 256-bit AES either. You say you have the unlimited jurisdiction policy patch installed, so the AES-256 cipher should be supported (but then 256-bit keys should be too, so this might be a configuration problem). CODESNIPPET_JAVA2 .
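For reference, wrapping an existing byte array as an AES key is a one-liner with the standard `SecretKeySpec` class (the all-zero bytes below are purely illustrative; real key material should come from PBKDF2 or a CSPRNG):

```java
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class KeyFromBytes {
    // raw must be 16, 24, or 32 bytes long for AES-128/192/256 respectively
    public static SecretKey aesKey(byte[] raw) {
        return new SecretKeySpec(raw, "AES");
    }

    public static void main(String[] args) {
        SecretKey key = aesKey(new byte[32]);   // zero key: illustration only
        System.out.println(key.getAlgorithm()); // prints "AES"
    }
}
```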
A workaround for the lack of AES-256 support is to take some freely available implementation of AES-256 and use it as a custom provider. This involves creating your own URL_http://java.sun.com/javase/6/docs/api/java/security/Provider.html [CODETERM2] subclass and using it with URL_http://java.sun.com/javase/6/docs/api/javax/crypto/Cipher.html#getInstance%28java.lang.String,%20java.security.Provider%29 [CODETERM3] . But this can be an involved process.
You should always indicate the mode and padding algorithm explicitly; Java uses the unsafe ECB mode by default.
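A sketch of spelling the transformation out in full instead of relying on the provider default (standard JCE calls; AES-128 is used here to sidestep the policy-file issue, and the fixed all-zero key and IV are for brevity only — use a random IV per message in practice):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class ExplicitModeDemo {
    public static byte[] run(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        // "AES/CBC/PKCS5Padding" names algorithm, mode, and padding explicitly;
        // a bare "AES" falls back to ECB on most providers.
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16], iv = new byte[16];   // fixed values: illustration only
        byte[] ct = run(Cipher.ENCRYPT_MODE, key, iv, "hello".getBytes("UTF-8"));
        byte[] pt = run(Cipher.DECRYPT_MODE, key, iv, ct);
        System.out.println(new String(pt, "UTF-8"));    // prints "hello"
    }
}
```

PKCS5 padding also removes the restriction that the plaintext length be a multiple of the 16-byte block size.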
I've implemented erickson's answer in a really simple class: URL_http://pastebin.com/YiwbCAW8 [Java-AES-256-bit-Encryption/Decryption-class] . If you get the CODETERM1 , you have to install the Java Cryptography Extension (JCE) unlimited strength jurisdiction policy files: URL_http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html [Java-6-link] URL_http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html [Java-7-link] . Just place the jars in your CODETERM2 .
You appear to be sharing the same fixed salt value between all key instances.
That's probably not a good idea.
It works fine. However, if I am decrypting using a different instance of the AESEncrypter, the decrypted file tends to have some issues in the first few bytes (9 bytes). I encrypted a file using a passphrase, then later tried to decrypt it using the same passphrase. All contents of the file except the first 9 bytes were decrypted properly.
It might work fine, but a fixed salt is akin to WEP encryption on your wi-fi router.
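A per-derivation random salt is cheap to generate with `SecureRandom` (a standard java.security class; the 16-byte length is a common choice, not a requirement):

```java
import java.security.SecureRandom;

public class SaltDemo {
    private static final SecureRandom RNG = new SecureRandom();

    // Returns a fresh random salt. Store it alongside the ciphertext:
    // it doesn't need to be secret, only unique per key derivation.
    public static byte[] newSalt(int length) {
        byte[] salt = new byte[length];
        RNG.nextBytes(salt);
        return salt;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(newSalt(16)));
    }
}
```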
Maybe you should try the URL_http://www.bouncycastle.org/ [BouncyCastle] crypto provider. It is free, and you can use larger key sizes than with the default JDK.
Use this class for encryption; it works: CODESNIPPET_JAVA1 . And these are ivBytes and a random key: CODESNIPPET_JAVA2 .
NLP library in Java. Possible Duplicate: URL_http://stackoverflow.com/questions/870460/java-is-there-a-good-natural-language-processing-library [Java-:-Is-there-a-good-natural-language-processing-library] I need a simple natural language processing library written in Java which can be used to process a search query/question.
What I actually want is to separate out the main subject being searched for in a query. For example, given a query like "What is an apple?", it would be perfect if the main search word apple could be extracted. This is for a semantic search engine development purpose. Can anyone please suggest a suitable NLP library for this? Thank you!
The easiest way for you that I see is to use concept tagging from URL_http://www.alchemyapi.com/api/ [AlchemyAPI] . You can also use some plugins for the libraries from the questions pointed to in the comments; pay particular attention to URL_http://incubator.apache.org/opennlp/ [OpenNLP] and URL_http://gate.ac.uk/ [GATE] . If you are going to build an ontology-based search engine, I recommend reading URL_http://www.springerlink.com/content/462t8871w5255l13/ [this] paper on ontology-based interpretation of keywords, which shows some tendencies in users' habits ( URL_http://videolectures.net/iswc07_tran_obi/ [video-version] ). Otherwise, you'd be better off using some statistical techniques like URL_http://en.wikipedia.org/wiki/Latent_semantic_analysis [LSA] .
Jakarta Lucene/Solr? URL_http://lucene.apache.org/java/docs/index.html .
Named entity recognition with Java. I would like to use named entity recognition (NER) to find adequate tags for texts in a database. Instead of using tools like NLTK or LingPipe, I want to build my own tool. So my questions are: Which algorithm should I use? How hard is it to build this tool?
Since there are so many ways to go about it, we could better inform you if you shared your goals and why you're trying to DIY. And are you willing to use any libraries at all, such as machine learning libraries?
I did this some time ago when I studied Markov chains. Anyway, the answers are: Which algorithm should I use? Stanford NLP, for example, uses a Conditional Random Field (CRF). If you are not trying to do this effectively, you are like the dude from Jackass 3D who was CODETERM1 . There is no simple way to parse human language, as its construction is complex and it has tons of exceptions. How hard is it to build this tool? Well, if you know what you are doing, it's not that hard at all. The process of entering the rules and logic can be annoying and time-consuming, and fixing bugs can be nontrivial. But in 20 years, you can make something almost useful (for yourself).
1. There is a vast number of Information Extraction algorithms; to name a few: regular expressions, statistical methods, machine-learning-based methods, dictionaries, etc. You can find a complete overview of these methods in URL_http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.164.2388&rep=rep1&type=pdf [this-survey] .
2. Yes, it is hard to build a tool which finds tags with high precision, because it requires a lot of testing and tuning. The easiest-to-implement algorithm for finding tags consists of two steps: extract candidates for tags, then find the most significant ones. In the first step you can take one of two approaches: use entity names as tag candidates (here you need an Information Extraction framework), or use nouns or noun groups as tag candidates (here you need a part-of-speech tagger). In the second step, you should use tf-idf to weight tags across the document corpus and discard all tags whose tf-idf weight falls below a given threshold. If you need a more powerful algorithm, look for topic detection frameworks or URL_http://scholar.google.de/scholar?q=topic%20detection [research-papers-on-this-topic] .
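The second step above can be sketched in plain Java. This is a toy tf-idf over an in-memory corpus, using the textbook tf × log(N/df) scoring with no smoothing; the sample words are made up for illustration:

```java
import java.util.*;

public class TfIdfDemo {
    // Scores each term of one document by tf-idf against a small corpus.
    public static Map<String, Double> tfIdf(List<String> doc, List<List<String>> corpus) {
        Map<String, Double> scores = new HashMap<>();
        for (String term : new HashSet<>(doc)) {
            double tf = Collections.frequency(doc, term) / (double) doc.size();
            int df = 0;                              // number of documents containing the term
            for (List<String> d : corpus) if (d.contains(term)) df++;
            double idf = Math.log(corpus.size() / (double) df);
            scores.put(term, tf * idf);
        }
        return scores;
    }

    public static void main(String[] args) {
        List<List<String>> corpus = Arrays.asList(
                Arrays.asList("apple", "pie", "recipe"),
                Arrays.asList("apple", "tree", "care"),
                Arrays.asList("the", "pie", "chart"));
        // "recipe" occurs in only one document while "apple" occurs in two,
        // so "recipe" gets the higher weight for the first document.
        System.out.println(tfIdf(corpus.get(0), corpus));
    }
}
```

Tags below a chosen threshold on this score are the ones you would discard.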
Check also URL_http://en.wikipedia.org/wiki/Latent_semantic_analysis [LSA] ; from Wikipedia: Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. Please also check this question: URL_http://stackoverflow.com/questions/5544475/does-an-algorithm-exist-to-help-detect-the-primary-topic-of-an-english-sentence . Please also check this post: URL_http://nlpers.blogspot.com/2011/04/seeding-transduction-out-of-sample.html ; it describes a researcher's hands-on experience in creating taggers.
NLTK is an open-source project. You might want to explore it a little bit, see how it is done, and maybe get involved in the community, rather than trying to completely solve the problem by yourself from scratch.
Look for a copy of this paper: "Name Tagging with Word Clusters and Discriminative Training" by Scott Miller, Jethran Guinness, and Alex Zamanian.
This may not be a satisfactory answer to your question, but still: you might want to evaluate existing service providers for the task and either include their product or integrate one via web services. My experience is that for certain well-defined and very domain-specific tasks (for example, recognizing names of medicaments within Wikipedia web pages) you _can_ manually build NER solutions. URL_http://alias-i.com/lingpipe/ [LingPipe] , URL_http://opennlp.sourceforge.net/projects.html [OpenNLP] , etc. are good tools for this. But for generic tasks (for example, finding person names in any web page on the internet), you need a lot of experience, tools, and man-power to get satisfactory results. It might therefore be more effective to use an external provider: URL_http://www.opencalais.com/ [OpenCalais] is a free service, for example, and many commercial ones exist.
Natural language date and time parser for Java. I am working on a natural language parser which examines a sentence in English and extracts some information, like names, dates, etc. For example: "Lets meet next tuesday at 5 PM at the beach." The output will be something like: "Lets meet 15/09/2009 at 1700 hr at the beach". So basically, what I want to know is whether there is any framework or library available for Java to do these kinds of operations, i.e. parsing dates from a sentence and giving output in some specified format. Regards, Pranav. Thanks for the replies. I have looked at a few NLP libraries like URL_http://alias-i.com/lingpipe/index.html [LingPipe] , OpenNLP, and URL_http://nlp.stanford.edu/index.shtml [Stanford-NLP] . I wanted to ask whether they have anything for date parsing in Java.
URL_http://natty.joestelmach.com/ [Natty] is a really good replacement for JChronic. I swear Natty handles pretty much everything; for example, "2 wednesdays from now" can't be parsed by any other solution I've found. +1. I have a system where I'm being fed strings from which I need to (on a best-guess basis) remove URLs, anything which might be HTML, and anything which might be a date. I've found Natty is excellent for the latter, but I've had to build exceptions for April, May, and June, which are valid girls' names.
You can use URL_https://github.com/samtingleff/jchronic [JChronic] , the Java port of URL_http://chronic.rubyforge.org/ [Chronic] . Have you tried URL_https://github.com/samtingleff/jchronic [jchronic] ? However, I doubt any library could directly work with full sentences: you'd have to extract sentence fragments and feed them to an NLP date-parsing framework yourself, perhaps on a trial-and-error basis (larger and larger fragments until the framework throws an error). I don't think there's any framework out there that does that out of the box. What you can do is create a set of regular expressions to match those patterns. I would suggest using URL_http://uima.apache.org/ [UIMA] with URL_http://opennlp.sourceforge.net/projects.html [OpenNLP] connectors and some hand-made regexp rules.
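The regular-expression route catches the rigid numeric formats (dd/mm/yyyy and friends), though not free-form phrases like "next tuesday". A minimal sketch with java.util.regex, with a deliberately narrow made-up pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DateRegexDemo {
    // Matches numeric dates like 15/09/2009 or 1-2-2010;
    // it does NOT validate day/month ranges.
    private static final Pattern DATE =
            Pattern.compile("\\b(\\d{1,2})[/-](\\d{1,2})[/-](\\d{4})\\b");

    public static List<String> findDates(String text) {
        List<String> hits = new ArrayList<>();
        Matcher m = DATE.matcher(text);
        while (m.find()) hits.add(m.group());
        return hits;
    }

    public static void main(String[] args) {
        System.out.println(findDates("Lets meet 15/09/2009 at the beach"));
        // prints [15/09/2009]
    }
}
```

For the free-form phrases you would still hand the remaining fragments to a library like Natty or JChronic.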
Fuzzy string search in Java. I'm looking for a high-performance Java library for fuzzy string search. There are numerous algorithms to find similar strings: Levenshtein distance, Daitch-Mokotoff Soundex, n-grams, etc.
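For reference, the textbook dynamic-programming Levenshtein distance is short enough to inline if you only need the one metric (O(n·m) time, two-row space); a library is still the better choice for indexing large data sets:

```java
public class Levenshtein {
    // Classic DP edit distance: insertions, deletions, and substitutions each cost 1.
    public static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;   // reuse the two rows
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // prints 3
    }
}
```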