,0,0,,,,CC BY-SA 3.0
4921,2,,4454,1/4/2018 17:59,,4,,"I recommend you focus on quality over quantity. Publishing a paper will boost your reputation and make you more recognised within your academic field (AI), but only if the paper provides useful insights into an important issue.
Your paper is more likely to be accepted if it is well written and easy to understand, stimulates important new questions, uses rigorous methods to show why the data support the conclusions, and makes clear connections to prior work that strengthen your paper's arguments. (Elizabeth Z, Elsevier blog)
Before submitting your paper, ask a mentor or a colleague to proofread it, so that you can make the relevant revisions and changes. Journal editors will look down on your work if it is poorly written or contains substandard grammar.
One way to get published, especially for researchers in the earlier stages of their careers, is to write reviews. Most journal editors like to publish replies to previous publications, since they stimulate debate.
Remember that it is acceptable to challenge reviewers' suggestions with good justification. Many researchers fail to persevere when instructed to revise and resubmit their work. Don't give up; you can politely decline a suggestion or even argue why a reviewer is wrong. Editors will accept a rational explanation if it is clear that you have considered all their feedback.
Getting published is never easy, especially in high-ranking journals. Focusing on getting published quickly can derail you from concentrating on the quality of your research. Yes, getting published can be expensive; however, it is much better for your career to write one high-quality paper than a low-quality paper in a lowly ranked or ungraded journal, since the latter will not be REFable.
Below is a list of artificial intelligence journals to which you can submit your papers and possibly get published.
",10913,,2444,,1/31/2021 13:20,1/31/2021 13:20,,,,0,,,,CC BY-SA 4.0
4932,2,,4709,1/6/2018 17:47,,1,,"It's true that your AI model's performance depends on the quality of the data you use. However, high-quality data alone is insufficient to guarantee that your model will learn effectively and score well on a particular dataset. Other factors, such as smarter algorithms and high-performance computing infrastructure, must be factored in for your AI system to perform well.
Although AI research has made massive progress in the past decade, ML engineers have yet to build a system that matches the general scope and generalization ability of the human mind. Up to the first decade of the 2000s, AI was dominated by expert systems that emulated the decision-making ability of a human expert. AI at that point couldn't process unstructured data, and therefore lacked the capacity to sit for and pass high school exams.
That changed in 2011, when IBM Watson, a question-answering computer system, competed against two former Jeopardy! champions and placed first. Watson was built on top of DeepQA (a system that could answer natural-language questions) and UIMA (a software architecture for processing and analysing unstructured information). Here is a paper giving an overview of how IBM's Watson works: https://www.aaai.org/Magazine/Watson/watson.php
In 2012, a team led by Geoffrey Hinton won the ImageNet competition by exploiting deep convolutional networks. This was soon followed by Dahl's team winning the Merck Molecular Activity Challenge using a deep neural network architecture. Yann LeCun's work on CNNs, Geoffrey Hinton's backpropagation and stochastic gradient descent approach to training, and Andrew Ng's large-scale use of GPUs ignited accelerated progress in ML, frequently referred to as the 'unreasonable effectiveness' of deep learning.
Following recent advances in fields such as image captioning, natural language processing, information retrieval and computer vision, it is highly probable that current-generation AI systems can pass high school exams such as the SAT.
The Allen Institute for AI has made significant progress in developing AI systems that can read, learn and express that understanding through question answering and explanation. Founded by Microsoft co-founder Paul Allen, the institute's singular focus, according to its mission, is to conduct high-impact research in the field of AI. Here is a news link covering its cognitive system passing high school math exams: fortune.com/2015/09/21/computer-artificial-intelligence-math/
So far, the institute has demonstrated a cognitive platform called GeoS that is capable of answering geometry questions as well as the average high school student, while another system, called Aristo, can answer high school science exam questions by leveraging information extraction alongside knowledge representation and reasoning models. You can access GeoS here: http://allenai.org/euclid/ and Aristo here: http://allenai.org/aristo/
Meanwhile, researchers working on the Todai project in Japan have demonstrated a cognitive system reported to perform at a passing level on the University of Tokyo mathematics entrance exam. My conclusion from the above examples is that we may already have AI that can sit for and pass high school exams.
",10913,,,,,1/6/2018 17:47,,,,0,,,,CC BY-SA 3.0
4936,2,,4709,1/7/2018 15:41,,0,,"I'm thinking that you could write an AI that takes the question as input, weights it, and searches the web based on the first layer of neurons, then takes the first two to three pages of results and spits out an answer. It would be a crapshoot, but maybe you could take the list of results, choose one using another layer, choose the relevant info from the page using a third layer, and then answer the question using that info. A rough sketch of this pipeline is below.
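Here is a minimal sketch of that pipeline in Python, under heavy assumptions: `web_search` is a hypothetical stand-in for a real search API, and plain keyword overlap stands in for the learned layers described above.

```python
def web_search(query, num_pages=3):
    # Hypothetical stand-in: return the text of the top result pages.
    raise NotImplementedError('plug a real search API in here')

def relevance(question, sentence):
    # Crude 'weighting': count the words a candidate shares with the question.
    q_words = set(question.lower().split())
    return len(q_words & set(sentence.lower().split()))

def answer(question):
    pages = web_search(question)
    # Split the pages into candidate sentences and pick the best-scoring one.
    candidates = [s.strip() for page in pages for s in page.split('.') if s.strip()]
    return max(candidates, key=lambda s: relevance(question, s))
```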
",11995,,,,,1/7/2018 15:41,,,,0,,,,CC BY-SA 3.0
4945,2,,4917,1/8/2018 9:15,,0,,"Theoretically, it might be possible, but practically it is not.
You can argue by analogy with a Turing machine: the Intel 8080 is Turing-complete, hence it can run any program, including a neural network, given unlimited time and memory.
Despite the above, you will face insurmountable practical challenges in implementing your system.
CPUs are designed to handle calculations sequentially, while most AI algorithms are highly parallel. You need a GPU (or an AI ASIC) to process the algorithms in a massively parallel manner for a significant speedup.
Additionally, GPUs are excellent at floating-point math; floating-point arithmetic involves numbers with a variable number of decimal places, which are key to running neural networks. For example, an Intel Core i7-6700K is capable of about 200 gigaFLOPS (floating-point operations per second), while an Nvidia GTX 1080 GPU is capable of about 8,900 gigaFLOPS, a significant difference. (Tyler J, 2017)
If you decide to use the Intel 8080 (0.29 MIPS at 2 MHz), you will require millions of processors, and billions of dollars, just to compute at one gigaFLOPS; a rough estimate follows below. You can follow this link to see the cost of computing over the years: https://en.wikipedia.org/wiki/FLOPS
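As a back-of-the-envelope check of that claim (the instructions-per-FLOP figure is an assumption; the 8080 has no floating-point unit, so each software floating-point operation costs hundreds of instructions):

```python
# Rough estimate only; 500 instructions per software FLOP is an assumption.
MIPS_8080 = 0.29                # millions of instructions per second
INSTR_PER_FLOP = 500            # assumed cost of software floating point
flops_per_chip = MIPS_8080 * 1e6 / INSTR_PER_FLOP   # ~580 FLOPS per 8080

chips_for_one_gigaflops = 1e9 / flops_per_chip
print(round(chips_for_one_gigaflops))                # ~1.7 million chips
```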
Another problem concerns RAM. To run a neural network efficiently, you need to load it fully into RAM. It will be a huge challenge to squeeze a neural network into the 64 KB of addressable memory that an Intel 8080 offers, as the estimate below shows.
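Here is a rough weight-memory estimate for an illustrative 784-32-10 network with 32-bit floats (the layer sizes are assumptions for the example, not a specific published model); even this tiny model overflows 64 KB.

```python
layers = [784, 32, 10]                    # illustrative MNIST-sized MLP
params = sum(n_in * n_out + n_out         # weights plus biases per layer
             for n_in, n_out in zip(layers, layers[1:]))
kb_needed = params * 4 / 1024             # 4 bytes per 32-bit float
print(params, 'parameters need about', round(kb_needed), 'KB')  # ~99 KB > 64 KB
```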
Network bandwidth will also be a huge bottleneck. Modern GPUs support high-speed interconnects to communicate with each other; for example, Nvidia's NVLink has a peak speed of around 80 GB/s, while PCIe 3.0 runs at around 30 GB/s. Without high-speed interconnect bandwidth, you will not achieve any speedup despite using a distributed system with many processors.
Additionally, you will face significant challenges in programming neural network algorithms for your 8080-based system. Most programmers today follow the standards of object-oriented programming, which enables code reuse, simplified design and easier maintenance. Moreover, OOP languages such as Java, C++ and Python have libraries that significantly simplify the process of programming a neural network.
When the 8080 was designed back in 1974, OOP was not yet mainstream, and the programming tools of the time, i.e. compilers, would be considered archaic by today's standards. Good luck debugging that system.
Last but not least, you need big data (or at least a substantial dataset) to train your neural network on; without it, your model will be ineffective. The 8080 supported around 200 KB of storage. For comparison, even the small MNIST dataset is roughly 50 MB uncompressed, and modern datasets run to gigabytes. This means that such a system cannot provide the storage necessary for any realistic ML dataset.
For the above reasons, my conclusion is that the 8080 provides insufficient resources to implement any effective DL algorithm, and networking millions of them together will not provide any substantial speedup.
",10913,,,,,1/8/2018 9:15,,,,0,,,,CC BY-SA 3.0
4946,2,,4917,1/8/2018 10:04,,1,,"The building unit of a neural network is called a perceptron. It cannot be represented by a single transistor, because it must hold an arbitrary (float) value across multiple computational iterations, while a transistor is only binary and does not work as memory on its own.
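For illustration, here is a minimal sketch of such a unit in Python. It holds float weights as persistent state (the memory a lone binary transistor lacks) and squashes a weighted sum through an activation function; the sigmoid and the example numbers are arbitrary choices, not part of any particular network.

```python
import math

def unit(inputs, weights, bias):
    # Weighted sum of the inputs, held as arbitrary float values...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then squashed through a sigmoid activation.
    return 1.0 / (1.0 + math.exp(-z))

print(unit([0.5, -1.0], [0.8, 0.3], bias=0.1))   # a float in (0, 1)
```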
Furthermore, the strength of a NN is in its flexibility, which you would lose if you were to bake it onto silicon. In a NN you can vary the:
- number of layers
- connections between units
- activation functions
- and many, many more hyperparameters
Once trained on a particular problem, NNs are really fast at making a prediction for a new sample. The slow and computationally heavy task is the training, and it's during training that you need the flexibility to adjust the model and the parameters.
You could bake a trained NN model onto a chip if you need prediction time to be really fast, i.e. on the order of nanoseconds (instead of a millisecond or a second on a modern CPU). That has a significant downside: you won't ever be able to update it with a newer NN model.
",2997,,,,,1/8/2018 10:04,,,,2,,,,CC BY-SA 3.0
4948,2,,3908,1/8/2018 15:20,,0,,"Although I (partly) agree with Nick Bostrom's view that artificial intelligence could in some ways be dangerous, we do not need new government bodies to control or regulate AI development.
We already have sufficient cyber laws that protect us against computer crimes such as cyberterrorism, cyberbullying, creating malware, identity theft, denial-of-service attacks, unauthorized access, etc. It is the duty of local law enforcement agencies and the FBI to prevent and investigate cybercrimes such as those listed above. Whether AI was used to perpetrate the crime is legally immaterial.
Although AI is a 'new' technology, we already have a rigorous criminal justice system within our governance structures that is well capable of handling any eventualities arising from AI, or any other technological breakthrough, without being overwhelmed.
For example, if an AI causes a car accident, the manufacturer of the car can simply be charged with product liability for negligence. If an AI is defective or dangerous, we already have product liability and consumer protection laws, and the relevant government agencies to enforce them.
If an AI uses its intelligence to maneuver within the law to its own advantage, that is by definition not a crime; big corporations do this all the time to minimize their taxes. However, if necessary, the legislature can sit and pass a law criminalizing or banning the new activity.
A sovereign government already has enough powers and the necessary instruments to exercise them. Creating a new government agency would lead to an unnecessary duplication of responsibilities.
The best approach is simply for the relevant government agencies to adapt, by playing a proactive role and modernizing their service delivery so that it is in sync with current developments in society. This is what everyone has to do.
In reality, we do not need additional agencies. I would find agencies such as a 'Federal AI Agency' or a 'Federal Blockchain Commission' baffling and unproductive.
",10913,,,,,1/8/2018 15:20,,,,0,,,,CC BY-SA 3.0
4949,1,,,1/8/2018 16:58,,2,604,"I am trying to understand if robotic process automation (RPA) is a field that requires expertise in machine learning.
Do the algorithms behind RPA use machine learning, apart from OCR?
",12013,,2444,,12/21/2021 12:01,12/21/2021 12:01,Is robotic process automation related to AI/ML?,,2,0,,,,CC BY-SA 4.0
4952,2,,4650,1/9/2018 9:34,,1,,"AI could hold the key to automating and optimizing networks. On the subscriber side, ML and AI will assist telecom operators in profiling subscribers by analyzing network activity, the conversion rate of offers, and data-usage trends.
Below are a few use cases and how they will transform the telecommunications sector. (Source: H2O.ai blog, https://www.h2o.ai/telecom/)
Old-generation telecom technologies:
- Reactive Maintenance
- Network optimization with human intervention
- Centralized intelligence
- Security attack repair
- Backlogged customer tickets
Future-generation, AI-based telecom technologies:
- Predictive Maintenance
- Self-optimizing network
- Optimal network quality
- Intelligence at the edge
- Security attack prediction
- Improved customer experience through customer service chatbots
- Speech and voice services that allow customers to explore media content by spoken word rather than a remote control
- Predictive maintenance: fixing problems with telecom hardware, such as cell towers and power lines, before they happen, by detecting the signals that usually precede failure
",10913,,10913,,1/9/2018 10:30,1/9/2018 10:30,,,,0,,,,CC BY-SA 3.0
4953,1,8536,,1/9/2018 9:48,,4,487,"Can the Viola-Jones algorithm be used to detect facial emotions? It is used to create Haar-cascade files for object and face detection, but what confuses me is whether it can also be trained for emotion detection.
If not, what algorithms can I use, and what are their mathematical bases (i.e., what mathematics should I be studying)?
",12021,,1671,,5/16/2019 19:02,5/16/2019 19:02,Viola Jones Algorithm,