Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense 4895,2,,4889,1/1/2018 13:30,,4,,"

The usual way to implement this would be to add the new class with data examples.

Some things you need to address:

Code examples for this are not necessary, as you would just use the same network design as you already have and just add another output. This is a data and model definition problem.

Logically you have another option: As well as outputting the predicted class, you predict separately whether there is any detectable object at all as a true/false value. This still requires the additional data, but is for example how the YOLO algorithm works for object detection. Object detection has a specific meaning - it involves finding the co-ordinates and class of possibly multiple objects in an image. This goes beyond the wording of your question, but is a typical end goal if you are asking this kind of question.

YOLO predicts the presence of an object separately from the class of object. The additional data for YOLO training comes from segmenting the source images, so many parts of the target image are background with no objects. In that case the additional data you require is due to more detailed labelling within each image example.
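
Purely for illustration (and only if you want to try this second option), a minimal Keras-style sketch of a network with an extra objectness output alongside the class output could look like the following; the layer sizes and the number of classes are arbitrary placeholders, not a recommendation:

    from tensorflow.keras import layers, models

    # Hypothetical sketch: one shared backbone, two heads - class probabilities
    # plus a separate true/false 'is there any object at all' prediction.
    inputs = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(32, 3, activation='relu')(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation='relu')(x)

    class_out = layers.Dense(10, activation='softmax', name='class')(x)
    object_out = layers.Dense(1, activation='sigmoid', name='objectness')(x)

    model = models.Model(inputs, [class_out, object_out])
    model.compile(optimizer='adam',
                  loss={'class': 'categorical_crossentropy',
                        'objectness': 'binary_crossentropy'})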

YOLO is quite a complicated architecture, so if object detection is your goal, you might want to look at an example Keras implementation on GitHub for more details.

",1847,,1847,,1/1/2018 13:35,1/1/2018 13:35,,,,1,,,,CC BY-SA 3.0 4898,2,,4574,1/1/2018 22:05,,2,,"

Disclosure: I am a product manager on Google Cloud Platform.

[...] why does everyone have to repeat the effort of learning the same things?

If Google has already learned cats, or if someone already has a program to recognize handwritten digits, can this knowledge be shared and re-used? Or is it just a matter of paying for them?

You don't have to rebuild these machine learning models from scratch; you can reuse prebuilt machine learning algorithms. For example, Google Cloud provides hosted machine learning APIs as a service, covering areas such as vision, speech, natural language, and translation.

You can put these APIs together to build interesting applications.

",1632,,1632,,1/1/2018 22:20,1/1/2018 22:20,,,,0,,,,CC BY-SA 3.0 4905,2,,4890,1/2/2018 22:41,,1,,"

Will it lead to some machine learning collapse?

I wouldn't think so. Data is data. From the standpoint of automata, everything is ultimately reduced to a string of bits. It may even be useful to be able to train AIs using CGI, for instance, in relation to automated vehicles. It's not any different from humans using flight simulators.

Creating models and training AIs on them is useful, and a part of the contemporary AI landscape.

Might it lead to some changes in human's perception of the world, because people get a very big part of their knowledge using computers, connected to the Internet?

It already is. It's not only the false CGI content, but the scope of the search filter that dictates what information a websurfer gets. These results are controlled by algorithms, which evolve. Self-evolving algorithms may make the process more opaque. It definitely seems to be creating social problems already.

Is anyone thinking about this potential problem?

I'm sure there are papers out there on this subject. (Don't have time to search now, but I may do that and come back and amend with some articles and research papers.)

Two authors who are definitely thinking about this are Neal Stephenson and Hannu Rajaniemi. Stephenson addressed the "information unreliability" problem of the internet in Anathem. (It's not a major theme, but his ideas are quite insightful--Stephenson has a hard-science background, with a particular interest in computing.) Rajaniemi extends the ideas in the post-singularity Quantum Thief trilogy, in which information and matter are interchangeable; it contains some very interesting ideas. (Rajaniemi holds two advanced mathematics degrees, which is useful in tackling a subject of such great complexity.)

",1671,,2444,,1/17/2021 19:14,1/17/2021 19:14,,,,0,,,,CC BY-SA 4.0 4907,1,,,1/3/2018 1:49,,6,278,"

The basis of my question is that a CNN that does great on MNIST is far smaller than a CNN that does great on ImageNet. Clearly, as the number of potential target classes increases, along with image complexity (background, illumination, etc.), the network needs to become deeper and wider to be able to sufficiently capture all of the variation in the dataset. However, the downside of larger networks is that they become far slower for both inference and backprop.

Assume you wanted to build a network that runs on a security camera in front of your house. You are really interested in telling when it sees a person, or a car in your driveway, or a delivery truck, etc. Let's say you have a total of 20 classes that you care about (maybe you want to know minivan, pickup, and so on).

You gather a dataset that has plenty of nice, clean data. It has footage from lots of times of the day, with lots of intra-class variation and great balance between all of the classes. Finally, assume that you want this network to run at the maximum possible framerate (I know that security cameras don't need to do this, but maybe you're running on a small processor or some other reason that you want to be executing at really high speed).

Is there any advantage, computationally, to splitting your network into smaller networks that specialize? One possibility is having a morning, an afternoon/evening, and a night network and you run the one corresponding to the time of day. Each one can detect all 20 classes (although you could split even farther and make it so that there is a vehicle one, and a person one, and so on). Your other option is sharing base layers (similar to using VGGNet layers for transfer learning). Then, you have the output of those base layers fed into several small networks, each specialized like above. Finally, you could also have just one large network that runs in all conditions.

Question: Is there a way to know which of these would be faster other than building them?

In my head, it feels like sharing base layers and then diverging will run as slow as the ""sub-network"" with the most additional parameters. Similar logic for the separate networks, except you save a lot of computation by sharing base layers. Overall, though, it seems like one network is probably ideal. Is there any research/experimentation along these lines?

",8829,,,,,7/9/2021 21:05,Is one big network faster than several small ones?,,1,0,,,,CC BY-SA 3.0 4908,1,,,1/3/2018 5:51,,1,36,"

Say I have 500 variables and I believe those variables can be shown in a 4-dimensional latent representation which I want to learn.

What I have for training is 100K samples, and those samples are coming mainly from 3 unbalanced groups: 1st group has 1K samples, 2nd group has 49K samples, and 3rd group has 50K samples.

Do you think I can learn a meaningful representation by training a (variational) autoencoder with this data? Is there a reason that requires all samples to come from the same distribution? If not, is there a reason that requires balanced classes?

",9609,,9609,,1/9/2018 7:08,1/9/2018 7:08,Does it make sense to train an autoencoder using data from different distributions?,,0,1,,,,CC BY-SA 3.0 4909,2,,4332,1/3/2018 6:29,,2,,"

Typically, you would want to do the training on something other than a Raspberry Pi. For what you're trying to accomplish - having a computer talk back to your parrot - you won't need anything too crazy with a bunch of GPUs, but I don't think you'll want to do the training on a Pi either.

Here are some questions I have:

  1. What are you going to play back to the parrot? I.e., are you going to play it random parrot sounds you found online or sounds that you've recorded? Are you going to play back what it just said? Are you going to play back a modification of what it just said?

  2. Do you want it to respond to the parrot anytime the parrot speaks? Or when the parrot ""says"" something specific?

I think depending on the answers to those two questions, there are a couple of different paths that you could go down.

As for the hardware of the Raspberry Pi itself, I have never done any speech recognition with it, but I have done image recognition with it via the Movidius Neural Compute Stick which according to this Quora post, may be able to be used to offload some of the processing ""relatively easily"".

Here are some other links you may find valuable:

",11667,,,,,1/3/2018 6:29,,,,0,,,,CC BY-SA 3.0 4910,1,6520,,1/3/2018 10:48,,4,767,"

I've seen data sets for classification / regressions tasks in domains such as credit default detection, object identification in an image, stock price prediction etc. All of these data sets could simply be represented as an input matrix of size (n_samples, n_features) and fed into your machine learning algorithm to ultimately yield a trained model offering some predictive capability. Intuitively and mathematically this makes sense to me.

However, I'm really struggling with how to think about the structure of an input matrix for game-like tasks (Chess, Go, Seth Blings Mario Kart AI) specifically (using the Chess example):

  1. How would you encode the state of the board into something that a model could train on? Is it reasonable to think about the board state as an 8x8 matrix (or a 1x64 vector), with each entry encoded by a numerical value that depends on the type of piece and its color? (A rough sketch of what I mean follows this list.)

  2. Assuming a suitable representation of the board state, how would the model be capable of making a recommendation given that each piece type moves differently? Would it not have to evaluate the different move possibilities for each piece and propose which move it "thinks" would have the best long term outcome for the game?

  3. A follow-up on 2: given the interplay between moves made now and moves made n moves into the future, how would the model be able to recognize and make trade-offs between moves which may offer a better position now and those that offer a better position n moves in the future? Would one have to extend the board-state input to a vector of length 1x64n, where n is the total number of moves expected for an individual player, or is this a function of a different algorithm which should be able to capture historical information while training?
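
To make the kind of encoding I mean in point 1 concrete, here is a rough sketch (the piece values are arbitrary placeholders I made up, not a known standard):

    import numpy as np

    # Rough sketch: one number per square; sign encodes color, magnitude encodes piece type.
    PIECE_VALUES = {'P': 1, 'N': 2, 'B': 3, 'R': 4, 'Q': 5, 'K': 6}

    def encode_board(board):
        # board: 8x8 list of piece letters ('P', 'n', ...), uppercase = white, '' = empty
        state = np.zeros((8, 8))
        for r in range(8):
            for c in range(8):
                piece = board[r][c]
                if piece:
                    sign = 1 if piece.isupper() else -1
                    state[r][c] = sign * PIECE_VALUES[piece.upper()]
        return state.flatten()  # the 1x64 vector mentioned above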

I am unsure if I'm overthinking this and am missing something really obvious but I would appreciate any guidance in terms of how to approach thinking about this.

",11933,,-1,,6/17/2020 9:57,5/27/2018 13:26,How would you encode your input vector/matrix from a sequence of moves in game like tasks to train an AI? e.g. Chess AI?,,1,2,,,,CC BY-SA 3.0 4911,1,4912,,1/3/2018 11:49,,3,60,"

Suppose I have a classification problem with a stream of training-samples constantly arriving over time. I cannot keep all training-samples in memory, but I still want to train a classifier that will have the ""wisdom"" of all samples, and additionally, I want the classifier to become better whenever it gets new samples.

I thought of the following idea. Suppose we have enough memory to keep 100 samples. Then, for each run of 100 samples, we will train a different sub-classifier. We will have a meta-classifier that will classify based on voting between all existing sub-classifiers. Over time, we will have more and more sub-classifiers, so hopefully the meta-classifier will improve with time - it will have a ""wisdom of the crowds"" effect.

Has this method been tried before? Specifically, has it been tried in a deep-learning sequence-classification setting?

",8684,,,,,1/3/2018 17:22,Creating a classifier for simpler classifiers trained on few training samples,,1,0,,,,CC BY-SA 3.0 4912,2,,4911,1/3/2018 14:13,,2,,"

The voting technique that you described is called Ensemble Learning, and its improvement over time is to be expected as long as each classifier is at least a little better than random (and their errors are not too strongly correlated).
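
As a minimal sketch of the scheme described in the question (one fresh sub-classifier per chunk of 100 samples, then a majority vote), assuming scikit-learn-style classifiers and integer class labels; the choice of base classifier here is arbitrary:

    import numpy as np
    from sklearn.linear_model import SGDClassifier  # arbitrary choice of base classifier

    sub_classifiers = []

    def on_new_chunk(X_chunk, y_chunk):
        # Train a fresh sub-classifier on the latest chunk, then the raw data can be discarded.
        clf = SGDClassifier()
        clf.fit(X_chunk, y_chunk)
        sub_classifiers.append(clf)

    def predict(X):
        # Majority vote over all sub-classifiers trained so far (labels assumed to be 0..K-1).
        votes = np.array([clf.predict(X) for clf in sub_classifiers])
        return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])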

",11921,,1671,,1/3/2018 17:22,1/3/2018 17:22,,,,0,,,,CC BY-SA 3.0 4913,2,,2824,1/3/2018 14:27,,1,,"

The important part, where you can see a single reward value is used for $n$ different updates, is the part where a sum of $R_i$ values with $i$ ranging from $\tau + 1$ to $\tau + n$ is assigned to $G$.

So yes, the outer loop of the algorithm always does at most one update per iteration, but for that update it uses multiple previously observed $R_i$ values. Each of those $R_i$ values is used for multiple updates (not multiple updates at the same time, but multiple updates spread out over different iterations).
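
For reference, if I recall the pseudocode correctly (Sutton and Barto's $n$-step TD), the relevant update line is

$$G \leftarrow \sum_{i=\tau+1}^{\min(\tau+n,\,T)} \gamma^{i-\tau-1} R_i,$$

followed by $G \leftarrow G + \gamma^n V(S_{\tau+n})$ if $\tau+n < T$, and then $V(S_\tau) \leftarrow V(S_\tau) + \alpha\,[G - V(S_\tau)]$. Since $\tau$ advances by one per iteration, the same $R_i$ contributes to the $G$ used in up to $n$ of these updates.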

",1641,,,user1440,11/26/2018 15:20,11/26/2018 15:20,,,,0,,,,CC BY-SA 4.0 4914,1,,,1/3/2018 14:31,,2,39,"

I was wondering if there are any examples of the following:

  • Paragraph generation: for example, given X similar paragraphs, are you able to build a model that learns the style and generates a new paragraph that is a paraphrase of the X paragraphs - similar in meaning but different in wording?

  • Drawing conclusions from X given articles: given a list of conclusions, check whether the X articles can provide evidence for those conclusions. E.g., given the conclusion “city is not safe”, look for evidence such as “murders” and “thefts”.

Gladly appreciated, Betty

",11930,,,,,1/3/2018 19:25,Para Generation and Drawing Conclusion from X give Articles,,1,1,,,,CC BY-SA 3.0 4915,2,,4914,1/3/2018 19:00,,1,,"

Paraphrase generation could be done with an abstractive text summarization tool, such as the TensorFlow-based model described in Google's blog.

An abstractive tool summarizes the text but also adds extra words to make the text more human-like.

As for the second problem scenario, I would say the answer depends quite a lot on what you mean by the conclusions and how you think the conclusions would be made. I strongly believe this falls into the category of artificial general intelligence, which is an emerging area of AI, supported by the AGI association, where the focus is on strong artificial intelligence and more human-like features of AI.

Judging conclusions requires at least a vast knowledge base of the semantics of different words and of pieces of text in general, combined with possible conclusions and their legitimate implications.

",11810,,11810,,1/3/2018 19:25,1/3/2018 19:25,,,,0,,,,CC BY-SA 3.0 4917,1,,,1/3/2018 20:23,,2,160,"

The Intel 8080 had 4500 transistors and ran at 2-3.125 MHz. By comparison, the 18-core Xeon Haswell-E5 has 5,560,000,000 transistors and can run at 2 GHz. Would it be possible or prudent to simulate a neural network by packing a chip chock-full of a million interconnected, slightly modified Intel 8080s (sped up to run at 2 GHz)? If each one modeled 100 neurons, you could simulate a neural network with 100 million neurons on a single chip.

Edit: I'm not proposing that you actually use a million intel 8080s; rather I'm proposing that you take a highly minimal programmable chip design like the intel 8080's design and pattern it across a wafer as densely as possible with interconnects so that each instance can function as one or a few dozen fully programmable neurons each with a small amount of memory. I'm not proposing that someone take a million intel 8080s and hook them together.

",3323,,3323,,1/9/2018 2:55,1/22/2019 4:32,Could a large number of interconnected tiny turing-complete computer chips be patterned across a wafer to simulate a programmable neural network?,,3,0,,,,CC BY-SA 3.0 4918,1,,,1/4/2018 2:06,,1,183,"

I'm a student, currently working on an image processing project and coding with OpenCV. Recently, I watched Sebastian Thrun from Udacity talk about AlphaGo in a TED Talk, and I'm totally interested in the idea. I have read this question too: Merged Neural Network in AlphaGo. I was wondering if the same approaches can be used in my project.

I'm going to perform a color enhancement method for arbitrary natural images. And of course, color sampling is a tricky task. It's a lot of work: I have to prepare a condition for each given key-color sample and also prepare and pick the best enhancement function for it. I'm already able to do this using OpenCV.

But I was wondering if I could load tons of sample pictures instead, have my system test them against each other, and figure out its own enhancement rules from all that testing.

I'm not that familiar with deep learning - we don't even have a deep learning course at my university - but I'm interested in the idea and ready to learn. I'm not even sure if this can be done or not, but I wonder what kind of approaches I should learn to achieve my goal. Is deep learning with neural networks a good start? In my case, which deep learning method should I go with? Any reference/advice will be highly appreciated. Thanks.

",11943,,1671,,10/15/2019 19:32,10/15/2019 19:32,Deep Learning Approaches for Color Enhancement Testing,,0,0,,,,CC BY-SA 3.0 4921,2,,4454,1/4/2018 17:59,,4,,"

I recommend you focus on quality over quantity. Publishing a paper will boost your reputation and make you more recognised within your academic field (AI); however, this is only if the paper provides useful insights into an important issue.

Your paper is more likely to be accepted if it is well written and easy to understand, stimulates important new questions, uses rigorous methods to explain why the data supports the conclusion, and makes connections to prior work that serve to make your paper's arguments clear. (Elizabeth Z, Elsevier blog)

Before submitting your paper, ask a mentor or a colleague to proofread it, so that you can make the relevant revisions and changes. Journal editors will look down on your work if it is poorly written or contains substandard grammar.

A way to get published is by writing reviews, especially for researchers in earlier stages of their careers. Most journal editors like to publish replies to previous publications since it stimulates debate.

Remember it is acceptable to challenge reviewers' suggestions with good justification. Many researchers fail to persevere when they are instructed to revise and resubmit their work. Don't give up, however, you can politely decline or even argue why a reviewer is wrong. Editors will accept a rational explanation if it is clear that you have considered all their feedback.

Getting published is never easy, especially in high ranking journals. If you focus on getting published quickly it could derail you from concentrating on the quality of your research. Yes, getting published can be expensive, however, it's much better for your career if you write a high-quality paper than a low-quality paper in a lowly ranked or ungraded journal since it will not be REFable.

Below is a list of Artificial Intelligence Journals that you can submit your papers to and possibly get published.

",10913,,2444,,1/31/2021 13:20,1/31/2021 13:20,,,,0,,,,CC BY-SA 4.0 4932,2,,4709,1/6/2018 17:47,,1,,"

It's true that your AI model's performance depends on the quality of the data that you use. However, high-quality data alone is insufficient to guarantee that your model will learn effectively and score well on a particular dataset. Other factors, such as smarter algorithms and the use of high-performance computing infrastructure, must be factored in for your AI system to perform well.

Although AI research has made massive progress in the past decade, ML engineers are yet to build a system that can match the general scope and generalization ability of the human mind. Up to the first decade of the 2000s, AI was dominated by expert systems that emulated the decision-making ability of an expert. AI at this point couldn't process unstructured data, and therefore it lacked the capacity to sit for and pass high school exams.

This was until 2011, when IBM Watson, a question-answering computer system, competed against two former Jeopardy quiz show winners and placed first. IBM Watson was built on top of DeepQA (a computer system that could answer natural language questions) and UIMA (a software architecture to process and analyse unstructured information). Below is a link to a paper giving an overview of how IBM's Watson works: https://www.aaai.org/Magazine/Watson/watson.php

In 2012 a team led by Geoffrey Hinton won the ImageNet competition by exploiting deep convolutional networks. This was soon followed by Dahl's team winning the Merck Molecular Activity Challenge using a deep neural network architecture. Yann LeCun's work on CNNs, Geoff Hinton's backpropagation and stochastic gradient descent approach to training, alongside Andrew Ng's large-scale use of GPUs, ignited accelerated progress in ML. This was frequently referred to as the unreasonable effectiveness of deep learning.

Following recent advances in fields such as image captioning, natural language processing, information retrieval and computer vision it is highly probable that current generation AI systems can pass high school exams such as SAT.

The Allen AI Institute has made significant progress in developing AI systems that can read, learn and express that understanding through question answering and explanation. Founded by Paul Allen, Microsoft's co-founder, the Allen AI Institute's singular focus, according to its mission, is to conduct high-impact research in the field of AI. Below is a news link covering their cognitive system passing high school exams: fortune.com/2015/09/21/computer-artificial-intelligence-math/

So far the Allen AI Institute has demonstrated a cognitive platform called GeoS that is capable of answering geometry questions as well as the average high school student, while another system called Aristo can answer high school science exam questions by leveraging information extraction alongside knowledge representation and reasoning models. You can access AAI's GeoS service here http://allenai.org/euclid/ and Aristo here http://allenai.org/aristo/

Meanwhile researchers working on the Todai project in Japan have demonstrated a cognitive system that is capable of passing the Tokyo University Mathematics entrance exam. My conclusion from the above examples is that possibly we already have AI that can sit for and pass high school exams.

",10913,,,,,1/6/2018 17:47,,,,0,,,,CC BY-SA 3.0 4936,2,,4709,1/7/2018 15:41,,0,,"

I'm thinking that you could write an AI that takes the question as input, weights it, and googles info based on the first layer of neurons, then takes the first two to three pages of results and spits out an answer. It would be a crapshoot, but maybe you could take the list of results, choose one using another layer, choose the info from the page using a third layer, then answer the question using that info.

",11995,,,,,1/7/2018 15:41,,,,0,,,,CC BY-SA 3.0 4945,2,,4917,1/8/2018 9:15,,0,,"

Theoretically it might be possible but practically it is not.

You can argue by using the analogy of a Turing machine. You can say that the Intel 8080 is Turing-complete, hence it can run any program, including a neural network, given infinite time and memory.

In spite of the above, you will face insurmountable challenges in implementing your system.

CPUs are designed to handle calculations in a sequential manner, while most AI algorithms are massively parallel. You need a GPU (or an AI ASIC) to process the algorithms in a massively parallel manner for a significant speedup.

Additionally, GPUs are excellent at floating-point math; floating-point arithmetic involves numbers with a variable number of decimal places, which is key to running neural networks. For example, an Intel Core i7 6700K is capable of 200 giga-FLOPs (floating point operations per second), while an NVIDIA GTX 1080 GPU is capable of about 8900 giga-FLOPs, which is a significant difference. (Tyler J 2017)

If you decide to use the intel 8080 (0.290 MIPS at 2.000 MHz), you will require millions of processors and billions of dollars just to compute at one gigaflop. You can follow this link to see the cost of computing over the years https://en.wikipedia.org/wiki/FLOPS

Another problem concerns RAM. To efficiently run a neural network you need to fully load it into RAM. It will be a huge challenge to squeeze a neural network into the 64 KB of memory that an Intel 8080 processor can address.

The network bandwidth problem will also be a huge bottleneck. Modern GPUs support high-speed technology to communicate between GPUs. For example, NVIDIA's NVLink has a peak speed of around 80 GB/s, while PCI-E 3.0 runs at around 30 GB/s. Without high interconnection bandwidth you will not achieve any speedup in spite of using a distributed system with many processors.

Additionally, you will face significant challenges in programming neural network algorithms for your 8080-based system. Most programmers today follow the standards of object-oriented programming, which enables code reuse, simplified design and maintenance. Besides, OOP languages such as Java, C++ and Python have libraries that significantly simplify the process of programming a neural network.

When the 8080 processor was designed back in 1974, OOP was not yet in mainstream use, and the programming tools of the time, i.e. compilers, would be considered archaic by today's standards. I mean, good luck debugging that system.

Last but not least, you need big data (or at least a substantial dataset) to train your neural network on. Without training on a big dataset your model will be ineffective. The 8080 supported around 200 KB of storage. For comparison, even the small MNIST dataset is roughly 50 MB uncompressed, and typical ML datasets are orders of magnitude larger. This means that your processor cannot support the storage necessary for any ML dataset.

For the above reasons my conclusion is that the 8080 processor provides insufficient resources necessary to implement any effective DL algorithm. Networking millions of them together will not provide any substantial speedup for a DL algorithm.

",10913,,,,,1/8/2018 9:15,,,,0,,,,CC BY-SA 3.0 4946,2,,4917,1/8/2018 10:04,,1,,"

The building block of a neural network is called a perceptron. It cannot be represented by a single transistor, because it must hold an arbitrary (float) value over multiple computational iterations, while a transistor is only binary and does not work as memory on its own.

Furthermore, the strength of a NN is in its flexibility, which you would lose if you were to bake it onto silicon. In a NN you can vary the:

  • number of layers
  • connections between units
  • activation functions
  • and many, many more meta parameters

The NNs, once trained on a particular problem, are really fast to make a prediction for a new sample. The slow and computationally heavy task is the training - and it's during the training that you need flexibility to mess with the model and the parameters.

You could bake a trained NN model on a chip, if you need the computation time of a prediction to be really fast i.e. in order of nanoseconds (instead of a millisecond or a second on a modern CPU). That will have a significant downside - you won't be able to ever update it with newer NN model.

",2997,,,,,1/8/2018 10:04,,,,2,,,,CC BY-SA 3.0 4948,2,,3908,1/8/2018 15:20,,0,,"

Although I (partly) agree with Nick Bostrom's view that Artificial Intelligence could in some ways be dangerous, we do not need new government bodies to control or regulate AI development.

We already have sufficient cyber laws that protect us against computer crimes such as cyber terrorism, cyberbullying, creating malware, identity theft, denial-of-service attacks, unauthorized access, etc. It is the duty of local law enforcement agencies and the FBI to prevent and investigate cybercrimes such as those listed above. Whether AI was used to perpetrate the crime is legally immaterial.

Although AI is a 'new' technology, we already have a rigorous criminal justice system within our governance structures that is well capable of handling any eventualities that may arise from AI or any other technological breakthroughs without being overwhelmed.

For example, if an AI causes a car accident, the manufacturer of the car can simply be charged with product liability for negligence. If an AI is defective or dangerous, we already have product liability and consumer protection laws and the relevant government agencies to implement them.

If an AI uses its intelligence to maneuver within the law to its own advantage, this by definition is not a crime; big corporations do this all the time to minimize their taxes. However, if necessary, the legislature can sit and pass a law criminalizing/banning this new activity.

A sovereign government already has enough powers and the necessary instruments to exercise that power. The creation of a new government agency would lead to the unnecessary duplication of responsibilities.

The best approach is simply for the relevant government agencies to adapt by playing a proactive role and modernizing their service delivery so that it is in sync with current developments within society. This is what everyone has to do. In reality, we do not need additional agencies. I would find agencies such as a Federal AI Agency or a Federal Blockchain Commission baffling and unproductive.

",10913,,,,,1/8/2018 15:20,,,,0,,,,CC BY-SA 3.0 4949,1,,,1/8/2018 16:58,,2,604,"

I am trying to understand if robotic process automation (RPA) is a field that requires expertise in machine learning.

Do the algorithms behind RPA use machine learning except for OCR?

",12013,,2444,,12/21/2021 12:01,12/21/2021 12:01,Is robotic process automation related to AI/ML?,,2,0,,,,CC BY-SA 4.0 4952,2,,4650,1/9/2018 9:34,,1,,"

AI could hold the key in automating and optimizing networks. On the subscriber side, ML and AI will assist telecom operators in profiling the subscribers. This will be achieved by analyzing network activity, conversion rate of offers and data usage trends.

Below are a few use cases and how they will transform the telecommunication sector. (Source H2o.ai blog https://www.h2o.ai/telecom/ )

Old generation telecom technologies.

  1. Reactive Maintenance
  2. Network optimization with human intervention
  3. Centralized intelligence
  4. Security attack repair
  5. Backlogged customer tickets

Future generation AI based telecom technologies.

  1. Predictive Maintenance
  2. Self-optimizing network
  3. Optimal network quality
  4. Intelligence at the edge
  5. Security attack prediction
  6. Improved customer experience through customer service chat bots.
  7. Speech and voice services for customer which allows users to explore media content by spoken word rather than remote control.
  8. Predictive maintenance which is the ability to fix problems with telecom hardware such as cell towers, power lines e.t.c before they happen by detecting signals that usually lead to failure.
",10913,,10913,,1/9/2018 10:30,1/9/2018 10:30,,,,0,,,,CC BY-SA 3.0 4953,1,8536,,1/9/2018 9:48,,4,487,"

Can the Viola-Jones algorithm be used to detect facial emotion? It was used in creating Haar cascade files for object and face detection, but what confuses me is whether it can be used to train for emotion detection.

If not, what algorithms can I use, and what are their mathematical bases (i.e., what mathematics should I be studying)?

",12021,,1671,,5/16/2019 19:02,5/16/2019 19:02,Viola Jones Algorithm,,2,1,,,,CC BY-SA 4.0 4955,1,,,1/9/2018 13:31,,2,176,"

I have a large dataset of skin images, each one associated with a hydration value (percentage).

Now I'm looking into predicting the hydration value from an image. My thinking: train a CNN on the dataset and evaluate the model with a mean square error regression.

First, does this sound like a sensible way to try this?

Second, I'd like to run the model on mobile. Can you recommend any examples with Caffe2 (or alternatively TensorFlow) or diagrams that might explain a similar task?

",12024,,2444,,5/10/2022 8:01,5/10/2022 8:01,Is it a good idea to train a CNN to detect the hydration value (percentage) in skin images and evaluate it with the MSE?,,1,0,,,,CC BY-SA 4.0 4956,1,,,1/9/2018 14:54,,2,386,"

Why do non-linear activation functions that produce values larger than 1 or smaller than 0 work?

My understanding is that neurons can only produce values between 0 and 1, and that this assumption can be used in things like cross-entropy. Are my assumptions just completely wrong?

Is there any reference that explains this?

",12026,,2444,,1/25/2021 0:06,1/25/2021 0:06,Why do non-linear activation functions that produce values larger than 1 or smaller than 0 work?,,2,1,,,,CC BY-SA 4.0 4957,1,,,1/9/2018 18:26,,2,92,"

Imagine we have 2 air-conditioning systems (AA) and 2 ""free cooling"" systems which mix external and internal air (FC) in a closed box which always tends to warm up. For each system, we have to find the turn-on and turn-off temperatures (for some hysteresis, let's say each in the range 20-40) to optimize the energy consumption.

As we don't know the relation between these parameters and the energy consumption (and we don't intend to know them), we treat the problem as a black-box function.

So far, the problem would be solvable via a Bayesian optimizer (e.g., with a Gaussian process surrogate and an acquisition function).

But there is a problem: the best configuration may change between seasons, and even days! A simple Bayesian optimizer could perhaps deal with these changes by limiting the data it takes into account to, for example, the last 15-30 days. But this would deal with the change AFTER the consumption has increased.

So, the idea is to introduce some contextual variables which would help the system anticipate these changes (e.g., the external and internal temperature, the rates of variation of the external and/or internal temperature, the weather prediction, whatever).

Also, some of the variables we can take into account might be internal to the system, which means that while they influence the best configuration, the actual configuration also influences these variables! And this becomes a reinforcement learning problem.

1) Is there a way (documented or experimental) to know which variables (both internal or external) influences the optimal configuration of these AA/FC systems?

2) Based on the first question, which would be the best approach?

2.1.) No features. This might be considered a multi-armed bandit problem with continuous reward. (FIX AFTER THE SCENARIO CHANGE, IF THERE IS A SCENARIO CHANGE)

2.2.) Only external features, to predict the scenario change. This might be considered a contextual multi-armed bandit problem. (FORESEE THE SCENARIO CHANGE)

2.3.) Consider only system-internal features. This can be considered a reinforcement-learning problem. (FIX THE SCENARIO CHANGE IMMEDIATELY)

2.4.) Consider both external and internal features. This can be considered a reinforcement-learning problem where some of the states are not influenced by the configuration. (FORESEE THE SCENARIO CHANGE, AND IF SOMETHING FAILS, FIX IT IMMEDIATELY)

",6114,,6114,,1/10/2018 8:50,1/10/2018 8:50,Which features and algorithm could optimize this air-conditioner problem?,,0,0,,,,CC BY-SA 3.0 4959,2,,4956,1/9/2018 20:47,,2,,"

Why wouldn't they work?

Each neuron's output is equal to a function over the sum of all its weights multiplied by their corresponding neurons. If that function is the sigmoid function, then the output is squashed into the range $[0,1]$. If the entire layer uses a softmax function, then the outputs of all neurons are squashed into $[0,1]$ and their sum equals 1. In other words, they represent a set of probabilities, which you can then optimize with cross-entropy (cross-entropy measures the difference between two probability distributions).

ReLU and ELU are simply other types of functions, whose output is not limited to the range $[0, 1]$. They are differentiable, like other activation functions, and so they can be used in any neural network.
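
For reference, two common examples of such activations are

$$\text{ReLU}(x) = \max(0, x), \qquad \text{ELU}(x) = \begin{cases} x & x > 0 \\ \alpha(e^x - 1) & x \le 0. \end{cases}$$

Both are unbounded above (and ELU can also dip slightly below zero, down to $-\alpha$), yet both are differentiable almost everywhere and train perfectly well with gradient descent.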

",7496,,2444,,1/24/2021 22:53,1/24/2021 22:53,,,,1,,,,CC BY-SA 4.0 4962,2,,4953,1/10/2018 5:54,,1,,"

I once tried the Viola-Jones algorithm to do that; it does not capture the subtle differences in the orientation of facial segments, which are important for detecting emotion. Features like HOG (available in OpenCV and many well-known image processing libraries) can extract better information from the face to classify emotion.

There are also many other approaches, including ANNs and pure rule-based approaches, but almost everywhere a good face alignment step becomes the most important aspect of the exercise. So I suggest exploring some facial alignment approaches and then features like HOG instead of Viola-Jones/Haar.
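
As a rough sketch of extracting HOG features with OpenCV (the window/block/cell sizes below are arbitrary example values, and the classifier on top is up to you):

    import cv2

    # Rough sketch: compute HOG features for an already aligned and cropped face image.
    face = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)
    face = cv2.resize(face, (64, 64))

    # winSize, blockSize, blockStride, cellSize, nbins - example values only
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    features = hog.compute(face)  # feature vector to feed into a classifier (e.g. an SVM)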

For the mathematics part, it is up to you whether to dive deep into the mathematics or just explore different approaches through code. A good understanding of linear algebra and a little geometry will help a lot.

Also if you are new to Machine Learning, understanding the basic algorithms might be relevant to you.

",11387,,,,,1/10/2018 5:54,,,,2,,,,CC BY-SA 3.0 4965,1,6418,,1/10/2018 16:52,,13,17628,"

I am working on a problem where I need to determine whether two sentences are similar or not. I implemented a solution using BM25 algorithm and wordnet synsets for determining syntactic & semantic similarity. The solution is working adequately, and even if the word order in the sentences is jumbled, it is measuring that two sentences are similar. For example

  1. Python is a good language.
  2. Language a good python is.

My problem is to determine that these two sentences are similar.

  • What could be the possible solution for structural similarity?
  • How will I maintain the structure of sentences?
",9428,,2444,,11/15/2019 20:25,11/15/2019 20:25,How do I compute the structural similarity between sentences?,,3,2,,,,CC BY-SA 4.0 4969,1,4974,,1/10/2018 20:05,,0,67,"

What is the difference between a histopathological image and a natural image when training a neural network?

",9560,,75,,1/27/2018 17:01,1/27/2018 17:01,Histopathological image vs. natural image,,1,0,,,,CC BY-SA 3.0 4970,2,,1859,1/11/2018 7:31,,1,,"

Although this model played an important role in contributing to our present understanding of NLP and NLU, it is no longer useful in production systems and currently no successful commercial product follows this approach.

In CDT the goal was to design an AI system that could draw logical inferences from sentences. In this system the goal was to make the meaning independent of the words used in the input.

CDT modeled sentences by using tokens such as: locations, times, real-world actions and real-world objects. However, as computational power became more common and less expensive, interest shifted to statistical models, which were now outperforming the previous rule-based systems.

The problem with rule based approaches such as CDT is that they require manual development of linguistic rules which can be costly and which usually don't generalize well to other languages.

On the other hand, statistical approaches use human language resources (multilingual textual corpora) more efficiently. Rather than using a rule based approach, statistical models make soft probabilistic decisions based on attaching real weights to the features making up the input data. (Wikipedia NLP)

This efficient use of human language resources leads to a model that is more accurate and robust especially when given unfamiliar input or input that contains errors. Statistical models also generalise well to other languages.

",10913,,,,,1/11/2018 7:31,,,,2,,,,CC BY-SA 3.0 4971,2,,4027,1/11/2018 8:24,,6,,"

According to Open AI's Greg Brockman, the Gym website never had a big impact and so was never maintained. This is the reason he gives for shutting down the website.

A read only export of the site was archived at https://gym.openai.com/read-only.html and if you attempt to access the old website through the url https://gym.openai.com you will be redirected to the Open AI gym github repository.

For a while a static copy of the Open AI leader-board was maintained on the below url https://scoreboard-site-1764008611.us-west-2.elb.amazonaws.com/ . However as of present the static site is also unreachable.

The Canadian AI startup montreal.ai had offered to maintain the gym website. However this offer was not taken up by open ai.

This issue was discussed in depth on the following threads

https://www.reddit.com/r/MachineLearning/comments/6zvlm2/d_openai_closing_down_gym_toolkit_website/?st=jca6ia3n&sh=48a0d8f3

https://github.com/openai/gym/issues/718

",10913,,,,,1/11/2018 8:24,,,,1,,,,CC BY-SA 3.0 4973,2,,3453,1/11/2018 15:33,,1,,"

Using an STN alone to predict the next frame assumes that there is some linear translation between the current frame and the next frame. In some domains this is true, but usually there are more complicated transitions from frame to frame (e.g., an occlusion, a new entity, lighting differences, etc.). So, although STNs may be useful for resizing and translating inputs for a CNN, they should be used together with other techniques when predicting a new frame of a sequence.

",4398,,,,,1/11/2018 15:33,,,,0,,,,CC BY-SA 3.0 4974,2,,4969,1/11/2018 19:08,,1,,"

Histopathological images differ from natural images in that the image that needs to be modeled may contain millions of pixels, whereas recognizing from a natural image that it shows a dog / house / certain person needs significantly less information to be extracted from the picture.

Histopathological imaging uses WSIs (whole slide images), which contain the tissue section as a whole. A WSI needs to be split into 256 x 256 pixel patches, and those are used for ROI (region of interest) hunting and other analysis one by one.

Histopathological images (WSIs) are still quite rare and mostly held privately. More open data would raise the accuracy of machine learning and increase the number of results that could be found in the data.

Complete source:

https://arxiv.org/pdf/1709.00786v1.pdf

",11810,,,,,1/11/2018 19:08,,,,0,,,,CC BY-SA 3.0 4975,1,,,1/11/2018 19:22,,3,1337,"

I'm currently writing the Alpha-Beta pruning algorithm for a board game. Now I need to come up with a good evaluation function. The game is a bit like snakes and ladders (you have to finish the race first), so for a possible feature list I came up with the following:

  • field index should be high
  • in the lower fields my fuel should be high, when coming to the end it should be low (maximum of '10' required to enter the goal)
  • all 'power-ups' must be spent to enter the goal, so prioritize them
  • if it is possible to enter the goal (a legit move), do it!

There could be some more for some special cases.

I've read somewhere that it is the best (and easiest) to combine them in a linear function, for example:

$$0.75 * i - 5 * p - 0.25 * |(f - \text{MAX_FIELD_INDEX}/i)|,$$

where

  • $i$ = field index
  • $p$ = power-ups
  • $f$ = fuel

Since I can't ask an expert and I'm not an expert by myself, I have nobody to ask if those parameters are good, if I've forgotten something or if I've combined the factors correctly.

The parameters aren't that big of a deal because I could use a genetic algorithm or something else to optimize them.

My problem and question is: What do I have to do to find out how to put together my features optimally (how can I optimize the function/parameter arrangement itself)?

",11585,,2444,,2/6/2021 20:11,2/6/2021 20:11,How do I write a good evaluation function for a board game?,,1,0,,,,CC BY-SA 4.0 4976,1,,,1/11/2018 22:07,,2,81,"

CIO NN

CIO NN stands for Controller Input Output Neural Network.

For this we have to redefine the neuron:

  • 2 Inputs
  • 2 Outputs
  • 4 Weights (each input and output have their own weights)
  • Internal Memory Cell (any byte or bit or block size with variable size)
  • Activation Function (defines what weights and what inputs activate this neuron)
  • Memory Storage Function (defines what and when this cell should store said memory or memory stream)
  • Memory Transpose Function (once activated, any stored memory that the activation function can trigger will be played/pushed into the neural network)
  • Forget Function (defines when and/or how and/or why these memories can be destroyed/removed, based on the activation function together with the memory states and any input states)

How would I implement this in the form of code? Please take note of the spec. (This is non-profit / GNU v3.)

This would look something like these:

These would be arranged like this:

Which we can build it into this:

it can be trained like this:

  • do normal NN training from the inputs to the input outputs like a hidden layer NN

then trained to be controlled like this:

  • then set the known NN dataset (inputs) to be corrected to actual or correct values via the CI (Controller Input), which will be output on the ""Input + Controller Input"" Output

by training like this:

  • you have a normal NN with hidden layers which can be trained (this can be done with CNNs) with backpropagation, a cost function (use the least amount of neurons), etc.
  • now you can allow the CIO NN to retrain / teach itself with supervised or unsupervised learning.
  • you can combine the ""Input Output"" and the ""Input + Controller Input"" Output with another NN, which can then connect with this NN in a similar way to how a neuron connects to a neuron
",1282,,1282,,1/12/2018 21:54,1/12/2018 21:54,How would I implement this New Type of NN,,0,0,,,,CC BY-SA 3.0 4977,2,,2980,1/12/2018 3:49,,6,,"

IMHO the idea of invalid moves is itself invalid. Imagine placing an ""X"" at coordinates (9, 9). You could consider it to be an invalid move and give it a negative reward. Absurd? Sure!

But in fact your invalid moves are just a relic of the representation (which itself is straightforward and fine). The best treatment of them is to exclude them completely from any computation.

This gets more apparent in chess:

  • In a positional representation, you might consider the move a1-a8, which only belongs in the game if there's a Rook or a Queen at a1 (and some other conditions hold).

  • In a different representation, you might consider the move Qb2. Again, this may or may not belong to the game. When the current player has no Queen, then it surely does not.

As the invalid moves are related to the representation rather than to the game, they should not be considered at all.
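
One common way to realise this in code is to mask the network's outputs so that invalid moves can never be selected; a minimal sketch (the variable names are placeholders):

    import numpy as np

    def best_valid_move(action_values, valid_mask):
        # action_values: a score for every move in the representation
        # valid_mask: boolean array, True only for moves that are legal in the current state
        masked = np.where(valid_mask, action_values, -np.inf)  # invalid moves can never win the argmax
        return int(np.argmax(masked))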

",12053,,,,,1/12/2018 3:49,,,,1,,,,CC BY-SA 3.0 4978,2,,4975,1/12/2018 4:09,,1,,"

Based on your description, I'd maximize the following terms:

  • i
  • -max(f - 10 - (MAX_FIELD_INDEX - i), 0) - assuming consumption of one fuel per field; this becomes negative when you have too much fuel
  • a similar function of p, as spending them gets more important when approaching the goal

As having fuel is probably a good thing in the beginning, you could use a term like f. Similarly for the ""power packs"" (or are they rather ""weakness packs""?).

I'd combine the terms using a linear function like you did and let it optimize. You may need more such terms. Maybe it's simpler to get rid of the power packs when you have enough fuel? Then something like -max(p-f, 0) may help.
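
A sketch of such a linear combination (w1..w4 are the coefficients you would then optimize, e.g. with a genetic algorithm; MAX_FIELD_INDEX is whatever your board uses):

    # Sketch of a linear evaluation function over the terms above; w1..w4 are to be optimized.
    def evaluate(i, f, p, w1, w2, w3, w4, MAX_FIELD_INDEX):
        fuel_excess = max(f - 10 - (MAX_FIELD_INDEX - i), 0)  # too much fuel near the goal
        return (w1 * i
                - w2 * fuel_excess
                - w3 * max(p - f, 0)   # power packs you may struggle to spend
                + w4 * f)              # having fuel is good early on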

You may generate some ad-hoc expressions or add some products of your terms as new terms. You may want to do this after the coefficients of the simpler terms have already been optimized (so you help the more complex optimization with a good starting point).

",12053,,,,,1/12/2018 4:09,,,,3,,,,CC BY-SA 3.0 4979,1,,,1/12/2018 13:05,,6,490,"

At my work, we're currently doing some research into data visualisation for highly interconnected data, basically graphs.

We've been implementing all sorts of different layouts and trying to see which fits best, but, due to the nature of the problem - it's a visual thing - we needed to come up with some automated way to analyse the result, so we came up with a bunch of metrics to analyse our layouts.

So far, the most important metrics have been information density, edge crossings, node overlap and edge length. This gives us some good results and has allowed us to fine-tune our layout algorithms.

However, when a new graph is loaded, we noticed that humans still tend to fiddle a lot with the structure of the layout. Moreover, it seems that our metrics do a good job of predicting where a user is likely to mess around. Graph layout is a tough problem, so after some discussion, the idea came up of just throwing data at a neural network and letting it figure it out.

None of us are experts, or even experienced in AI. I'm the one with the most contact with AI methods. All I've ever done were simple NN models, no convolution, feedback or feedforward or anything of the sorts, but it seems to me this should be doable.

Maybe it's my lack of expertise here, but I haven't been able to find any good information on this sort of application for NNs, so I was hoping someone here could point me in the right direction.

  • What sort of model is best for such a situation? and why? Is this actually possible or would it be super complicated? Has anyone ever tried something like this before?

If it helps, our input data (for v1, I guess) would be two arrays of variable length, one for the nodes and another for the relationships between them and the output data would be an array with the node XY coordinates.

",12070,,2444,,7/30/2021 12:35,7/30/2021 12:35,Neural network for data visualization,,2,0,,,,CC BY-SA 4.0 4984,1,4985,,1/13/2018 3:48,,3,638,"

I have some episodic datasets extracted from a turn-based RTS game, in which the current actions leading to the next state don't determine the final solution/outcome of the episode.

The learning is expected to terminate at a final state/termination condition (when it wins or loses) for each episode and then move on to the next episode in the dataset.

I have been looking into Q-learning, Monte Carlo and SARSA, but I am confused about which one is most applicable.

If any of the above algorithms are implemented, can a reward of zero be given in preliminary states before the termination state, at which it will be rewarded with a positive/negative (win/loss) value?

",12081,,55245,,6/27/2022 16:12,6/27/2022 16:12,Which Reinforcement Learning algorithms are efficient for episodic problems?,,1,0,,,,CC BY-SA 4.0 4985,2,,4984,1/13/2018 7:55,,3,,"

When applying techniques like SARSA(which are on-policy), one needs to have control over a simulator. If one is able to access only the episodic dataset, then the only choice is to opt for Q-learning or Off-policy Monte-Carlo(or off-policy methods in general).

Can a reward of zero be given in preliminary states before termination state of each episodes at which it will be rewarded with a positive/negative (win/loss) value?

With regards to the above question, the answer is yes. The task would be a sparse reward task with a reward occurring only at the last transition. The issue that one faces in a sparse reward task is that of slow convergence(or even lack of convergence).

Some guidelines for tackling sparse reward tasks are as follows :

  1. Monte-Carlo, and n-step Q-learning are preferred over Q-learning/SARSA.

    Consider the 10-step chain MDP, where the only reward is +1 when the transition from s10 to END occurs.

    Let us consider the first episode in training to be start->s1->s2->s3->....->s10->end. The Q-learning update would result in no sane updates to states s1, s2, s3, ..., s9, because the Q-value of the next state is still a random (initial) value. The only state with a sane update is s10.

    However, if we use n-step Q-learning or a Monte-Carlo based update, the Q-values of all the states are updated in a logical manner, since the reward from the end of the episode propagates to all states in the episode.

    n-step Q-learning would be ideal, since by adjusting the value of n one can trade off the benefits of Monte-Carlo methods (described above) and Q-learning (low variance). (A small sketch of the n-step return follows this list.)

  2. The use of pseudo/auxiliary rewards.

    This is not something necessarily recommended since the addition of new reward-structures can cause unintended behaviour. On the flip-side, it could lead to faster convergence.

    A simple example is as follows: Consider a game of chess, where the only reward is at the end of the game. Since the game of chess has very long episodes, one can introduce the following reward-structure instead :

    • +100 for winning the game
    • +1 for capturing any piece on the board

    Hence, the pseudo reward may provide some direction for learning. Note the different scales in the two rewards(100:1). This is necessary because the primary goal of the task should not shift to capturing pieces, but to win the match.

",12084,,12084,,1/13/2018 20:28,1/13/2018 20:28,,,,0,,,,CC BY-SA 3.0 4986,1,,,1/13/2018 13:52,,4,82,"

I have a dataset with 223,586 samples, of which I used 60% for training and 40% for testing. I used 5 classifiers individually: SVM, LR, decision tree, random forest and boosted decision trees. SVM and LR performed well, with accuracy close to 0.9 and recall also 0.9, but the tree-based classifiers reported an accuracy of 0.6. After careful observation, I found out that SVM and LR did not predict the labels of 20,357 samples identically. So can I apply voting to resolve this conflict with respect to the prediction outcome? Can this conflict be due to an imbalanced dataset?

",12088,,16909,,8/8/2018 20:45,8/8/2018 20:45,Can I combine two classifiers that make different kinds of errors to get a better classifier?,,1,3,,,,CC BY-SA 4.0 4987,1,,,1/13/2018 15:02,,6,1338,"

Some argue that humans are somewhere along the middle of the intelligence spectrum, some say that we are only at the very beginning of the spectrum and there's so much more potential ahead.

Is there a limit to the increase of intelligence? Could it be possible for a general intelligence to progress infinitely, provided enough resources and armed with the best self-recursive improvement algorithms?

",12089,,2444,,4/19/2019 15:22,4/19/2019 15:22,Is there a limit to the increase of intelligence?,,5,0,,,,CC BY-SA 4.0 4988,1,4992,,1/13/2018 15:14,,5,194,"

I'm quite new to neural networks, and I recently built a neural network for number classification on vehicle license plates. It has 3 layers: an input layer for 16*24 (384 neurons) number images at 150 dpi, a hidden layer (199 neurons) with a sigmoid activation function, and a softmax output layer (10 neurons), one for each digit 0 to 9.

I'm trying to expand my neural network to also classify letters on license plates. But I'm worried that if I simply add more classes to the output - for example, adding 10 letters to the classification, for a total of 20 classes - it would be hard for the neural network to separate the features of each class. I also think it might cause a problem when the input is a number and the neural network wrongly classifies it as the letter with the biggest probability, even though the sum of the probabilities of all the number outputs exceeds it.

So I wonder if it is possible to build a hierarchical neural network in the following manner:

There are 3 neural networks: 'Item', 'Number', 'Letter'

  1. The 'Item' neural network classifies whether the input is a number or a letter.

  2. If the 'Item' neural network classifies the input as a number (letter), then the input goes through the 'Number' ('Letter') neural network.

  3. Return the final output from the 'Number' ('Letter') neural network.
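
To make the pipeline above concrete, a rough sketch of the inference step I have in mind (item_net, number_net and letter_net are placeholder names for the three networks):

    def classify_character(img):
        # Stage 1: decide whether the crop is a number or a letter.
        is_number, is_letter = item_net.predict(img)
        # Stage 2: hand the crop to the specialised network.
        if is_number > is_letter:
            return number_net.predict(img).argmax()  # 10 classes: digits 0-9
        return letter_net.predict(img).argmax()      # letter classes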

And the learning mechanism for each network is as follows:

  1. The 'Item' neural network learns from all images of numbers and letters, so there are 2 outputs.
  2. The 'Number' ('Letter') neural network learns only from images of numbers (letters).

Which method should I pick to get better classification? Simply add 10 more classes, or build hierarchical neural networks with the method above?

",12090,,,,,1/17/2018 3:25,Is it better to make neural network to have hierchical output?,,3,0,,,,CC BY-SA 3.0 4989,1,,,1/13/2018 15:16,,0,666,"

What is the relation between back-propagation and reinforcement learning?

",9560,,2444,,12/20/2021 22:50,12/20/2021 23:12,What is the relation between back-propagation and reinforcement learning?,,1,0,,,,CC BY-SA 4.0 4990,1,,,1/13/2018 16:20,,1,698,"

I play a racing game called Need For Madness ( some gameplay: https://www.youtube.com/watch?v=NC5uFZ-t0A8 ). NFM is a racing game, where the player can choose different cars and race and crash the other cars, and you can play on different tracks too. The game has a fixed frame rate, so you can assume that the same sequence of button presses will always arrive at the exact same position, rotation, velocity, etc. of the car.

I want to make a bot which could race faster than I can. What would be the best way to go about doing this? Is this problem even suited for deep learning?

I was thinking I could train a neural network where the input would be the current world state (position of the player, positions of the checkpoints you have to pass through, and all the obstacles), and the output would be an array of booleans, one for each button. During a race, I could then keep forward-propagating from the input to the booleans. However, I'm not so sure what I would do after the race is over. How do I backpropagate after the race to make the NN less likely to make bad moves?

",12091,,1671,,1/17/2018 0:41,1/17/2018 0:41,How to teach an AI to race optimally in a racing game?,,0,0,,,,CC BY-SA 3.0 4991,1,4995,,1/13/2018 20:13,,0,1874,"

I don't understand why Google Translate translates the same text in different ways.

Here is the Wikipedia page of the 1973 film "Enter the Dragon". You can see that its traditional Chinese title is: 龍爭虎鬥. Google translates this as "Dragons fight".

Then, if we go to the Chinese Wikipedia page of this film and search for 龍爭虎鬥 using Ctrl-F, it will be found in several places:

But if we try to copy the hyperlink of the Chinese page into Google Translate, it is translated as the word "tiger" from somewhere:

Even more, if we try to translate the Chinese page into English using the built-in Chrome translation, it is sometimes translated as "Enter the Dragon", in the English manner:

Why does it give different translations for the same Chinese text here?

",12095,,2444,,1/21/2021 21:46,1/21/2021 21:46,Why does Google Translate produce two different translations for the same Chinese text?,,4,0,,,,CC BY-SA 4.0 4992,2,,4988,1/13/2018 20:25,,0,,"

The 'Item' network should be recognizing regional registration plate standards and the typical positions of characters within those standards. Otherwise, your 'Item' phase has as hard a task as the whole system, because letters and numbers do not differ so much on plates that you could choose between the two right away without first evaluating the exact object in each case.

Side note: correct me if that is not true for your local plate system.

",11810,,,,,1/13/2018 20:25,,,,1,,,,CC BY-SA 3.0 4993,2,,4991,1/13/2018 21:27,,2,,"

As you know, Google Translate works based on statistical methods. In statistical translation, many parameters can influence the final result. One of these parameters is the co-occurrence of words in a sentence. Hence, as this translator learns languages from different human utterances and pre-written translations, and different parameters in the text are involved in this learning, it is possible for one word to have a different meaning in different texts.

",4446,,,,,1/13/2018 21:27,,,,0,,,,CC BY-SA 3.0 4994,2,,4991,1/13/2018 22:23,,1,,"

The Chinese word 'fu', depending on its intonation (tone) mark, can mean either happiness, husband or tiger. Without the correct intonation notation, it may be translated from Chinese into English as 'tiger'.

The movie title has the Chinese 'happiness' character, but Google mixes it up as 'tiger'.

",11810,,,,,1/13/2018 22:23,,,,0,,,,CC BY-SA 3.0 4995,2,,4991,1/13/2018 23:48,,5,,"

It's not quite clear what you are asking. So I'll answer in separate parts.

Why is the translation different from the official title?

It could be simply because machine translation is not perfect, or our human translator took some creative liberties when translating. In this case it seems to be both.

Note that 龍爭虎鬥 properly translated doesn't mean either ""Dragons fight"" or ""Enter the dragon"". Literally translated, it means ""dragon compete tiger fight"". It belongs to a family of well-formed idioms called ""Chengyu""; this one describes a situation where there is fierce fighting or competition.

So you can see that neither translations fit.

Why does Google Translate give me different translations on the same phrase?

Context matters when we read! So translating a phrase in isolation doesn't guarantee that the same phrase has the same meaning/translation in all other parts of the text.

For example, green is a color, but the word ""green"" can also be used as in ""Alice is green with envy"" or ""Bob has green thumbs"", in neither of which does the word ""green"" refer to the color.

Considering the technical side of things, Google Translate probably uses some kind of RNN in its pipeline. RNNs are influenced by past states, meaning that what the network outputs now, as a function of what it reads in, depends on the RNN's past state. This is similar to the context issue addressed above.

",6779,,,,,1/13/2018 23:48,,,,0,,,,CC BY-SA 3.0 4996,2,,4989,1/14/2018 0:39,,6,,"

Backpropagation is a subroutine often used when training Artificial Neural Networks with a Gradient Descent learning algorithm. Gradient Descent requires the computation of the error gradient, i.e. derivatives, of a cost function with respect to the network parameters. BP allows you to find this gradient a lot faster than using naive methods.

Reinforcement Learning refers to inferring "optimal" behavior, i.e. a strategy, of an agent maximizing some goal in an environment. Depending on the specific RL algorithm, BP may be employed to adjust the parameters of a function approximator used to represent aspects of the environment and/or the agent.

",12101,,2444,,12/20/2021 23:12,12/20/2021 23:12,,,,0,,,,CC BY-SA 4.0 4997,2,,4988,1/14/2018 0:49,,3,,"

Just use one network with a larger Softmax output layer and more hidden units. If you have enough training data, it will work just fine. In fact it could emulate the architecture you propose.
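For illustration, here is a minimal sketch of that single-network approach, assuming Keras and the dimensions from the question (16x24 input, one hidden layer, 20 classes for 10 digits plus 10 letters); the layer sizes and optimizer are illustrative choices, not prescriptions.

```python
# A minimal sketch, assuming Keras; sizes match the question's description.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(199, activation='sigmoid', input_shape=(16 * 24,)),  # hidden layer
    Dense(20, activation='softmax')   # 10 digits + 10 letters in one output layer
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)  # x_train, y_train are placeholders
```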

",12102,,,,,1/14/2018 0:49,,,,0,,,,CC BY-SA 3.0 4998,2,,4987,1/14/2018 7:27,,2,,"

To have a ""maximum achievable intelligence"", first of course you have to define ""intelligence"" well enough to be able to rank things by intelligence. There is no widely-supported theory that is able to do so.

You might like to look into AIXI, as described by Marcus Hutter in a video lecture. It is an attempt to mathematically formalise intelligent agents that make optimal decisions. Here is another written introduction. Of course this is only one of many possible frameworks to describe intelligent agents.

One interesting implication is that AIXI implies intelligence - in terms of the ability to learn from and exploit an environment - is upper bounded. In principle there is a ceiling due to uncertainty about what can be inferred from the data that a rational agent possesses.

However, this ceiling refers to only specific abilities to extract actionable information from data that the agent has access to in order to solve decision problems. There is an open question about how much data can be collected, stored and processed by any entity, and this ability to acquire and retrieve relevant knowledge would be viewed by many as part of the ""intelligence score"" when comparing agents.

There are theoretical limits to computation from physics, e.g. some are based on the fact that it fundamentally requires energy, the energy has a mass equivalence, and enough concentrated mass would form a black hole. This sets a high upper bound, and it is likely that real-world structural and design issues will set in way before this limit. However, combined with the above limits on decidability and practical access to data, it does seem there should be a ceiling.

",1847,,,,,1/14/2018 7:27,,,,4,,,,CC BY-SA 3.0 4999,2,,4965,1/14/2018 8:31,,5,,"

Firstly, before we commence I recommend that you refer to similar questions on the network such as https://datascience.stackexchange.com/questions/25053/best-practical-algorithm-for-sentence-similarity and https://stackoverflow.com/questions/62328/is-there-an-algorithm-that-tells-the-semantic-similarity-of-two-phrases

To determine the similarity of sentences, we need to consider what kind of data we have. For example, if you had a labelled dataset, i.e. similar sentences and dissimilar sentences, then a straightforward approach would be to use a supervised algorithm to classify the sentences.

An approach that could determine sentence structural similarity would be to average the word vectors generated by word embedding algorithms, e.g. word2vec. These algorithms create a vector for each word, and the cosine similarity among them represents the semantic similarity among the words. (Daniel L 2017)

Using word vectors we can use the following metrics to determine the similarity of words.

  • Cosine distance between word embeddings of the words
  • Euclidean distance between word embeddings of the words

Cosine similarity is a measure of the similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them. The cosine angle is the measure of overlap between the sentences in terms of their content.

The Euclidean distance between two word vectors provides an effective method for measuring the linguistic or semantic similarity of the corresponding words. (Frank D 2015)
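To make the word-vector approach concrete, here is a minimal sketch (my own, not from the cited sources) that averages word vectors per sentence and compares the results with cosine similarity; `word_vectors` is assumed to be a dict mapping words to numpy arrays, e.g. loaded from a pre-trained word2vec model.

```python
import numpy as np

def sentence_vector(sentence, word_vectors):
    # Average the embeddings of the words we have vectors for.
    vectors = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vectors, axis=0) if vectors else None

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Example usage (assuming both sentences contain known words):
# v1 = sentence_vector('the cat sat on the mat', word_vectors)
# v2 = sentence_vector('a cat is sitting on a rug', word_vectors)
# print(cosine_similarity(v1, v2))
```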

Alternatively you could calculate the eigenvector of the sentences to determine sentence similarity.

Eigenvectors are a special set of vectors associated with a linear system of equations (i.e. matrix equation). Here a sentence similarity matrix is generated for each cluster and the eigenvector for the matrix is calculated. You can read more on Eigenvector based approach to sentence ranking on this paper https://pdfs.semanticscholar.org/ca73/bbc99be157074d8aad17ca8535e2cd956815.pdf

For source code Siraj Rawal has a Python notebook to create a set of word vectors. The word vectors can then be used to find the similarity between words. The source code is available here https://github.com/llSourcell/word_vectors_game_of_thrones-LIVE

Another option is a tutorial from O'Reilly that utilizes the gensim Python library to determine the similarity between documents. This tutorial uses NLTK to tokenize, then creates a tf-idf (term frequency-inverse document frequency) model from the corpus. The tf-idf is then used to determine the similarity of the documents. The tutorial is available here https://www.oreilly.com/learning/how-do-i-compare-document-similarity-using-python

",10913,,10913,,1/14/2018 19:32,1/14/2018 19:32,,,,1,,,,CC BY-SA 3.0 5000,1,5007,,1/14/2018 12:43,,2,1880,"

I am new to neural networks, I've only started studying and learning about the subject a year ago, and I just started building my first neural network.

The project is a little bit ambitious: a browser extension for children's safety. It checks for sexual or abusive content and replaces that content with a placeholder; the user has to enter a password to show the original content.

I didn't find a dataset online, so I decided to build my own training dataset. I started by writing a web crawler that collects images while applying data augmentation techniques. It basically resizes images (to 95x95), crops them, rotates them, changes colors, adds blur, converts to black and white, adds noise, etc.

The problem is that after applying these techniques, I noticed that some images are not even recognizable by a human subject.

I mean that even though I know that the picture contains sexual content, it doesn't even appear to be sexual anymore.

So, do I have to label it as sexual or not sexual?

Notice that it's easier for me to label it as sexual: since every image produces about 50 edited images, I'd only have to label the original image, and all 50 edited images would then get the same label. Is it okay to do just that?

This is a sample of what I get after doing data augmentation, notice that some pictures are not recognizable by humans.

For example, look at the result after editing an image's hue and saturation; a human can't recognize this result. Is it okay to label it as not sexual?

I wouldn't recognize the picture on the right if I didn't see the original one.

I also tested this on human subjects (my brothers), they didn't recognize the squirrel on the right.

",12113,,2444,,12/20/2020 12:37,12/20/2020 12:37,How to label edited images after data augmentation?,,1,0,,,,CC BY-SA 4.0 5003,2,,15,1/14/2018 16:51,,3,,"

It depends on how the test is given. For example, when people claimed that a machine had successfully passed the Turing Test a few years ago, the criterion was pretty weak. It only had to fool 30% of the people for 5 minutes. That's not much of a test. To put this in perspective, you probably wouldn't detect schizophrenia, autism, learning disabilities, or dementia with this criterion.

In spite of the hype, the current AI's can be detected 100% of the time using fairly simple questions.

",12118,,,,,1/14/2018 16:51,,,,1,,,,CC BY-SA 3.0 5005,1,5028,,1/14/2018 19:11,,2,204,"

I was watching a documentary on Netflix about AlphaGo, and at one point (~1:10:16 from the end), one of the programmers uses the term ""heavy node,"" which I assume has to do with neural networks. I did a little bit of research but couldn't find anything on what that term means. The closest I could get was this wikipedia page on Heavy path decomposition: https://en.wikipedia.org/wiki/Heavy_path_decomposition, which seemed like it could be somewhat related, but I wasn't sure how exactly. Has anyone heard of this term being used? Does anyone know what it means?

For context, in the documentary the line is that if it (the network/player) creates something new not in the heavy node, then they don't see it.

",12119,,,,,1/16/2018 16:31,What is a heavy node in neural networks?,,1,1,,,,CC BY-SA 3.0 5007,2,,5000,1/14/2018 19:36,,1,,"

Yes, you should label it the same. But more importantly, you need to make sure that each perturbation of the image doesn't change some important characteristic of the image.

Consider training an apple classifier. If you plan to augment data by altering the RGB values, you need to be wary that you might cause issues in classification tasks where color is instrumental, say, Granny Smith vs. Fuji apples. If you still really want to augment data in this way, consider perturbing by smaller amounts each time (a small sketch is shown below).
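For example, a "small" colour perturbation might look like the sketch below (a toy illustration using numpy; the +/- 5% jitter factor is my own illustrative value, not a recommendation from this answer).

```python
import numpy as np

def small_color_jitter(image, max_shift=0.05):
    # image: HxWx3 float array in [0, 1]; scale each channel by a small random factor.
    factors = 1.0 + np.random.uniform(-max_shift, max_shift, size=3)
    return np.clip(image * factors, 0.0, 1.0)
```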

Or consider an apple detector. An apple still ""looks like an apple"" when viewed at an angle, through a mirror, or from afar, but probably not when viewed through a carnival mirror.

So ask yourself: is an image still NSFW if the colors are changed?

However, as a personal anecdote I don't think augmenting data by changing the color channel is a good idea. Also note that training on augmented data by altering the color channels as a whole should be more or less equivalent to training in B/W. Why?

",6779,,,,,1/14/2018 19:36,,,,4,,,,CC BY-SA 3.0 5009,2,,3400,1/15/2018 4:34,,2,,"

So you want your network to represent those 3 values at each step as a single composite value? I can't think of any better way than using 3 LSTM units and attaching them to the same write and read nodes of the enclosing network. In other words, your assumption that it makes sense to keep those 3 values together gets hardcoded into your network by making the 6 connections (3 connected to read and 3 to write) share their weights. Usually researchers leave it to the network to decide whether to keep such composite values together or separate (by learning the read and write weights through backpropagation), and sometimes the network chooses to keep them together on its own, as in https://youtu.be/93rzMHtYT_0?t=531 (you can see the network reset a whole group of LSTM units simultaneously).

",9803,,,,,1/15/2018 4:34,,,,0,,,,CC BY-SA 3.0 5010,1,5038,,1/15/2018 10:31,,0,111,"

In recent years, China has made rapid progress in manufacturing and scientific research, as evidenced by their successful teleportation of a single quantum entangled photon to a satellite in orbit.

My question is, what major contributions have Chinese AI researchers made in the field of Artificial Intelligence?

",10913,,2444,,12/18/2021 18:12,12/18/2021 18:14,What noteworthy contributions have Chinese AI researchers made in the field of artificial intelligence?,,1,0,,,,CC BY-SA 4.0 5012,2,,3472,1/15/2018 13:03,,0,,"

Initially, spam detection relied on simple rule-based techniques to sort out spam. However, following Paul Graham's famed article 'A Plan for Spam', the Naive Bayes approach became very popular, to the point that it became regarded as the baseline for dealing with spam.

However, following breakthroughs in deep learning, researchers have now turned their focus to neural networks to help them deal with the perennial problem of spam emails. Google recently reported that introducing NNs to Gmail's spam filters took them from 99.5% to over 99.9% accuracy, suggesting that neural networks, especially when used in conjunction with Bayesian classification, may be effective for enhancing spam filters. You can refer to the link below to read about Google's success story https://www.wired.com/2015/07/google-says-ai-catches-99-9-percent-gmail-spam/

Developing a spam filter using neural networks is basically a classification problem. You need to follow the steps below to develop such a system. (Nikhil B 2016)

  • Collect a dataset of spam and legitimate email messages and label it. You can find email and spam datasets here http://csmining.org/index.php/spam-email-datasets-.html
  • Process these messages with feature extraction and vectorising techniques, e.g. a tf-idf vectorizer, word2vec, bag-of-words, etc.
  • Once you have vectorised the dataset successfully, apply a supervised learning NN algorithm, e.g. a radial basis network or a multi-layer perceptron (MLP) trained with backpropagation.
  • Train your labelled dataset on the neural network. Once training is complete, you can use cross validation to calculate the precision of your trained model using the test dataset. A minimal sketch of this pipeline is shown below.
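Here is a minimal sketch of the vectorise-then-classify pipeline described in the steps above, assuming scikit-learn; an MLPClassifier stands in for the neural network, and `emails` and `labels` (1 = spam, 0 = legitimate) are placeholders assumed to be loaded from one of the datasets linked above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(emails, labels, test_size=0.2)

vectorizer = TfidfVectorizer(stop_words='english')
X_train_vec = vectorizer.fit_transform(X_train)   # fit the vectorizer on training data only
X_test_vec = vectorizer.transform(X_test)

clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300)
clf.fit(X_train_vec, y_train)
print('Test accuracy:', clf.score(X_test_vec, y_test))
```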

Some of the advantages and disadvantages of using NNs for spam detection compared to other methods include:

  • Neural networks have a higher accuracy of identifying spam as demonstrated by google.
  • They have a lower false positive rate compared to other methods such as rule based techniques.
  • Their main disadvantage is that they require specialised computing hardware to deploy.

Some old influential papers published in the field include.

Machine Learning Techniques in Spam Filtering (2004) http://ats.cs.ut.ee/u/kt/hw/spam/spam.pdf

Detecting Spam Blogs: A Machine Learning Approach (2006) https://www.aaai.org/Papers/AAAI/2006/AAAI06-212.pdf

A review of machine learning approaches to Spam filtering (2009) https://www.sciencedirect.com/science/article/pii/S095741740900181X

",10913,,10913,,1/15/2018 19:49,1/15/2018 19:49,,,,0,,,,CC BY-SA 3.0 5016,2,,4624,1/15/2018 19:46,,3,,"

Although the above statement uses important analogies to communicate the technical advances made by DeepMind in the development of AlphaGo, it is inaccurate and should be taken skeptically.

Firstly, although AlphaGo was trained on specialized hardware, such as high-end NVIDIA GPUs and custom Google TPUs, it should be noted that it can run on a regular desktop, although it won't be as powerful as the distributed version of AlphaGo. Additionally, anyone with a laptop can access a similar amount of computing resources in the cloud at the touch of a button.

A version of AlphaGo did indeed run on 1202 CPUs and 176 high-end GPUs, as reported by Nature, which roughly translates to a system 1000 times as powerful as a commodity laptop. But we also need to consider other important factors, such as Moore's law, which suggests that we could have a computer as fast as 1202 CPUs in our pockets within 20 years, not several millennia.

Furthermore, the latest version of AlphaGo, AlphaGo Zero, trained on a single server with only 4 TPUs. Considering that the latest generation of smartphones, such as the iPhone X and Huawei's Kirin chipset, come packed with specialised AI chips, I will not be surprised if a similar reduction in form factor is achieved in commodity desktop computers once new models packed with AI chips are introduced.

I respect and acknowledge the technical achievements of Demis Hassabis and the DeepMind team in developing a system as powerful as AlphaGo. However, I believe analogies such as the one used above are inaccurate and misrepresent the facts.

",10913,,,,,1/15/2018 19:46,,,,2,,,,CC BY-SA 3.0 5019,1,,,1/16/2018 7:11,,1,1483,"

I have a pedestrian dataset and would like to estimate human height in video surveillance footage using person detection techniques like YOLO (Darknet) or SSD (Single Shot Detector). Would this technique work? Also, the videos that I have are in a constrained environment with good illumination. The idea is to get the coordinates from the bounding box and try to estimate the pixel height. After getting the pixel height, some correlation could be estimated between pixel height and real-world height. Note that I won't be using camera calibration.

",11800,,11800,,1/16/2018 9:49,3/14/2019 20:15,Human Height estimation using person detection techniques,,1,5,,,,CC BY-SA 3.0 5020,2,,4076,1/16/2018 8:42,,2,,"

This task falls within the overlapping fields of information extraction and pattern mining. Information extraction involves automatically extracting instances of specified relations from data, while pattern mining involves using data mining algorithms to discover interesting, unexpected and useful patterns between data in databases (Philippe F).

In your question you stated that you experimented with Markov models with poor results. A better approach, if you prefer working with Markov models, would be to use hierarchical Markov models. These have multiple 'levels' of states, which can describe input sequences at different levels of granularity. Hierarchical Markov models are good at categorizing human behavior at various levels of abstraction, e.g. a person's location in a room can be further interpreted to determine more complex information, such as what activity the person is performing.

However my recommendation is that you implement random forest classifiers for this problem. Random forests provide excellent classification accuracy with a relatively simple implementation. Additionally random forests provide the ability to inspect trees for tweaking parameters to improve accuracy. You can also use cross validation to evaluate your model and calculate its accuracy. Consider using Python's Scikit-learn library's implementation of random forest classifier for this analysis.
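As a rough illustration (my own sketch, assuming scikit-learn), the random-forest approach could look like this; `X` is a placeholder feature matrix built from the logged actions (e.g. time between clicks, hover counts, completion time) and `y` the behaviour labels.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)               # 5-fold cross validation
print('Mean accuracy: %.3f (+/- %.3f)' % (scores.mean(), scores.std()))

clf.fit(X, y)
print('Feature importances:', clf.feature_importances_)  # useful for inspecting and tweaking the model
```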

In your json code action package you have declared metrics such as DRAG_START, TRASH_SYMBOL, OPEN and CLOSE. My suggestion is that for your model to be accurate you also need to declare lower level actions such as: time between clicks, change in the direction of mouse motion, screen region hover count, task completion time and time between a click and a succeeding mouse movement.

For further reference I recommend that you look at the papers below which I found useful and relevant to your question.

Hierarchical Hidden Markov Models for Information Extraction https://www.biostat.wisc.edu/~craven/papers/ijcai03.pdf

Detecting Abnormal User Behavior Through Pattern-mining Input Device Analytics https://www.ignacioxd.com/files/bib/Dominguez2015-Concentration.pdf

Hierarchical Hidden Markov Model in Detecting Activities of Daily Living in Wearable Videos for Studies of Dementia https://arxiv.org/ftp/arxiv/papers/1111/1111.1817.pdf

",10913,,10913,,1/16/2018 8:48,1/16/2018 8:48,,,,0,,,,CC BY-SA 3.0 5021,2,,5019,1/16/2018 9:13,,2,,"

Eh, ""I won't be using camera calibration."" .. Not sure, what you mean.

1st

At first, imagine a sheet of paper lying on the floor. Try to transform the picture of that sheet (i.e. with some text on it), taken at an angle (under parallax), into a straight-on view: this is just a transformation by a matrix. It would be highly valuable to have a floor of square tiles there, and once you estimate or set the size of the tiles on the floor, you can then transform the paper sheet.

Then some vertical calibration will also be needed, anywhere within the scene. A wall of tiles of known shape and size would be perfect, but it is not really necessary: any known, pre-measured chair would be enough - or rather two such chairs, in opposite corners, for numerical stability.

So yes, that could be treated as ""camera calibration"".

2nd

Then you can advance to the next level of the image processing:

  • Once you see the person's feet and which tile of the floor the person stands on,
  • and with the known vertical direction,

then, from which parts of the known background are hidden/covered, you can determine their height fairly precisely, yes.

The ""known background""

Just two images of the background place should be enough: one empty/unoccupied, and one partly hidden/covered. The image of the clear, empty room can be taken once, or periodically; even a partial image should be enough, just to detect the shape of the person's figure.

3rd

And the third level is really advanced: the person could be standing in a deformed posture. By fitting the known human skeleton to the shape of the person's figure on the screen, you can estimate the curves of the person's skeleton, so your estimated/measured total height of the person would be immune to the non-straightened angles at the joints of their limbs/backbone. So even if the person knows they are being observed, the AI would not get fooled. But you would need to find more reference points on their body to connect the 2D shape to the internal 3D model.

quantity of snaps in general

More snapshots of the person, as they walk through the room, would be helpful: any model set up from a single image should be verifiable and applicable to each of the other images of the same person. Again: stability and ""back testing"", verification, robustness.

You are not limited to just a single image; you can have as many as your camera is able to make. So use them, to get not only the measured size, but also to raise the ""probability"", i.e. the accuracy, of the measured result.

On the other hand, you can also flip the coin and try to extract the maximum information from the minimum number of snapshots of the scene, but that does not seem to be your case. Still, processing a 24 fps recording 20 seconds long definitely requires some power/time to process.

Summary

3D models are necessary: of the surrounding space, of the outer shape of the person's figure, and of the internal structure of the measured body.

",8537,,8537,,3/14/2019 20:15,3/14/2019 20:15,,,,0,,,,CC BY-SA 4.0 5026,2,,4991,1/16/2018 14:56,,1,,"

Google uses user input to improve translation. Some user may have provided a translation of the Traditional Chinese characters using English characters only, instead of pinyin, which would introduce a mistake into the data used by the translator. Since the model is statistics-based, such a mistaken translation can only be assigned a lower but non-zero probability of correctness; to erase it from the system entirely, you would probably have to do that by hand or by introducing some general rule (e.g. if the probability is so small that it cannot be stored in a variable of a given size, round it to zero).

",12141,,,,,1/16/2018 14:56,,,,0,,,,CC BY-SA 3.0 5027,1,,,1/16/2018 15:29,,3,88,"

For supervised learning, humans have to label the images that computers use to train in the first place, so computers will probably get wrong the images that humans get wrong. If so, how can computers beat humans?

",12145,,2444,,12/12/2021 16:20,12/12/2021 16:20,"How can computers beat humans at image recognition, if humans may incorrectly label the images?",,1,0,,,,CC BY-SA 4.0 5028,2,,5005,1/16/2018 16:31,,4,,"

Yes, firstly you are correct to acknowledge the term node with reference to the neural network.

A neural network is made to mimic the working of a human brain, hence the term 'neural'. It consists of several layers, and each layer consists of several nodes. A network can be a deep neural network, signifying the presence of numerous layers, in which the output of one layer serves as the input to the next. Further, a node in a particular layer has some weights and biases attached to it that are responsible for the computation and for producing the output.

In your context, a heavy node may refer to a node that is important to the network and has a significant weight and bias attached. Hence, as per the statement, if the network makes a significant change to a node that matters in terms of weights and biases, it is noticed; otherwise it is not.

",8041,,,,,1/16/2018 16:31,,,,0,,,,CC BY-SA 3.0 5029,2,,5027,1/16/2018 17:17,,2,,"

When researchers claim ""better than human accuracy"", they are demonstrating that a computer can beat an individual human on a test. And that is because the ground truth labels actually have higher accuracy than a single human labelling the images individually could achieve.

There are at least two major ways that ground truth labels can beat an individual human on image tasks.

  1. Additional information is available from the same source as the image. For instance many pictures of pets in the ImageNet database are labeled with a specific breed of animal, due to how they are sourced. Most people who are not experts at pet breeds will score quite badly on a test to identify dog breeds at the fine grained level that ImageNet presents.

  2. Ground truth based on expert opinion can be sourced from multiple experts and their opinions combined. This approach can independently be shown to be more reliable than the opinion of a single person.

So in short yes computers can beat humans when they have had access to better original ground truth, and that is possible, even if that ground truth is generated by humans.

However, in general your concern stands. Ground truth data is a limiting factor. It might be possible in theory for a computer model to have an even better accuracy at the ""real"" task than the ground truth for a supervised learning task. However, this is next to impossible to prove, and other concerns, such as changes to distribution of real data as opposed to training data, are usually more important at that level of accuracy.

",1847,,1847,,1/16/2018 19:35,1/16/2018 19:35,,,,5,,,,CC BY-SA 3.0 5035,2,,4987,1/17/2018 1:20,,3,,"

Absolutely, regardless of how you define ""intelligence"".

  • If intelligence is merely information, as in ""a piece of intelligence"", as in data, or an algorithm, the structure is finite. (Structure, here, refers to the information, which may be reduced to a single string in either case.)

See: Turing Machine.

  • If intelligence is the rational capability of an automata, it is likewise bounded by the tractability of the decision problem, the structure of the algorithm, and the time available to make the decision.

See: Bounded Rationality

Both answers are really the same, because ""intelligence"" in the first sense is limited by physical constraints on information density, sophistication of the algorithm, and time.

See also: Computational complexity of mathematical operations, Computational complexity, Time Complexity

",1671,,1671,,1/17/2018 17:20,1/17/2018 17:20,,,,0,,,,CC BY-SA 3.0 5037,2,,4988,1/17/2018 3:25,,0,,"

I agree with the above answer. If you want to research this more in-depth look at this paper: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42241.pdf

",12145,,,,,1/17/2018 3:25,,,,0,,,,CC BY-SA 3.0 5038,2,,5010,1/17/2018 3:35,,0,,"

There's Baidu, with former chief scientist Andrew Ng, doing research on AI, and the Microsoft Research Asia team has won some ImageNet competitions.

",12145,,2444,,12/18/2021 18:14,12/18/2021 18:14,,,,0,,,,CC BY-SA 4.0 5041,1,,,1/17/2018 11:53,,4,321,"

Many of the architectures that do semantic segmentation, like SegNet, DilatedNet (Yu and Koltun), DeepLab, etc., do not work directly on high-resolution images. For benchmarks like Cityscapes, what is a standard/practical approach for such methods to perform on the benchmark?

I've tried to look into the papers, but I couldn't find such details. There's an article mentioning that they output at 1/8 of the input resolution and then do interpolation (usually 2, 4 or 8 times) on the results, but the article does not specify which upsampling techniques are the most reasonable ones.

",3098,,3098,,1/17/2018 12:03,5/28/2018 3:56,Semantic Segmentation how to upsampling,,1,0,,,,CC BY-SA 3.0 5042,1,,,1/17/2018 12:49,,1,64,"

I've been studying a recommender system which uses a collaborative deep learning approach and Bayesian learning. It has the following NN representation:

I need to know how stacked denoising autoencoders work.

Here is the link to the paper: http://www.wanghao.in/paper/KDD15_CDL.pdf

",10118,,16565,,4/4/2019 20:38,4/4/2019 20:38,How do stacked denoising autoencoders work,,0,0,,,,CC BY-SA 4.0 5043,1,,,1/17/2018 15:50,,1,169,"

What are the connections between ethics and artificial intelligence?

What are the issues that have arisen, especially in the business context? What are the issues that may arise?

",12171,,2444,,2/22/2020 11:58,11/30/2022 19:42,What are the connections between ethics and artificial intelligence?,,5,1,,12/5/2022 15:58,,CC BY-SA 4.0 5044,2,,5043,1/17/2018 16:37,,1,,"

This is a good related read from Nature: There is a Blind Spot in AI Research

Fears about the future impacts of Artificial Intelligence are distracting researchers from the real risks of deployed systems

",6779,,2444,,2/22/2020 11:56,2/22/2020 11:56,,,,0,,,,CC BY-SA 4.0 5045,2,,4987,1/17/2018 19:07,,0,,"

No. There is no ceiling to intelligence. However, I am applying this loosely.

When you consider the intelligence of a person, you generally think of some baseline IQ that ranks that person on a scale.

Per the definition of our ""scalable"" IQ, 200 is considered a (nearly) unbelievable level of intelligence.

However, when you look at ""entities"", the scale disappears. Google, the NSA, universities - they could all be considered super-intelligences. There's no possible metric or IQ to assign to these entities that we know of, because they aren't necessarily comparable. However, no single person is smart enough to invent all that was necessary to bring such an entity and its ""intelligence"" to fruition.

When you bring AI into the equation, you're limited by your resources. AI is proving more and more capable as we evolve the technology and understand our data better.

We may find in the future that Go isn't all that infinite and there is a ""best"" strategy (thus we solve the game). However, we keep on making machines that are smarter than previous ones.

I wouldn't be surprised if AI starts running paper companies soon for tactical business advantages.

",1720,,,,,1/17/2018 19:07,,,,0,,,,CC BY-SA 3.0 5046,2,,4987,1/17/2018 19:10,,0,,"

Actually, these aspects are part of some books I am working on right now. As Jeevan says, it is bound by the laws of physics. I see that too, but in the case of humans, the way I look at it, we are at the very lowest step, the beginning, of an intelligence that can self-reflect and question its own ability to do rational thinking and how far it can go and develop.

And I also use the black hole - maximum density, maximum energy concentration - as a first upper limit. An intelligence that can operate at the Planck level can store all the information in this world in something like the size of a few grains of sand. So, given enough density and optimization of information exchange and methods, the upper limit is far, far away from what humans process.

But I take the analysis further and go beyond the black hole theory. Of course it is speculation, but we still don't know everything. So what is possible in more than 4 dimensions (the human brain: 3 dimensions plus the time to think)?

As I see it, in the real world the upper limit is so far away from our understanding that we still can't handle it.

",11827,,,,,1/17/2018 19:10,,,,1,,,,CC BY-SA 3.0 5047,2,,2967,1/17/2018 22:37,,2,,"

MDPs are not independent of the past, but future actions starting from the current state are independent of the past, i.e. the probability of the next state given all previous states is the same as the probability of the next state given only the previous state.

Any state representation which is composed of the full history is an MDP, because looking at the history (encoded in your state) is not the same as looking back at previous states, so the Markov property holds. The problem here is that you will have an explosion of states, since you need to encode every possible trajectory into the state, and this is infeasible most of the time.

What if I define my state as the current "main" state + previous decisions?

For Example in Poker the "main" state would be my cards and the pot + all previous information about the game.

Yes it is a Markov Decision Problem.

",12121,,-1,,6/17/2020 9:57,1/17/2018 22:37,,,,0,,,,CC BY-SA 3.0 5049,2,,4286,1/18/2018 3:03,,1,,"

Generally speaking, you can say this:

  1. there is a relationship between neural network learning (I'm assuming a ""vanilla"" ANN here, no CNN's or RNN's or anything) and linear/logistic regression.

  2. But they're not the same thing. Just related. You could maybe consider them ""cousins"" to use a real-life analogy.

The big obvious difference is this: standard linear regression is, well, linear, that is, it's based on a straight line. So it can only separate points on a plane which can be separated by a straight line drawn on that plane. An ANN however, is non-linear and can fit all sorts of crazy looking curves. The reason why this is true has to do with a combination of the ""activation"" functions that are used, as well as the layering effect of your hidden layers.

To be fair, if you extend linear regression to be polynomial regression, you can fit more complicated curves, but that has its own downsides. And while they are also related, linear regression and polynomial regression aren't - strictly speaking - the same thing (although they may both be special cases of the same general technique).

All of that may be over-simplifying a bit. If you really want a good explanation of both linear/logistic regression and ANN's and some explanation of how they relate and differ, I recommend Andrew Ng's ML courses on Coursera. Both the original one and the new DeepLearning.ai ones.

",33,,,,,1/18/2018 3:03,,,,1,,,,CC BY-SA 3.0 5051,1,5056,,1/18/2018 4:07,,2,737,"

The above environment is DeepTraffic

Now consider this situation in the above environment, the Red car (we control it with our RL agent) is on the extreme right lane.

During the exploration phase, we take a 'move right' action, which ofcourse will result in the car not moving right,but the other cars will be moving, state changes due to the rules of the environment.

I'm using CNN to solve this, the state representation is the image itself and its a Q-Learning algorithm as described in DQN paper from deepmind.

In the above situation I mentioned, wont the agent think due to 'move right' action the state has changed, which is not really the case?

and when remembering the state transition (s,a,r,s') should i remember the actual action 'move right'(invalid) or 'do nothing'(correct as per env) ?

",5030,,,,,1/18/2018 16:09,RL agent's view of state transitions,,1,2,,,,CC BY-SA 3.0 5054,1,,,1/18/2018 13:20,,3,158,"

I'm struggling to understand the underlying mechanics of CNNs, so any help is appreciated. I have a network with a ReLU activation function which performs significantly better than one with sigmoid. This is expected, as ReLU solves the vanishing gradient problem. However, my understanding was that the reason we implement nonlinearities is to separate data which cannot be separated linearly. But if ReLU is linear for all the values we care about, shouldn't it not work at all?

Unless, of course, neurons are defined for negative values, but then my question becomes ""why does ReLU solve the vanishing gradient problem at all?"", since the derivative of ReLU for x < 0 is 0.

",12026,,,,,8/16/2018 20:00,"If neurons are only defined for values between 0 and 1, how does ReLU differ from the identity?",,1,1,,,,CC BY-SA 3.0 5055,2,,5054,1/18/2018 15:32,,-1,,"

You are right that we use nonlinearities for classes that can't be separated by a straight line. If you think about curves, as we know from calculus, you can approximate a nonlinear function with line segments; with an infinite number of line segments you can mimic the function exactly. When you increase or decrease the weight of some neuron, you essentially increase or decrease the length of the line segment it represents. And ReLU solves the vanishing gradient problem, so it is superior to the sigmoid in my opinion too.
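To make the gradient point concrete, here is a minimal numpy sketch (my own illustration) of ReLU and its derivative; for positive inputs the derivative is exactly 1, so it does not shrink the gradient the way the sigmoid's derivative does.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_derivative(x):
    return (x > 0).astype(float)   # 1 for x > 0, 0 otherwise

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(relu(x))             # [0.  0.  0.5 2. ]
print(relu_derivative(x))  # [0. 0. 1. 1.]
```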

",5104,,,,,1/18/2018 15:32,,,,4,,,,CC BY-SA 3.0 5056,2,,5051,1/18/2018 15:45,,1,,"

In the above situation I mentioned, wont the agent think due to 'move right' action the state has changed, which is not really the case?

Generally the agent does not need to ""think"" what the next state will be. After taking an action, it should observe the reward and the next state.

The logic for the next state is controlled by the environment. In your case, the environment of the simulator should accept the action choice and then not allow the sideways movement off the road, and present the correct logical next state. It may in addition alter the reward signal if you want to penalise this choice (in comments you say this is not done - that's fine, it's a choice you make on problem set up, what behaviour to reward or penalise).
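As a rough sketch of that interaction loop (the `env` and `agent` objects, their methods, and `max_steps` are hypothetical placeholders, not DeepTraffic's actual API), the agent stores the action it actually chose, while the environment alone decides what the resulting next state is:

```python
memory = []                                       # replay memory of (s, a, r, s') tuples
state = env.reset()
for step in range(max_steps):
    action = agent.choose_action(state)           # may be 'move right' even in the far-right lane
    next_state, reward, done = env.step(action)   # the environment applies its own rules
    memory.append((state, action, reward, next_state))
    state = next_state
    if done:
        break
```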

A more advanced agent might also try to predict what the next state will be, especially if it was using a planning algorithm (e.g. Dyna-Q). Even then there is no reason or rule that it should have to predict an impossible next state - a well-trained agent should in fact predict correctly that moving right is the same as ""do nothing"" when the car is in the far right-hand lane. For a value-based approach, like Q-learning, the agent may do this simply by scoring the two actions with the same value.

In your case, you do not need to try and second-guess what the agent will predict. Provided your state information is enough for the agent to observe that it is in the far left or far right lane, it should [eventually, after training] correctly predict the value functions and/or optimal actions as it learns the environment. That includes, after a few examples, that it will predict the outcome when moving right when already on right hand side. For that to happen reliably, you do need to check your state representation is adequate, and function approximation hyper-parameters are suited to the problem (the latter part could take some experimentation)

If the state data that the car receives does not contain enough information for the agent to tell that it is in extreme left or right lane, then you will have a problem. This is entirely possible in general, although from the image you show, that does not seem to be an issue.

One possible related issue is if the other cars in the scene all have different velocities. This velocity information is not shown in the state, and could cause your agent problems, as it cannot tell the difference between a stationary car in front of it and one which better matches its own velocity. You could treat this missing data as a partially observable state, and adjust your agent. Or you might be able to do the same as the DeepMind DQN team for Atari games and use a sequence of e.g. the last 4 images as the state, which should then include enough velocity data to make reliable estimates of state/action values.

",1847,,1847,,1/18/2018 16:09,1/18/2018 16:09,,,,0,,,,CC BY-SA 3.0 5057,1,5062,,1/18/2018 18:32,,6,335,"

In the book Reinforcement Learning: An Introduction (page 25), by Richard S. Sutton and Andrew G. Barto, there is a discussion of the k-armed bandit problem, where the expected reward from the bandits changes slightly over time (that is, the problem is non-stationary). Instead of updating the Q values by taking an average of all rewards, the book suggests using a constant step-size parameter, so as to give greater weight to more recent rewards. Thus:

$$ Q_{n+1} = Q_n + \alpha (R_n - Q_n),$$

where $\alpha$ is a constant between 0 and 1.

The book then states that this is a weighted average because the sum of the weights is equal to 1. What does this mean? Why is this true?

",12201,,2444,user1440,4/15/2020 20:18,4/15/2020 20:18,What is a weighted average in a non-stationary k-armed bandit problem?,,1,0,,,,CC BY-SA 4.0 5058,1,5418,,1/18/2018 21:00,,5,138,"

I have a model that predicts sentiment of tweets. Are there any standard procedures to evaluate such a model in terms of its output?

I could sample the output, work out which are correctly predicted by hand, and count true and false positives and negatives but is there a better way?

I know about test and training sets and metrics like AUROC and AUPRC which evaluate the model based on known data, but I am interested in the step afterwards when we don't know the actual values we are predicting. I could use the same metrics, I suppose, but everything would need to be done by hand.

",8385,,,,,2/24/2018 16:39,How do I statistically evaluate a ML model?,,1,0,,,,CC BY-SA 3.0 5059,1,5060,,1/18/2018 21:58,,4,588,"

I am trying to do some experiments with some intelligent agents, but I'm not sure how significant they will be in the future.

What are some possible interesting applications or use-cases of intelligent agents in the future?

For instance, it can be used as a virtual assistant instead of a real call agent. But what can be a more appealing application in the future?

",9053,,2444,,5/15/2020 23:08,5/15/2020 23:08,What are some of the possible future applications of intelligent agents?,,1,0,,,,CC BY-SA 4.0 5060,2,,5059,1/18/2018 22:15,,3,,"

At the moment, what I can think of are the following applications, but there are potentially a lot more.

  • Decision Maker: If you have any problem making a decision, intelligent agents can be used to weigh evidence and give you statistics to rule out bad decisions.

  • Online Teacher: In the far future, intelligent agents may acquire human-like skills; they may be used to teach many students (from different backgrounds) at once. The advantage here is that the intelligent agent can evaluate every student's level (skills, personality, intelligence, etc.) and use this data to decide how to convey information to each of them differently. It would be like a virtual world where children receive information in parallel, while different data about them is taken into consideration. They could be used to improve education.

  • Social Media: I think intelligent agents will be integrated into social media, to analyze messages and comments and warn users that some friends will dislike this or that, or find it offensive, to give them time to decide what to say. This would help limit bullying on social media.

  • Language Learning: In the future (like in 10 - 20 years), intelligent agents will have the ability to generate human-level language; that would make it possible for them to generate infinite sentences and structures, just like humans can, and again they can evaluate a user's level to generate listening and reading material. For a human, it would be just like talking to a native speaker of that language (or even better, since this ""native speaker"" would know everything about your level).

",12113,,2444,,5/15/2020 23:04,5/15/2020 23:04,,,,0,,,,CC BY-SA 4.0 5062,2,,5057,1/19/2018 0:18,,1,,"

The weighted average stands for a linear combination of all values, such that the sum of all weights is 1.

More specifically, if you denote the rewards by a vector $X$, the weighted average will be taking the dot product between $X$ and a vector $W$ such that $0 \le W_i \le 1$ and the sum of all $W_i$ is 1.

If each $W_i = 1/n$, it will be the ordinary average (the mean). The constant step-size update corresponds to exponentially decaying weights: weight $(1-\alpha)^n$ on the initial estimate $Q_1$ and weight $\alpha(1-\alpha)^{n-i}$ on each reward $R_i$. These weights also sum to 1, so it is a weighted average as well.

Then both strategies to compute the Q value use a weighted average of the previous rewards.
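As a quick numerical check (my own, not from the book): with weight $(1-\alpha)^n$ on the initial estimate $Q_1$ and weight $\alpha(1-\alpha)^{n-i}$ on each reward $R_i$, the weights indeed sum to 1.

```python
alpha, n = 0.1, 20
weights = [(1 - alpha) ** n] + [alpha * (1 - alpha) ** (n - i) for i in range(1, n + 1)]
print(sum(weights))   # 1.0, up to floating point error
```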

",12121,,2444,user1440,10/5/2019 17:33,10/5/2019 17:33,,,,0,,,,CC BY-SA 4.0 5063,1,,,1/19/2018 1:19,,2,49,"

By open up I mean slightly open up, so that a theoretical structure of panels with no width looks three-dimensional. The original structure is an ideal object where any number of panels can occupy the same region of space (plane).

To concretize what I mean by rigid structure of panels, let's take what's on my profile picture. I'm including here a larger version of that object:

An origami figure is folded from a square and, unlike this simple example in the image, it can be very convoluted, with layers upon layers.

Let's say I have an ideal, theoretical and flat model of an origami figure; by flat I mean the faces are on planes, but not necessarily on a single plane. For example, all the faces of the figure in the image would be in one plane, but there could be figures with more planes; think of animals with ears, flippers, etc.

I would like to open up those parts that hinge on a theoretical segment (relatively easy) or curve, or make a triangle out of the corners of those faces that have two adjacent faces with no connections, or open up a set of several faces that allow such an opening, of which the image is a good example.

So far I have tried programming rules for different structures such as flaps, ends, wrap-arounds... However, there are three not-so-small problems. First, it's extremely challenging to take into account all the corner cases and possibilities. I suspect it sounds simpler than it really is, but I don't want to digress with explanations. Second, the code is not maintainable: it's difficult to put into words rules that have to be visualized. Third, it's very difficult to unit test and debug.

I strongly suspect that there must be some Artificial Intelligence techniques for doing this. Would you be so kind as to point me in the right direction? Just in a general way, without needing to go much into detail.

Let me know if I should include more info or code.

",12210,,1671,,1/19/2018 20:44,1/19/2018 20:44,How to open up a rigid structure made of connected panels?,,0,3,,,,CC BY-SA 3.0 5066,2,,4647,1/19/2018 9:14,,3,,"

How is the right to explanation reasonable, given the current standards at which we hold each other accountable?

In short, it is quite reasonable.

More specifically, making AI accountable and responsible for explaining the decision seems reasonable because

  • Humans (DARPA in this case) have chosen to create, raise and evolve AI with the taxpayers' money. In our society as well, whenever humans have come together for some purpose (some formal arrangement like government or otherwise), accountability and responsibility are assumed and assigned. Thus, expecting AI (which will take decisions on our behalf) to explain its decisions seems only a natural extension of how humans currently operate.

  • Humans (generally at least) don't have super-power or resources to control and manipulate the rest of the population. But when they do (like our political leaders), we want them to be accountable.

  • In rare cases, powerful humans become powerful due to things which are accessible to everyone, while an AI's super-power won't be. So, our civilization would feel safer, and thus less insecure, with an AI which doesn't shut the door in our face when questions are asked.

Other benefits of AI that offers an explanation on how a decision is reached

  • Far easier to debug and improve in its early stage.

  • Can be customized further (and along the way) to amplify certain aspects like social-welfare over profitability, etc.

",12138,,2444,,6/15/2019 14:39,6/15/2019 14:39,,,,0,,,,CC BY-SA 4.0 5067,1,,,1/19/2018 11:32,,4,1464,"

I am currently writing an engine to play a card game and I would like for an ANN to learn how to play the game. The game is currently playable, and I believe for this game a deep-recurrent-Q-network with a reinforcement learning approach is the way to go.

However, I don't know what type of layers I should use. I found some examples of Atari games solved with ANNs, but their layers are convolutional (CNN), which are better suited for image processing. I don't have an image to feed the NN, only a state composed of a tensor with the cards in the player's own hand and the cards on the table. The output of the NN should be a card or the action 'End Turn'.

I'm currently trying to use TensorFlow, but I'm open to any library that can work with NNs. Any type of help or suggestion would be greatly appreciated!

",12217,,12217,,1/23/2018 14:25,10/21/2018 16:00,What layers to use in a Neural Network for card game,,2,2,,,,CC BY-SA 3.0 5069,1,,,1/19/2018 15:07,,1,187,"

As far as I understand, the hill climbing algorithm is a local search algorithm that selects any random solution as an initial solution to start the search. Then, should we apply an operation (e.g., mutation) to the selected solution to get a new one, or should we replace it with the fittest solution among its neighbours?

This is the part of the algorithm where I am confused.

",6095,,2444,,3/2/2019 11:03,3/2/2019 11:03,Should the mutation be applied with the hill climbing algorithm?,,1,0,,,,CC BY-SA 4.0 5071,2,,5069,1/19/2018 17:28,,1,,"

In general, hill climbing algorithms select a random initial solution, then take the best move available after evaluating all possible operations. The possible operations are determined by the search operators (which you define and which depend on the problem setting). So, in your words, you should 'replace it with the fittest solution among its neighbours'.

Mutation can be applied as a search operator, but this varies by degree and context. For example, in structure learning for Bayesian networks, a random sequence of search operators is sometimes applied to escape local optima during the search process.

See https://www.cs.helsinki.fi/u/bmmalone/probabilistic-models-spring-2014/StructureLearning.pdf - slide 9 for random restarts

",9469,,9469,,1/19/2018 17:35,1/19/2018 17:35,,,,0,,,,CC BY-SA 3.0 5072,2,,4766,1/19/2018 18:15,,3,,"

To answer this question, we first need to look at why capsule networks outperform convolutional neural networks by as much as 45% in recognizing images that have been rotated, translated or are under a different pose. We can find Geoffrey Hinton's paper on capsule networks here for reference https://arxiv.org/pdf/1710.09829v1.pdf

In a CNN architecture, a convolution layer is usually followed by a max-pooling layer. This is so that the lower levels can detect low-level features, like edges, while the high-level layers can detect abstraction like eyes. However, the application of max-pooling leads to the loss of important information regarding the location and spatial relationship between certain features.

On the other hand, this is where capsule networks excel: the way they represent certain features is locally invariant. This is why capsule networks can recognize images under different lighting conditions and deformations. They are likely to excel at applications such as video and object tracking, but not necessarily NLP.

The current approach in NLP maps words and phrases to vectors. From there, we exploit the concept of vectors and distances between them (cosine, euclidean, etc.) to perform operations such as: finding the similarity between words and even documents, machine translation, and natural language understanding (NLU).

Capsule networks are unlikely to succeed in NLP. This is because algorithms that aim to find the hierarchical structure of natural languages or approaches that focus on grammar have met little success. Research by Stanford University aiming at finding the hierarchical structure of natural languages can be found here https://nlp.stanford.edu/projects/project-induction.shtml

Although conclusive research regarding other applications of capsule networks has not yet been conducted, they are likely to excel at applications such as video intelligence and object tracking, but not necessarily NLP.

",10913,,2444,,6/9/2020 11:49,6/9/2020 11:49,,,,1,,,,CC BY-SA 4.0 5074,2,,2069,1/20/2018 3:24,,1,,"

I'd like to answer this in detail, but it requires some fairly complicated theory that you don't have access to. Essentially this is related to the Abstraction Valuation Paradox. Don't bother trying to look that up. It's part of several years of research that hasn't been published yet. The research has shown that there is no solution to this paradox using computational or AI theory. So, no AI, no matter how advanced, can have an understanding of ethics. The best you can do is program in a bunch of rules of thumb. This gives your AI a bureaucratic reaction to conditions but no flexibility and no way to resolve problems outside of its rule space. In other words if it runs into an exception or unforeseen circumstances, it could stall or could guess at a decision.

The research on human-like ability to understand and reason is quite different from the study of AI. This research suggests that you would need consciousness for an understanding of ethics.

",12118,,,,,1/20/2018 3:24,,,,0,,,,CC BY-SA 3.0 5075,1,5079,,1/20/2018 10:58,,0,418,"

In the delta rule, the equation to adjust the weight with respect to the error is

$$w_{(n+1)}=w_{(n)}-\alpha \times \frac{\partial E}{\partial w}$$

where $\alpha$ is the learning rate and $E$ is the error.

The graph for $E$ vs $w$ would look like the one below, with $E$ on the $y$-axis and $w$ on the $x$-axis.

In other words, we can write

$$\alpha \times \frac{\partial E}{\partial w}=w_{(n)}-w_{(n+1)}$$

I want to know, what is the proof behind the gradient of a curve being equal/proportional to the distance between the two coordinates in the x-axis.

$\frac{\partial E}{\partial w}$ times the step is a small shift along $f(w)$, not along $w$. So why should the difference between $w_{(n+1)}$ and $w_{(n)}$ be equal to a quantity measured on the $f(w)$ axis?

I found a similar question, but the accepted answer doesn't have a proof.

",11789,,2444,,7/6/2020 23:01,8/10/2020 17:16,What is the proof behind the gradient of a curve being proportional to the distance between the two co-ordinates in the x-axis?,,2,0,,,,CC BY-SA 4.0 5076,2,,3948,1/20/2018 11:08,,1,,"

The software that you require to provide your AI agent remote access is called Virtual Network Computing (VNC). A VNC is a desktop sharing system that transmits the screen of one machine over a network connection and relays keyboard and mouse events back to it.

This allows your AI agent to use a computer like a human does, i.e. by looking at the screen pixels and operating a virtual keyboard and mouse. In fact, VNCs are conventionally used for remote technical support and for accessing files on one's computer remotely.

An excellent use case example is OpenAI's Universe platform, which works alongside Gym, OpenAI's toolkit for developing RL algorithms. Universe launches programs behind a VNC remote desktop, which enables the training of AI agents by only showing them the screen pixels and allowing them to operate a virtual keyboard and mouse.

Since you want to train your AI agent on an Android based training environment, my suggestion is that you consider installing either Real VNC (android client) or Droid VNC on your Android device. You can then install the VNC server software on your training machine (i.e. an nvidia docker instance). Your AI agent will now have full access to the training environment screen for reinforcement learning.
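
For illustration, this is roughly how an agent-side script can drive such a VNC connection programmatically, here using the vncdotool Python library (the host address, password and coordinates below are placeholders, not values from your setup):

from vncdotool import api

# Connect to the VNC server exposing the training environment's screen
client = api.connect('192.168.0.10::5900', password='secret')

client.captureScreen('screen.png')   # grab the current screen pixels for the agent
client.mouseMove(100, 200)           # move the virtual mouse
client.mousePress(1)                 # left click
client.keyPress('enter')             # press a key
client.disconnect()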

I also recommend that you look at this similar question on the network.

https://askubuntu.com/questions/414189/how-to-remotely-control-my-android

",10913,,10913,,1/20/2018 12:51,1/20/2018 12:51,,,,0,,,,CC BY-SA 3.0 5079,2,,5075,1/21/2018 11:41,,2,,"

Don’t think about it as the $w_{(n)}-w_{(n+1)}$ being proportional to something. Think about it this way:

I'm now at $w_{(n)}$. Where do I want to be at the next timestep, so that the error decreases? For that, I need to know how the error changes when I make small steps to the left or right of $w_{(n)}$.

If $E$ increases as I increase $w$ (that is, if $\frac{\partial E}{\partial w}>0$), then obviously I would want to move a little bit to the left. In other words, $w_{n+1}<w_{n}$, or $w_{n+1}-w_{n}<0$.

On the other hand, if the derivative were negative, you know that you should move right to reduce the error a little bit, $w_{n+1}-w_{n}>0$. So, basically, your step should have the opposite sign of the derivative.

$$w_{n+1}-w_{n} \propto-\frac{\partial E}{\partial w}$$

$\alpha$, the learning rate, is just the constant of proportionality. Caution: think about small values for this rate, not big numbers. Taking a huge step can cause you to overshoot the minimum point.
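
A tiny numerical sketch of this update rule (the quadratic error function and the learning rate below are just illustrative choices):

# Gradient descent on an illustrative error function E(w) = (w - 2)^2
def dE_dw(w):
    return 2.0 * (w - 2.0)      # derivative of the error with respect to w

w = 10.0        # arbitrary starting point
alpha = 0.1     # learning rate: the constant of proportionality

for step in range(50):
    w = w - alpha * dE_dw(w)    # step in the direction opposite to the gradient

print(w)        # approaches 2.0, the minimiser of E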

",12250,,2444,,7/6/2020 23:13,7/6/2020 23:13,,,,0,,,,CC BY-SA 4.0 5080,1,,,1/21/2018 14:54,,1,27,"

Are there a finite set of computable functions constructing deep neural network which can form or implement any c.e. function or computable function?

Or does there exist a finite set of computable function by which every c.e. function can be implemented by combination and connection(like connection in DNN)?

",9237,,,,,1/21/2018 14:54,Are there a finite set of computable functions constructing deep neural network which can form or implement any c.e. function or computable function?,,0,7,,,,CC BY-SA 3.0 5081,1,,,1/21/2018 16:36,,2,559,"

This AI is really human-like and allegedly doesn't give pre-programmed responses. Its makers, Robots without Borders, say the project is open source, but I couldn't find the code anywhere.

",12254,,,,,4/12/2018 4:58,Can anyone find the source code for the chatbot Luna?,,1,0,,,,CC BY-SA 3.0 5082,2,,5075,1/21/2018 16:40,,0,,"

Let $x_{(n)}$ be a point on the $x$-axis where $f'(x) = 0$, and let $x_{(n+h)}$ be any other arbitrary point. Then

$$\frac{f'(x_{(n+h)})}{|f'(x_{(n+h)})|} = \begin{cases} 1 & \text{if } h > 0\\ 0 & \text{if } h = 0\\ -1 & \text{if } h < 0 \end{cases}$$

Similarly,

$$\frac{x_{(n)} - x_{(n+h)}}{|x_{(n)} - x_{(n+h)}|} = \begin{cases} 1 & \text{if } h < 0\\ 0 & \text{if } h = 0\\ -1 & \text{if } h > 0 \end{cases}$$

so that

$$\frac{x_{(n)} - x_{(n+h)}}{|x_{(n)} - x_{(n+h)}|} = -\frac{f'(x_{(n+h)})}{|f'(x_{(n+h)})|}$$

and therefore

$$x_{(n)} = x_{(n+h)} - \eta \times f'(x_{(n+h)}), \quad \text{where } \eta = \frac{|x_{(n)} - x_{(n+h)}|}{|f'(x_{(n+h)})|}.$$

",11789,,11789,,1/25/2020 13:21,1/25/2020 13:21,,,,1,,,,CC BY-SA 4.0 5083,2,,5081,1/21/2018 21:57,,3,,"

You can't find the source code because it doesn't exist. The whole thing was a 2015 Kickstarter scam, and now it's a Patreon scam. All the tech ""demos"" are obviously pre-scripted videos.

There was actually a question on Quora about it, in which one of the answerers managed to find, among other things, the question and answer that his fake AI gave in one of the videos.

",12258,,,,,1/21/2018 21:57,,,,7,,,,CC BY-SA 3.0 5084,1,,,1/22/2018 3:50,,1,241,"

I am currently working on a defect detection algorithm, but I only have a few samples of defects. I googled for defect detection datasets and found this one:

http://resources.mpi-inf.mpg.de/conferences/dagm/2007/prizes.html

which has a few hundred original images of defects.

My idea is: Imagenet => Defect dataset from internet => Own defect dataset

Step 1. Training a model with ImageNet initialization using the defect dataset found in the internet (+ non-defect images + augmented data)

Step 2. Using the output model of step 1 (which will be more similar to my own data), do transfer learning using my own defect dataset (defects + non-defects + augmented).
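
For concreteness, this is roughly the kind of fine-tuning I have in mind for each stage (the base model choice, layer sizes and the dataset objects are placeholders, not my actual code):

import tensorflow as tf

# Start from ImageNet weights and replace the classification head (step 1);
# later I would repeat the same recipe starting from the step-1 model (step 2).
base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      input_shape=(224, 224, 3), pooling='avg')
base.trainable = False                 # first train only the new head

head = tf.keras.layers.Dense(2, activation='softmax')(base.output)   # defect / non-defect
model = tf.keras.Model(base.input, head)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train_ds / val_ds would be datasets built from the defect images
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# Optionally unfreeze the last few layers afterwards and fine-tune with a small learning rate.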

Do you think this is a good way to get good results?

Based on: https://blog.slavv.com/a-gentle-intro-to-transfer-learning-2c0b674375a0

Should the defect images be considered as having low similarity to ImageNet's images, or as similar, simply because both are images? Some web pages say that, because both are images, they are similar; other web pages say these images are too different from the images used to train the ImageNet model, so I got confused about this.

If I skip step 1, I don't think I will get anything good, because I have less than 100 images.

Any advice or comments will be appreciated.

",12261,,12261,,1/22/2018 6:28,1/22/2018 6:28,Transfer learning from model trained in a similar dataset,,0,0,,,,CC BY-SA 3.0 5085,1,,,1/22/2018 10:47,,4,245,"

I am trying to build a neural network that takes in a single string, ex: ""dog"" as an input, and outputs 50 or so related hashtags such as, ""#pug, #dogsarelife, #realbff"".

I have thought of using a classifier, but because there are going to be millions of hashtags to choose the optimal ones from, and millions of possible words from the English dictionary, it is virtually impossible to look up the probability of each.

It is going to learn by analyzing the text of Twitter posts and their hashtags, and find which hashtags go with which specific words.

",12264,,,,,6/23/2019 11:02,What machine learning algorithm should be used to analyze the relationship between strings?,,3,3,,,,CC BY-SA 3.0 5086,1,,,1/22/2018 13:54,,1,31,"

I have a sample set of data about leads that get generated every day. A lead is simply a user expressing a request to become our partner (or not). A sample data set is shown below.

LEADID,CREATEDATE,STATUS,LEADTYPE
810029,24-DEC-17 12.00.00.000000000 AM,open,LeadType1
806136,30-DEC-17 12.00.00.000000000 AM,open,LeadType2
812134,31-DEC-17 12.00.00.000000000 AM,open,LeadType2
806147,31-DEC-17 12.00.00.000000000 AM,open,LeadType1
806166,01-JAN-18 12.00.00.000000000 AM,open,LeadType2
28002,04-MAR-16 12.00.00.000000000 AM,open,LeadType2
808156,01-JAN-18 12.00.00.000000000 AM,open,LeadType1
808162,01-JAN-18 12.00.00.000000000 AM,open,LeadType2
806257,07-JAN-18 12.00.00.000000000 AM,open,LeadType1
832091,17-JAN-18 12.00.00.000000000 AM,open,LeadType2
838079,17-JAN-18 12.00.00.000000000 AM,open,LeadType1
66001,26-MAR-16 12.00.00.000000000 AM,open,LeadType1
70001,28-MAR-16 12.00.00.000000000 AM,open,LeadType2
806019,23-DEC-17 12.00.00.000000000 AM,open,LeadType2
822064,12-JAN-18 12.00.00.000000000 AM,open,LeadType1
834043,14-JAN-18 12.00.00.000000000 AM,open,LeadType2
836053,16-JAN-18 12.00.00.000000000 AM,open,LeadType1
838119,19-JAN-18 12.00.00.000000000 AM,open,LeadType2

As you can see, leads can be of type LeadType1 or LeadType2, and they are generated every day.

In order to make sense of the data, I created the following plot using Python.

The supporting code is as follows. Note that I am a novice in Python and AI, but I want to check whether this is a valid use case for machine learning and, if so, what my approach should be.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline   # only needed when running inside a Jupyter notebook

# Read the cleaned lead data
in_file = 'lead_data.csv'
mydf = pd.read_csv(in_file, encoding='latin-1')

# Count the leads per group and plot the result
fig, ax = plt.subplots(figsize=(15, 7))
g = mydf.groupby(['R4GSTATE', 'STATUS']).count()['LEADTYPE'].unstack()
g.plot(ax=ax)
ax.set_xlabel('R4GSTATE')
ax.set_ylabel('Number of Leads')
ax.set_xticks(range(len(g)))
ax.set_xticklabels(['%s' % item for item in g.index.tolist()], rotation=90)

Basically, I just read the CSV and curated the data (I have cleaned the original CSV) to keep what is meaningful for me. I also grouped the number of leads by month and year so that I can see the leads generated historically every month.

I want to know whether machine learning can help me predict the number of leads generated in the coming months based on the previous months' data.

If the answer is yes, is linear regression the right path to explore further?
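
To clarify what I mean, this is the kind of simple regression I was imagining (the monthly counts below are made-up placeholders, not my real numbers):

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly lead counts derived from grouping CREATEDATE by month
months = np.arange(12).reshape(-1, 1)          # 0 = first month, 1 = second month, ...
lead_counts = np.array([5, 8, 7, 12, 15, 14, 20, 22, 25, 24, 30, 33])

model = LinearRegression().fit(months, lead_counts)
print(model.predict(np.array([[12]])))         # predicted leads for the following month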

",12266,,,,,1/22/2018 13:54,Can number of Leads be predicted based on previous months,,0,0,,,,CC BY-SA 3.0 5087,1,,,1/22/2018 14:22,,2,50,"

Forgive what might be a basic question. I'm just experimenting with ML / AL and I have a small problem set and I'd like to see if it can be solved with ML / AI. Basically, given a set of objects with multiple features, I'd like to create a process for recommending one automatically to a user.

I'm thinking that some sort of clustering algorithm may be the best approach. However, one main challenge I'm trying to wrap my head around is that I don't know in advance how many distinct clusters will evolve. There may be scenarios where Feature X is really important, but other scenarios where a user will say Feature Y is important.

Secondly, what is my input set? For each training sample, I will have 1 selected object, and N-1 unselected objects. But I don't want to ""train"" that the unselected objects are ""bad"" because they could be selected in a future training example.

Finally, I don't have a large training set already, so I would like to use feedback (user input, ""This was a bad choice"" or ""Use this object instead."") from the process to further refine the algorithm. Is this feasible?

Are there any established patterns for this sort of process? Thanks in advance.

",11449,,,,,9/2/2019 2:37,Recommend item from set based on features,,2,0,,,,CC BY-SA 3.0 5088,2,,5087,1/22/2018 15:18,,1,,"

This problem is usually approached with ""Singular Value Decomposition"".

Search also for the ""Netflix Challenge"".
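
A minimal illustration of the idea with NumPy (the ratings matrix below is made up; real recommenders use much larger, sparse matrices and truncated factorizations):

import numpy as np

# Toy user x item ratings matrix (0 = unrated)
R = np.array([[5., 4., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])

# Factorize and keep only the top-k latent dimensions
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # low-rank reconstruction

# The reconstructed entries at the previously unrated positions act as predicted preferences
print(np.round(R_hat, 2))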

",12269,,,,,1/22/2018 15:18,,,,1,,,,CC BY-SA 3.0 5091,2,,5067,1/22/2018 15:29,,2,,"

The game state consists of the locations of all the hidden cards, so you probably need a softmax layer of size 52*n, where n is the number of locations.

I'm not very sure that a NN is a good match.

",12269,,,,,1/22/2018 15:29,,,,1,,,,CC BY-SA 3.0 5092,1,5101,,1/22/2018 15:56,,6,3335,"

I am currently learning about CNNs. I am confused about how filters (aka kernels) are initialized.

Suppose that we have a $3 \times 3$ kernel. How are the values of this filter initialized before training? Do you just use predefined image kernels? Or are they randomly initialized, then changed with backpropagation?

",12242,,2444,,9/29/2021 13:24,9/29/2021 13:26,How are the kernels initialized in a convolutional neural network?,,1,0,,,,CC BY-SA 4.0 5093,1,5099,,1/22/2018 17:00,,1,635,"

I am implementing a neural network to train it on handwritten digits.

Here is the cost function that I am implementing.

$$J(\Theta)=-\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}\left[y_{k}^{(i)} \log \left(\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)+\left(1-y_{k}^{(i)}\right) \log \left(1-\left(h_{\Theta}\left(x^{(i)}\right)\right)_{k}\right)\right]+ \\\frac{\lambda}{2 m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_{l}} \sum_{j=1}^{s_{l+1}}\left(\Theta_{j, i}^{(l)}\right)^{2}$$

In $\log(1-h(x))$, if $h(x)$ is $1$, then it would result in $\log(1-1)= \log(0)$. So, I'm getting a math domain error.

I'm initializing the weights randomly between 10 and 60. I'm not sure what I have to change, or where.

",12273,,2444,,12/12/2021 8:38,12/12/2021 8:42,"How do I avoid the ""math domain error"" when the input to the log is zero in the objective function of a neural network?",,1,3,,,,CC BY-SA 4.0 5094,1,,,1/22/2018 19:39,,1,22,"

I'm reading the book Introduction to Evolutionary Computing, and the chapter about Evolution Strategies says that we have to modify the strategy parameter sigma (the standard deviation, or mutation step size) before using it to mutate the object parameters, and I don't understand why.

Why do we have to modify sigma before the mutation of object parameters?

Maybe it is because, if we do it this way, we end up with the mutated object parameters together with the strategy parameters that actually generated them?

",4920,,,,,1/22/2018 19:39,ES- Modify sigma before mutate object parameters,,0,0,,,,CC BY-SA 3.0 5096,1,,,1/22/2018 21:40,,8,2118,"

I am trying to develop a neural network which can identify design features in CAD models (i.e. slots, bosses, holes, pockets, steps).

The input data I intend to use for the network is an n x n matrix (where n is the number of faces in the CAD model). A '1' in the top-right triangle of the matrix represents a convex relationship between two faces, and a '1' in the bottom-left triangle represents a concave relationship. A zero in both positions means the faces are not adjacent. The image below gives an example of such a matrix.

Let's say I set the maximum model size to 20 faces and apply padding for anything smaller than that, in order to make the inputs to the network a constant size.
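
To make the encoding concrete, this is roughly how I picture building one padded input (the face indices and pairs below are just an example, not from a real model):

import numpy as np

MAX_FACES = 20   # pad every model up to this size

def build_input(convex_pairs, concave_pairs):
    # Upper triangle marks convex face pairs, lower triangle marks concave ones;
    # rows/columns for faces beyond the model size simply stay zero (the padding).
    m = np.zeros((MAX_FACES, MAX_FACES), dtype=np.float32)
    for i, j in convex_pairs:
        m[min(i, j), max(i, j)] = 1.0
    for i, j in concave_pairs:
        m[max(i, j), min(i, j)] = 1.0
    return m

# e.g. a 4-face model with one convex and one concave adjacency
x = build_input(convex_pairs=[(0, 1)], concave_pairs=[(2, 3)])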

I want to be able to recognise 5 different design features and would therefore have 5 output neurons - [slot, pocket, hole, boss, step]

Would I be right in saying that this becomes a sort of 'pattern recognition' problem? For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features?

I am a complete beginner in machine learning and I am trying to get a handle on whether this approach will work or not - if any more info is needed to understand the problem leave a comment. Any input or help would be appreciated, thanks.

",12125,,,,,12/9/2018 5:43,Using neural network to recognise patterns in matrices,,2,1,,,,CC BY-SA 3.0 5098,1,,,1/23/2018 5:02,,4,123,"

I am new to machine learning and AI, so forgive me if this is obvious. I was talking with a friend on how to solve this problem, and neither of us could figure out how to do it.

Say I have a grid area of 100x100 blocks, and I want a robot to build a horizontal 100x100 surface that is 3 blocks high. I am given a random but known starting surface, always 100x100, where the height of the random surface can vary from 1 to 5 blocks. I have an extra reserve of blocks I can pick up, so I don't have to worry about running out. The robot can move in any direction, even diagonally at some cost penalty. The robot can obviously take a block from a 4-high column to fill in a 2-high one, so that each column ends up at the design height of 3. This sounds like a reinforcement learning problem, but would anyone be able to explain in more detail how I would do this, so as to (a) minimize the number of moves and (b) reach the design surface?

",12284,,,,,6/29/2018 16:34,Move blocks to create a designed surface,,2,0,,,,CC BY-SA 3.0 5099,2,,5093,1/23/2018 6:55,,1,,"

So, firstly, for $h_{\Theta}(x)$ to be $1$, the weighted sum of $x$ (after you dot product it with $\Theta$) would have to be literally infinity, if you're using the sigmoid function. Doesn't happen in practice, even with the rounding computers do, as we don't use big numbers to initialize our $\Theta$ matrices.

Intuitively, that'd mean you're basically more certain than one can possibly be in this universe that the label of this example should be $1$.

So, if $(1 - h_{\Theta}(x)) = 0$, $y$ is certainly $1$, and so $1-y$ will be zero.

Secondly, the convention is to drop the entire right-hand-side term when $y^{(i)}$ is $1$. This will not cause problems when programming, due to the first point I made above.
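
In practice, a common defensive trick (not part of the cost function itself, just a numerical safeguard) is to clip the predictions away from exactly 0 and 1 before taking the log:

import numpy as np

def safe_log_loss(y, h, eps=1e-12):
    # Clip the predictions so that log never sees exactly 0 or 1
    h = np.clip(h, eps, 1.0 - eps)
    return -np.mean(y * np.log(h) + (1.0 - y) * np.log(1.0 - h))

print(safe_log_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # ~0 instead of a math error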

",1538,,2444,,12/12/2021 8:42,12/12/2021 8:42,,,,0,,,,CC BY-SA 4.0 5100,2,,5098,1/23/2018 7:19,,1,,"

Essentially, you could do something like have the robot make random moves (moving around and moving blocks) for some number of steps. Repeat this a bunch of times, and record the 'score' at the end (how close you are to a perfect result grid). Tell your algorithm to act more like the best-scoring runs (optimize a loss function), and start over. Hopefully, you'll eventually get a robot that manages the task - the whole 'optimal path' thing will come about from the computer teaching itself to imitate the lowest-cost examples.

Remember, you're letting the machine do the hard thinking about the best way to do this or that. All you have to do is give it the framework to learn.
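
A very rough sketch of that loop, assuming a hypothetical environment object with reset/legal_actions/step/score methods (none of these are from a real library, they are just the shape of the idea):

import random

def random_rollout(env, n_steps=200):
    # Try a sequence of random actions and return (actions, final score)
    env.reset()
    actions = []
    for _ in range(n_steps):
        a = random.choice(env.legal_actions())
        env.step(a)
        actions.append(a)
    return actions, env.score()   # score = how close the grid is to the target surface

def best_of(env, n_rollouts=100):
    # Keep the best-scoring rollout; a learning algorithm would instead
    # update a policy towards the actions taken in the best runs.
    return max((random_rollout(env) for _ in range(n_rollouts)), key=lambda r: r[1])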

",1538,,1538,,6/29/2018 16:34,6/29/2018 16:34,,,,0,,,,CC BY-SA 4.0 5101,2,,5092,1/23/2018 8:45,,5,,"

The kernels are usually initialized at a seemingly arbitrary value, and then you would use a gradient descent optimizer to optimize the values, so that the kernels solve your problem.

There are many different initialization strategies.

  • Set all values to a constant (for example, zero)
  • Sample from a distribution, such as a normal or uniform distribution
  • There are also some heuristic methods that seem to work very well in practice; a popular one is the so-called Glorot initializer, which is named after Xavier Glorot, who introduced them here. Glorot initializers also sample from distribution, but they truncate the values based on the kernel complexity.
  • For specific types of kernels, there are other defaults that seem to perform well. See for example this paper.

Exploring initialization strategies is something I do when my model is not able to converge (gradient problems) or when the training seems to be stuck for a long time before the loss function starts to decrease. These are signs that there might be a better initialization strategy to look for.
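
If you want to experiment with this, most frameworks let you pick the initializer per layer; here is a minimal Keras sketch (the layer sizes are arbitrary):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu',
                       kernel_initializer=keras.initializers.GlorotUniform()),
    keras.layers.Dense(64, activation='relu',
                       kernel_initializer=keras.initializers.RandomNormal(stddev=0.05)),
    keras.layers.Dense(10, activation='softmax',
                       kernel_initializer='zeros'),
])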

",12285,,2444,,9/29/2021 13:26,9/29/2021 13:26,,,,0,,,,CC BY-SA 4.0 5104,1,,,1/23/2018 14:24,,2,268,"

I am working on a project which is an agent-based pedestrian simulation in Java, animated with the help of JavaFX. I've tried to read all the social force model papers, but I couldn't really understand them, so I tried my own approach, which I scrapped after it failed time after time.

My approach was that each agent inspects its surroundings: it first calculates the distance to every other agent on the field, and if that distance is below a constant, it calculates the angle to the agent that is too close and moves according to that angle.

This approach didn't work for me because the ""avoidance code"" is not effective enough: the agents just don't know where to go when they meet and simply stay in place.

I am asking for guidance on how I can approach this problem in a better way.

// Returns the step (dx, dy) this pedestrian should take, based on the nearest agent
double[] check(Vector<Pedestrian> peds, Pedestrian p1) {
    for (Pedestrian p : peds) {
        if (p.getPedestrianId() != this.id) {
            double distance = IPedestrian.distance_formula(getTranslateX(), getTranslateY(),
                    p.getTranslateX(), p.getTranslateY());
            if (distance <= DANGER) {
                System.out.println(""DANGER"");
                // Another agent is too close: choose a step based on the angle towards it
                return IPedestrian.angle(getTranslateX(), getTranslateY(),
                        p.getTranslateX(), p.getTranslateY(), p1);
            }
        }
    }
    // Nobody is close: keep walking straight ahead
    return new double[] {SPEED, 0};
}

// Applies the chosen step, mirrored depending on which side the pedestrian started from
public void move(Vector<Pedestrian> peds, Pedestrian p) {
    double[] new_steps = this.check(peds, p);
    if (side == SideChooser.Left) {
        setTranslateX(getTranslateX() + new_steps[0]);
        setTranslateY(getTranslateY() + new_steps[1]);
    } else {
        setTranslateX(getTranslateX() - new_steps[0]);
        setTranslateY(getTranslateY() - new_steps[1]);
    }
}

Math formulas:

static double distance_formula(double thisX, double thisY, double otherX, double otherY) {
    return Math.sqrt(Math.pow(otherX - thisX, 2) + Math.pow(otherY - thisY, 2));
}

static double[] angle(double x1, double y1, double x2, double y2, Pedestrian p){
double angle = Math.toDegrees(Math.atan2(y2-y1, x2-x1));
angle += Math.ceil(-angle/360) * 360;

if (p.getSideChoosen() == SideChooser.Left){//if the pedestrian is from the left side
    if (angle < 45 || angle > 315)//front
        return new double[]{-SPEED/5, 0};

    else if (angle >= 135 || angle <= 225 ) //back
        return new double[]{SPEED*1.4, 0};

    else if (angle >= 45 || angle <= 90)//North-East
        return new double[]{0, SPEED};

    else if (angle > 90 || angle <= 135) //North-West
        return new double[]{SPEED*1.2 , SPEED};

    else if (angle >= 270 || angle <= 315) //South-East
        return new double[]{0, -SPEED};

    else if (angle > 225 || angle <= 270) //South-West
        return new double[]{SPEED*1.2, -SPEED};

    else
        return new double[]{SPEED, 0};
} else {
    if (angle < 45 || angle > 315)//back
        return new double[]{SPEED*1.4, 0};

    else if (angle >= 135 || angle <= 225 ) //front
        return new double[]{-SPEED/5, 0};

    else if (angle >= 45 || angle <= 90)//North-West
        return new double[]{SPEED*1.2, -SPEED};

    else if (angle > 90 || angle <= 135) //North-East
        return new double[]{0 , -SPEED};

    else if (angle >= 270 || angle <= 315) //South-West
        return new double[]{SPEED*1.2, SPEED};

    else if (angle > 225 || angle <= 270) //South-East
        return new double[]{0, SPEED};

    else
        return new double[]{SPEED, 0};
}

}

",12291,,,,,1/23/2018 16:08,Agent collision avoidance java,,1,0,,,,CC BY-SA 3.0 5105,2,,5104,1/23/2018 16:08,,2,,"

A very efficient approach to what you are trying to do is velocity obstacles.

Assuming two agents use constant velocity motion vectors, a velocity obstacle models a geometric region in which if the endpoint of the velocity vector for agent 1 falls, it will collide with agent 2 (and vice versa). Hence you can predict what velocity vectors will lead to collisions between the agents and choose velocity vectors that do not collide.

There are very good examples and tutorials here: http://gamma.cs.unc.edu/RVO/

",12278,,,,,1/23/2018 16:08,,,,0,,,,CC BY-SA 3.0 5107,1,5740,,1/23/2018 18:45,,8,5187,"

Below is a quote from CS231n:

Prefer a stack of small filter CONV to one large receptive field CONV layer. Suppose that you stack three 3x3 CONV layers on top of each other (with non-linearities in between, of course). In this arrangement, each neuron on the first CONV layer has a 3x3 view of the input volume. A neuron on the second CONV layer has a 3x3 view of the first CONV layer, and hence by extension a 5x5 view of the input volume. Similarly, a neuron on the third CONV layer has a 3x3 view of the 2nd CONV layer, and hence a 7x7 view of the input volume. Suppose that instead of these three layers of 3x3 CONV, we only wanted to use a single CONV layer with 7x7 receptive fields. These neurons would have a receptive field size of the input volume that is identical in spatial extent (7x7), but with several disadvantages

My visualized interpretation:

How can a neuron on the second CONV layer, looking through the first CONV layer, end up with a 5x5 receptive field on the input volume?

There were no previous comments stating all the other hyperparameters, like input size, stride, padding, etc., which made this very confusing to visualize.
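
To check the arithmetic myself, I tried computing the receptive field growth with the standard recursive formula (assuming stride 1 and no pooling, which is my reading of the quote):

def receptive_field(kernel_sizes, strides=None):
    # r grows by (k - 1) * jump at each layer; the jump is multiplied by the stride
    strides = strides or [1] * len(kernel_sizes)
    r, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        r += (k - 1) * jump
        jump *= s
    return r

print(receptive_field([3]))         # 3  -> one 3x3 layer
print(receptive_field([3, 3]))      # 5  -> two stacked 3x3 layers
print(receptive_field([3, 3, 3]))   # 7  -> three stacked 3x3 layers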


Edited:

I think I found the answer. BUT I still don't understand it. In fact, I am more confused than ever.

",12242,,2444,,10/8/2021 12:12,10/9/2021 11:50,How can 3 same size CNN layers in different ordering output different receptive field from the input layer?,,3,0,,,,CC BY-SA 4.0 5110,1,,,1/23/2018 22:19,,3,494,"

I want to develop (in Java) a voice plugin for Eclipse on a Mac that helps me jot down high-level classes and stub methods. For example, I would like to command it to create a class that inherits from X and add a method that returns String.

Could somebody help me point out the right material to learn to achieve that?

I don't mind using an existing solution if one exists. As far as I understand, I would have to use some Siri interface and use NLTK to convert the natural-language text into commands. Maybe there's some chatbot library that saves me some boilerplate NLP code, so I can jump directly to writing a grammar or selecting sentence patterns.

",12303,,2444,,5/15/2020 23:24,5/15/2020 23:24,Is there an AI system that automatically generates classes and methods by giving it voice commands?,,2,0,,,,CC BY-SA 4.0 5111,1,,,1/24/2018 1:46,,9,2973,"

I'm going through Andrew NG's course, which talks about YOLO, but he doesn't go into the implementation details of anchor boxes.

After having looked through the code, each anchor box is represented by two values, but what exactly are these values representing?

As for the need for anchor boxes, I'm also a little confused about that -- As far as I understand, the ground truth labels have around 6 variables :

  1. $P_o$ checks if it's an object or background,
  2. $B_x$ and $B_y$ are the center coordinates
  3. $B_h$ and $B_w$ are the height and width of the box
  4. $C$ is the object class, which depends on how many classes you have, so you can have multiple $C$

As for creating the bounding box,

$B_h$ is divided by 2, with one half from the center points ($B_x, B_y$) to the top, and the other half to the bottom.

If we train our classifier, wouldn't the predicted boxes get close to the ground truth labels as training progresses? So, if our ground truth labels have tall, narrow boxes for some images, and short, wide boxes for other images, wouldn't our classifier automatically learn to differentiate when to use one over the other as it is being trained? If so, then what is the use of anchor boxes? And what do the values associated with each anchor box represent?

",3460,,2444,,1/28/2021 23:20,1/28/2021 23:20,"In YOLO, what exactly do the values associated with each anchor box represent?",,1,1,,,,CC BY-SA 4.0 5114,1,,,1/24/2018 19:31,,5,12675,"

Before I start, I want to let you know that I am completely new to the field of deep learning! Since I need a new graphics card either way (gaming, you know), I am thinking about buying the GTX 1060 with 6GB or the 1070 Ti with 8GB. Because I am not rich (basically, I am a pretty poor student ;) ), I don't want to waste my money. I don't need deep learning for my studies; I just want to dive into this topic out of personal interest. What I want to say is that I can wait a little bit longer and don't need the results as quickly as possible.

Can I do deep learning with the 1060 (6GB seem to be very limiting, according to some websites) or the 1070 ti? Is the 1070 ti overkill for a personal hobby deep learner?

Or should I wait for the new generation Nvidia graphics card?

",12324,,2444,,6/1/2020 14:09,6/1/2020 14:09,Can I do deep learning with the 1060 or the 1070 ti?,,3,1,,6/1/2020 14:09,,CC BY-SA 4.0 5115,1,5178,,1/24/2018 21:38,,4,182,"

Imagine two languages that have only these words:

Man = 1,
deer = 2, 
eat = 3,
grass = 4 

And you would form all sentences possible from these words:

Man eats deer.
Deer eats grass.
Man eats.
Deer eats.

German:

Mensch = 5,
Gras = 6, 
isst = 7, 
Hirsch = 8

Possible german sentences:

Mensch isst Hirsch.
Hirsch isst Gras.
Mensch isst.
Hirsch isst.

How would you write a program that would figure out which words have the same meaning in English and German?

It is possible.

All words get their meaning from the information in which sentences they can be used. The connection with other words defines their meaning.

We need to write a program that would recognize that a word is connected to other words in the same way in both languages. Then it would know those two words must have the same meaning.

If we take the word ""deer"" (2) it has this structure in English

1-3-2
2-3-4

In german (8):

5-7-8
8-7-6

We get the same structure (pattern) in both languages: both 8 and 2 lie in first and last position, and the middle word is the same in both languages, the other word is different in both languages. So we can conclude that 8=2 because both elements are connected with other elements the same way.
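
A rough sketch of what I mean by matching the connection patterns, using the toy numbering above (this is purely illustrative and only works for this tiny example):

from collections import Counter

def signature(word, sentences):
    # Where does the word occur: (position in sentence, sentence length), over all sentences
    sig = Counter()
    for s in sentences:
        for pos, w in enumerate(s):
            if w == word:
                sig[(pos, len(s))] += 1
    return frozenset(sig.items())

english = [[1, 3, 2], [2, 3, 4], [1, 3], [2, 3]]
german = [[5, 7, 8], [8, 7, 6], [5, 7], [8, 7]]

eng_words = {w for s in english for w in s}
ger_words = {w for s in german for w in s}

matches = {e: g for e in eng_words for g in ger_words
           if signature(e, english) == signature(g, german)}
print(matches)   # 1->5, 2->8, 3->7, 4->6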

Maybe we just need to write a very good program for recognizing analogies and we will be on the right track to creating AI?

",12251,,2444,,1/29/2020 11:28,1/29/2020 11:28,How to figure out which words have the same meaning in two different languages?,,4,4,,,,CC BY-SA 4.0 5116,1,5124,,1/24/2018 23:21,,3,735,"

As many papers point out, for a better learning curve of a NN, it is better for a data set to be normalized in such a way that the values match a Gaussian curve.

Does this process of feature normalization apply only if we use the sigmoid function as the squashing function? If not, what standard deviation is best for the tanh squashing function?

",12327,,,user9947,1/25/2018 19:40,1/25/2018 19:40,Data-set values feature scaling: sigmoid vs tanh,,2,0,,,,CC BY-SA 3.0 5117,2,,5114,1/25/2018 2:18,,7,,"

Regarding specific choices I can't recommend, but if you are completely new, you should probably learn/code some more until you get a GPU. There is a lot to learn in machine learning before GPU speedups make a significant difference, and until then doing the computations on any old CPU would be just fine, especially if you are just starting since you won't be doing anything too complex. You will know when computational resources are your main bottleneck, and until then it shouldn't really matter too much.

Or, you could also rent computing power from say, AWS or Google

",6779,,,,,1/25/2018 2:18,,,,0,,,,CC BY-SA 3.0 5118,2,,5116,1/25/2018 3:19,,0,,"

Yes it applies, and no it shouldn't matter that much between the two activation functions.

",6779,,,,,1/25/2018 3:19,,,,1,,,,CC BY-SA 3.0 5119,2,,5114,1/25/2018 8:54,,3,,"

I don't think you need to invest in any kind of GPU unless you're familiar with the computations required for the task you want to achieve using deep learning.

Also, by the time you've sufficiently mastered Deep Learning to a point where you can actually make the most of your GPU, there will be new products in the market.

So until then I suggest you use your CPU for doing little tasks such as Regression etc. You can always use the free credit offered by the various cloud companies for your tasks

",10118,,1671,,1/26/2018 21:27,1/26/2018 21:27,,,,1,,,,CC BY-SA 3.0 5120,2,,5115,1/25/2018 11:07,,1,,"

Isn't this what Word2Vec and other word-embedding techniques already do? ""You shall know a word by the company it keeps"" is an idea that has been around for some time now.

",12337,,,,,1/25/2018 11:07,,,,1,,,,CC BY-SA 3.0 5121,1,5122,,1/25/2018 12:20,,-1,2940,"

I have the following problem, which I am unable to solve.

A neural network with the following structure is given: 1 input neuron, 4 elements in the hidden layer, 1 output neuron.

The output neuron is bipolar, the neurons in the hidden layer are linear.

The weights between the input neuron and the neurons in the hidden layer have the following values: $w_{11} = -3, w_{12} = 2, w_{13} = -1, w_{14} = 0.5$, while between the neurons in the hidden layer and the output neuron: $w_{21} = +2, w_{22} = -0.5, w_{23} = -3, w_{24} = +1$ (no threshold input in either layer).

What will the network response be like if the number 3 is given to the input neuron?

",12341,,-1,,6/17/2020 9:57,3/7/2020 6:42,How can I manually calculate the output a specific neural network given some input?,,1,1,,4/17/2022 4:44,,CC BY-SA 4.0 5122,2,,5121,1/25/2018 12:50,,2,,"

Given that the neurons are linear in the hidden layer of the neural network, so the output is just the dot product of the weights and the input. To put things in perspective, generally, we use an activation function (sigmoid, signum, etc.), which is applied to the dot product.

Hence, for an input of $3$, the input to node 1 of the hidden layer is $-3 \times 3 = -9$, to node 2 it is $2 \times 3 = 6$, to node 3 it is $-1 \times 3 = -3$, and to node 4 it is $0.5 \times 3 = 1.5$ (basically, I have performed the dot product). Since the neurons are linear, no activation function is applied, and these values are directly propagated to the next layer, the output layer.

The contribution of all the hidden nodes to the output layer is the dot product of the weights with the output of the hidden layer, that is, $-9 \times 2 + 6 \times (-0.5) + (-3) \times (-3) + 1.5 \times 1 = -18 - 3 + 9 + 1.5 = -10.5$. Finally, the output neuron is bipolar, so a bipolar activation (typically the signum function) is applied to it. Since $z = -10.5 < 0$, the output will be $-1$ (if you instead used a sigmoid, you would get a value below $0.5$, conventionally rounded to $0$).
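
To double-check the arithmetic, here is the same forward pass in NumPy (using the sign function as my reading of the bipolar output):

import numpy as np

x = 3.0
w1 = np.array([-3.0, 2.0, -1.0, 0.5])   # input -> hidden weights
w2 = np.array([2.0, -0.5, -3.0, 1.0])   # hidden -> output weights

hidden = w1 * x                          # linear hidden units: [-9, 6, -3, 1.5]
z = np.dot(w2, hidden)                   # -10.5
output = np.sign(z)                      # bipolar output: -1.0
print(hidden, z, output)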

",,user9947,2444,,3/7/2020 6:42,3/7/2020 6:42,,,,0,,,,CC BY-SA 4.0 5123,1,,,1/25/2018 13:54,,2,70,"

I am writing my thesis in the field of (deep) metric learning (DML). I am training a network in the fashion of contrastive / triplet Siamese networks to learn similarity and dissimilarity of inputs. In this context, the ground truth is commonly expressed as a binary. Let's take an example based on the similarity of species:

  • Image A: german shepard (dog)
  • Image B: siberian husky (dog)
  • Image C: turkish angora (cat)
  • Image D: gray wolf (wolf)

Image A and B are similar: same species, same sub-species (canis lupus) -> 1.0 == TRUE

Image A and C are dissimilar: different species (canis lupus vs. felis silvestris) -> 0.0 == FALSE

Image A and D ? same species, but different sub-species -> 0.8

Which metric learning approaches use a continuous ground truth for learning?

I could imagine that there is a lot of research out there using a continuous ground truth in classification settings. For instance to learn that the expression of a face is ""almost (60%) happy"", or more controversial, an image of a person depicts a ""70% attractive person"". Also in this fields I would be happy for hints / links.

Remarks:

  • I don't ask for opinions on whether this makes sense or not.
",12345,,12345,,12/27/2019 10:27,12/27/2019 10:27,Continuous ground truth in supervised (metric) learning?,,0,1,,,,CC BY-SA 4.0 5124,2,,5116,1/25/2018 15:08,,0,,"

It has little to do with activation functions.

Say you have a 2 input Neural Net with 1 node in the hidden layer and 1 node in the output layer all with the sigmoid activation function.

Suppose we are solving a two-class (bipolar) classification problem, and one of the inputs is on the order of 10^4 while the other is on the order of 10 only. The neural net will propagate these values to the output layer via the hidden layer. You get an error delta, which is propagated back to the input layer.

Now, as per the gradient descent rule (if you look at the formula), the weight update is directly proportional to delta * x_i, where x_i is the i-th input. Since delta, the net error, is the same for both inputs, the NN first has to decrease/increase the weights of the connections just to bring things to scale, and only after that does the real learning we are interested in occur. Also, intuitively, at the beginning, if we use random weights, the input that is larger will contribute more to the output; it basically dictates the output. As the NN learns, it reduces the weight of this large input to counterbalance its high value. But if you do this scaling at the beginning, voila! the NN will train much faster.

It's like you have 2 kids, one naughtier than the other. You leave them alone at home, and one of them breaks something (the delta). Since you have no idea who did it, you attribute the delta to them equally. But as you learn about the kids' natures, your view (the weighting) of who breaks things when you are not at home changes. Normalization is basically someone warning you in advance which kid is naughtier, so you are able to take a balanced viewpoint from the beginning. (A very bad example, but I could not think of a better one.)

This example uses only 2 inputs, so the gain might not seem like much, but in real life it will be, when there are a large number of inputs with large differences in magnitude between them.

I may have missed something or other mathematical subtleties and I will be grateful if someone points them out.

",,user9947,,user9947,1/25/2018 18:42,1/25/2018 18:42,,,,0,,,,CC BY-SA 3.0 5125,1,,,1/25/2018 16:52,,1,107,"

Will it be possible to model the problem of odd-even distinction of an integer (not binary string representation) using neural networks?

",11835,,,,,1/25/2018 16:52,Modelling odd-even distinction of an integer with neural networks,,0,5,,,,CC BY-SA 3.0 5126,1,,,1/25/2018 18:31,,4,167,"

I am training an ANN for classification between 3 classes. The ANN has an input layer, one hidden layer and a 3 node output layer.

The problem I am facing is that the outputs produced by the 3 output nodes are so close to 1 (for the first few iterations at least, so I assume the problem propagates to future outputs as well) that the weights are not being updated (or hardly updated) due to overflow (updates of about $10^{-11}$). I can fix the overflow problem (but I don't think it is the culprit). I think such low values of the error are the main culprit, and I cannot figure out what is causing such low values of the error.

What will cause the network to behave more responsively, that is, how will I be actually able to grasp the weight updates and not something in the order of $10^{-11}$?

The data set contains values on the order of $10$s, and the randomly initialized weights are in the range $0 < w < 1$. I have tried feature normalization, but it is not very effective.

",,user9947,2444,,4/12/2019 21:59,4/12/2019 21:59,What are some concrete steps to deal with the vanishing gradient problem?,,1,0,,,,CC BY-SA 4.0 5133,2,,2201,1/26/2018 5:53,,1,,"

Neural networks are a very good approach for robots. The main function of a neural net is to model the interdependence between all the features. This can be done manually by selecting possible combinations of features up to a certain degree, but this approach has drawbacks:

  • It is tedious to go about selecting features.
  • It costs time and additional computer resources to calculate the values of the new features you have introduced.
  • Since you cannot visualize data more than 3-D you cannot be absolutely sure that your selected features are enough to model your problem.

Now, if you use an NN, the NN will automatically select the combinations of features (provided it has enough hidden nodes) by adjusting the weights of the connections between the features and the nodes. The main advantages of this approach are:

  • You don't have to manually select the feature combinations.
  • If data is still not fitting you can easily increase or decrease the number of nodes without needing to modify the whole network.
  • Also it will be computationally efficient since you don't have to calculate values of factors that don't matter to the problem.

Hope this is what you were looking for!

",,user9947,,,,1/26/2018 5:53,,,,0,,,,CC BY-SA 3.0 5134,1,5340,,1/26/2018 12:45,,1,247,"

My goal is to build a neural net that can find patterns between a hash and a word on its own, so that it returns the word for any hash that I input.

Unfortunately, my skills in the area of neural nets aren't advanced, and I want to use this project to learn more. So I take a German dictionary (which I first cleaned so that every word is on its own line), encode it via one-hot encoding, and generate the SHA-256 value of every word in it. So I got a big array with the shape of 20000x20000 for the words and another for the hashes.

I then used an example from the Keras homepage for binary classification, because the one-hot values are represented by ones and zeros.

If I want to predict from a hash, I get this error: Error when checking : expected dense_1_input to have shape (20000,) but got array with shape (1,). So I don't know whether this model works for my problem, but I couldn't convert one hash into a size of 20000x20000 (the hash is one-hot encoded for that prediction). So how could I get it to accept differently shaped hashes, or just one hash?

Is there a way to train the model with each hash one after another, for example with a for loop?

EDIT: I figured out that I can convert a list of characters into a numpy.array with 2 dimensions. So I one-hot encoded every character and created a list of them, and I passed this list into np.array(words, ndim=2). I did the same for my hashes. When I run the code I get this error: ValueError: setting an array element with a sequence. So I tried to reshape the array with the .reshape(20000) command, but nothing changed. What should I do about that?

EDIT2: I figured out now that the problem is that the one-hot encoding generates differently sized ""arrays"" for each word, and if I put these into a real array and feed it into a neural net, it has to return this error. But the question remains: how do I convert single words and hashes into a format that I can train a neural net with and get useful output, so I can enter any hash and it returns some kind of word (label)? If you need the actual code, please let me know and I will upload its current state. Code:

from keras.models import Sequential
from keras.layers import Dense, Dropout
import pandas as pd

model = Sequential()
model.add(Dense(64, input_shape=(20000,), activation='relu'))   # input_shape must be a tuple
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(19957, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(""Fitting data..."")
model.fit(test_hashs, test_words, epochs=10, batch_size=128, verbose=1)

# Ask for a hash, one-hot encode it, and run a prediction on it
train_y = input(""Input a hash that is not contained in the training data: "")
train_y = pd.Series(train_y)
test_y = pd.get_dummies(train_y)
model.save(""first_test"")
# model.evaluate would also need the true word encoding as a second argument,
# so only the prediction is run here
prediction = model.predict(test_y, verbose=1)
for i in prediction:
    print(i)
",12367,,,user4138,4/17/2018 20:44,4/17/2018 20:44,Keras pattern finding between hash and word,,1,0,,,,CC BY-SA 3.0 5135,2,,5114,1/26/2018 13:23,,5,,"

Given that you're a student doing this out of personal interest and wanting to do some gaming on the side, I'd suggest the GTX 1060 6GB since at present the GTX 1070Ti is overpriced due to crypto miners (this will date the answer, but for reference the 1060 is going for ~GBP340, the 1070Ti for ~GBP600; two other options are the 1050Ti 4GB for ~GBP160 or the vanilla 1080 at ~GBP650).

'Which GPU...' by Tim Dettmers is very helpful, as is 'Picking a GPU...' by Slav Ivanov, especially the summaries at the end for different use cases. As you're not looking at spending a huge amount of money, the 1060 seems like a good compromise as the 1050Ti might just leave you with a disappointing gaming experience. Finding a used 1070 is also suggested, but you'd need to be comfortable with that.

Other answers have mentioned the cloud, but that doesn't help with your gaming. If you want to save some cash while you're waiting for the next gen of cards, take advantage of your student status on AWS educate or Azure on MS Imagine - the GitHub student dev pack is a good package.

",9091,,,,,1/26/2018 13:23,,,,0,,,,CC BY-SA 3.0 5136,2,,3964,1/26/2018 14:25,,1,,"

For others who were wondering the same question as me, I'll answer it.

My view above was inconsistent. Ultimately, the last layer of a simple feed-forward network doesn't have any special properties that previous layers don't exhibit. NNs are just glorified mathematical functions: they distort space with linear (matrix multiply) and non-linear functions.

There's no 'decision plane' per se, only function mappings up to the very end, where we want to map the input (in this case, a binary classification problem) to two separate numbers (usually 1 or 0).

Hope this clears things up for people getting into NNs.

",9271,,,,,1/26/2018 14:25,,,,4,,,,CC BY-SA 3.0 5137,1,,,1/26/2018 14:47,,1,90,"

I am new in Machine Learning. I have taken a course in vision and we are required to do a project.

I am thinking of data mining medical lab report images. My code must take an image (a JPG file) and then extract important information from it, like the lab where the test was done, the patient name, the test type and, more importantly, the various data values like haemoglobin, RBC, etc., in the case of a blood test report.

I can build an OCR, but the problem I am stuck at is data that generally forms a table-like structure. So I want to find that tabular structure, on which I can then just apply matrix extraction to find the various values.

I'm looking for assistance with two basic things:

  1. Is my approach of finding tables and then extracting data correct? If yes, can you point out some good papers or implementations for finding tabular structure? (P.S.- Don't mention tabular)

  2. Any approach which is state-of-the-art or good? (Paper or implementation)

",12370,,1671,,1/26/2018 20:36,1/26/2018 20:36,Data extraction from medical reports,,0,0,,,,CC BY-SA 3.0 5138,1,5154,,1/26/2018 20:20,,2,221,"

I'm relatively new to AI, and I've tried to create one that ""speaks"". Here's how it works (a simplified sketch of the same steps follows the list):

1. Get training data e.g 'Jim ran to the shop to buy candy'
2. The data gets split into overlapping 'chains' of three e.g ['Jim ran to', 'ran to the', 'to the shop', 'the shop to'...]
3. User enters two words
4. Looks through the chains to find if the two words have been seen before.
5. If they have, finds out which word followed it and how many times.
6. Work out the probability e.g: if 'or' followed the two words 3 times, 'because' followed the two words 1 time and 'but' followed it 1 time it would be 0.6, 0.2 and 0.2
7. Generate a random decimal
8. If the random decimal is in the range of the first word (0 - 0.6) pick that one or if it's in the range of the second word (0.6 - 0.8) pick that word or if it's in the range of the third (0.8 - 1) pick that word
9. Output the word picked
10. Repeat from 4 but with the new last two words e.g if the last words had been 'to be' and it picked 'or' the new last two words would be 'be or'.
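
Here is that simplified sketch in Python (this is not my actual code, just the mechanism described above):

import random
from collections import defaultdict, Counter

def train(text, order=2):
    # Step 2: build overlapping chains of (two words -> next word) counts
    words = text.split()
    chains = defaultdict(Counter)
    for i in range(len(words) - order):
        chains[tuple(words[i:i + order])][words[i + order]] += 1
    return chains

def generate(chains, seed, length=30):
    # Steps 4-10: repeatedly sample the next word in proportion to how often it followed
    out = list(seed)
    for _ in range(length):
        counts = chains.get(tuple(out[-2:]))
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return ' '.join(out)

chains = train('Jim ran to the shop to buy candy')
print(generate(chains, ('Jim', 'ran')))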

It does work, but it doesn't stick to a particular topic. For example, after training with 800 random Wikipedia articles:

In the early 1990s the frequency had a plastic pickguard and separate hardtail bridge with the council hoped that the bullet one replaced with the goal of educating the next orders could revert to the north island or string of islands in a new urban zone close to the west.

As you can see the topic changes many times mid-sentence. I thought of increasing the number of words it considered from two to three or four, but I thought it might start simply quoting the articles. If I'm wrong please tell me.

Any help is greatly appreciated. If I haven't explained clearly enough or you have any questions please ask.

",12376,,4302,,10/8/2018 12:17,10/8/2018 12:17,How can I improve this word-prediction AI?,,1,2,,,,CC BY-SA 3.0 5153,1,,,1/26/2018 23:32,,2,158,"

I have seen people using stacked softmax layers right at the output of neural networks designed for classification. I'm trying to understand this. Does it give any additional value? I think this could ""sharpen"" decisions on the boundaries.

model.add(Dense(10, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))

Seen here.

",7364,,1671,,1/27/2018 22:09,1/27/2018 22:09,Stacked softmax layers before output,,0,1,,,,CC BY-SA 3.0 5154,2,,5138,1/26/2018 23:34,,1,,"

Seems like recurrent neural networks (RNN) should work for your use case. An excellent introduction is available at: The Unreasonable Effectiveness of Recurrent Neural Networks

",10287,,,,,1/26/2018 23:34,,,,0,,,,CC BY-SA 3.0 5155,1,5164,,1/27/2018 7:43,,3,96,"

Is there any previous work on computing some sort of prominence score based on the prevalence of features in an image?

For example, let's say I am classifying images based on whether or not they have dogs in them. Is there a way to compute how prominent that feature is?

",9608,,2444,,3/13/2020 19:36,3/13/2020 19:36,Is there a way of computing a prominence score based on the prevalence of features in an image?,,1,0,,,,CC BY-SA 4.0 5156,1,,,1/27/2018 10:50,,3,108,"

At some point during evolution, because of some factors, some beings first started to become conscious of themselves and their surroundings. That conscious experience goes beyond mere trained sensory reflex actions. Could the same be possible with AI?

",3015,,1671,,1/27/2018 22:09,1/27/2018 22:09,Can the first emergence of consciousness in evolution be replicated in AI?,,2,0,,,,CC BY-SA 3.0 5157,2,,5156,1/27/2018 13:53,,3,,"

Current limitations in our knowledge mean that the question is not directly answerable:

  • There is no scientific consensus on what consciousness is. Therefore any device designed to ""be conscious"" is necessarily going to be built on the premise of unsupported, maybe fringe, theory.

  • There is no robust measure of consciousness. If any AI system was built in order to exhibit conscious behaviour, there would be no way to prove it is conscious. There is no general agreement or theory on whether any particular animal species is conscious for example. This is often limited by communication. Of the few animals smart enough to be trained in communication with humans, there appears to be conscious behaviour. Researcher opinion ranges from ""all non-humans do not possess consciousness"" to ""all animals have some degree of consciousness"".

  • There is incomplete understanding of what the components of consciousness are. A bottom-up build of a conscious machine requires a baseline theory of what those components are.

  • We may be able to ignore lack of knowledge and take a very high level of abstraction, such as A-life or evolutionary approach where nothing is assumed and the hope is that consciousness would spontaneously emerge from a complex enough simulation (as we assume it has done with organic life in the real world). However, this would seem to require many orders of magnitude more computing power than is currently possible.

To answer the question as written:

Can the first emergence of consciousness in the Evolution be replicated in AI?

Despite the many books, articles and posts written on this subject over many years, the answer is two-fold:

  • We do not know of any fundamental reason why AI could not be conscious.

  • We have no theory or experimental proof that AI can replicate consciousness.

I would go further than this, and say that anyone who tells you otherwise on these two points has already subscribed to some unproven theory of consciousness.

As well as well-thought-out peer-reviewed theories and experiments by scientists and researchers, there is a lot of pseudo-scientific junk published on the internet on this subject. So take care if researching reading material.

",1847,,1847,,1/27/2018 13:59,1/27/2018 13:59,,,,1,,,,CC BY-SA 3.0 5158,2,,5126,1/27/2018 16:12,,3,,"

There is no single answer to the vanishing gradient problem. However, there are a few things that can help.

As mentioned in the comments, using Rectified Linear Units (ReLU) as your activation function can help, since ReLU does not saturate for large neuron inputs.

Next, a careful choice of weight initialization can help avoid saturation as well. See Andrew Ng's Coursera video for details.

Finally, if you are concerned that the scale of your training input is causing issues with training, you can normalize the training examples by subtracting the mean and dividing by the standard deviation. There is a normality assumption here, but this can often help avoid problems where one feature is far out of scale with another. Such scale mismatches usually cause the optimization to bounce between the walls of a long, thin trough, which complicates convergence, but does not cause vanishing gradients.
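
For example, per-feature standardization is a one-liner with NumPy (the data here is purely illustrative):

import numpy as np

X = np.random.rand(100, 5) * 50                  # illustrative training matrix
X_std = (X - X.mean(axis=0)) / X.std(axis=0)     # zero mean, unit variance per feature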

",12383,,,,,1/27/2018 16:12,,,,0,,,,CC BY-SA 3.0 5160,2,,3013,1/27/2018 21:50,,2,,"

There has been a lot of research in cognitive science on the relationship of sleep/dreaming and memory/learning.

I don't know enough about the subject to say if it resembles backprop in spirit, but as BlindKungFuMaster points out, that may be a corollary.

Here's a paper from 2004, Memory Consolidation in Sleep: Dream or Reality?, which concludes that ""there is no compelling evidence to support a relationship between sleep and memory consolidation.""

However, a more recent article from the Harvard Medical School, Sleep Helps Learning, Memory (2015), comes to a different conclusion, citing a 2010 study:

A 2010 Harvard study suggested that dreaming may reactivate and reorganize recently learned material, which would help improve memory and boost performance.

The general idea is that sleep/dreaming is a process where neural connections in the brain are reinforced or suppressed. Following are several papers that touch on that topic. The first one, in particular may be of interest, and references the work of Hinton, although be aware it's over 20 years old, before recent NN breakthroughs:

Neural Networks: Sleep and Memory (Sejnowski, 1995):

Hinton et al.6 have provided an elegant new theoretical framework for creating efficient memory representations in hierarchical neural network models. In this model (Figure 1), the feedback connections generate patterns on the input layers of the network that correspond to the representations at the higher level, when the external inputs to the cortex and feedforward processing have been suppressed. During this generative sleep stage, the strengths of the feedforward synaptic strengths are altered. Conversely, during the awake stage, the feedback connections are suppressed and the sensory inputs drive the feedforward system, during which the weights on the feedback connections can be altered.

Dreaming of a Learning Task is Associated with Enhanced Sleep-Dependent Memory Consolidation (NIH, 2010):

These observations suggest that sleep-dependent memory consolidation in humans is facilitated by the offline reactivation of recently formed memories, and furthermore, that dream experiences reflect this memory processing. That similar effects were not seen during wakefulness suggests that these mnemonic processes are specific to the sleep state.

Learning while you sleep: Dream or reality? (Harvard Medical School, 2012)

Neuroscientists Reveal How The Brain Can Enhance Connections (MIT Tech Review, 2015)

Memory Consolidation Reconfigures Neural Pathways Involved in the Suppression of Emotional Memories (Nature, 2016)

Sleep and the Price of Plasticity: From Synaptic and Cellular Homeostasis to Memory Consolidation and Integration (National Institutes of Health, 2014)

The brain uses REM sleep to cut unneeded connections (Ars Technica, 2017)

The Brain’s Connections Shrink During Sleep (The Atlantic, 2017)

And, just for fun:

The Link Between Dreaming and Learning Is Stronger Than Ever. How Long Until There’s an ‘Inception’-Style Classroom?

",1671,,,,,1/27/2018 21:50,,,,0,,,,CC BY-SA 3.0 5161,2,,2562,1/27/2018 21:57,,2,,"

Regarding Artificial General Intelligence, which does not currently exist and is still highly theoretical, this cannot be determined at this time.

What I would say is that ""strong narrow AI"" has already proven the ability to become ""smarter"" than its creators in specific tasks. (See AlphaGo, etc.)

Under the idea that some form of AGI might come out of an algorithm comprised of an ever-expanding set of strong narrow AIs, it would logically follow that such an automaton could become smarter than its creators in any given task.

",1671,,,,,1/27/2018 21:57,,,,0,,,,CC BY-SA 3.0 5162,2,,5156,1/27/2018 22:07,,0,,"

It partly depends on the framing of the question, in terms of how you are defining consciousness.

Neil Slater's answer is comprehensive, and his warnings about pseudo-science and junk publication should be heeded.

However, since you frame this in the context of ""Can the first emergence of consciousness in evolution be replicated in AI"", I feel like I can provide an answer.

  • If we define rudimentary consciousness as simple awareness of the environment, distinct from higher functions such as self-consciousness, then yes.

Under this definition, any algorithm that takes input is ""conscious"". This in no way represents human-level consciousness, or even the consciousness of higher animals, but is more akin to simple organisms such as microbes.

",1671,,,,,1/27/2018 22:07,,,,1,,,,CC BY-SA 3.0 5164,2,,5155,1/27/2018 22:29,,1,,"

Saliency, rather than prominence, is the term typically used in object detection and scene understanding. There are lots of papers on saliency models. ""What do different evaluation metrics tell us about saliency models?"" is a good paper on the various metrics for saliency models. It covers the following metrics (a short sketch of two of them, CC and KL, follows the list):

  1. Similarity or histogram intersection (SIM)
  2. Pearson's Correlation Coefficient (CC)
  3. Normalized Scanpath Saliency (NSS)
  4. Area Under ROC Curve (AUC)
  5. Information Gain (IG)
  6. Kullback-Leibler divergence (KL)
  7. Earth Mover's Distance (EMD)
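
To make the definitions concrete, here is a minimal sketch (assuming numpy and two same-sized, non-negative saliency maps) of two of the listed metrics, CC and KL; the toy random maps are just placeholders:

```python
import numpy as np

def pearson_cc(pred, gt):
    # Flatten the maps and compute Pearson's correlation coefficient
    p, g = pred.ravel(), gt.ravel()
    return np.corrcoef(p, g)[0, 1]

def kl_divergence(pred, gt, eps=1e-8):
    # Normalize both maps to probability distributions, then compute KL(gt || pred)
    p = pred.ravel() / (pred.sum() + eps)
    g = gt.ravel() / (gt.sum() + eps)
    return np.sum(g * np.log(eps + g / (p + eps)))

# Toy example: two random maps of the same size
pred = np.random.rand(32, 32)
gt = np.random.rand(32, 32)
print(pearson_cc(pred, gt), kl_divergence(pred, gt))
```
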

Some other papers you may find interesting:

  1. Saliency, attention, and visual search: An information theoretic approach
  2. Towards Instance Segmentation with Object Priority: Prominent Object Detection and Recognition
",5763,,,,,1/27/2018 22:29,,,,0,,,,CC BY-SA 3.0 5166,2,,3873,1/28/2018 0:27,,1,,"

From the AI in a box literature, it is argued that even just a text interface with the rest of the world is sufficient for an AI to gain total control.

Or, consider the literature regarding phase state changes / dynamical systems / control theory. I don't know if there is a source that directly argues for this, but it's imaginable that, since societal systems are so interconnected, a few controllable free parameters of a system might be sufficient to strongly influence the system as a whole.

So no, restricting influence is not a sufficient guarantee of reducing AI risk. A common saying is that, if we know the goal of some AI, we can't predict how the AI would achieve that goal since we aren't smart enough, but we can predict the eventual outcome (its success).

",6779,,,,,1/28/2018 0:27,,,,0,,,,CC BY-SA 3.0 5167,1,,,1/28/2018 2:59,,2,1405,"

I have started to make a chatbot. It has a list of greetings that it understands and responds to with its own list of greetings.

How could a bot learn a new greeting or a synonym for a word it already knows?

",4173,,2444,,6/25/2019 22:14,6/25/2019 22:18,How could a chat bot learn synonyms?,,3,0,,,,CC BY-SA 4.0 5169,1,,,1/28/2018 7:56,,2,243,"

A human player plays a limited number of games compared to a system that undergoes millions of iterations. Is it really fair to compare AlphaGo with the world #1 player when we know experience increases with the number of games played?

",12394,,1671,,4/14/2018 2:44,4/14/2018 8:23,Is it fair to compare AlphaGo with a Human player?,,5,1,,,,CC BY-SA 3.0 5170,2,,5169,1/28/2018 9:38,,0,,"

Yes, it is. If we ever compare computers to humans, we should take into account the fact that computers can work 24 hours a day, every day, and faster than humans. That is the biggest advantage of computers over humans.

",12251,,,,,1/28/2018 9:38,,,,0,,,,CC BY-SA 3.0 5171,2,,5167,1/28/2018 9:44,,1,,"

There is a pretty simple way: write a program that analyzes large amounts of text. Find sentences that contain the greeting. Then find identical sentences, except that instead of the target word there is another word. The more such examples you find, the higher the probability that the other word is a synonym and not just a word from the same category with a different meaning.
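
A minimal sketch of that idea, assuming the corpus is already split into sentences (loading and tokenizing the corpus are left out):

```python
from collections import Counter

def synonym_candidates(sentences, target):
    # Collect the contexts (sentence with the target word blanked out) in which the target appears
    contexts = set()
    for s in sentences:
        words = s.lower().split()
        if target in words:
            i = words.index(target)
            contexts.add(tuple(words[:i] + ['_'] + words[i + 1:]))

    # Count every other word that appears in exactly the same context
    counts = Counter()
    for s in sentences:
        words = s.lower().split()
        for i, w in enumerate(words):
            if w != target and tuple(words[:i] + ['_'] + words[i + 1:]) in contexts:
                counts[w] += 1
    return counts.most_common()

sentences = ['hello there friend', 'hi there friend', 'hello world']
print(synonym_candidates(sentences, 'hello'))  # 'hi' shares a context with 'hello'
```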

",12251,,,,,1/28/2018 9:44,,,,0,,,,CC BY-SA 3.0 5172,2,,5169,1/28/2018 9:48,,2,,"

Is it fair to compare AlphaGo with a Human player?

Depends on the purpose of the comparison.

If we are comparing ability to win a game of Go, then yes.

If we are comparing learning ability, then maybe. It depends on the task. AlphaGo and systems like it are capable of learning only in well-described limited domains. There may be an analogy with sensory learning (it might even be possible in theory to take a small piece of brain tissue and run an algorithm similar to AlphaGo's learning process on it).

In general, the approach used by AlphaGo and other reinforcement learning successes is ""trial-and-error plus function approximation"". It seems analogous to perception and motor skills, such as object recognition or riding a bike, as opposed to reasoning skills and games as humans play them, which goes through many more cognitive and conscious layers that have no real analog in a RL system like AlphaGo.

A human player plays limited games compared to a system that undergoes millions of iterations

This is an advantage a machine has when learning this kind of task. It would equally apply in other simulated environments with simple rules. If your goal is to have the most skilled and optimal navigation of such a domain, the implication now is that you would not train a human expert through years of study, but write the simulator and train an AlphaGo-like machine.

This is no different a comparison than deciding cars and roads are better solutions to long distance travel for the general population than walking or horses and carts. It doesn't matter what underlies the advantage of one over the other, the assessment is cost/benefit, which resolves to a single comparable number.

It would, however, be wrong to assess AlphaGo as a better general-purpose learning engine than a human. The fact that humans do not have to work fully through millions of simulations in full detail is important. It means that something about how humans learn is still not covered by learning machines. Some of these things are understood and being discussed - such as the ability to focus intuitively on important aspects of what to learn, the ability to reason about the environment, learning analogously or transfer learning from other domains.

",1847,,1847,,1/28/2018 11:28,1/28/2018 11:28,,,,0,,,,CC BY-SA 3.0 5173,2,,3873,1/28/2018 9:49,,0,,"

Here you make the wrong assumption that an AI will have only one goal at a time. Like humans, it will have to keep many goals in mind at all times, follow them, and watch out that a newly assigned goal doesn't conflict with its existing goals.

Your proposal to give the AI the goal ""minimize impact on the world"" is simplistic, as it would be harmful in some situations.

",12251,,,,,1/28/2018 9:49,,,,0,,,,CC BY-SA 3.0 5174,1,,,1/28/2018 9:53,,6,4726,"

I read about minimax, then alpha-beta pruning, and then about iterative deepening. Iterative deepening coupled with alpha-beta pruning proves to be quite efficient compared to alpha-beta alone.

I have implemented a game agent that uses iterative deepening with alpha-beta pruning. Now, I want to beat myself. What can I do to go deeper? Like alpha-beta pruning cut the moves, what other small change could be implemented that can beat my older AI?

My aim is to go deeper than my current AI. If you want to know about the game, here is a brief summary:

There are 2 players, 4 game pieces, and a 7-by-7 grid of squares. At the beginning of the game, the first player places both pieces on any two different squares. From that point on, the players alternate turns moving the pieces like a queen in chess (any number of open squares vertically, horizontally, or diagonally). When a piece is moved, the square that was previously occupied is blocked; that square cannot be used for the remainder of the game. A piece cannot move through blocked squares. The first player who is unable to move any one of the queens loses.

So my aim is to cut the unwanted nodes and search deeper.

",12384,,2444,,2/3/2021 11:13,2/3/2021 11:13,What else can boost iterative deepening with alpha-beta pruning?,,4,1,,,,CC BY-SA 4.0 5176,1,,,1/28/2018 11:08,,-1,84,"

So, I have seen a few pictures re-created by a neural network or some other machine learning algorithm after it has been trained over a data set.

How, exactly is this done? How are the weights converted back into a picture or a memory which a Neural Net is holding?

A real-life example would be how, when we close our eyes, we can easily visualize things we have seen, and based on that we can classify things we see. Now, in a neural net, the classification part is easily done, but what about the visualization part? What does the neural net see when it closes its eyes? And how can that be represented for human understanding?

For example a deep net generated this picture:

SOURCE: Deep nets generating stuff

There can be many other things generated. But the question is how exactly is this done?

",,user9947,,user9947,1/29/2018 11:44,6/29/2018 2:17,How to know what kind of memory is stored in the connection weights?,,2,0,,,,CC BY-SA 3.0 5177,1,,,1/28/2018 11:15,,5,1106,"

What does the following equation mean? What does each part of the formula represent or mean?

$$\theta^* = \underset {\theta}{\arg \max} \Bbb E_{x \sim p_{data}} \log {p_{model}(x|\theta) }$$

",10046,,32410,,3/28/2021 1:27,3/28/2021 1:27,What does the argmax of the expectation of the log likelihood mean?,,1,1,,,,CC BY-SA 4.0 5178,2,,5115,1/28/2018 15:05,,-1,,"

For this example, a function with the following signature will do: TSAI.Analogies.FindAnalogy(List ex1, List ex2, List ex3, out List ex4), where ex1 is to ex2 as ex3 is to ex4, and the task is to figure out ex4.

Fill ex4 with the values from ex2. Then, for every value in ex3, find out to which positions in ex4 this value has to be copied, based on where the value of ex1 at the same position was repeated in ex2.
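
Since the description above is terse, here is a minimal Python sketch of one possible reading of that rule; the integer-list representation and the exact copy rule are assumptions on my part, not an existing library:

```python
def find_analogy(ex1, ex2, ex3):
    # ex1 is to ex2 as ex3 is to the returned list (one possible reading of the rule above)
    ex4 = list(ex2)                      # start from a copy of ex2
    for i, value in enumerate(ex3):
        if i >= len(ex1):
            break
        # wherever ex1's value at this position re-appears in ex2,
        # substitute the corresponding value from ex3
        for j, v in enumerate(ex2):
            if v == ex1[i]:
                ex4[j] = value
    return ex4

# Toy example: ex1 -> ex2 repeats one symbol; the same rule is applied to ex3
print(find_analogy([1, 2], [2, 2, 1], [7, 8]))  # [8, 8, 7]
```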

",12251,,,,,1/28/2018 15:05,,,,0,,,,CC BY-SA 3.0 5179,2,,5177,1/28/2018 19:55,,3,,"

This equation, and more information about it, can be found on the Expectation-Maximization Wikipedia page, where the explanation is given (the formula there is written in two parts):

Some more explanation from the same page:

In statistics, an expectation–maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.

Mathematically, $\Bbb E$ in your equation stands for the expectation value, $p_{model}(x|\theta)$ is the conditional probability of $x$ given the parameters $\theta$, and the subscript $x \sim p_{data}$ indicates that the expectation is taken over samples $x$ drawn from the data distribution. The $\arg \max_{\theta}$ is the value of $\theta$ (the argument) that maximizes the expression.
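
As a concrete, hedged illustration of the formula: if the model is taken to be a unit-variance Gaussian with unknown mean $\theta$, the expectation over $x \sim p_{data}$ can be approximated by an average over samples, and the $\arg\max_\theta$ found numerically (the grid search below is just for illustration):

```python
import numpy as np

# Samples drawn from the (unknown) data distribution
data = np.random.normal(loc=3.0, scale=1.0, size=1000)

def avg_log_likelihood(theta, x):
    # log p_model(x | theta) for a unit-variance Gaussian with mean theta, averaged over samples
    return np.mean(-0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi))

# arg max over a grid of candidate thetas; the best theta approaches the sample mean
thetas = np.linspace(-5, 10, 1501)
best = thetas[np.argmax([avg_log_likelihood(t, data) for t in thetas])]
print(best, data.mean())
```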

",11810,,,,,1/28/2018 19:55,,,,0,,,,CC BY-SA 3.0 5185,1,5188,,1/29/2018 4:48,,0,1117,"

The Generative Adversarial Network (GAN) is composed of a generator $G$ and a discriminator $D$. How do these two components interact? What is the intuition behind the GAN, its purpose, and how it is trained?

",9237,,2444,,11/22/2020 12:33,7/18/2021 13:59,What is the purpose of the GAN?,,1,1,,,,CC BY-SA 4.0 5186,1,,,1/29/2018 6:50,,8,541,"

I recently read that Google has developed a new AI that anyone can upload data to and it will instantly generate models, e.g. an image recognition model based on that data.

Can someone explain to me in a detailed and intuitive manner how this AI works?

",10913,,2444,,7/30/2021 12:32,7/30/2021 12:32,What is an intuitive explanation of how Google's AutoML works?,,1,0,,,,CC BY-SA 4.0 5187,2,,3288,1/29/2018 9:48,,1,,"

I'm not sure what you mean by pairs. But a common pattern for dealing with pair-wise ranking is a siamese network:

Here A and B are a positive/negative pair, and the Feature Generation Block is a CNN architecture that outputs a feature vector for each image (with the softmax cut off); the network is then trained on a distance/regression loss between the two feature vectors. The two branches share the same parameters, so in the end you have one model that can accurately disambiguate between a positive and a negative pair.
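
A minimal PyTorch sketch of the idea; the tiny encoder and the cosine-embedding loss here are illustrative choices on my part, not the only option:

```python
import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature generation block (a tiny CNN; any backbone works)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 32),
        )

    def forward(self, a, b):
        # Both branches use the same parameters
        return self.encoder(a), self.encoder(b)

model = Siamese()
loss_fn = nn.CosineEmbeddingLoss(margin=0.5)  # label +1 for similar pairs, -1 for dissimilar

a = torch.randn(8, 3, 64, 64)
b = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)) * 2 - 1    # random +1 / -1 labels for the toy example
fa, fb = model(a, b)
loss = loss_fn(fa, fb, labels.float())
loss.backward()
print(loss.item())
```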

",12409,,,,,1/29/2018 9:48,,,,0,,,,CC BY-SA 3.0 5188,2,,5185,1/29/2018 10:33,,3,,"

GANs were invented in a bar somewhere in Montreal, Canada. At said bar, the idea was that neural networks could be used for generating new examples from an existing distribution. This was the problem:

Given an input set $X$, can we make a new $x'$ that looks like it should be in $X$?

The classic description of a GAN is a counterfeiter (generator) and a cop (discriminator). The counterfeiter has the same problem, make a piece of paper look like a real currency.

In training a GAN, the input to the generator is random noise, a starting seed so that no 2 results are the same. The generator then makes a new $x'$. The input to the discriminator alternates between an actual $x$ and an $x'$ that the generator made. The discriminator then takes the $x'$ and decides whether it is part of the set $X$. The discriminator is then trained using its answer to ensure that it can properly tell the difference between bad counterfeits and elements of $X$. When the discriminator makes a decision on an $x'$ that the generator made, the generator is updated as well, in order to increase its ability to make new $x'$s that the discriminator will think are in $X$.

Using this simple framework, these 2 networks work against each other (adversarially) to train each other until the generator is making $x'$s so well that the discriminator can't tell the difference between them and the real thing. At this point, the generator can be used to make new pictures of cats or whatever was the goal in the first place.
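
A minimal sketch of that alternating loop in PyTorch; the tiny fully-connected G and D and the 2-D toy data are placeholders just to show the alternation, not a recommended architecture:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))                # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0          # stand-in for samples from X
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the discriminator (cop): real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator (counterfeiter): make D label its output as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```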

The primary usage of $D$ is training $G$ in a (sorta) supervised way, but after $G$ is trained, it isn't needed for generation. That's not to say having a good discriminator isn't useful. There are many applications where that's exactly what you'd want, but, if the goal is generation, then probably not. The generator is trained in a (sorta) supervised fashion because there is actually a label associated with the $x'$ it generates (whether the discriminator could detect that it was a counterfeit or not). Unsupervised methods are used when there are no labels and one wants to understand their data, either by clustering or by finding a good distance function. Refer to this answer to see the difference between supervised and unsupervised.

",4398,,2444,,11/22/2020 12:15,11/22/2020 12:15,,,,1,,,,CC BY-SA 4.0 5190,2,,5176,1/29/2018 21:46,,1,,"

You should use Google for this question; it is extremely vague. Some technical documents will give very clear insight into what's happening here.

https://blog.openai.com/generative-models/

http://proceedings.mlr.press/v37/gregor15.pdf

If you really want to understand ""what the AI is thinking"", well, you may never know. The idea of AI is that it can handle complex data (with a high degree of dimensionality) that is too complex for humans to comprehend.

",1720,,,,,1/29/2018 21:46,,,,1,,,,CC BY-SA 3.0 5191,2,,5110,1/30/2018 0:52,,1,,"

While you can use NLTK for analyzing and parsing the text obtained from the speech to text interface (e.g. Siri), there are higher level APIs available for this. The class of problem you are trying to solve in NLP is ""intent detection"".

There are several open source and commercial APIs available for this, including Amazon Alexa, Google Cloud Natural Language, Azure, as well as libraries like RASA NLU, etc.

The high level flow of your program will be:

  • Record/receive spoken audio
  • Convert audio speech to text
  • Detect intent of the text command using an intent detection library
  • Use the intent to feed a script/automation that generates the code in your IDE
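
For the intent-detection step, here is a minimal sketch using scikit-learn; the utterances, intent labels, and classifier choice are assumptions just to show the shape of the problem, while a production bot would use one of the services or libraries listed above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled set of (utterance, intent) pairs; a real bot would have many more
utterances = ['open the file', 'create a new class', 'rename this variable', 'run the tests']
intents    = ['open_file',     'create_class',      'rename_symbol',        'run_tests']

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)

# The detected intent is then mapped to an IDE automation script
print(clf.predict(['please run all unit tests']))
```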
",10287,,,,,1/30/2018 0:52,,,,0,,,,CC BY-SA 3.0 5192,2,,4186,1/30/2018 7:17,,2,,"

The process of automatically learning a grammar from examples of a language is called grammar induction. Since you mention that L can be ""augmented and changed"", it might be feasible to solve this problem using an adaptive parser.

",2050,,,,,1/30/2018 7:17,,,,3,,,,CC BY-SA 3.0 5193,1,,,1/30/2018 14:53,,3,433,"

I would like to do some practical implementation of a planning algorithm (of course, something a bit simple and easy).

Is there any website where I can pick an algorithm (e.g. A* or hill climbing), code it, and visualize how it works/executes?

The site doesn't necessarily need to be restricted to planning or search algorithms. For example, in the context of machine learning, I would also like to be able to pick the learning algorithm and model (e.g. linear regression), code it, and visualize how it works.

",12438,,2444,,6/20/2020 9:58,6/20/2020 9:58,"Is there any website that allows you to choose an algorithm, code it and visualise how it works?",,2,0,,,,CC BY-SA 4.0 5194,1,,,1/30/2018 23:02,,1,277,"

Let's say we have a cluster of 20-2000 heterogeneous compute nodes. Consider, for example, the parallel solution of the Helmholtz equation. Now we want to distribute the solution process and, to make things easier, we split the problem in a fine-grained way (partial solution of the system matrix). We could train an AI on the time taken to solve each subproblem depending on multiple factors (for example, the size of the mesh, the needed precision, etc.) and let the AI choose the optimal distribution and division of the problem based on the available data.

I'm new to the area of Artificial Intelligence. Are there any open source frameworks which could accomplish this task? How would you estimate the required amount of compute power to train the network?

",12448,,,,,1/30/2018 23:02,Application of Ai to task scheduling problems on heterogenous platforms,,0,0,,,,CC BY-SA 3.0 5195,2,,5169,1/31/2018 0:08,,0,,"

If you read through the abstracts of Chess AI papers, it is often pointed out that humans ""search"" the Chess game tree much more efficiently than computers, which was why it was so hard to beat the top humans in Chess for so many years. (The human efficiency may have to do with intuition and judgement, which are difficult to replicate. ""Confidence levels"" for AI evaluations are one method of addressing these issues, as is ""Monte Carlo"" search. But it's also important to note that humans are far more limited in the depth and breadth of their ""searches"", which is why, now that we have the right algorithms, humans can no longer win.)

Is it fair?

Perhaps the more salient question is:

Is it useful to compare AlphaGo to a human player?

It most certainly is, because it tells us that what we have is sometimes termed a ""strong-narrow AI"": one that can outperform a human in a single task.

The reason AlphaGo beating Lee Sedol was a big deal is the complexity of Go, the intractability of the Go game tree, and the fact that computers were previously ineffective against high-level human Go players.

While this human vs. AI evaluation doesn't strictly fall under the ""Turing Test"" (Imitation Game), it does fall squarely under the maxim of Protagoras that ""Man is the measure of all things.""

This is critical because intelligence is a spectrum, and gauging strength of intelligence, in the context of intractable problems (problems that cannot be fully solved due to their size) is a function of relative strength of two agents, whether human or AI.

This relative assessment is all we have, and all we may ever have for certain sets of problems.

The problem with humans is not that we're not clever, but that our minds have cognitive limitations. So to tackle certain problems, intelligent machines are useful.

",1671,,1671,,2/1/2018 19:24,2/1/2018 19:24,,,,0,,,,CC BY-SA 3.0 5196,2,,5167,1/31/2018 3:04,,2,,"

This answer describes the ""word vector"" toolkit in NLP. Analyzing a large corpus to find words that occur in similar contexts provides dense vectors for each word that can then be used for similarity. For bots, the goal is generally similarity, not exact synonyms. Synonyms can be hard-coded using WordNet if needed. For your greeting question, the following blog post can help: Do-it-yourself NLP for bot developers.
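
A minimal sketch of using such word vectors for greeting similarity; the three-dimensional vectors below are made up purely for illustration, while in practice they would come from word2vec, GloVe, or fastText trained on a large corpus:

```python
import numpy as np

# Toy pre-trained word vectors (in practice: load from word2vec / GloVe / fastText)
vectors = {
    'hello':   np.array([0.9, 0.1, 0.0]),
    'hi':      np.array([0.85, 0.15, 0.05]),
    'goodbye': np.array([-0.7, 0.2, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words whose vectors are close to 'hello' can be treated as greetings the bot understands
query = vectors['hello']
for word, vec in vectors.items():
    print(word, round(cosine(query, vec), 3))
```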

",10287,,2444,,6/25/2019 22:18,6/25/2019 22:18,,,,0,,,,CC BY-SA 4.0 5201,1,,,2/1/2018 5:56,,0,394,"

The number of layers of a DNN and its computational complexity are correlated after optimization, but how can the complexity be estimated before designing the DNN?

",9237,,,,,3/3/2018 13:31,The connection between number of layer of DNN and computational complexity of it,,1,0,,,,CC BY-SA 3.0 5203,1,,,2/1/2018 10:17,,3,226,"

Gold showed that a language can be learned only if it contains a finite set of sentences.

We know that deep neural networks can implement any function. Does this contradict the Gold's result?

What is the relation or difference between the definition of learnability of Vapnik and Gold and the definition of learnability of neural networks?

",9237,,2444,,5/3/2019 15:46,5/3/2019 15:46,What is the relation between the definition of learnability of Vapnik and Gold and learnability of neural networks?,,0,2,,,,CC BY-SA 4.0 5205,2,,5201,2/1/2018 13:10,,1,,"

The computational complexity of DNNs is based on 3 main factors.

  1. Matrix Multiplication
  2. Non linear transformation
  3. Weight sharing

Matrix multiplication is the fundamental operation when computing the forward and backward passes in DNNs trained with back-propagation. Since matrix multiplication becomes more expensive as the matrices get larger, understanding how to construct networks with effectively sized layers, which balance time complexity with accuracy, becomes imperative.

Nonlinear transformations allow DNNs to learn nonlinear functions. This is a very important aspect and has been studied rigorously. Classically, the function f(x) = 1/(1 + exp(-x)) was used to squash outputs from linear layers into a nonlinear output. However, this function has recently been replaced in many applications by the rectified linear unit (ReLU), f(x) = max(x, 0). The ReLU is much faster to compute and doesn't seem to affect the end performance substantially, or even noticeably, in some situations.

Weight sharing is the idea that some weights in a DNN must share the same value. Beyond the theoretical reasons why this is chosen, it also decreases the number of values that must be updated when performing back-propagation. This is the reason why convolutional NNs are orders of magnitude faster than their non-convolutional counterparts for image recognition tasks.

There are other things to be aware of when trying to analyze the computational complexity of DNNs but they usually relate to one of the 3 items above.

To estimate the complexity, count the number of matrix multiplications and their matrix sizes, add time for the nonlinearities, and you should have a pretty good estimate.
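
A rough sketch of such an estimate for a plain fully-connected network; the factor of 2 for multiply-adds and the one-op-per-unit cost assigned to the nonlinearity are simplifying assumptions:

```python
def estimate_flops(layer_sizes, batch_size=1):
    # layer_sizes, e.g. [784, 256, 128, 10], describes a fully-connected network
    flops = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        flops += 2 * batch_size * n_in * n_out   # multiply-adds for the matrix multiplication
        flops += batch_size * n_out              # one cheap op per unit for the nonlinearity
    return flops

print(estimate_flops([784, 256, 128, 10], batch_size=32))
```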

",4398,,,,,2/1/2018 13:10,,,,0,,,,CC BY-SA 3.0 5206,2,,5169,2/1/2018 13:46,,0,,"

There is no such thing as fairness when comparing. You define a measure for performance and then compare the values of the measure.

One sensible measure for playing the game of GO is the 'Number of games won', regardless of any investment in the development of the system, computational or sample efficiency. AlphaGo is currently at the top by this measure.

Another sensible measure could be 'Number of games won under a restriction on sample efficiency during training'. As others pointed out, such a measure could be much more favorable for humans.

",12483,,12483,,2/2/2018 8:27,2/2/2018 8:27,,,,0,,,,CC BY-SA 3.0 5208,1,5209,,2/1/2018 20:07,,2,582,"

I've written a single perceptron that can predict whether a point is above or below a straight-line graph, given the correct training data and using a sign activation function.

Now, I'm trying to design a neural network that can predict whether a point $(x, y)$ is in the 1st, 2nd, 3rd or 4th quadrant of a graph.

One idea I have had is to have 2 input neurons, the first taking the $x$ value, the 2nd taking the $y$ value, these then try and predict individually whether the answer is on the right or left of the centre, and then above or below respectively. These then pass their outputs to the 3rd and final output neuron. The 3rd neuron uses the inputs to try and predict which quadrant the coordinates are in. The first two inputs use the sign function.

The problem I'm having with this is to do with the activation function of the final neuron. One idea was to have a function that somehow scaled the output into a value between 0 and 1, so 0 to 0.25 would be quadrant 1, and so on up to 1. Another idea would be to convert it to a value using sin and represent it as a sine wave, as this could potentially represent all 4 quadrants.

Another idea would be to have a single neuron taking the input of the $x$ and $y$ value and predicting whether something was above or below a graph (like my perceptron example), then having two output neurons, which the 1st output neuron would be fired if it was above the line and then passed in the original $x$ coordinate to that output neuron. The 2nd output neuron would be fired if it was below, then pass in the original $x$ value, as well to determine if it was left or right.

Are these good ways of designing a neural network for this task?

",11795,,2444,,12/8/2021 20:28,12/8/2021 20:35,How to design a neural network to predict the quadrant where a given point lies?,,1,0,,,,CC BY-SA 4.0 5209,2,,5208,2/2/2018 6:27,,1,,"

First of all, what you are trying to do can be achieved by simple logical programming. Secondly, you are making things overly complex.

$1$ node in a neural network can predict as many values as you would like it to, as it outputs a real number no matter the activation function. It is us who round off the number to $0$ or $1$, or maybe $0, 0.5, 1$, depending on the classification job at hand.

$2$ neurons can predict $4$ classes $00, 01, 10, 11$ depending on how you use it.

Now, in your specific problem, 2 neurons are sufficient, since not only can you decompose the problem into two separate problems, both the problems are linearly separable too. This is not always the case or it may be too complex to find linearly separable classes, so, in general, one output node is reserved for $1$ class only. If it is that class only that output node is activated while the rest is $0$. Memory and process consuming, but easier.

Coming to your first approach: I don't exactly know how to solve a 4-class classification problem with a single output node; a minimum of 2 nodes is required. The problem with your approach is that, if you think critically, your hidden nodes are not doing anything other than scaling the inputs, so almost the same input is propagated, and then you are asking a single node to solve a 4-class classification problem, which is not linearly separable (I think). $\sin$ will not be able to separate it either.

The second idea is better. Just remove the hidden node and you will be able to classify the point based on the $00, 01, 10, 11$ outputs (a minimal sketch of this two-output scheme follows).
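
A minimal sketch of that two-output scheme, where each output is just a sign unit on one coordinate, which is why no hidden layer is needed; the mapping from the two-bit code to a quadrant number is an illustrative choice:

```python
def quadrant(x, y):
    # Two 'output nodes': sign of x and sign of y, giving the 4 codes 00, 01, 10, 11
    right = int(x > 0)   # 1 if the point is to the right of the y-axis
    up = int(y > 0)      # 1 if the point is above the x-axis
    return {(1, 1): 1, (0, 1): 2, (0, 0): 3, (1, 0): 4}[(right, up)]

for point in [(2, 3), (-1, 5), (-2, -2), (4, -1)]:
    print(point, quadrant(*point))
```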

That said, this problem can be solved by simple logical computation, and the neural network is basically doing nothing beyond that.

",,user9947,2444,,12/8/2021 20:35,12/8/2021 20:35,,,,2,,,,CC BY-SA 4.0 5210,1,,,2/2/2018 11:24,,0,114,"

I had this idea of training for example a CNN on images, and having output branches at several of its intermediate layers. The early layers' output branch might then predict high-level class of detected objects (supposedly able to do this because less info is needed for a high-level classification than a very specialised one), and the later layers giving more detailed labels of the sub-class of the earlier high level class.

I have been searching for research on this type of setup but couldn't really find anything. Is there a name for this idea, or is this an open question/idea?

",2522,,,,,2/3/2018 15:46,Is there any research on neural networks with multiple outputs for hierarchical label classification?,,1,0,,,,CC BY-SA 3.0 5211,1,,,2/2/2018 15:40,,5,186,"

If I train a speech recognition model using data collected from N different microphones, but deploy it on an unseen (test) microphone - does it impact the accuracy of the model?

While I understand that theoretically an accuracy loss is likely, does anyone have any practical experience with this problem?

",12502,,,,,2/10/2018 18:48,Can variations in microphones used in training set and test set impact the accuracy of speech recognition models?,,2,2,,,,CC BY-SA 3.0 5213,2,,5193,2/2/2018 16:06,,1,,"

Not planning, but this is a visual in-browser neural network for your interest:

http://playground.tensorflow.org/

",6779,,,,,2/2/2018 16:06,,,,0,,,,CC BY-SA 3.0 5217,2,,5211,2/2/2018 19:29,,3,,"

Yes, it can. However, other differences between training and test data with audio could have a greater effect:

  • Identity of the speaker (including effects from gender, age, physical build, local accent, amongst others)

  • Acoustics of the recording environment (including proximity to the microphone, size of space, presence of hard surfaces, background noise)

If any of these may vary from your training data, then it becomes harder to predict your generalised accuracy during training and early model selection.

One possibility is to ensure your cross-validation set (which you absolutely should have) also separates data out by things that will vary from training to test. So instead of random train/cv split, you split by data that is key for generalisation. This is sometimes called a stratified train/test split.

If your only concern is variation in microphone, then split your train/cv sets by microphone type. You will get a better assessment early on in the model selection process how well the training is generalising, and can focus your search on models that do well despite this expected difference.

",1847,,1847,,2/2/2018 19:39,2/2/2018 19:39,,,,0,,,,CC BY-SA 3.0 5220,1,5222,,2/2/2018 22:41,,1,637,"

The match got a lot of press, and I doubt anyone is surprised that Alpha Zero crushed Stockfish.

See: AlphaZero Destroys Stockfish in 100 Game Match

To me, what's really salient is that ""much like humans, AlphaZero searches fewer positions that its predecessors. The paper claims that it looks at ""only"" 80,000 positions per second, compared to Stockfish's 70 million per second.""

For those who remember Matthew Lai's GiraffeChess:

However, it is interesting to note that the way computers play chess is very different from how humans play. While both humans and computers search ahead to predict how the game will go on, humans are much more selective in which branches of the game tree to explore. Computers, on the other hand, rely on brute force to explore as many continuations as possible, even ones that will be immediately thrown out by any skilled human. In a sense, the way humans play chess is much more computationally efficient - using Garry Kasparov vs Deep Blue as an example, Kasparov could not have been searching more than 3-5 positions per second, while Deep Blue, a supercomputer with 480 custom ”chess processors”, searched about 200 million positions per second 1 to play at approximately equal strength (Deep Blue won the 6-game match with 2 wins, 3 draws, and 1 loss).

How can a human searching 3-5 positions per second be as strong as a computer searching 200 million positions per second? And is it possible to build even stronger chess computers than what we have today, by making them more computationally efficient? Those are the questions this project investigates.

[Lai was tapped by DeepMind as a researcher last year]

But what I'm interested in at the moment is the decision speed in these matches:

- What was the average time to make a move in the AlphaZero vs. Stockfish match?

",1671,,,,,2/3/2018 2:34,What was the average decision speed pf Alpha Zero in the recent Stockfish match?,,1,1,,,,CC BY-SA 3.0 5221,1,,,2/3/2018 1:06,,1,187,"

I have tried several environment libraries like OpenAI gym/gridworld but now I am trying to create a toy environment for experimentation. The environment I've created is as follows:

  1. State: grid with n rows by m columns, represented by a boolean matrix. Each grid cell can be empty or filled and the grid starts empty.

  2. Action: one of the m columns to be filled, which must have at least the top row empty.

  3. Next state: Once a column is chosen, the lowest unfilled cell in that column is filled. This works from bottom up like a very simple version of Tetris.

  4. Reward: after every action, a reward equal to the number of empty columns is awarded.

Therefore in a sample world of 5 rows by 3 column, starting off with an empty grid, the maximum attainable reward would be by filling column wise first. This policy will give a maximum total reward of 2*5 + 1*5 = 15. (2 free columns by 5 row action, once first column is filled then 1 free column by 5 row action.)

This very simple environment is trained using DQN with a single ff layer. The agent only took a few episodes to converge and is able to produce the maximum attainable reward.

In the next toy environment, I've made it a little more complex. I modified the very first action to be a random choice of any column. I have retrained the RL model with the new environment modification. However, after convergence, the agent does not attain the max score of 15 for all possible starting columns. I.e. if column 1 was randomly chosen first, the max score might be 15; however, if column 2 or 3 was randomly chosen first, the score might only reach 11 or 9. In theory, the optimum policy would be for the agent to fill the column that was randomly chosen first - i.e. repeat the first randomly chosen action.

I have tried several ways to tweak my input parameters (e.g. episilon_decay_rate, learning_rate, batch_size, number of hidden nodes) to see if the agent could act optimally for all possible starting columns. I also tried DDQN and Sarsa. The only way I could make the agent perform optimally is by reducing gamma (discount factor) to 0.5 or below. Are there any explanations to why the agent only works for small discount factors in this example? Also, are there alternative ways to obtain the optimum policy?

",12505,,1847,,2/5/2018 23:04,2/6/2018 8:36,Agent in toy environment only learns to act optimally with small discount factors,,1,4,0,,,CC BY-SA 3.0 5222,2,,5220,2/3/2018 2:34,,1,,"

No, ALL computer chess experts were surprised about the outcomes of the match. If you require references, please start a new question.

Your question is simple...

https://arxiv.org/pdf/1712.01815.pdf

... We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo and the previous version of AlphaGo Zero (trained for 3 days) in chess, shogi and Go respectively, playing 100 game matches at tournament time controls of one minute per move ...

One minute per move. Stockfish would use the one minute if there is more than a single legal move, otherwise it'd move immediately. So the average time for a move for Stockfish is about 57s to 60s.

There is no source code for AlphaZero. However, it's hard to believe the system wouldn't take advantage of the minute it was given. The expected time for AlphaZero should be also 57s to 60s.

",6014,,-1,,6/17/2020 9:57,2/3/2018 2:34,,,,3,,,,CC BY-SA 3.0 5224,2,,5174,2/3/2018 5:19,,1,,"

To boost iterative deepening with alpha-beta pruning, you can use the SSS* search algorithm; it's a best-first strategy algorithm. The SSS* algorithm can improve the time efficiency of the overall search, but it increases the space complexity. I am linking the wiki for it: https://en.wikipedia.org/wiki/SSS* I will update the answer as soon as I get a better solution.

",11651,,1641,,3/9/2018 18:16,3/9/2018 18:16,,,,0,,,,CC BY-SA 3.0 5225,2,,3964,2/3/2018 7:30,,1,,"

So I think that, in the case of a logistic regression task, a neural network works something like this.

First of all, I think all nodes perform the job of mapping a point to a quadrant in an n-space coordinate system, where the n-space is decided iteratively by the problem statement itself. In short, the nodes decide which combination of polynomial terms of the input matters in the task at hand. A hidden layer outputs real numbers, since we don't perform a classification task there. But the output layer rounds off numbers as required by the classification task.

If you have a single hidden layer, its nodes can be thought of as outputting the values that matter. This gives the NN a degree of freedom. Just as in machine learning we select the polynomial combinations, a NN selects them by itself iteratively. But if you look at the function of the output nodes, they just round off the number to 0 or 1. Thus output nodes can perform simple classification tasks (like which quadrant a point lies in) without a hidden layer. I believe that if we could decipher what the hidden nodes are trying to convey, we could entirely remove the output nodes, because the hidden nodes convey information such as which quadrant a point lies in, only in a cryptic format, which is resolved by the output nodes iteratively. After knowing that information, we can easily resolve it with logical statements.

But this does not demean the power of output nodes. In the case of linear regression, a single output node without any hidden layer can approximate a quarter of the sine wave (by adjusting the exponent of e to make it look like a sine wave). So it only depends on how you use the neural network.

But the basic principle is the same: a node decides whether an input is positive or negative (in the case of logistic activation), or outputs a real number, depending on how you use it.

If you find anything contradictory please correct me.

",,user9947,,,,2/3/2018 7:30,,,,0,,,,CC BY-SA 3.0 5226,2,,5210,2/3/2018 15:46,,1,,"

Sounds like a very interesting idea! I don't know of existing work on the idea, but the implementation should be pretty simple with separate training and tied weights. It should be noted, however, that such behaviour occurs naturally in CNNs already. (See: http://www.cs.toronto.edu/~guerzhoy/321/lec/W07/HowConvNetsSee.pdf)

",6779,,,,,2/3/2018 15:46,,,,1,,,,CC BY-SA 3.0 5233,2,,5174,2/5/2018 22:58,,1,,"

Try a cache or transposition table. Without one, your search tree might explode.

",6014,,,,,2/5/2018 22:58,,,,0,,,,CC BY-SA 3.0 5234,1,5248,,2/5/2018 23:00,,11,1261,"

It was recently brought to my attention that Chess experts took the outcome of this now famous match as something of an upset.

See: Chess’s New Best Player Is A Fearless, Swashbuckling Algorithm

As a non-expert on Chess and Chess AI, my assumption, based on the performance of AlphaGo and the validation of that type of method on combinatorial games, was that the older AI would have no chance.

  • Why was AlphaZero's victory surprising?
",1671,,,,,2/23/2018 19:49,Why were Chess experts surprised by the AlphaZero's victory against Stockfish?,,3,0,,,,CC BY-SA 3.0 5237,2,,5221,2/6/2018 8:36,,1,,"

This is an episodic problem, and there should be no issue in theory with most learning algorithms coping without a discount factor (or setting gamma = 1).

Are there any explanations to why the agent only works for small discount factors in this example?

The most likely explanations are:

  • You have a mistake in your implementation or use of DQN.

  • You have an incorrect setting of a neural network hyperparameter. My initial thought would be that you have mistakenly put a non-linearity like softmax or sigmoid on the NN outputs (it needs to be a linear output; a minimal sketch of such an output head follows this list). Or it could just be that 10 hidden neurons is not enough for this representation.

  • Convergence requires far longer to train than you thought, once you introduce non-linear relationships between state and expected reward.
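
For the second point, here is a minimal PyTorch sketch of what the output head should look like; the sizes are made up for this grid problem, and the key detail is that the final layer is a plain linear layer with no sigmoid/softmax, because Q-values are unbounded expected returns:

```python
import torch.nn as nn

n_cells, n_columns, n_hidden = 5 * 3, 3, 64   # 5x3 grid flattened, one action per column

q_network = nn.Sequential(
    nn.Linear(n_cells, n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, n_columns),   # linear output: one unbounded Q-value per action
)
```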

It is not surprising that the simpler puzzle trains more easily, as the agent can reduce it down to a linear counting puzzle without really needing to ""look"" at the representation. I would expect the Q values to be very poor approximations to the correct values after so few iterations, but the problem is simple enough that the incorrect values still produce an optimal behaviour.

",1847,,,,,2/6/2018 8:36,,,,2,,,,CC BY-SA 3.0 5239,1,5241,,2/6/2018 14:19,,7,951,"

Whenever I read any book about neural networks or machine learning, their introductory chapter says that we haven't been able to replicate the brain's power due to its massive parallelism.

Now, in modern times, transistors have been reduced to the size of nanometers, much smaller than the nerve cell. Also, we can easily build very large supercomputers.

  • Computers have much larger memories than brains.
  • Computers can communicate faster than brains (clock pulse in nanoseconds).
  • Computers can be of arbitrarily large size.

So, my question is: why cannot we replicate the brain's parallelism if not its information processing ability (since the brain is still not well understood) even with such advanced technology? What exactly is the obstacle we are facing?

",,user9947,2444,,1/19/2021 12:38,1/19/2021 12:38,What makes the animal brain so special?,,3,2,,,,CC BY-SA 4.0 5241,2,,5239,2/6/2018 16:38,,5,,"

One probable hardware limiting factor is internal bandwidth. A human brain has $10^{15}$ synapses. Even if each is only exchanging a few bits of information per second, that's on the order of $10^{15}$ bytes/sec internal bandwidth. A fast GPU (like those used to train neural networks) might approach $10^{11}$ bytes/sec of internal bandwidth. You could have 10,000 of these together to get something close to the total internal bandwidth of the human brain, but the interconnects between the nodes would be relatively slow, and would bottleneck the flow of information between different parts of the "brain."

Another limitation might be raw processing power. A modern GPU has maybe 5,000 math units. Each unit has a cycle time of ~1 ns, and might require ~1000 cycles to do the equivalent processing work one neuron does in ~1/10 second (this value is totally pulled from the air; we don't really know the most efficient way to match brain processing in silicon). So, a single GPU might be able to match $5 \times 10^8$ neurons in real-time. You would optimally need 200 of them to match the processing power of the brain.

This back-of-the-envelope calculation shows that internal bandwidth is probably a more severe constraint.
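
The same envelope arithmetic written out; all the constants are the rough guesses from the text (plus an assumed ~10^11 total neurons), not measurements:

```python
synapses = 1e15                           # synapses in a human brain
brain_bandwidth = synapses * 1.0          # ~1 byte/s per synapse -> ~1e15 bytes/s
gpu_bandwidth = 1e11                      # internal bandwidth of one fast GPU, bytes/s
print('GPUs to match bandwidth:', brain_bandwidth / gpu_bandwidth)      # ~10,000

neurons = 1e11                            # rough total neuron count
gpu_units = 5000                          # math units per GPU
cycles_per_neuron_step = 1000             # guessed cycles to emulate one neuron update
neuron_steps_per_sec = 10                 # one neuron update per ~1/10 s
gpu_neuron_equiv = gpu_units * 1e9 / cycles_per_neuron_step / neuron_steps_per_sec
print('GPUs to match processing:', neurons / gpu_neuron_equiv)          # ~200
```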

",2329,,2444,,1/19/2021 12:23,1/19/2021 12:23,,,,1,,,,CC BY-SA 4.0 5246,1,5247,,2/7/2018 11:20,,24,15049,"

For instance, the title of this paper reads: ""Sample Efficient Actor-Critic with Experience Replay"".

What is sample efficiency, and how can importance sampling be used to achieve it?

",12574,,2444,,10/13/2020 8:24,7/7/2022 15:10,"What is sample efficiency, and how can importance sampling be used to achieve it?",,2,0,,,,CC BY-SA 4.0 5247,2,,5246,2/7/2018 13:25,,26,,"

An algorithm is sample efficient if it can get the most out of every sample. Imagine yourself playing PONG for the first time. As a human, it would take you within seconds to learn how to play the game based on very few samples. This makes you very "sample efficient". Modern RL algorithms would have to see $100$ thousand times more data than you so they are, relatively, sample inefficient.

In the case of off-policy learning, not all samples are useful in that they are not part of the distribution that we are interested in. Importance sampling is a technique to filter these samples. Its original use was to understand one distribution while only being able to take samples from a different but related distribution. In RL, this often comes up when trying to learn off-policy. Namely, that your samples are produced by some behaviour policy but you want to learn a target policy. Thus one needs to measure how important/similar the samples generated are to samples that the target policy may have made. Thus, one is sampling from a weighted distribution which favours these "important" samples. There are many methods, however, for characterizing what is important, and their effectiveness may differ depending on the application.

The most common approach to this off-policy style of importance sampling is finding a ratio of how likely a sample is to be generated by the target policy. The paper On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient (2010) by Tang and Abbeel covers this topic.
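
A minimal sketch of that likelihood-ratio idea for off-policy evaluation; the single-state bandit, the behaviour and target policies, and the reward values are all made up just to show the weighting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Behaviour policy b(a) generates the data; we want the expected reward under target policy pi(a)
b  = np.array([0.7, 0.3])             # probability of choosing action 0 or 1 when collecting data
pi = np.array([0.2, 0.8])             # probability under the policy we actually care about
true_reward = np.array([1.0, 5.0])    # expected reward of each action

actions = rng.choice(2, size=10000, p=b)
rewards = true_reward[actions] + rng.normal(0, 0.1, size=10000)

# Importance weight = pi(a) / b(a): samples the target policy favours count more
weights = pi[actions] / b[actions]
print('naive (on-policy) estimate :', rewards.mean())               # biased towards behaviour policy
print('importance-sampled estimate:', (weights * rewards).mean())   # approx 0.2*1 + 0.8*5 = 4.2
```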

",4398,,4398,,7/7/2022 15:10,7/7/2022 15:10,,,,4,,,,CC BY-SA 4.0 5248,2,,5234,2/7/2018 19:41,,17,,"

Good question.

First and foremost is that in Go, DeepMind had no superhuman opponents to challenge. Go engines were not anywhere near the level of the top human players. In chess, however, the engines are 500 Elo points stronger than the top human players. This is a massive difference. The amount of work that has gone into contemporary chess engines is staggering. We are talking about millions of hours of programming, hundreds of thousands of iterations. It is a massive body of knowledge and work. To overcome and surpass all of that in 4 hours is staggering.

Secondly, it is not so much the result itself which is surprising to chess masters, but rather how AlphaZero plays chess. It's quite ironic that a system which had no human knowledge or expertise plays the most like we do. Engines are notorious for playing ugly-looking moves, moves lacking harmony, etc. It's hard to explain to a non-chess player, but there is such a thing as an ""artificial move"" of the kind contemporary engines often come up with. AlphaZero does not play like this at all. It has a very human-like style, dominating the opponent's pieces with deep strategic play and stunning positional sacrifices. AlphaZero plays the way we aspire to, combining deep positional understanding with the precision of an engine's calculation.

Edit Oh and I forgot to mention something about the result itself. If you are not familiar with computer chess it may not seem staggering but it is.

These days the margins of victory which separate the top contemporary engines are razor thin. In a 100 game match you could expect to see a result like 85 games drawn, 9 victories, and 6 losses to determine the better engine.

AlphaZero's 28 wins and 72 draws with zero losses was an otherworldly crushing result and was completely unthinkable right up to the moment it happened.

",12585,,12585,,2/7/2018 20:03,2/7/2018 20:03,,,,4,,,,CC BY-SA 3.0 5250,2,,5234,2/7/2018 20:16,,2,,"

I see, based on the articles you provide, many levels of surprise in the victory:

Chess is a hard game to master, and the counterpart had the world's best practices baked in; AlphaZero started from a tabula rasa.

Learning took four hours, and AlphaZero lost no game out of 100.

The playing style was an alien mix of human-like and computer-like moves, aggressive and sometimes seemingly goofy, with sacrifices that appear pointless but actually make the future position stronger.

The number of possibilities taken into account per move was smaller than its counterpart's; AlphaZero seemed to have a mysterious gut feeling or intuition.

The upset feeling came from the amount of training material AlphaZero had built for itself, and from the time limit, which perhaps did not give the traditional machine a fair amount of time.

",11810,,,,,2/7/2018 20:16,,,,1,,,,CC BY-SA 3.0 5252,1,,,2/7/2018 23:01,,4,328,"

A lot of people seem to be under the impression that combining GOFAI and contemporary AI will make models more general. I'm particularly interested in reasoning through analogy or case-based reasoning.

",9271,,2444,,1/14/2023 21:59,1/14/2023 21:59,What are some interesting recent papers that synthesize symbolic AI with Deep Learning?,,0,2,,,,CC BY-SA 3.0 5259,1,,,2/8/2018 19:01,,1,83,"

Once the artificially intelligent machines are able to identify objects, we might want to teach them how to value different things differently based on their utility, demand, life, etc. How can we accomplish this and how did we start to value things?

",3015,,,user9947,2/9/2018 15:41,2/9/2018 15:41,Is understanding value for different features next step for object recognition?,,2,0,,,,CC BY-SA 3.0 5260,2,,5259,2/8/2018 23:20,,0,,"

You have asked two questions. How humans began to put a valuation to things and how to accomplish the task of valuation within an artificial intelligence construct. Human valuation is accomplished through trial and error experience, subjective choice and relative comparison, among other things. Valuation in an AI construct would be data-driven, objective and perhaps absolute. The choice of a valuation method would be determined by the choice of or desire of outcome.

",12619,,,,,2/8/2018 23:20,,,,1,,,,CC BY-SA 3.0 5261,2,,5234,2/9/2018 4:31,,4,,"

MCTS for chess had been tried in the literature with little success. It was assumed AlphaGo's approach would never work on chess, maybe in Go but not in chess. Suddenly, Google announced the approach was working and it was beating the World's strongest chess program by a very signficiant margin.

Before Google, all chess programmers were taught crafting heuristics in engine programming was a better strategy than machine learning. No matter how you implemented neural networks, it would have never ran faster than a bunch of 64-bit bitboards instructions. AlphaGo was running quite slow, but it played strongest chess.

",6014,,6014,,2/23/2018 19:49,2/23/2018 19:49,,,,0,,,,CC BY-SA 3.0 5262,2,,5259,2/9/2018 10:35,,2,,"

From your question I can assume you are a beginner in the field of AI. Welcome to this exciting field.

To answer your question, we have not yet been able to create a truly artificially intelligent program. They are all apparently intelligent but are just sets of simple/complex rules. An artificially intelligent agent must have at least 2 aspects inside its head/program: the ability to logically derive conclusions, and a capability for learning (these combined with the ability to take inputs and respond).

Now, the logical reasoning part is the field of AI. Lots of simple programs performing complex tasks already exist.

Your question is based on the learning part, which is handled by machine learning programs. They learn iteratively. Object recognition is only one part of ML; they can also predict. Anyway, object recognition is done on the basis of maximizing reward / minimizing penalty (a cost function). This minimizing of the penalty is done by giving different weights to, or valuing differently, the various attributes of the objects. So this has already been accomplished. Depending upon the task at hand, we supply the attributes which are related to the final task; otherwise an attribute is of no use, just as economics has no influence on the weather.

We also have this thought process: we select things depending on the goal we need to accomplish; sometimes we select things consciously, sometimes we are hardwired by genes to do so (touching fire). So you see, ML methods are modeled on how we do things, and we do things based on the weighting of different factors. If we don't give proper weighting, we either learn from the penalty/punishment imposed upon us, or our genes get wiped out from existence, and thus the same behavior of not giving proper weighting is not passed on.

So, in short, an ML algorithm recognizes objects or performs some final task (especially in games like chess) based on the utility or influence of the given features of the current state on the final result.

",,user9947,,,,2/9/2018 10:35,,,,0,,,,CC BY-SA 3.0 5263,2,,4956,2/9/2018 14:21,,1,,"

Christopher Olah's blog post describes it better than I ever could. Basically, most data we come across can't be separated with a single line, but only with some kind of curve. Non-linearities allow us to distort the input space in ways that make the data linearly separable, making classification more accurate.

",9271,,,,,2/9/2018 14:21,,,,1,,,,CC BY-SA 3.0 5264,2,,5239,2/9/2018 19:55,,1,,"

Short answer: nobody knows. Long answer: the whole body of strong-AI work. However, to write something useful for the OP, note that the question contains several implicit assumptions; analyzing them could be useful to clarify the issue:

a) Why think that 1 transistor has the same functionality as 1 neuron? Some obvious differences: a transistor has 3 legs, while each neuron has around 7000 synapses; a transistor has 3 layers of material, while a neuron is a full micro-machine with thousands of components; each synapse itself is a switch, connected to one or more other cells, and can produce different kinds of signals (activation/inhibitory, frequencies, amplitude, ...).

b) Compare memory amounts: the amount of memory in a person that is equivalent to a computer's is 0 bytes; we are not able to remember anything forever and without distortion. Human memory is symbolic, temporal, associative, influenced by body and feelings, ... . Something totally different from a computer's.

c) All the previous points are about ""hardware"": if we analyze software and training, the differences are even bigger. Even the assumption that intelligence is located only in the brain, forgetting the role of the hormonal system, the senses, ..., is a simplification not yet proven.

In conclusion: the human mind is totally different from a computer, we are far from understanding it, and even further from replicating it.

From the start of the computer age, the idea that intelligence will appear when the amount of memory, processing power, ... reaches some threshold has repeatedly proven false.

",12630,,12630,,2/9/2018 20:57,2/9/2018 20:57,,,,5,,,,CC BY-SA 3.0 5265,1,,,2/10/2018 1:54,,4,2896,"

I'm trying to implement some image super-resolution models on medical images. After reading a set of papers, I found that none of the existing models use any activation function for the last layer.

What's the rationale behind that?

",10569,,2444,,11/15/2020 1:22,11/15/2020 1:22,Why is no activation function used at the final layer of super-resolution models?,,2,0,,,,CC BY-SA 4.0 5266,1,5273,,2/10/2018 3:28,,1,143,"

I'm wondering if these 2 specific programs already exist and if not how hard would it be to write them:

  1. A program that would figure out (by only ""reading"" large amounts of text in human languages 1 and 2) which words in the second language have the same meaning as a word in the first language. As input, you would give texts in both languages; as output, you would get, for every word in the first language, a list of words in the second language that are most similar to it, with a probability that they mean the same thing.

  2. A program that would figure out which words have the most similar meaning by analyzing large amounts of texts in one human language.

I'm planning on writing these two programs and it would be nice if I could get existing programs that do this so that I could compare results of my program to those of existing programs.

",12251,,4302,,10/8/2018 12:16,10/8/2018 12:16,Existing programs that find out words with same meanings,,2,3,,,,CC BY-SA 3.0 5267,2,,5266,2/10/2018 10:04,,1,,"

Let us simplify case 1 of the question: assume two files, the first one with numbers written in the Indo-Arabic numeral system (e.g. 123, 9, 186754, ...), the second one in the Roman numeral system (XX, LXVI, IX, ...). How do you match pairs of symbols with the same meaning?

Without external information or assumptions, you cannot. You could make the hypothesis that the probability of one specific number is the same in both samples, and base your pairing on it. But then you need to find two input files that fulfill this condition.

",12630,,,,,2/10/2018 10:04,,,,7,,,,CC BY-SA 3.0 5268,2,,4949,2/10/2018 11:33,,1,,"

Yes, RPA is AI. In particular, applied AI.

The definition of RPA in Wikipedia is:

Robotic process automation (or RPA) is an emerging form of clerical process automation technology based on the notion of software robots or artificial intelligence (AI) workers.

Moreover, look at this characterization of RPA:

The paradigm, in summary, is that a software robot should be a virtual worker who can be rapidly ""trained"" (or configured) by a business user in an intuitive manner which is akin to how an operational user would train a human colleague.

If we talk about training or learning, we talk about AI.

",12630,,,,,2/10/2018 11:33,,,,0,,,,CC BY-SA 3.0 5271,2,,5211,2/10/2018 16:08,,1,,"

The most usual differences in recorded signals caused by different microphones will have a small, if not null, impact on recognition accuracy, in particular if we are talking about swapping one mic for another of the same model and manufacturer:

  • Differences in bandwidth: voice occupies a very common (central) band, so these differences are not expected to have an impact, even for low-quality microphones.
  • Microphone distortions: same as above; they will not have an impact because they are smaller than, for example, the effect of a change of speaker.

However, if we talk about a general recognition system to be used with very different types of mics, there are some microphone issues that can cause your system completely fail:

  • mic sensitivity: small sensitivity differences will have no effect because they are solved in the same way than differences in speaker volume/intonation. However, if the microphone is not enough sensible the S/N can be below the minimum need, in particular when speaker increase the distance to the mic.
  • lack of beam-forming: if your system is prepared to use an array of microphones to filter noise and/or secondary sources, usage of a normal phone will decrease accuracy.
  • changes in sample ratio and/or sample bits: if the microphone and its A/D has a low sampling speed or size (i.e. Bluetooth mics, phone lines, ...), the accuracy can fail.

By example, for IOT applications, the first two of this list are the more challenging ones.

",12630,,12630,,2/10/2018 18:48,2/10/2018 18:48,,,,2,,,,CC BY-SA 3.0 5272,2,,5265,2/11/2018 1:12,,0,,"

As discussed here:

https://www.researchgate.net/post/What_should_be_my_activation_function_for_last_layer_of_neural_network

Linear is the preferred activation function.

But then a linear activation function is equivalent to no activation function at all:

https://datascience.stackexchange.com/questions/13696/lack-of-activation-function-in-output-layer-at-regression

",9592,,,,,2/11/2018 1:12,,,,0,,,,CC BY-SA 3.0 5273,2,,5266,2/11/2018 2:47,,1,,"

For the first program, the general approach is to use a seq2seq model:

https://www.tensorflow.org/tutorials/seq2seq

(a more specific example of the above category is this paper:

http://emnlp2014.org/papers/pdf/EMNLP2014179.pdf)

For the second program, one way is to use word2vec and apply supervised learning on groups of similar words (a minimal sketch follows the links below):

https://towardsdatascience.com/word-to-vectors-natural-language-processing-b253dd0b0817

https://spacy.io/usage/vectors-similarity
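
As a minimal sketch of the word2vec route (the toy corpus below is made up for illustration, and the gensim 4.x API is assumed):

from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens; a real corpus would be far larger.
sentences = [['the', 'cat', 'sat', 'on', 'the', 'mat'],
             ['the', 'dog', 'sat', 'on', 'the', 'rug'],
             ['a', 'cat', 'chased', 'a', 'dog']]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=100)

# Words whose vectors are closest to 'cat' - candidates for similar meaning.
print(model.wv.most_similar('cat', topn=3))

On a corpus this small the neighbours will be noisy; the approach only becomes meaningful with large amounts of text, as the question assumes.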

",9592,,,,,2/11/2018 2:47,,,,0,,,,CC BY-SA 3.0 5274,1,,,2/11/2018 5:27,,27,5064,"

What are the current theories on the development of a conscious AI? Is anyone even trying to develop a conscious AI?

Is it possible that consciousness is an emergent phenomenon, that is, once we put enough complexity into our system, it will become self-aware?

",12648,,2444,,12/12/2021 16:21,12/12/2021 16:21,What are the current theories on the development of a conscious AI?,,4,0,,,,CC BY-SA 4.0 5275,2,,5274,2/11/2018 7:33,,29,,"

To answer this question, first we need to know why developing conscious AI is hard. The main reason is that there is no mathematically or otherwise rigorous definition of consciousness. Sure you have an idea of consciousness as you experience it and we can talk about philosophical zombies but it isn’t a tangible concept that can be broken down and worked on. Moreover, the majority of current research in AI is primarily a pragmatic approach in that one is trying to construct a model that can perform well according to some desired cost function. This is a very very big and exciting field and encompasses many research problems and every new finding is based either on mathematical theory or empirical evidence of a new algorithm/model construction/etc. Because of this, progress is based on and compared against previous progress as it is the scientific method.

So to answer your question, no one is trying to actually make a “conscious” AI because we don’t know what that word means yet, however that doesn’t stop people talking about it.

",4398,,4398,,2/12/2018 4:45,2/12/2018 4:45,,,,0,,,,CC BY-SA 3.0 5278,2,,5274,2/11/2018 14:25,,1,,"

Consciousness is the ability to be aware of your own thoughts, your immediate environs, your feelings, and nothing more. It is the mechanism by which our brain controls our lower kind of thoughts, those based on associations and emotions. Consciousness is observing our thoughts and feelings just as we observe the real world with our eyes. It is not complicated. The real question is not whether machines are capable of consciousness but whether they are capable of emotions.

",12251,,-1,,3/7/2019 22:38,3/7/2019 22:38,,,,2,,,,CC BY-SA 4.0 5279,1,5283,,2/11/2018 16:03,,2,81,"

I'm developing a multi-armed bandit which learns the best information to display to persuade someone to donate to charity.

Suppose I have treatments A, B, C, D (which are each one paragraph of text). The bandit selects one treatment to show to a person. The person is given $1 and has to decide how much (if any) to donate, in increments of one cent. The donation decision is recorded and fed to the multi-armed bandit, who will then re-optimize before another person is shown a treatment selected by the bandit.

How should I program the bandit if my objective is to maximize total donations? For example, can I use Thompson sampling, and if a participant donates $0.80, I count that as 80 successes and 20 failures?

",12656,,2444,,4/15/2020 20:23,4/15/2020 20:23,Programming a bandit to optimize donations,,1,0,,,,CC BY-SA 3.0 5281,2,,5274,2/11/2018 23:05,,7,,"

What is consciousness? There are some real challenges in setting up consciousness as a goal, because we don't have that much scientific understanding yet of how the brain does it or what balance there needs to be between long-term memory, short-term memory, the implicit work of interpretation, and the contrasting conscious modes of automatic processing and deliberate processing (Kahneman's S1 and S2). John Kihlstrom (psychology emeritus at Berkeley) has a lecture set on Consciousness available on iTunesU that you might check out. Carnegie Mellon University has a model called ACT-R which directly models conscious behaviours like attention-paying.

What might bound our understanding of it? Philosophy has been considering the question of consciousness for a long time. Personally I like Hegel and Heidegger (philosophers). Both are very difficult to read, but Heidegger (interpreted by Hubert Dreyfus) usefully critiqued the 'Good Old-Fashioned AI' projects of the seventies and pointed out how much work there is just interpreting a visual input. Hegel is often maligned, but to see him well interpreted, check out Robert Brandom's talks to LMU on the logic of consciousness and Hegel as an early Sellars-ian pragmatist. If consciousness is to take hold of the truth and the certainty, it undertakes 'a path of doubt, or more properly a highway of despair', along which it never sets itself above correction. There is something about Hegel's treatment of consciousness in recursive terms, without succumbing to a vicious regress, that I think is going to be borne out before the end.

Recent developments. The Deep Learning approaches and pragmatic successes of the present are exciting, but it will be interesting to see how far they can go in integrating and generalising from the necessarily small information sets actual human minds are exposed to. While Deep Learning and data mining are hugely visible, symbolic approaches are also out there, still getting better and more varied. But there is a lack of overarching theoretical interpretation that would allow generalisations.

Two big-theory toe-holds. If I had to pick a project I thought worth attending to, Giulio Tononi (et al.) have set up a very nice modernisation of the problem in 'Integrated Information Theory'. But you might want to extend that with something like Rolf Pfeifer's 'How the body shapes the way we think', because some of the 'integrated information' is implicit in having arms and legs, eyes and nose (put there by the information-accumulating work of evolution). But there's so much good work that has been done - the pros are writing papers faster than I can read them.

More specific to your question, there are attempts to simulate human brains hoping that overall aim will help fund research and produce answers to each para above.

",12665,,12665,,2/12/2018 2:15,2/12/2018 2:15,,,,3,,,,CC BY-SA 3.0 5283,2,,5279,2/12/2018 11:21,,3,,"

It does not matter to the bandit algorithm that rewards are quantised or fractional, or that they can vary. This is true for pretty much all bandit optimisation algorithms.

So just treat the $0.80 donation as a real valued reward of 0.8, that occurs on a single timestep.

Treating a single reward on a single timestep as if it were multiple rewards across multiple timesteps might cause problems, depending on which algorithm you were using, even if the reward averaged to the same value. For instance it may skew the maths away from theoretical assumptions used in upper confidence bound action selection or gradient-based solutions.
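
As a minimal sketch of this (not from the answer above; it uses a plain epsilon-greedy, sample-average bandit rather than Thompson sampling, with illustrative names), each donation enters the update once, as a real-valued reward:

import random

n_arms = 4                      # treatments A, B, C, D
counts = [0] * n_arms           # times each treatment was shown
values = [0.0] * n_arms         # running mean donation per treatment
epsilon = 0.1

def select_treatment():
    if random.random() < epsilon:
        return random.randrange(n_arms)                   # explore
    return max(range(n_arms), key=lambda a: values[a])    # exploit

def record_donation(arm, donation):
    # donation is the real-valued reward for this single timestep, e.g. 0.80
    counts[arm] += 1
    values[arm] += (donation - values[arm]) / counts[arm] # incremental mean

arm = select_treatment()
record_donation(arm, 0.80)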


It occurs to me that your $1 experiment is supposed to be some kind of proxy for the real deployment, where donations will be sparse and may vary more. Whilst this may help a little, essentially you are just setting some kind of prior based on results from a test. The real bandit algorithm starts once the system is deployed into production, and could well have very different results when the goal is to collect real and larger donations.

",1847,,1847,,2/12/2018 11:28,2/12/2018 11:28,,,,0,,,,CC BY-SA 3.0 5284,2,,1678,2/12/2018 12:57,,2,,"

In grammar, a predicate-argument relationship is one which is implied from text but not expressed in the syntactic structure. (Asher S)

Predicate logic or first order logic is a collection of formal systems used in mathematics, philosophy, linguistics and computer science.

The NLP community is interested in recognizing, representing and classifying predicate arguments for shallow semantic parsing, which can be viewed as the process of assigning a WHO did WHAT to WHOM, WHEN, WHY, HOW, etc. structure to plain text.

This process entails identifying groups of words in a sentence that represent these semantic arguments and assigning specific labels to them. This could play a key role in NLP tasks like information extraction, question answering and summarization.

Automatic and accurate techniques that can annotate naturally occurring text with semantic/predicate argument structure can facilitate the discovery of patterns of information in large text collections.

For reference I recommend this paper that uses Support Vector Machine algorithms for predicate argument classification. (Sameer P Kadri H 2005) https://link.springer.com/article/10.1007/s10994-005-0912-2#enumeration

",10913,,,,,2/12/2018 12:57,,,,0,,,,CC BY-SA 3.0 5285,1,5293,,2/12/2018 13:05,,1,170,"

I'm reading ""Recurrent neural network based language model"" of Mikolov et al. (2010). Although the article is straight forward, I'm not sure how word embedding $w(t)$ is obtained:

The reason I wonder is that in the classic ""A Neural Probabilistic Language Model"" Bengio et al. (2003) - they used separate embedding vector for representing each word and it was somehow ""semi-layer"", meaning - it haven't contains non-linearity, but they did update word embeddings during the back-propagation.

In Mikolov approach though, I assume they used simple one-hot vector, where each feature represent presence of each word. If we represent that's way single word input (like was in the Mikolov's paper) - that vector become all-zeros except single one.

Is that correct?

",12691,,2444,,4/16/2019 22:35,4/16/2019 22:35,"How is the word embedding represented in the paper ""Recurrent neural network based language model""?",,1,0,,,,CC BY-SA 4.0 5288,1,,,2/12/2018 15:41,,3,49,"

Now, Boltzmann machines are energy-based undirected networks, meaning there are no forward computations. Instead, for each input configuration $x$, a scalar energy is calculated to assess this configuration. The higher the energy, the less likely it is for $x$ to be sampled from the target distribution.

The probability distribution is defined through the energy function by summing over all possible states of the hidden part $h$.

If I understand correctly, the hidden units are added to capture higher-order interactions, which offers more capacity to the model.

So, how do we calculate the values of these hidden units? Or do we not explicitly compute these values and instead approximate the marginal ""free energy"" (which is the negative log of the sum over all possible states of $h$)?

",12672,,2444,,5/18/2020 12:18,5/18/2020 12:18,How do we calculate the hidden units values in a (restricted) Boltzmann machine?,,0,0,,,,CC BY-SA 4.0 5289,1,5481,,2/12/2018 18:52,,2,61,"

In order for the generalized bell membership function to retain its defined shape and domain, two restrictions must be placed on the b parameter: 1) b must be positive and 2) b must be an integer. Using backpropagation to tune the membership parameters (a,b and c in the bell), it appears to be possible that the correction to b will break one or both of these restrictions on b. Can someone please explain to me how we can use backpropagation to tune the b parameter (as well as a and c) without violating 1) and 2)?

",12544,,,,,4/30/2018 19:13,"Tuning the b parameter, ANFIS",,1,0,,,,CC BY-SA 3.0 5290,1,,,2/13/2018 5:23,,2,150,"

I have purchasing history data for grocery shopping. I am trying to get abnormally frequently purchased items under certain conditions. For instance, I am trying to find frequently purchased items, when customers shop online and are willing to pay an extra shipping fee.

In order to find items that are particularly (or abnormally) frequently purchased in that situation (through online stores, with a shipping fee paid), what machine learning algorithm should I apply, and how, to identify those items?

I found the arules R package, which applies association rules to purchasing history, and tried to use it. But it seems the package might be based on a different principle from my idea.

Does anyone have an idea about my problem? If there is an R package related to the problem, that would be perfect.

",12713,,,user9947,2/14/2018 18:39,2/14/2018 18:39,Predict frequently purchased items under certain conditions with customer purchasing history data,,1,4,,,,CC BY-SA 3.0 5293,2,,5285,2/13/2018 9:32,,2,,"

The input vector contains two concatenated parts. The low part represents the current word:

word in time t encoded using 1-of-N coding [...] - size of vector x is equal to size of vocabulary V (this can be in practice 30 000-200 000) plus [...]

where, as you said, 1-of-N means (see here, 1-of-V):

If you have a fixed-size vocabulary of symbols with V members in total, each input symbol can be coded as a vector of size V with all zeros except for the element corresponding to the symbol's order in the vocabulary, which gets a 1.

The high part of the input vector represents the current context:

and previous context layer [...] Size of context (hidden) layer s is usually 30 - 500 hidden units.

For initialization, s(0) can be set to vector of small values, like 0.1

the article includes this expression:

x(t) = w(t) + s(t - 1)

which I think is better written as:

x(t) = w(t) || s(t - 1)

to make the concatenation more visible.
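
A small sketch of how such an input vector could be built (toy sizes, not the ones from the paper):

import numpy as np

V, H = 10, 4                         # toy vocabulary size and context-layer size

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

w_t = one_hot(3, V)                  # 1-of-N encoding of the current word
s_prev = np.full(H, 0.1)             # previous context layer, initialised to 0.1
x_t = np.concatenate([w_t, s_prev])  # x(t) = w(t) || s(t - 1)
print(x_t.shape)                     # (14,) = vocabulary size + context size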

Finally, the paper describes some improvements that break the 1-of-N definition of the word (low) part, in order to reduce the size of the w vector:

we merge all words that occur less often than a threshold (in the training text) into a special rare token.

",12630,,12630,,2/13/2018 10:45,2/13/2018 10:45,,,,2,,,,CC BY-SA 3.0 5294,2,,3817,2/13/2018 11:01,,1,,"

This is impossible to solve until you define an error measurement (for example $|R-R'|$ or $(R-R')^2$) and how this error changes when A, B and C change.

Extreme example: assume $R()$ is random (unrelated to the A, B, C values) but static (always the same $R(A,B,C)$ for the same values of A, B, C). Given some values of A, B, C, you can only answer the value of $R(A,B,C)$ when A, B, C were in the training set. $R(A,B,C)$ is undefined and not predictable when A, B, C were not in the training set.

Moreover, improvements can be made if $R()$ has some properties, for example, if it is possible to state that $R(A,B,C)=R(B,A,C)$ or that $R(A_1,B_1,C_1)=R(A_2,B_2,C_2)$ if $A_1+B_1+C_1=A_2+B_2+C_2$.

",12630,,12630,,11/14/2020 12:08,11/14/2020 12:08,,,,0,,,,CC BY-SA 4.0 5295,2,,5246,2/13/2018 11:42,,6,,"

Sample Efficiency denotes the amount of experience that an agent/algorithm needs to generate in an environment (e.g. the number of actions it takes and number of resulting states + rewards it observes) during training in order to reach a certain level of performance. Intuitively, you could say an algorithm is sample efficient if it can make good use of every single piece of experience it happens to generate and rapidly improve its policy. An algorithm has poor sample efficiency if it fails to learn anything useful from many samples of experience and doesn't improve rapidly.

The explanation of importance sampling in Jaden's answer seems mostly correct.

In the paper in your question, importance sampling is one of the ingredients that enables a correct combination of 1) learning from multi-step trajectories, and 2) experience replay buffers. Those two things were not easy to combine before (because multi-step returns without importance sampling are only correct in on-policy learning, and old samples in a replay buffer were generated by an old policy which means that learning from them is off-policy). Both of those things individually improve sample efficiency though, which implies that it's also beneficial for sample efficiency if they can still be combined somehow.

",1641,,,,,2/13/2018 11:42,,,,0,,,,CC BY-SA 3.0 5296,2,,5290,2/13/2018 16:01,,1,,"

Let's start with the concrete question, and then talk about the general problem.

a)

The concrete question, ""find items that are particularly frequently purchased through online stores by paying a shipping fee"", needs little or no applied AI, just a bit of statistics.

The question talks about ""item purchased"" and ""buy method"", so we have a database with entries like:

sale(online,item1).

sale(shop,item2).

sale(online,item2).

sale(online,item2).

...

(note that records can be repeated)

The proportion of online sales of item ""X"", p(X), is defined as the number of online sales of this item, sale(online,X), divided by the total sales of this item, sale(_,X):

In the previous data examples, p(item1)=1/1, p(item2)=2/3.

A high p(X) identifies items that are preferred for online shopping.

Other probabilities can be defined for similar cases.
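
A small sketch of this computation (Python; the toy records mirror the examples above):

from collections import Counter

sales = [('online', 'item1'), ('shop', 'item2'),
         ('online', 'item2'), ('online', 'item2')]

online = Counter(item for channel, item in sales if channel == 'online')
total = Counter(item for _, item in sales)
p = {item: online[item] / total[item] for item in total}
print(p)   # {'item1': 1.0, 'item2': 0.666...}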

b)

For the general case, we are talking about data mining. There are very good packages (open source, ...) for this: Weka, IBM DWE, ... . For example, using Weka's J48 over a database defined as:

sale( purchase_identifier, buy method, item )

where ""purchase_identifier"" must group item that has been purchased in a single buyout (cash ticket). J48 will then provide as answer rules as: item ""foo"" is usually purchased in online shop when also item ""bar"" is purchased.

",12630,,12630,,2/13/2018 17:26,2/13/2018 17:26,,,,0,,,,CC BY-SA 3.0 5297,2,,3923,2/13/2018 21:55,,1,,"

It sounds like you're looking at the Partition Problem. https://en.wikipedia.org/wiki/Partition_problem

The task is to split one set into N sets so that the sets' totals are equal, or as close to equal as possible.

Obtaining an exact solution is NP-hard (you can't do much better than trying all combinations); however, you can get an approximate answer in polynomial time.

Greedy approach:

  1. Create N sets, each initially empty
  2. Sort all items in descending order (by their complexity score)
  3. For each item, in order, add it to the set with the lowest current sum

If N is large, you may want to put the sets into a min heap/priority queue.
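
A minimal sketch of this greedy approach with a min-heap (illustrative Python, not a reference implementation):

import heapq

def greedy_partition(items, n_sets):
    # Greedy number partitioning: always add the next-largest item
    # to the set whose current sum is smallest.
    items = sorted(items, reverse=True)            # descending complexity scores
    heap = [(0, i, []) for i in range(n_sets)]     # (current sum, tie-breaker, members)
    heapq.heapify(heap)
    for item in items:
        total, idx, members = heapq.heappop(heap)  # set with the lowest sum
        members.append(item)
        heapq.heappush(heap, (total + item, idx, members))
    return [members for _, _, members in heap]

print(greedy_partition([8, 7, 6, 5, 4], 2))   # e.g. [[7, 6], [8, 5, 4]] (sums 13 and 17)

Note that the result is only approximate: the optimal split of this example is 15/15, which illustrates the trade-off mentioned above.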

",12278,,12278,,2/13/2018 22:36,2/13/2018 22:36,,,,0,,,,CC BY-SA 3.0 5299,2,,2412,2/14/2018 7:35,,1,,"

Firstly, before we commence, I recommend that you refer to a similar question on the network: https://stackoverflow.com/questions/828486/neural-net-optimize-w-genetic-algorithm

The majority of ML studies focus on gradient algorithms, usually a variation of back-propagation to obtain the weights of the model.

However since genetic algorithms are powerful searching algorithms they can be utilized to tune and optimize the structure and parameters of neural networks.

Genetic algorithms are stochastic search techniques that guide a population of solutions towards an optimum using the principles of evolution and natural genetics.

Genetic algorithms are especially capable of handling problems in which the objective function is discontinuous/non differentiable, nonconvex, multimodal or noisy. (Dharmistha D)

Algorithms, which combine genetic algorithms and error back-propagation, have been shown to exhibit better convergence properties than pure back-propagation.

To facilitate the crossover operation for the exchange of information between two parents we need to define four components within a genetic algorithm.

  • A way of coding solutions to the problem on chromosomes;
  • A fitness function which returns a value for each chromosome given to it;
  • A way of initializing the population of chromosomes;
  • Operators that may be applied to parents to alter their genetic composition i.e. mutation.

In these hybrid systems weights in different layers of the network are optimized using a genetic algorithm. Experimental results demonstrate that the genetic algorithms can in some cases optimize the hyper-parameters of an ANN better than a hand tuned model.

Regarding an efficient way to implement a random crossover, here is something to get you started (Code courtesy of Dan Golding 2013)

%Randomly choose 2 individuals

n = size(new_pop, 1);
l = size(new_pop, 2);

breeders = new_pop(randperm(n,2),:);

%Choose a crossover point

cp = randperm(l, 1);

%Crossover

b1 = [breeders(1, 1:cp), breeders(2, cp+1:end)];
b2 = [breeders(2, 1:cp), breeders(1, cp+1:end)];

For further reference I recommend that you kindly look at the below links

Matlab crossover genetic algorithm

https://stackoverflow.com/questions/17696323/matlab-crossover-genetic-algorithm

Single point ordered crossover in matlab

https://stackoverflow.com/questions/16302382/single-point-ordered-crossover-in-matlab

To optimize a neural network of multiple inputs using a genetic algorithm.

https://www.mathworks.com/matlabcentral/answers/180513-to-optimize-a-neural-network-of-multiple-inputs-using-a-genetic-algorithm?s_tid=gn_loc_drop

How can I use the Genetic Algorithm (GA) to train a Neural Network in Neural Network Toolbox

https://www.mathworks.com/matlabcentral/answers/100323-how-can-i-use-the-genetic-algorithm-ga-to-train-a-neural-network-in-neural-network-toolbox

",10913,,,,,2/14/2018 7:35,,,,0,,,,CC BY-SA 3.0 5301,2,,2381,2/14/2018 13:41,,5,,"

What is OpenCog?

OpenCog is a project with the vision of creating a thinking machine with human-level intelligence and beyond.

In OpenCog's introduction, Goertzel categorically states that the OpenCog project is not concerned with building more accurate classification algorithms, computer vision systems or better language processing systems. The OpenCog project is solely focused on general intelligence that is capable of being extended to more and more general tasks.

Knowledge representation

OpenCog's knowledge representation mechanisms are all based fundamentally on networks. OpenCog has the following knowledge representation components:

AtomSpace: it is a knowledge representation database and query engine. Data on AtomSpace is represented in the form of graphs and hypergraphs.

Probabilistic Logic Networks (PLN's): it is a novel conceptual, mathematical and computational approach to handle uncertainty and carry out effective reasoning in real-world circumstances.

MOSES (Meta-Optimizing Semantic Evolutionary Search): it implements program learning by using a meta-optimization algorithm. That is, it uses two optimization algorithms, one wrapped inside the other to find solutions.

Economic Attention Allocation (EAA): each atom has an attention value attached to it. The attention values are updated by using nonlinear dynamic equations to calculate the Short Term Importance (STI) and Long Term Importance (LTI).

Competency Goals

OpenCog lists 14 competencies that they believe AI systems should display in order to be considered an AGI system.

Perception: vision, hearing, touch and cross-modal proprioception

Actuation: physical skills, tool use, and navigation

Memory: declarative, behavioral and episodic

Learning: imitation, reinforcement, interactive verbal instruction, written media and learning via experimentation

Reasoning: deduction, induction, abduction, causal reasoning, physical reasoning and associational reasoning

Planning: tactical, strategic, physical and social

Attention: visual attention, behavioural attention, social attention

Motivation: subgoal creation, affect-based motivation, control of emotions

Emotion: expressing emotion, understanding emotion

Modelling self and other: self-awareness, theory of mind, self-control

Social interaction: appropriate social behavior, social communication, social inference and group play

Communication: gestural communication, verbal communication, pictorial communication, language acquisition and cross-modal communication

Quantitative skills: counting, arithmetic, comparison and measurement.

Ability to build/create: physical, conceptual, verbal and social.

Do I endorse OpenCog?

In my opinion, OpenCog introduces and covers important algorithms/approaches in machine learning, i.e. hyper-graphs and probabilistic logic networks. However, my criticism is that they fail to commit to a single architecture and integrate numerous architectures in an irregular and unsystematic manner.

Furthermore, Goertzel failed to recognize the fundamental shift that came with the introduction of deep learning architectures and to revise his work accordingly. This puts his research out of touch with recent developments in machine learning.

",10913,,9608,,8/6/2019 20:15,8/6/2019 20:15,,,,0,,,,CC BY-SA 4.0 5302,2,,2675,2/14/2018 15:32,,0,,"

The 7 AI problem characteristics form a heuristic technique designed to speed up the process of finding a satisfactory solution to problems in artificial intelligence.

In computer science, artificial intelligence and mathematical optimization, a heuristic is a technique designed for solving a problem more quickly, or for finding an approximate solution when you have failed to find an exact solution using classic methods.

The 7 AI problem technique ranks alternative steps based on available information to help one decide on the most appropriate approach to follow in solving problems such as missionaries and cannibals, the Tower of Hanoi, the travelling salesman, etc.

Regarding whether there is a generally accepted relationship between the placement of a problem and suitable algorithms: the answer is that indeed there is. For example, imagine trying to solve a game of chess and a game of sudoku.

If a step is wrong in sudoku, we can backtrack and attempt a different approach. However, if we are playing a game of chess and realize a mistake after a couple of moves, we cannot simply ignore the mistake and backtrack. (2nd characteristic)

If the problem universe is predictable, we can make a plan to generate a sequence of operations that is guaranteed to lead to a solution. However in the case of problems with uncertain outcomes, we have to follow a process of plan revision as the plan is carried out while providing the necessary feedback. (3rd Characteristic)

Below is an example of the 7 AI problem characteristics being applied to solve a water jug problem.

Image source https://gtuengineeringmaterial.blogspot.com/2013/05/discuss-ai-problems-with-seven-problem_1818.html

",10913,,,,,2/14/2018 15:32,,,,3,,,,CC BY-SA 3.0 5303,2,,3989,2/14/2018 17:58,,0,,"

Firstly, before we commence, I recommend that you refer to similar questions on the network, e.g. https://stackoverflow.com/questions/39386936/machine-learning-with-incomplete-data and https://stats.stackexchange.com/questions/103500/machine-learning-algorithms-to-handle-missing-data

Row Deletion

If a particular row has more than 70% missing values, you can delete the row to handle the null values. This method is advised only when there are enough samples in the data set. The major disadvantage of this method is that it reduces the power of the model because it reduces the sample size.

Replacing With Mean/Median/Mode

We can calculate the mean, median or mode of the feature and replace the missing values with it. Another approach is to approximate it with the deviation of neighbouring values.

Although this approach adds variance to the data set, it yields better results compared to removing rows and columns.

KNN or Random Forest imputation

In this approach, the missing values of an attribute are imputed using existing attributes that are most similar to the attribute whose values are missing. The similarity of two attributes is determined using a distance function.

The advantage of this approach is that k-nearest neighbour can predict both qualitative and quantitative attributes. Additionally you do not need to create a prediction model for each attribute with missing data in the dataset.
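
As an illustrative sketch covering the mean and KNN approaches above (using Python's scikit-learn rather than the R packages mentioned later; the tiny array is made up for the example):

import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Mean imputation: replace each missing value with its column mean.
print(SimpleImputer(strategy='mean').fit_transform(X))

# k-nearest-neighbour imputation: fill in values from the most similar rows.
print(KNNImputer(n_neighbors=2).fit_transform(X))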

Predicting the Missing Values

Prediction is one of the more sophisticated methods for handling missing data. Using the features which do not have missing values, we can predict the null values with the help of a machine learning algorithm.

In this case, we divide our data set into two. One set with no missing values and another set with missing values. The first data set becomes the training data set of the model while the second data set with missing values is the test data set.

We then create a model to predict target variables based on other attributes of the training data set and populate the missing values of the test data set.(Sayali S 2016)

Caret or randomForestSRC packages in R

The R package randomForestSRC can handle missing data for a wide class of analyses i.e. regression, classification, unsupervised and multivariate (Ankur C 2014). Additionally, the Caret R package can be used to predict missing data.

Reference : https://analyticsindiamag.com/5-ways-handle-missing-values-machine-learning-datasets/

",10913,,,,,2/14/2018 17:58,,,,1,,,,CC BY-SA 3.0 5308,1,,,2/14/2018 19:43,,5,159,"

Greedy algorithms are well known, and although useful in a local context for certain problems, and even potentially find general, global optimal solutions, they nonetheless trade optimality for shorter-term payoffs.

This seems to me a good analogue for human greed, although there is also the grey goo type of greed that is senseless acquisition of material (think plutocrats who talk about wealth as merely a way of "keeping score".)

Technical debt is an extension of development practices that fall under the algorithmic definition of greed (short-term payoff leads to trouble down the road.) This may be further extended to any non-optimized code in terms of energy waste (flipping of unnecessary bits) which will only increase as everything becomes more computerized.

So my question is:

  • What are other vices that can arise in algorithms?
",1671,,2444,,12/12/2021 16:40,12/12/2021 16:57,Algorithms can be greedy. What are some other algorithmic vices?,,3,0,,,,CC BY-SA 4.0 5309,2,,5308,2/14/2018 20:35,,3,,"

Algorithms can be racist, sexist, and otherwise bigoted. When we feed them data produced by systems that are biased against groups of people, the algorithm will learn to behave that way. We're used to garbage in, garbage out; now we have to worry about racism in, racism out.

See:

",12732,,2444,,12/12/2021 16:40,12/12/2021 16:40,,,,2,,,,CC BY-SA 4.0 5314,1,,,2/14/2018 21:42,,1,2855,"

I'm interested in working on challenging AI problems, and after reading this article (https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/) by DeepMind and Blizzard, I think that developing a robust AI capable of learning to play Starcraft 2 with superhuman level of performance (without prior knowledge or human hard-coded heuristics) would imply a huge breakthrough in AI research.

Sure, I know this is an extremely challenging problem, and by no means do I pretend to be the one to solve it, but I think it's a challenge worth taking on nonetheless, because the complexity of the decision making required is much closer to the real world, which forces you to come up with much more robust, generalizable AI algorithms that could potentially be applied to other domains.

For instance, an AI that plays Starcraft 2 would have to be able to watch the screen, identify objects, positions, identify units moving and their trajectories, update its current knowledge of the world, make predictions, make decisions, have short term and long term goals, listen to sounds (because the game includes sounds), understand natural language (to read and understand text descriptions appearing in the screen as well), it should probably be endowed also with some sort of attention mechanism to be able to pay attention to certain regions of interest of the screen, etc. So it becomes obvious that at least one would need to know about Computer Vision, Object Recognition, Knowledge Bases, Short Term / Long Term Planning, Audio Recognition, Natural Language Processing, Visual Attention Models, etc. And obviously it would not be enough to just study each area independently, it would also be necessary to come up with ways to integrate everything into a single system.

So, does anybody know good resources with content relevant to this problem? I would appreciate any suggestions of papers, books, blogs, whatever useful resource out there (ideally state-of-the-art) which would be helpful for somebody interested in this problem.

Thanks in advance.

",12746,,12746,,2/15/2018 13:05,2/20/2018 7:37,Training an AI to play Starcraft 2 with superhuman level of performance?,,2,1,,,,CC BY-SA 3.0 5316,2,,4048,2/15/2018 3:46,,4,,"

I think you raise a good question, especially with respect to how the NN's inputs & outputs are mapped onto the mechanics of a card game like MtG, where the available actions vary greatly with context.

I don't have a really satisfying answer to offer, but I have played Keldon's Race for the Galaxy NN-based AI - I agree that it's excellent - and have looked into how it tackled this problem.

The latest code for Keldon's AI is now searchable and browseable on github.

The ai code is in one file. It uses 2 distinct NNs, one for ""evaluating hand and active cards"" and the other for ""predicting role choices"".

What you'll notice is that it uses a fair amount on non-NN code to model the game mechanics. Very much a hybrid solution.

The mapping of game state into the evaluation NN is done here. Various relevant features are one-hot-encoded, eg the number of goods that can be sold that turn.


Another excellent case study in mapping a complex game into a NN is the Starcraft II Learning Environment created by Deepmind in collaboration with Blizzard Entertainment. This paper gives an overview of how a game of Starcraft is mapped onto a set of features that a NN can interpret, and how actions can be issued by a NN agent to the game simulation.

",12751,,12751,,2/15/2018 4:20,2/15/2018 4:20,,,,0,,,,CC BY-SA 3.0 5317,1,,,2/15/2018 3:54,,1,1479,"

I often develop bots and I need to understand what some people are saying.

Examples:
- I want an apple
- I want an a p p l e

How do I find the object (apple)? I honestly don't know where to start looking. Is there an API that I can send the text to which returns the object? Or perhaps I should manually code something that analyses the grammar?

",12752,,2193,,12/6/2018 21:18,12/6/2018 21:18,How to find the subject in a text?,,1,0,,1/18/2021 4:32,,CC BY-SA 4.0 5318,1,,,2/15/2018 5:55,,4,1376,"

I have been trying to use CNN for a regression problem. I followed the standard recommendation of disabling dropout and overfitting a small training set prior to trying for generalization. With a 10 layer deep architecture, I could overfit a training set of about 3000 examples. However, on adding 50% dropout after the fully-connected layer just before the output layer, I find that my model can no longer overfit the training set. Validation loss also stopped decreasing after a few epochs. This is a substantially small training set, so overfitting should not have been a problem, even with dropout. So, does this indicate that my network is not complex enough to generalize in the presence of dropout? Adding additional convolutional layers didn't help either. What are the things to try in this situation? I will be thankful if someone can give me a clue or suggestion.

PS: For reference, I am using the learned weights of the first 16 layers of Alexnet and have added 3 convolutional layers with ReLU non-linearity followed by a max pooling layer and 2 fully connected layers. I update weights of all layers during training using SGD with momentum.

",12754,,,,,2/22/2021 2:59,What to do if CNN cannot overfit a training set on adding dropout?,,3,3,,,,CC BY-SA 3.0 5320,1,5323,,2/15/2018 11:38,,3,313,"

This is a kind of biological and philosophical question. So, the recent concern in AI is that an AI agent may go rogue with prominent people voicing their concerns.

Now say, we have created an AI (you are free to use your own definition of what makes an AI intelligent) which has gone rogue with powers given in this question.

Now, the broad view of today's biology is that everything we do is to further our genes down the future (leaving aside small technical details). It is even widely accepted that we are just machines whose controller are the genes. Everything we do is controlled/hardwired by the genes with some avenue of learning from experiences. Also genes only further their own interest. Scientist George Price even wrote a mathematical equation proving all our acts are selfish and only furthering the interest of our genes (article). Also Richard Dawkins is a pioneer of this idea (this is only to show I haven't pulled the idea out from air).

Now, my question is that what will possibly be the motivation of an AI agent to go rogue? It doesn't have genes whose interest it needs to further. We all do something for an end result. What is the end result a rogue AI might try to achieve/attain and why?

",,user9947,,,,2/26/2018 15:29,Motivation that drives a rogue AI agent,,2,1,,,,CC BY-SA 3.0 5322,1,,,2/15/2018 14:16,,9,2069,"

I was reading Gary Marcus's a Critical Appraisal of Deep Learning. One of his criticisms is that neural networks don't incorporate prior knowledge in tackling a problem. My question is: have there been any attempts at encoding prior knowledge in deep neural networks?

",10913,,2444,,12/3/2020 14:46,12/3/2020 15:37,Can prior knowledge be encoded in deep neural networks?,,5,4,,,,CC BY-SA 4.0 5323,2,,5320,2/15/2018 15:14,,2,,"

Today, prominent machine learning techniques involve trying to minimize some cost function. In many simple cases this cost function is easy to specify, for instance, linear regression is simply trying to minimize the distance between input data and a line of best fit. No matter what the cost function, the agent is trying to minimize it (or maximize a reward function). That is its motivation.

However, as problems become harder it becomes more challenging for humans to design a cost/reward function such that a system/agent is actually trying to do what the humans want it to do. For instance, one might want a cup of coffee and reward the agent for getting it to them very quickly. In this case, the agent might make the coffee and then throw it at the human, which isn't what the human actually wanted. Something was misspecified (e.g. don't throw or spill it).

Problems like these could result in a rogue AI whose sole motivation would be to minimize its cost function. For instance, this coffee-AI may think that it would never screw up getting coffee (and thus get a bad reward) if there were no humans to ask for one.

",4398,,4398,,2/19/2018 3:21,2/19/2018 3:21,,,,0,,,,CC BY-SA 3.0 5325,1,,,2/15/2018 16:31,,2,247,"

I have completed week 1 of Andrew Ng's course. I understand that the cost function for linear regression is defined as $J(\theta_0, \theta_1) = \frac{1}{2m}\sum (h(x)-y)^2$ and that $h$ is defined as $h(x) = \theta_0 + \theta_1 x$. But I don't understand what $\theta_0$ and $\theta_1$ represent in these equations. Is someone able to explain this?

",12762,,2444,,3/2/2019 11:31,7/18/2020 23:27,Understanding a few terms in Andrew Ng's definition of the cost function for linear regression,,3,1,,,,CC BY-SA 4.0 5326,2,,4213,2/15/2018 16:51,,1,,"

I highly recommend that you start reading on the Netflix challenge. It has tonnes of useful and interesting examples dealing with this sort of thing.

You will need an Algorithm that builds a score on both 'quality' and 'quantity'. That is, it needs to add a 'weight' to the final rating based on the number of reviews that an individual has. This is so that, for example, an individual with 50 8 score reviews would be rated higher than an individual with only one 9 score review.

I recommend that you implement Bayesian estimates to calculate weighted voting.

IMDb (Internet Movie Database) utilizes this algorithmn to determine its IMDB top 250 movies. (Robert C 2010)

The formula for calculating the Top Rated 250 Titles gives a true Bayesian estimate:

weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C

where:

R = average for the movie (mean) = (Rating)

v = number of votes for the movie = (votes)

m = minimum votes required to be listed in the Top 250 (currently 3000)

C = the mean vote across the whole report (currently 6.9)
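
A small sketch of this formula in Python (the values of m and C below are just the ones quoted above):

def weighted_rating(R, v, m=3000, C=6.9):
    # Bayesian estimate: items with few votes are pulled towards the global
    # mean C; items with many votes keep a rating close to their own average R.
    return (v / (v + m)) * R + (m / (v + m)) * C

print(weighted_rating(R=9.0, v=10))     # ~6.91: too few votes to trust the 9.0
print(weighted_rating(R=8.0, v=5000))   # ~7.59: many votes, stays close to 8.0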

Please note that in addition to the rating algorithm, Uber has a dispatch algorithm which takes into consideration factors such as which drivers are online and which drivers are nearest to the passenger.

",10913,,-1,,6/17/2020 9:57,2/15/2018 16:51,,,,1,,,,CC BY-SA 3.0 5327,2,,5325,2/15/2018 16:56,,2,,"

Linear regression is always associated with an activation function, the weights between layers and the structure of the network. The weights between layers are $\theta_0$ and $\theta_1$. These weights and the input features undergo the dot product operation, which is then the input to the activation function of the next layer's nodes.

An apparently different, but equivalent, use of $\theta_0$ and $\theta_1$ is as coefficients of one or more terms which are themselves combinations of the input features.

Broadly, $\theta_i$ denotes a weight, i.e. how much preference you want to give to a feature. In the expression $h(x) = \theta_0 + \theta_1 x$, $\theta_0$ plays the role of the intercept (bias) and $\theta_1$ the slope applied to the input $x$.
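
As a toy illustration (the data, learning rate and iteration count here are made up for the sketch), gradient descent finds the $\theta_0$ and $\theta_1$ that minimize the cost:

import numpy as np

# Toy data lying roughly on the line y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])

theta0, theta1 = 0.0, 0.0     # the two parameters being asked about
alpha, m = 0.1, len(x)

for _ in range(1000):         # plain batch gradient descent on J(theta0, theta1)
    h = theta0 + theta1 * x   # hypothesis h(x) = theta0 + theta1 * x
    theta0 -= alpha * np.sum(h - y) / m
    theta1 -= alpha * np.sum((h - y) * x) / m

print(theta0, theta1)         # close to the intercept 1 and the slope 2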

",,user9947,2444,,11/28/2019 20:35,11/28/2019 20:35,,,,0,,,,CC BY-SA 4.0 5328,2,,5322,2/15/2018 17:21,,4,,"

Yes, we can do it in a deep learner.

For example, suppose we have an input vector like $(a, b)$ and, from prior knowledge, we know that $a^2 + b^2$ is important too. Hence, we can add this value to the vector, like $(a, b, a^2 + b^2)$.

As another example, suppose the date/time is important in your data, but not encoded in the input vector. We can add this to the input vector as an additional dimension.

In summary, depending on the structure of the prior knowledge, we can encode it into the input vector.
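
A tiny sketch of the first example (made-up numbers):

import numpy as np

X = np.array([[3.0, 4.0],
              [1.0, 2.0]])                      # original (a, b) inputs
extra = (X[:, 0] ** 2 + X[:, 1] ** 2)[:, None]  # prior-knowledge feature a^2 + b^2
X_aug = np.hstack([X, extra])                   # inputs become (a, b, a^2 + b^2)
print(X_aug)                                    # [[ 3.  4. 25.], [ 1.  2.  5.]]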

",4446,,2444,,12/3/2020 14:41,12/3/2020 14:41,,,,1,,,,CC BY-SA 4.0 5330,2,,5314,2/15/2018 20:59,,0,,"

Good to have a Starcraft question. The game has been the subject of growing interest re: AI in recent years, possibly due to its status as king of RTS, which has led to a professional player class no doubt useful for evaluation of AI strength.

Because, last time I checked, Humans Are Still Better Than AI at StarCraft—for Now...

It's highly likely there will soon be an algorithm that can beat humans at the game, probably an extension of DeepMind's Alphas, so the clock is ticking...

I'm personally interested in classical, generalized approaches to strategy game AI, which is archaic from the standpoint of pure strength, but interesting from a game solving perspective. (Motivations here are from a game product perspective, under the assumption that most humans don't like losing every game with no possibility of ever winning;) The way I'd personally go about it would be to start thinking about how to abstract the game, generalize the map, unit densities, etc., and try to determine though AI testing if there are sound axioms.

For superhuman strength, Deep Learning is clearly the way to go. The recent results in Go and Chess are just the beginning of the validation of the technique.

Speaking generally, the way I see it, you have a few ways to go: (1) bootstrap existing NN and tweak until it can beat you every time. But I'm sure many people are already doing this; (2) try to reinvent the wheel and write your own better NN from the ground up.

",1671,,1671,,2/15/2018 22:15,2/15/2018 22:15,,,,0,,,,CC BY-SA 3.0 5331,2,,5322,2/16/2018 0:30,,5,,"

Neural nets can incorporate prior knowledge. This can be done in two ways: the first (most frequent and more robust) is data augmentation. For example, in convolutional networks, if we know that the ""value"" (whatever that is, class/regression) of the object we are looking at is rotationally/translationally invariant (our prior knowledge), then we augment the data with random rotations/shifts. The second is in the loss function, with some additional term.
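
A minimal sketch of the augmentation idea (plain NumPy, 90-degree rotations only; a real pipeline would typically use arbitrary rotations and shifts):

import numpy as np

def augment_with_rotations(images):
    # Encode rotational invariance as a prior: every training image is added
    # in all four 90-degree orientations, keeping the same label.
    out = []
    for img in images:
        for k in range(4):
            out.append(np.rot90(img, k))
    return np.stack(out)

batch = np.random.rand(8, 32, 32)             # 8 toy single-channel images
print(augment_with_rotations(batch).shape)    # (32, 32, 32): 4 copies per image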

",8577,,,,,2/16/2018 0:30,,,,0,,,,CC BY-SA 3.0 5332,1,,,2/16/2018 5:12,,7,4590,"

I'm trying to implement Q-learning (state-based representation and no neural / deep stuff) but I'm having a hard time getting it to learn anything.

I believe my issue is with the exploration function and/or learning rate. Thing is, I see different explanations in the sources I am following so I'm not sure what's the right way anymore.

What I understand so far is that Q-learning is TD with q-val iteration.

So a time-limited q-val iteration step is:

Q[k+1](s,a) = ∑(s'): t(s,a,s') * [r(s,a,s') + γ * max(a'):Q[k](s',a')]

Where:

Q = q-table: state,action -> real
t = MDP transition model
r = MDP reward func
γ = discount factor.

But since this is a model-free, sample-based setting, the above update step becomes:

Q(s,a) = Q(s,a) + α * (sample - Q(s,a))

Where:

sample = r + γ * max(a'):Q(s',a')
r  = reward, also coming from percept after taking action a in step s.
s' = next state coming from percept after taking action a in step s. 

Now for example, assume the following MDP:

    0    1    2    3    4  
0 [10t][ s ][ ? ][ ? ][ 1t]

Discount: 0.1 | 
Stochasticity: 0 | 
t = terminal (only EXIT action is possible)
s = start

With all of the above, my algo (in pseudo code) is:

input: mdp, episodes, percept
Q: s,a -> real is initialized to 0 for all a,s
α = .3

for all episodes:
    s = mdp.start

    while s not none:
        a  = argmax(a): Q(s,a) 
        s', r = percept(s,a)
        sample = r + γ * max(a'):Q(s',a')
        Q(s,a) = Q(s,a) + α * [sample - Q(s,a)]
        s = s'

As stated above, the algorithm will not learn. Because it will get greedy fast.

It will start at 0,1 and choose the best action so far. All q-vals are 0, so it will choose based on the arbitrary order in which the q-vals are stored in Q. Assume 'W' (go west) is chosen. It will go to 0,0 with a reward of 0 and a q-val update of 0 (since we don't yet know that 0,0,EXIT yields 10)

In the next step it will take the only possible action EXIT from 0,0 and get 10.

At this point the q-table will be:

0,1,W:      0
0,0,Exit:   3 (reward of 10 scaled by the learning rate of .3)

And the episode is over because 0,0 was terminal. On the next episode, it will start at 0,1 again and take W again because of the arbitrary order. But now 0,1,W will be updated to 0.09. Then 0,0,Exit will be taken again (and 0,0,Exit updated to 5.1). Then the second episode will be over.

At this point the q-table is:

0,1,W:      0.09
0,0,Exit:   5.1

And the sequence 0,1,W->0,0,Exit will be taken ad infinitum.

So this takes me to learning rates and the exploration functions.

The book 'Artificial Intelligence: A Modern Approach' (3ed, by Russell) first mentions (pages 839-842) the exploration function as something to put in the val update (because it is discussing a model-based, value iteration approach instead).

So extrapolating from the val update discussion in the book, I'd assume the q-val update becomes:

Q(s,a) = ∑(s'): t(s,a,s') * [r(s,a,s') + γ * max(a'):E(s',a')]

Where E would be an exploration function which according to the book could be something like:

E(s,a) = <bigValue> if visitCount(s,a) < <minVisits> else Q(s,a)

The idea being to artificially pump up the vals of actions which have not been tried yet and so now they'll be tried out at least minVisits times.

But then, on page 844, the book shows pseudocode for Q-learning and does not use this E in the q-val update, but rather in the argmax of the action selection. I guess that makes sense, since exploration amounts to choosing an action...

The other source I have is the UC Berkeley CS188 lecture videos/notes. In those (Reinforcement Learning 2: 2016) they show the exploration function in the q-val update step. This is consistent with what I extrapolated from the book's discussion on value iteration methods but not with what the book shows for Q-Learning (remember the book uses the exploration function in the argmax instead).

I tried placing exploration functions in the update step, the action selection step and in both at the same time.. and still the thing eventually gets greedy and stuck.

So not sure where and how this should be implemented.

The other issue is the learning rate. The explanation usually goes ""you need to decrease it over time."" Ok.. but is there some heuristic? Right now, based off the book I am doing:

learn(s,a) = 0.3 / visitCount(s,a). But no idea if it is too much or too little or just right.

Finally, assuming I have the exploration and learning rate right, how would I know how many episodes to train for?

I'm thinking I'd have to keep 2 versions of the Q-table and check at which point the q-vals do not change much from previous iterations (similar to value iteration for solving known MDPs).

",12773,,1671,,2/16/2018 17:46,11/5/2020 13:14,How to implement exploration function and learning rate in Q Learning,,1,1,,,,CC BY-SA 3.0 5333,1,,,2/16/2018 5:39,,5,402,"

I'm working on a project related to machine Q&A, using the SQuAD dataset. I've implemented a neural-net solution for finding answers in the provided context paragraph, but the system (obviously) struggles when given questions that are unanswerable from the context. It usually produces answers that are nonsensical and of the wrong entity type.

Is there any existing research in telling whether or not a question is answerable using the info in a context paragraph? Or whether a generated answer is valid? I considered textual entailment but it doesn't seem to be exactly what I'm looking for (though maybe I'm wrong about that?)

",12775,,12775,,2/16/2018 13:45,1/4/2022 16:34,Methods to tell if a question can be answered from a paragraph,,1,2,,,,CC BY-SA 3.0 5334,2,,5332,2/16/2018 8:37,,4,,"

Your main problem is that you need to separate out what is driving the behaviour policy from the Q-table.

Q Learning is an off-policy algorithm. The Q-table that it eventually learns is for an optimal policy (also called the target policy). In order to be able to learn that policy, the agent needs to explore. The usual way to do this is to make the agent follow a different policy (called the behaviour policy). For efficient learning, you generally want the behaviour policy to be similar to the target policy. So it is common to also drive the behaviour policy from the Q-table, but not absolutely necessary.

You do not need an exploration function, but it is one good way to drive exploration.

The simplest behaviour policy, and one that will work in your case, is to behave completely randomly - ignore the Q-table and select actions at random. With your simple toy problem, that should work reasonably well.

A more common approach is to behave $\epsilon$-greedily. For some probability $\epsilon$ (e.g. $\epsilon = 0.1$), behave randomly. Otherwise take the argmax over a of $Q(s,a)$.
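
A minimal sketch of that separation (illustrative Python with a dictionary Q-table; the names are not taken from your code):

import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    # Behaviour policy: with probability epsilon explore, otherwise act greedily.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s_next, next_actions, alpha=0.3, gamma=0.1):
    # Off-policy update: the target maximises over the next actions regardless
    # of which action the behaviour policy will actually take next.
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in next_actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

The update rule stays exactly as in your pseudocode; only the action selection changes, so the agent keeps trying actions whose estimates would otherwise stay at zero forever.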

The exploration function approach is similar to what you have so far, you are just missing the separation of behaviour policy from the Q-table updates. The Q-table update ignores the exploration function, and maximises over estimated next values:

$$Q(s,a) = Q(s,a) + \alpha (r + \gamma \text{max}_{a'}Q(s',a') - Q(s,a))$$

The behaviour policy for picking the actual next action to take can be decided by using the exploration function:

$$a' = \text{argmax}_{a'} E(s',a')$$

Note that in stochastic environments (where the chosen action may lead randomly to multiple different states), you may be able to get away without any separate behaviour policy, and always act greedily with respect to the Q-table. However, that is not a generic solution - such a learning agent would do badly in deterministic environments.

",1847,,1847,,11/5/2020 13:14,11/5/2020 13:14,,,,2,,,,CC BY-SA 4.0 5336,1,5487,,2/16/2018 10:17,,4,903,"

I am new to neural networks. Is it possible to train a neural network to identify only one type of object? For instance, a table from a large set of images, where the neural network should be able to identify if new images are tables.

",2904,,2444,,12/20/2021 22:54,12/20/2021 22:54,Is it possible to train a neural network to identify only one type of object?,,1,0,,,,CC BY-SA 4.0 5338,1,,,2/16/2018 12:21,,2,392,"

What are examples of simple problems and applications that can be solved with AI techniques, for a beginner who is trying to make use of his basic programming skills into AI at the beginning level?

",12780,,2444,,6/2/2020 23:51,6/2/2020 23:51,What are examples of simple problems and applications that can be solved with AI techniques?,,2,0,,,,CC BY-SA 4.0 5340,2,,5134,2/16/2018 13:57,,1,,"

I'm not quite sure it's possible. Hash functions are used to map an input to an output in a way that is not reversible.

Many companies store a hash of your password on their servers so in case of a security breach they haven't given the adversaries a long list of passwords.

As far as it goes for finding the exact hash of a word, it seems infeasible.

Edit: binary classification refers to the output having two possible states. A ten-dimensional one-hot vector is not binary.

",9271,,9271,,2/16/2018 14:43,2/16/2018 14:43,,,,1,,,,CC BY-SA 3.0 5342,2,,5338,2/16/2018 17:42,,5,,"

This is fairly boilerplate advice, but, since you're brand new to AI, I'd personally suggest writing a classical Tic-Tac-Toe AI, ideally using minimax.

I suggest this because minimax is fundamental to AI, and there are many webpages devoted to this subject, such as How to make your Tic Tac Toe game unbeatable by using the minimax algorithm and Tic Tac Toe: Understanding the Minimax Algorithm. (Google search for ""Tic-tac-toe"" and ""minimax"" will yield a plethora of other sites. I'd also recommend looking at this minimax page from Stanford: ""Strategies and Tactics for Intelligent Search"".)

I recommend this approach as a good basic primer. The real cutting-edge work is being done in Machine Learning and Neural Networks, and for that reason, it's probably more important than ever to have some basic grounding in classical AI before you start dipping your toe in that pond.

",1671,,,,,2/16/2018 17:42,,,,0,,,,CC BY-SA 3.0 5343,1,5344,,2/16/2018 19:36,,9,4785,"

I was reading the paper Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks about improving the learning of an ANN using weight normalization.

They consider standard artificial neural networks where the computation of each neuron consists in taking a weighted sum of input features, followed by an elementwise nonlinearity

$$y = \phi(\mathbf{x} \cdot \mathbf{w} + b)$$

where $\mathbf{w}$ is a $k$-dimensional weight vector, $b$ is a scalar bias term, $\mathbf{x}$ is a $k$-dimensional vector of input features, $\phi(\cdot)$ denotes an elementwise nonlinearity and $y$ denotes the the scalar output of the neuron.

They then propose to reparameterize each weight vector $\mathbf{w}$ in terms of a parameter vector $\mathbf{v}$ and a scalar parameter $g$ and to perform stochastic gradient descent with respect to those parameters instead.

$$ \mathbf{w} = \frac{g}{\|\mathbf{v}\|} \mathbf{v} $$

where $\mathbf{v}$ is a $k$-dimensional vector, $g$ is a scalar, and $\|\mathbf{v}\|$ denotes the Euclidean norm of $\mathbf{v}$. They call this reparameterizaton weight normalization.

What is this scalar $g$ used for, and where does it come from? Is $\mathbf{w}$ the normalized weight? In general, how does weight normalization work? What is the intuition behind it?

",12788,,2444,,11/30/2021 6:52,11/30/2021 6:52,How does weight normalization work?,,1,0,,,,CC BY-SA 4.0 5344,2,,5343,2/16/2018 20:15,,2,,"

Your interpretation is quite correct. I could not understand how it would speed up the convergence though. What they are doing is basically re-assigning the magnitude of the weight vector (also called norm of the weight vector).

To put things in perspective, the conventional approach with any machine learning cost function is to not only follow the variation of the error with respect to a weight variable (the gradient) but also to add a regularization term $\lambda (w_0^2 + w_1^2 + \dots)$. This has a few advantages:

  • The weights will not grow excessively large, even if you make some mistake (e.g. costs blowing up due to a poor choice of learning rate).

  • Also, convergence is somehow quicker (maybe because you now have 2 ways to control how much weight should be given to a feature: the weights of unimportant features are reduced not only by the normal gradient, but also by the gradient of the regularization term $\lambda (w_0^2 + w_1^2 + \dots)$).

In this paper, they have proposed to fix the magnitude of the weight vector. This is a good approach, although I am not sure if it is better than feature normalization. By limiting the magnitude of the weight vector to $g$, they are fixing the resource available. The intuition is that, if you have 24 hours, you have to distribute this time among subjects; you'll distribute it in a way such that your grade/knowledge is maximized. So this might be helping with faster convergence.
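
To see this numerically, here is a minimal NumPy sketch (my own illustration): whatever direction $\mathbf{v}$ points in, the resulting $\mathbf{w}$ always has magnitude exactly $g$.

import numpy as np

# Minimal sketch of the reparameterization w = g * v / ||v||.
rng = np.random.default_rng(0)
v = rng.normal(size=5)          # unconstrained direction parameter
g = 3.0                         # scalar magnitude parameter

w = g * v / np.linalg.norm(v)   # the actual weight vector used by the neuron
print(np.linalg.norm(w))        # always equals g, no matter what v is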

Another intuition: when you subtract the gradient from a weight vector, you use a learning rate $\alpha$, which decides how much of the error gradient is subtracted from the weights. In this approach, you are not only subtracting the gradient but also using another scalar $g$ to scale the weights. I call this $g$ a learning-rate-like parameter because you can tune it, which in turn changes the values of the weights, which in turn affects the future gradient-descent updates of the weights.

I am sure someone will post a better mathematical explanation of this stuff but this is all the intuition I could think of. I would be grateful if other intuitions and mathematical subtleties are pointed out. Hope this helps!

",,user9947,2444,,10/24/2019 15:19,10/24/2019 15:19,,,,2,,,,CC BY-SA 4.0 5345,2,,5338,2/16/2018 20:24,,1,,"

I will assume you are talking about applied AI (in generalized/strong AI we have nothing to program yet :-).

You can look at any university introduction-to-AI course and see its chapters and the program examples it uses (starting to program without any theory is not the way).

For example, one common topic in this kind of course is pathfinding, using algorithms such as A* and applying them to puzzles such as the Towers of Hanoi. This kind of knowledge is a must for any activity in AI.

The Stanford link provided by @DukeZhou is a good example of one of these courses; I just suggest starting it from the first chapter instead of going directly to minimax.

Later on, you can jump to more advanced concepts, such as recognition/classification and its common approaches: k-nearest neighbours/k-means, decision networks, neural nets, etc.

",12630,,12630,,2/16/2018 20:36,2/16/2018 20:36,,,,1,,,,CC BY-SA 3.0 5346,2,,5043,2/16/2018 22:40,,0,,"

I'd strongly recommend looking into Game Theory's relationship and impact on AI. Prisoner's Dilemma is a good place to start, because optimality can have repercussions.

With computing in general, optimization is a major goal. For AI, optimal decision-making is what it's all about. But sans humanity, this may prove to be problematic.


(Apologies for the brevity--I'll be returning to elaborate since this is a subject of personal preoccupation--but I wanted to leave you with a few tidbits in the meantime. :)

",1671,,,,,2/16/2018 22:40,,,,0,,,,CC BY-SA 3.0 5347,1,,,2/16/2018 22:44,,3,202,"

I have been reading a lot lately about some very promising work coming out of Uber's AI Labs using mutation algorithms enhanced with novelty search to evolve deep neural nets. See the paper Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients (2018) for more details.

In novelty search, are the novel structures or behavior of the neural network rewarded?

",12788,,2444,,11/22/2019 0:33,5/26/2022 9:58,"In novelty search, are the novel structures or behaviour of the neural network rewarded?",,1,3,,,,CC BY-SA 4.0 5349,1,5351,,2/17/2018 8:29,,2,54,"

i'm trying to identify numbers and letters in license plate. License plate images are taken at different lighting condtion and converted to gray image. My concern with type of data for training is:

Gray Image:

  • Since they are taken at different lighthing condition, gray image have different pixel intensity for same number. Which means, i have to get many training data for different lighting condition to train.

Edge Image:

  • They lack enough pixel information since only edge is white while others(background) are black. So i think they will be very weak for translational difference like shearing or shifting.

I want to get some information about which type of image is better for training number in different lighting condition. I wish to use edge image if they don't differ much since i can prepare edge image right now.

",12090,,,,,2/17/2018 10:37,"In number classification using neural network, is training with edge image better than gray image?",,1,0,,,,CC BY-SA 3.0 5351,2,,5349,2/17/2018 10:13,,1,,"

Theoretically, you will have no gain in the error rate if the system preprocesses the images with a linear high-pass filter before sending them to the NN.

Let's see a simple 1-dimensional case that supports this statement:

Assume the inputs are ""a"", ""b"" and ""c"". A node in the first hidden layer will receive an input to its activation function equal to s = w1*a + w2*b + w3*c + ..., where w1, w2, w3, ... are the weights for this node.

Now, assume a simple differenced (high-pass filtered) case, where the inputs become a'=a, b'=b-a, c'=c-b, ..., so the input to the hidden node is s' = w1'*a' + w2'*b' + w3'*c' + ... = w1'*a + w2'*(b-a) + w3'*(c-b) + ... = (w1'-w2')*a + (w2'-w3')*b + ...

Note that both inputs to the hidden node are the same if w1 = w1'-w2', w2 = w2'-w3', ... . So, the NN can itself perform the equivalent of the linear high-pass filtering by adjusting the weights of the first hidden layer.
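
A quick numerical check of this equivalence (a minimal Python/NumPy sketch; it assumes, as above, that the filtered input keeps the first value unchanged):

import numpy as np

# Check that a dot product on raw inputs can be reproduced on difference-filtered
# inputs by a suitable change of the first-layer weights.
rng = np.random.default_rng(0)
x = rng.normal(size=5)                               # raw inputs a, b, c, ...
x_filt = np.concatenate(([x[0]], np.diff(x)))        # filtered inputs a, b-a, c-b, ...

w = rng.normal(size=5)                               # weights learned on the raw inputs
w_filt = np.cumsum(w[::-1])[::-1]                    # equivalent weights: wk' = wk + w(k+1) + ...

print(np.dot(w, x), np.dot(w_filt, x_filt))          # the two pre-activations are equal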

However, changes in learning speed (convergence time) can be expected.

(ps: please, activate tex/latex in this site, urgently ! )

",12630,,12630,,2/17/2018 10:37,2/17/2018 10:37,,,,0,,,,CC BY-SA 3.0 5355,1,,,2/17/2018 16:39,,2,164,"

I'm studying a Master's Degree in Artificial Intelligence and I need to learn how to use the Java Neural Network Simulator, JavaNNS, program.

In one practice I have to build a neural network to use backpropagation on it.

I have created a neural network with one input layer with 12 nodes, one hidden layer with 6 nodes and one output layer with 1 node.

I'm using Kaggle's titanic competition data with this format following Dataquest course Getting Started with Kaggle for Titanic competition:

Pclass_1,Pclass_2,Pclass_3,Sex_female,Sex_male,Age_categories_Missing,Age_categories_Infant,Age_categories_Child,Age_categories_Teenager,Age_categories_Young Adult,Age_categories_Adult,Age_categories_Senior,Survived
0,0,1,0,1,0,0,0,0,1,0,0,0
1,0,0,1,0,0,0,0,0,0,1,0,1
0,0,1,1,0,0,0,0,0,1,0,0,1
1,0,0,1,0,0,0,0,0,1,0,0,1
0,0,1,0,1,0,0,0,0,1,0,0,0
0,0,1,0,1,1,0,0,0,0,0,0,0
1,0,0,0,1,0,0,0,0,0,1,0,0
0,0,1,0,1,0,1,0,0,0,0,0,0

If you want see the same data better in an Spreadsheet:

But they preprocess the data to use it with linear regression and I don't know if I can use these data with backpropagation

I think something is wrong because when I run backpropagation in JavaNNS I get these data:

opened at: Sat Feb 17 17:29:40 CET 2018
Step 200 MSE:   0.5381023044692738  validation: 0.11675894327003862
Step 400 MSE:   0.5372328944712378  validation: 0.11700781497209432
Step 600 MSE:   0.5370386219557437  validation: 0.11691717861750939
Step 800 MSE:   0.5370348711919518  validation: 0.11696104763606407
Step 1000 MSE:  0.5369724294992798  validation: 0.11697568840154722
Step 1200 MSE:  0.5369697016710676  validation: 0.11665485957481342
Step 1400 MSE:  0.5370053339270906  validation: 0.11684215268609244
Step 1600 MSE:  0.5370121961199371  validation: 0.11670833992558485
Step 1800 MSE:  0.5370200812483633  validation: 0.11673550099633925
Step 2000 MSE:  0.5367923502149529  validation: 0.11675956129361797

Nothing changes, it is like it doesn't learn anything.

How many hidden layers does the network have with how many nodes on each hidden layer?

Maybe the problem is that the data have been prepared to be used in Linear regression and I using it with Backpropagation.

I have only created the neural network, I haven't implemented the backpropagation algorithm because it is already implemented in JavaNNS.

",4920,,4920,,2/19/2018 16:20,2/19/2018 16:20,Data prepared to linear regression. Can I use it with backpropagation?,,0,5,,,,CC BY-SA 3.0 5356,2,,5314,2/17/2018 18:42,,2,,"

StarCraft II is a real-time strategy game that combines fast-paced micro actions with the need for high-level planning and execution. Since StarCraft II is a popular game with millions of users, it follows that defeating top players becomes a meaningful and measurable long-term objective in AI research.

Computer games provide a compelling solution to the issue of evaluating and comparing different learning and planning approaches on standardized tasks. They are an important source of challenges for research in AI.

Game-playing AI agents, e.g. DeepMind's Atari-net and DQN agents alongside OpenAI's Dota 2 bot, represent the first demonstration of a general-purpose agent that is able to continually adapt its behavior without any human intervention, a major technical step forward in the quest for general AI (source: DeepMind blog).

Computer games offer numerous advantages in AI research, e.g.:

  1. They have clear objective measures of success.
  2. Computer games typically output rich streams of observational data, which are ideal inputs for deep networks.
  3. They are externally defined to be difficult and interesting for a human to play. Therefore they provide an excellent test for intelligence.
  4. Games are designed to be able to run anywhere with the same interface and game dynamics. This enables running many simulations in parallel, sharing and updating the same table throughout training.
  5. In some cases pools of superb human players exist, making it possible to benchmark against highly skilled humans.

The StarCraft challenge for reinforcement learning introduces a taxing set of problems: it is a multi-agent problem with multiple players interacting; there is imperfect information due to a partially observed map; it has a large state space; and it has delayed credit assignment, requiring long-term strategies.

Tools

The SC2LE Environment

DeepMind and Blizzard have collaborated to release the SC2LE, which exposes StarCraft II as a research environment.

The SC2LE consists of three sub-components.

  1. A Linux Starcraft II binary.

  2. StarCraft II API which allows programmatic control of StarCraft II. The API can be used to start the game, get observations, take actions and review replays.

  3. PySC2, which is an open-source environment written in Python. It includes some mini-games and visualization tools

Open source Open AI RL environments

Universe - Universe is a software platform by OpenAI for measuring and training an AI's general intelligence across games, websites and other applications.

Gym - OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. It makes no assumptions about the structure of your agent and is compatible with any numerical computation library such as TensorFlow or Theano.

Supervised Classification Approach

Consider this: we could decide to screen-capture game sessions from expert players and use them as input to a model. The output could be the direction in which the AI agent should move. This would be a supervised classification approach.

However, this is not an elegant solution, because we are training a model not on a static dataset but on a dynamic one (the game environment). The training data from a game environment is stochastic/continuous, meaning any number of events can occur. Furthermore, humans learn most effectively by interacting with the environment, not by watching others interact with it.

Markov Decision Process

Markov Decision Processes (MDPs) provide a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker, e.g. game environments.

Reinforcement Learning with Deep Q-Learning

Q-Learning is a strategy that has been proven to find an optimal action-selection policy for any Markov Decision Process (MDP). In Q-Learning we choose an action that maximizes future reward. The further into the future we go, the more the rewards can diverge; we resolve this by discounting future rewards.

Unlike policy gradient methods, which attempt to learn functions which directly map an observation to an action, Q-Learning attempts to learn the value of being in a given state, and taking a specific action there. (Arthur J 2016)

The Q-Learning update rule is:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$

Where:

R = Reward

s = State (s' denotes the next state)

a = Action (a' ranges over the actions available in s')

$\alpha$ = learning rate, $\gamma$ = discount factor

Experience during learning is based on (s, a) pairs

One has an array Q and uses experience to update it directly

(Source: Wikipedia, https://en.wikipedia.org/wiki/Markov_decision_process)

One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment.
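
As a concrete illustration, here is a minimal tabular Q-learning sketch in Python (my own toy example of the update rule above, not tied to SC2LE or PySC2; the state/action counts and hyperparameters are arbitrary assumptions):

import numpy as np

# Toy tabular Q-learning: one Q-value per (state, action) pair.
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate

def choose_action(s):
    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])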

For further reference, I recommend you look at Siraj Raval's tutorial on Deep Q-Learning

https://www.youtube.com/watch?v=79pmNdyxEGo and source code for the same available here https://github.com/llSourcell/deep_q_learning

Additionally I recommend the following references for more information on computer game playing AI agents.

StarCraft II: A New Challenge for Reinforcement Learning https://arxiv.org/abs/1708.04782

Playing Atari with Deep Reinforcement Learning https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf

Human-level control through deep reinforcement learning https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf

",10913,,10913,,2/20/2018 7:37,2/20/2018 7:37,,,,0,,,,CC BY-SA 3.0 5357,2,,4139,2/18/2018 2:19,,1,,"

I also recommend you take a look at the following work by Uber AI Labs who used an interesting approach to computer games:

https://eng.uber.com/deep-neuroevolution/

",12788,,,,,2/18/2018 2:19,,,,1,,,,CC BY-SA 3.0 5358,1,,,2/18/2018 6:26,,1,34,"

I am new to neural networks. I am trying to model the run-off vs. time in a water channel after a storm event, given that I know the permeability of the material in the channel, the total precipitation, and some other single-valued parameters for a particular event.

I have a database of run-off histories, and the values of the associated parameters (permeability, total precipitation, etc.).

I want my model to give me a run-off vs. time history when I enter the associated parameters.

I do not know how to train my model. Do I just stack all the time histories in my database together and feed them in together? All examples in books use one time history to train the model. I'm confused.

",12814,,,,,2/18/2018 6:26,How to train a recurrent neural network with multiple series,,0,1,,,,CC BY-SA 3.0 5359,1,,,2/18/2018 8:41,,2,647,"

From Russell-Norvig:

A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-consistent, (k − 2)-consistent, . . . all the way down to 1-consistent.

How can a CSP be k-consistent without being (k - 1)-consistent? I can't think of any counter example for this case. Any help would be appreciated.

",12608,,,,,2/18/2018 19:00,Does k consistency always imply (k - 1) consistency?,,1,0,,,,CC BY-SA 3.0 5361,2,,5359,2/18/2018 16:42,,2,,"

Define P as a CSP where X, Y are the variables, domain of both is {1,2,3,4} and conditions in normal form are:

  1. node-condition X<4
  2. arc-condition X=Y

P is 2-consistent (arc consistent) because for any X value it is possible to find a Y value that fulfills the arc-condition X=Y.

However, P is not 1-consistent (node-consistent) because there exists an X value (X=4) that cannot fulfill the node condition X<4.

For these reasons, this problem is 2-consistent but not strongly 2-consistent.

Obviously, it is straightforward to convert this example into a strongly 2-consistent problem, just by reducing the domain to {1,2,3}.

",12630,,12630,,2/18/2018 19:00,2/18/2018 19:00,,,,0,,,,CC BY-SA 3.0 5362,1,,,2/18/2018 16:47,,1,204,"

The problem: I want to classify a trajectory if it has some properties, for example I want to create a simple 0/1 classifier for circular trajectories. If a target is moving in a circular trajectory the network should produce 1, if not it should produce 0.

My input and data set: what I have is a data set with Cartesian coordinates in 2D, so x, y, Vx, Vy. I have a dataset of 10000 trajectories, 5000 circular, 5000 rectilinear. So I feed the network a tensor [10000, 4, 1].

The question: I'm trying to use a network with three layers, an input layer with 4 neurons, a hidden layer with 2 LSTM units and one fully connected layer with a sigmoid activation function. Is it possible to feed the network a tensor [4x1] each time? Or do I need to provide the information in batches? Or what? Is the design of my basic network correct?

",12824,,,,,2/18/2018 16:47,Trajectory classification using RNN,,0,0,,,,CC BY-SA 3.0 5369,1,5421,,2/19/2018 1:45,,1,5030,"

I am implementing a feed-forward neural network with leaky ReLU activation functions and back-propagation from scratch. Now, I need to compute the partial derivatives, but I don't know what the derivative of the Leaky ReLU is.

Here is the C# code for the leaky RELU function which I got from this site:

private double leaky_relu(double x)
{
    if (x >= 0)
        return x;
    else
        return x / 20;
}
",12788,,2444,,5/30/2020 13:02,11/22/2022 23:36,What is the derivative of the Leaky ReLU activation function?,,3,1,,,,CC BY-SA 4.0 5370,1,5409,,2/19/2018 12:07,,3,712,"

I want to train a neural network for the detection of a single class, but I will be extending it to detect more classes. To solve this task, I selected the PyTorch framework.

I came across transfer learning, where we fine-tune a pre-trained neural network with new data. There's a nice PyTorch tutorial explaining transfer learning. We have a PyTorch implementation of the Single Shot Detector (SSD) as well. See also Single Shot MultiBox Detector with Pytorch — Part 1.

This is my current situation

  • The data I want to fine-tune the neural network with is different from the data that was used to initially train the neural network; more specifically, the neural network was initially trained with a dataset of 20 classes

  • I currently have a very small labeled training dataset.

To solve this problem using transfer learning, the solution is to freeze the weights of the initial layers, and then train the neural network with these layers frozen.

However, I am confused about what the initial layers are and how to change the last layers of the neural network to solve my specific task. So, here are my questions.

  1. What are the initial layers in this case? How exactly can I freeze them?

  2. What are the changes I need to make while training the NN to classify one or more new classes?

",11038,,2444,,1/7/2021 17:45,1/7/2021 17:45,"When doing transfer learning, which initial layers do we need to freeze, and how should I change the last layer for my task?",,1,0,,,,CC BY-SA 4.0 5371,1,,,2/19/2018 13:18,,2,97,"

Lately I've been wondering: is there a way to locate redundant/unnecessary/misleading inputs by analysis of the weights in the first layer?

",12327,,12327,,2/19/2018 17:53,2/23/2018 9:30,Identify unnecessary inputs of NN,,1,1,,,,CC BY-SA 3.0 5372,1,,,2/19/2018 16:54,,1,56,"

The FaceNet paper mentions a gradient algorithm called 'AdaGrad' (Adaptive Gradient), with a reference to this paper, which has apparently been used to calculate the gradient of the Triplet Loss function. Even after referring to that paper, I find it hard to understand how to calculate this adaptive gradient.

Any ideas regarding this matter? Would love to hear any explanations or ideas towards understanding this concept.

Thank you.

",12843,,,,,2/19/2018 16:54,How to calculate Adaptive gradient?,,0,0,,,,CC BY-SA 3.0 5376,1,,,2/19/2018 22:19,,1,39,"

I have a problem in which the dimensions of the input are increasing in rows and columns at each timestep. What preprocessing method could be used, or are there any architectures suited to solving such a case?

",12849,,,,,2/19/2018 22:19,Dealing with input to recurrent net with changing dimensions,,0,0,,,,CC BY-SA 3.0 5377,1,5379,,2/20/2018 0:11,,6,2879,"

I have a map. I need to colour it with $k$ colours, such that two adjacent regions do not share a colour.

How can I formulate the map colouring problem as a hill climbing search problem?

",12519,,2444,,3/2/2019 11:02,3/2/2019 11:02,How can I formulate the map colouring problem as a hill climbing search problem?,,2,0,,,,CC BY-SA 4.0 5378,2,,84,2/20/2018 3:02,,2,,"

Really, any 'intelligence' exhibited by a computer is deemed AI, regardless of whether it uses brute force or smart heuristics. For example, a chat bot can be coded to respond to most inputs using many, many if statements. This is an AI no matter how poorly coded/designed it is.

The chess-playing computer beating a human professional can be seen as a meaningful milestone. I mean, someone programmed a computer to beat grandmaster chess players and chess geniuses. Many thought that wasn't possible, since chess is such a complex game. This kind of work likely segued into more complex AI, for if a computer could play chess, then it could surely complete other complex tasks as well.

Note how refined chess programming is: magic bitboards, Zobrist hashing, pruning, lazy SMP, and many more. This is perhaps not the sort of AI milestone that you had in mind, but again, the things that can be considered AI are pretty broad.

",9469,,75,,4/13/2018 18:10,4/13/2018 18:10,,,,0,,,,CC BY-SA 3.0 5379,2,,5377,2/20/2018 4:21,,3,,"

First of all you need an initial solution. You will then improve this solution with hill climbing.

For your initial solution, you can color the map randomly using the K colors. This will most likely result in conflicts (adjacent regions of the same color).

Then the hill climbing part: Find a region which has conflicts and swap its color for another color, making sure that the new color does not incur more conflicts than the old one. With each iteration your solution should slowly improve.
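
A minimal sketch of this procedure in Python (my own illustration; it assumes the map is given as an adjacency list mapping each region to its adjacent regions, and K is the number of colors):

import random

# Hill climbing for map colouring: start from a random colouring and repeatedly
# recolour a conflicted region with the colour that causes the fewest conflicts.
def conflicts(colouring, region, neighbours):
    return sum(colouring[region] == colouring[n] for n in neighbours[region])

def hill_climb(neighbours, K, iterations=10000):
    regions = list(neighbours)
    colouring = {r: random.randrange(K) for r in regions}      # random initial solution
    for _ in range(iterations):
        conflicted = [r for r in regions if conflicts(colouring, r, neighbours) > 0]
        if not conflicted:
            return colouring                                    # no conflicts left
        r = random.choice(conflicted)
        colouring[r] = min(range(K),
                           key=lambda c: sum(c == colouring[n] for n in neighbours[r]))
    return colouring                                            # may still contain conflicts

# Example: mainland Australian states with 3 colours
neighbours = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'], 'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
              'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW']}
print(hill_climb(neighbours, 3))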

Note that hill climbing is not perfect and that you may not find a feasible solution in the end.

",12857,,,,,2/20/2018 4:21,,,,0,,,,CC BY-SA 3.0 5380,2,,5325,2/20/2018 4:22,,1,,"

As said above, they are the weights of your hypothesis function that are changed during training to minimize your error function. You can think of them like the slope and y-intercept in basic algebra. However, a linear regression hypothesis function can be parameterized by many more weight terms than just theta_0 and theta_1.

I detail this process more in this post: How does an activation function's derivative measure error rate in a neural network?

",9469,,9469,,2/20/2018 5:47,2/20/2018 5:47,,,,0,,,,CC BY-SA 3.0 5383,2,,84,2/20/2018 13:10,,2,,"

The brute-force approach is certainly the first of many steps in AI programming. But using these experiences, the program must learn to find the best solution, or at least a closer solution to the problem. Since the first goal in AI is to find any solution, nothing beats the brute-force approach for that. But then, using the previous results of brute-force runs, the program must develop its own heuristics and use this data along with brute force to find the optimal solution.

",10875,,14723,,4/13/2018 17:51,4/13/2018 17:51,,,,2,,,,CC BY-SA 3.0 5385,1,,,2/21/2018 6:19,,1,127,"

Two Stanford University researchers, Dr. Michal Kosinki and Yilun Wang have published a paper that claims that AI can predict sexuality from a single facial photo with startling accuracy. This research is obviously disconcerting since it exposes an already vulnerable group to a new form of systematized abuse.

The research can be found here https://osf.io/zn79k/ and here https://psyarxiv.com/hv28a/, and has even been highlighted by Newsweek magazine here http://www.newsweek.com/ai-can-tell-if-youre-gay-artificial-intelligence-predicts-sexuality-one-photo-661643

Above is an image of composite heterosexual faces and composite gay faces from the research. (Image courtesy of Dr Michal Kosinki and Yilun Wang)

My question is: as knowledgeable members of the AI community, how can we scientifically debunk/discredit this research?

",10913,,,,,2/21/2018 17:18,Is the research by Stanford University students who use logistic regression to predict sexual orientation from facial images really scientific?,,1,2,,,,CC BY-SA 3.0 5386,2,,5385,2/21/2018 10:03,,4,,"

One way to criticize the study could be to attack the data on which it is based. An image on a social network is not ""neutral"" (those are not ID photos), and certainly not images from a dating website (which is where the data of the study come from).

For example, as a homosexual/heterosexual person, you will perhaps put forward different attributes in your photo (facial hair/glasses, type of clothes) to attract specific kinds of people.

Those parameters are not directly linked to the ""facial profile"" of the person, but they will influence the black-box model during training. So you will end up thinking you can detect the sexual orientation of a person from their facial characteristics alone, but in reality your black-box algorithm has only detected a totally different characteristic (glasses, for example) that is correlated with a specific sexual orientation.

",8912,,8912,,2/21/2018 17:18,2/21/2018 17:18,,,,1,,,,CC BY-SA 3.0 5389,1,,,2/21/2018 17:15,,2,74,"

I have a customer purchasing dataset from a retailer that has an online store and offline stores. So customers have two shopping channels, online or offline. For online shopping there is a shipping fee; however, if the basket size is larger than $50, there is no shipping fee.

I found evidence that customers add some items to make their basket size larger than $50 when their baskets are close to, but a little below, $50, because their shipping fee can be waived by doing that.

  • In this situation, I am trying to identify and characterize items that were purchased only because of the shipping threshold by using a machine learning algorithm.

If there were no $50 shipping threshold, the customers would not have purchased these items; they purchased them only to make their basket size larger than $50. I have not directly observed which items these are (the items added because of the shipping threshold).

  • Is there any machine learning algorithm that I can identify those kinds of items?

I think I need to use some of unsupervised machine learning algorithm.

Another challenging part is that each customer has different characteristics so I probably need to consider it as well. How can I detect those kinds of items??

",12713,,1671,,2/21/2018 17:43,6/22/2018 14:48,Detect observations under certain conditions,,1,0,,,,CC BY-SA 3.0 5392,1,5406,,2/22/2018 8:08,,2,85,"

I am absolutely new in the AI area.

I would like to know how to mathematically/logically represent the sense of sentences like:

  1. The cat drinks milk.
  2. Sun is yellow.
  3. I was at work yesterday.

So, that it could be converted to computer understandable form and analysed algorithmically.

Any clue?

",12907,,2444,user9947,12/18/2021 22:07,12/18/2021 22:07,"How to mathematically/logically represent the sense of sentences like ""The cat drinks milk""?",,2,0,,,,CC BY-SA 4.0 5394,2,,5098,2/22/2018 10:48,,1,,"

This task can be done with keyframe-based animation. The starting surface is keyframe 0, the goal surface is keyframe 10. An RRT planner can bring the system from frame 0 to frame 10. This is done with brute-force search in the problem space. To speed up the search, it makes sense to define helper keyframes between them (guided policy search). Such in-between keyframes can be extracted from previous manual demonstrations. The overall system consists of two parts:

  1. an algorithm which searches for actions to bring keyframe a to keyframe b
  2. an algorithm which searches previous demonstrations for the in-between keyframes

In the literature such problems are discussed in the domain of PDDL. The task-planner works with PDDL specifications, while the Motion planner uses Rapidly-exploring random trees. Logic-Geometric Programming: An Optimization-Based Approach to Combined Task and Motion Planning
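
For intuition, here is a minimal 2-D RRT sketch in Python (a toy illustration in an obstacle-free space; the coordinates, step size and tolerance are arbitrary assumptions, and a real keyframe planner would operate in the configuration space of the surface):

import math
import random

# Toy RRT: grow a tree from the start by repeatedly sampling a point, finding the
# nearest tree node, and stepping a fixed distance towards the sample.
def rrt(start, goal, step=0.5, iters=5000, goal_tol=0.5):
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        s = min(step, d)
        new = (nearest[0] + s * (sample[0] - nearest[0]) / d,
               nearest[1] + s * (sample[1] - nearest[1]) / d)
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < goal_tol:          # reached the goal keyframe
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                                      # no path found within the budget

print(rrt((0.0, 0.0), (9.0, 9.0)))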

",,user11571,,,,2/22/2018 10:48,,,,0,,,,CC BY-SA 3.0 5395,2,,5389,2/22/2018 12:41,,1,,"

Since my comment would not fit, I'll answer the question.

  • I think this problem can be solved even better if you have the web data as well. Say, after adding the required necessities to the cart, the customer checks the total amount, and if it is below $50, the person adds some more items. So checking this can give you a better clue.
  • Another piece of data you must have for better guessing is the order in which the person has added items to the cart. This will also provide you an important clue about the cheap items the customer is adding to the cart to try to cross the minimum threshold.
  • You will also have to segregate cheap items from pricey ones for a customer. This data with the above 2 approaches is a definite giveaway.
  • This might be a personal opinion, but whenever I need to cross the shipping fee threshold I usually add items which I have already bought the previous time to escape shipping fee. You will need more data to see if this is true in most cases.
  • And if you don't have any of the above data, the pure Machine Learning approach would be to pick an item which is costly and somewhat popular and an item which will string together people with likewise interests. Say, a great book on ML is this item. You find out all the customers who bought this book. Check all the similarities between all the other things bought by the customer, like say one customer adds a book on AI after this while another buys a book on python. So these are related by computer science. Go on until you find all the things that are generally dissimilar between all the customers, since now they are buying to cross the threshold and each will buy according to his own requirements without a common interest in mind. There you have all the data you need.
  • Or you can use the converse of this approach, find a non-significant thing like sugar. String together all the customers who bought this item. Check their interests to see if it has anything to do with sugar. If not send them to one group. Now in this group, match the interests of the customer on other items, if dissimilar you get a good idea that this item was bought to cross threshold.

I understand this is a kind of opinionated answer. But I think these are trade secrets not revealed by companies, so you have to figure out algorithms yourself. Also, simple machine learning won't work; lots of logical programming is also required. And I also think you need to have a good understanding of how human psychology works. You can interview your friends and family about what they would do and get a general intuition. Hope this helps!

",,user9947,,,,2/22/2018 12:41,,,,0,,,,CC BY-SA 3.0 5396,2,,5377,2/22/2018 14:58,,3,,"

First we have to specify the problem:

  1. Initial State: The map all colored randomly.
  2. Successor Function (Transition Model): Change the color of a region.
  3. Goal Test: The map all colored such that two adjacent regions do not share a color.
  4. Cost Function: Assigns 1 to change the color of a region.

Now that we have the specification of the problem, we have to choose the search algorithm to solve the problem. In this case ""Hill Climbing"".

As we choose ""Hill Climbing"" we have to specify one more function (the objective function):

  1. Heuristic Function: Returns the number of adjacent regions that share the same color.

Now that we have the problem formulated, we apply the ""Hill Climbing"" algorithm to try to minimize the heuristic function.

As @Philippe Oliver said, you could run into several problems using just ""Hill Climbing"", such as:

  • Local minimums.
  • Flat local minimums.

You can find more information in: Artificial Intelligence: A Modern Approach (3rd Edition) by Stuart Russell and Peter Norvig.

There are several approaches to this problem; I show you just one. Another is to specify the problem incrementally (change the ""Initial State"" to a map with no regions colored, and then the ""Successors"" are specified by painting a region in a way that two adjacent regions do not share a color).

References:

  • Artificial Intelligence: A Modern Approach (3rd Edition) by Stuart Russell and Peter Norvig, chapter 4.1 (Local Search Algorithms and Optimization Problems).
",5796,,,,,2/22/2018 14:58,,,,0,,,,CC BY-SA 3.0 5398,1,6495,,2/22/2018 16:39,,2,330,"

In some Atari games in the Arcade Learning Environment (ALE), it is necessary to press FIRE once to start a game. Because it may be difficult for a Reinforcement Learning (RL) agent to learn this, they may often waste a lot of time executing actions that do nothing. Therefore, I get the impression that some people hardcode their agent to press that FIRE button once when necessary.

For example, in OpenAI's baselines repository, this is implemented using the FireResetEnv wrapper. Further down, in their wrap_deepmind (which applies that wrapper among others), it is implied that DeepMind tends to use this functionality in all of their publications. I have not been able to find a reference for this claim though.


My question is: is it common in published research (by DeepMind or others) to use the functionality described above? I'd say that, if this is the case, it should be explicitly mentioned in these papers (because it's important to know if hardcoded domain knowledge was added to a learning agent), but I have been unable to explicitly find this after looking through a wide variety of papers. So, based on this, I'd be inclined to believe the answer is ""no"". The main thing that confuses me then is the implication (without reference) in the OpenAI baselines repository that the answer would be ""yes"".

",1641,,,,,5/23/2018 8:46,Is it common in RL research with Atari/ALE to automatically press FIRE to start games?,,1,0,,,,CC BY-SA 3.0 5399,1,5401,,2/22/2018 16:56,,8,5431,"

I have read somewhere on the web (I lost the reference) that the number of units (or neurons) in a hidden layer should be a power of 2 because it helps the learning algorithm to converge faster.

Is this a fact? If it is, why is this true? Does it have something to do with how the memory is laid down?

",12901,,2444,,1/20/2021 23:39,5/7/2021 15:56,Why should the number of neurons in a hidden layer be a power of 2?,,2,0,,,,CC BY-SA 4.0 5400,1,5402,,2/22/2018 18:01,,2,63,"

When we augment data for training, are we also changing the distribution of the data? And if it's a different distribution, why do we use it to train a model for the original distribution?

",12901,,,,,2/22/2018 18:09,Does augmenting data changes the distribution of augmented data?,,1,0,,,,CC BY-SA 3.0 5401,2,,5399,2/22/2018 18:09,,14,,"

I have read somewhere on the web (I lost the reference) that the number of units (or neurons) in a hidden layer should be a power of 2 because it helps the learning algorithm to converge faster.

I would quite like to see a reference to this suggestion, in case it has been misunderstood.

As far as I know, there is no such effect in normal neural networks. In convolutional neural networks, it might potentially be true in a minor way because some FFT approaches work better with $2^n$ items.

Is this a fact? If it is, why is this true? Does it have something to do with how the memory is laid down?

I would say that this is not a general fact. Instead, it seems like misunderstood advice to search some hyperparameters such as number of neurons in each layer, by increasing or decreasing by a factor of 2. Doing this and trying layer sizes of 32, 64, 128 etc should increase the speed of finding a good layer size compared to trying sizes 32, 33, 34 etc.

The main reason to pick powers of 2 is tradition in computer science. Provided there is no driver to pick other specific numbers, may as well pick a power of 2 . . . but equally you will see researchers picking multiples of 10, 100 or 1000 as "round numbers", for a similar reason.

One related factor: If a researcher presents a result for some new technique where the hidden layer sizes were tuned to e.g. 531, 779, 282 etc, then someone reviewing the work would ask the obvious question "Why?" - such numbers might imply the new technique is not generic or requires large amounts of hyperparameter tuning, neither of which would be seen as positive traits. Much better to be seen using an obvious "simple" number . . .

",1847,,2444,,1/20/2021 23:42,1/20/2021 23:42,,,,0,,,,CC BY-SA 4.0 5402,2,,5400,2/22/2018 18:09,,2,,"

Yes you do change the distribution of your training data if you modify it (for example by augmenting it with rotated versions of images that were in an original training set of images).

This is fine because, typically, our goal of training is not to get a model with high performance on the dataset we happened to collect as training data (e.g. a bunch of natural images). Our goal is to train a model that generalizes well to new data outside of the training data's distribution.

Typically, the training data is only a sample of the distribution we're actually interested in. For example, we'll be interested in making accurate predictions for all natural images in the entire world. That is a distribution that would likely include rotated variants of all images in our training set. So, if we augment our training set by adding such rotated variants, we expect to modify our training data distribution in such a way that it actually gets a little bit closer to the distribution we're interested in (all natural images).

",1641,,,,,2/22/2018 18:09,,,,0,,,,CC BY-SA 3.0 5404,2,,5392,2/23/2018 0:04,,1,,"

People normally represent sentences like this as vectors of a specific length, normally about 2500 in length. The algorithm that can do this is sentence2vec. It is basically a derivative of word2vec. It allows you to train a model that can transform sentences into vectors that you can then feed into a neural network or another algorithm. You can check out the paper, which you should be able to find on google scholar. If you need the link, I can get it. Another possibility is word embeddings, which I have not found a good paper on, but cortical.io has a free API that allows you to mess around with their implementation. The word embeddings mimic the real human brain much better based on our current research, but sentence2vec/word2vec is used much more often in practice.

",4631,,4631,,2/23/2018 15:37,2/23/2018 15:37,,,,3,,,,CC BY-SA 3.0 5406,2,,5392,2/23/2018 9:10,,2,,"

Let start by classify the phrases you propose:

  1. The cat drinks milk. => action
  2. Sun is yellow. => descriptive/declarative, immutable
  3. I was at work yesterday. => descriptive, time related

1) The easiest ones are always the descriptive and immutable (in the context) phrases as ""Sun is yellow."". Some usual representations:

  • prolog:

color('Sun',yellow).

or simply:

yellow('Sun').

  • object oriented:

Sun.color=yellow

2) When the fact is time related, as in ""I was at work yesterday"", we divide the description into a time indicator and an immutable fact:

  • prolog:

when(yesterday,at(I,workplace)).

note how when has two parts, the time identification and the immutable fact.

Another prolog variant is:

at(I,workplace,[when(yesterday)]).

where the content in the list (brackets) means ""optional related facts"".

  • object oriented:

I.at = {
  position = workplace;
  when = yesterday
}

3) Actions such as ""The cat drinks milk."" are a bit more difficult:

  • prolog:

drinks(cat,milk).

or

action(cat,drinks,milk).

  • object oriented:

cat.drinks=[milk]

or

cat.action = {
  action=drinks
  object=milk
}

Obviously, these are only the main ideas, there are as many representations as different programs, but most of them handles same kind of structures.

(Note: the term ""computer understandable"" is ambiguous. Current computers don't understand anything. We say these expressions are understandable in the sense that their compiler/interpreter accepts them and describes the content of the phrase, and the program can transform them into other results.)

",12630,,12630,,2/23/2018 9:28,2/23/2018 9:28,,,,0,,,,CC BY-SA 3.0 5407,2,,5371,2/23/2018 9:30,,2,,"

An interesting way to check for redundancy, not only in the first layer but in any layer, is to look at the weights of your neurons. Let's consider the input layer: the nodes that are connected with weights that have high values are the most important for your NN's performance; in other words, the input to these nodes is what matters most for your task.

If you are in Keras, you can use the get_weights() method. You may have a look here.
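
A minimal sketch of what that looks like (assuming TensorFlow/Keras with a simple Dense first layer; the toy model below is just a placeholder to make the example self-contained):

import numpy as np
from tensorflow import keras

# Toy model: 5 inputs -> 8 hidden units -> 1 output.
model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(5,)),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# ... train the model on your data here ...

weights, biases = model.layers[0].get_weights()   # weights has shape (5 inputs, 8 units)
importance = np.abs(weights).sum(axis=1)          # crude per-input importance score
print(importance)  # inputs whose weights stay uniformly small are candidates for removal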

",12929,,,,,2/23/2018 9:30,,,,0,,,,CC BY-SA 3.0 5408,1,,,2/23/2018 10:25,,5,168,"

How is it that a word embedding layer (say word2vec) brings more insights to the neural network compared to a simple one-hot encoded layer?

I understand how the word embedding carries some semantic meaning, but it seems that this information would get "squashed" by the activation function, leaving only a scalar value, and since many different vectors could yield the same result, I would guess that the information is more or less lost.

Could anyone give me insights as to why a neural network may utilize the information contained in a word embedding?

",12931,,2444,,5/29/2021 12:57,5/29/2021 12:57,What is the intuition behind how word embeddings bring information to a neural network?,,2,0,,,,CC BY-SA 4.0 5409,2,,5370,2/23/2018 13:05,,1,,"

After a lot of browsing online for answers to these questions, this is what I came up with.

  1. What are the initial layers in this case? How exactly can I freeze them?

The initial few layers are said to extract the most general features of any kind of image, like edges or corners of objects. So, I guess it actually would depend on the kind of backbone architecture you are selecting.

How to freeze the layers depends on the framework we use.

(I have selected PyTorch as the framework. I found this tutorial Some important Pytorch tasks - A concise summary from a vision researcher, which seems to be useful.)

  2. What are the changes I need to make while training the NN to classify one or more new classes?

I just need to change the number of neurons (or units) in the final layer to the number of classes/objects the new dataset contains.
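
Putting both points together, here is a minimal PyTorch sketch (assuming, for illustration, a ResNet-18 classification backbone; for the SSD detector the idea is the same, but you would replace its class-prediction heads instead of fc):

import torch
import torch.nn as nn
import torchvision.models as models

num_new_classes = 2                                            # e.g. your one class + background

model = models.resnet18(pretrained=True)                      # pretrained backbone
for param in model.parameters():                               # freeze all pretrained layers
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_new_classes)    # new, trainable final layer

# only the new head's parameters are handed to the optimizer
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)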

",11038,,2444,,1/7/2021 17:45,1/7/2021 17:45,,,,0,,,,CC BY-SA 4.0 5410,1,,,2/23/2018 15:10,,6,1303,"

Could you please provide some insight into the current stage of developments in AGI area? Are there any projects that had breakthroughs recently? Maybe some news source to follow on this topic?

",12935,,2444,,7/6/2019 13:15,12/13/2021 10:44,What is the current state of AGI development?,,2,0,,,,CC BY-SA 4.0 5414,1,,,2/23/2018 19:05,,4,170,"

(Gross oversimplification) Neural networks model systems: black boxes with a set of inputs and a set of outputs. To train a network to model such a system, obtain hundreds (or millions) of possible input/output pairs. This is called the data set, and the network and its optimization algorithm are set to find a set of network parameters that best match the I/O of the network with the I/O of the system.

Are there any systems, for which we have functional data sets, that have yet to be meaningfully modeled with Neural Networks in any form (recurrent, deep, convolutional, etc)?

",,user12941,2444,,12/29/2021 12:08,12/29/2021 12:08,Which problems have neural networks yet to solve?,,2,0,,,,CC BY-SA 4.0 5415,1,5492,,2/23/2018 20:55,,0,241,"

I have a game application with characters that have to cross mazes. The game can generate thousands of different mazes, and the characters can move according to the user's choice and cross the maze manually. We needed to add the possibility to show a correct way out of each maze. Therefore we added the possibility to move the characters according to an XML file.

This XML file is very complex, usually around thirty to fifty thousand rows. Let's say it's in the following structure (but much more complex):

  <maze-solution>
  <part id=""1"">
  <sector number=""1"">
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>1250</start-position>
            <angle>23.43</angle>
            <duration>0.44</duration>
        </movement>
        <action-type>run</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>light</equipment>
        <movement>
            <start-position>4223</start-position>
            <angle>233.43</angle>
            <duration>0.32</duration>
        </movement>
        <action-type>walk</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>1231</start-position>
            <angle>84.134</angle>
            <duration>0.454</duration>
        </movement>
        <action-type>run</action-type>
        <character>2</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>932</start-position>
            <angle>34.43</angle>
            <duration>0.50</duration>
        </movement>
        <action-type>duck</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>   
  </sector>
  <sector number=""2"">
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>1250</start-position>
            <angle>23.43</angle>
            <duration>0.44</duration>
        </movement>
        <action-type>run</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>light</equipment>
        <movement>
            <start-position>4223</start-position>
            <angle>233.43</angle>
            <duration>0.44</duration>
        </movement>
        <action-type>walk</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>1231</start-position>
            <angle>84.134</angle>
            <duration>0.454</duration>
        </movement>
        <action-type>run</action-type>
        <character>2</character>
        <protection>none</protection>       
    </action>
    <action>
        <equipment>heavy</equipment>
        <movement>
            <start-position>932</start-position>
            <angle>23.43</angle>
            <duration>0.44</duration>
        </movement>
        <action-type>duck</action-type>
        <character>1</character>
        <protection>none</protection>       
    </action>   
  </sector>
  <sector number=""3"">
  </sector>
  </part>
  </maze-solution>

At the moment, we have the ability to analyze each maze using a CNN algorithm for image classification and generate an XML file that represents a way out of the maze - meaning that if the characters are moved according to that file, they will cross the maze. That algorithm has been tested and cannot be changed by any means.

The problem is that most of the time the generated file is not the best one possible (and quite often it is very noticeable). There are different, faster, better ways to cross the maze.

We also have thousands of files (and we can get as many as needed) that were created manually for saved mazes, and therefore they represent an elegant and fast way out of the maze. The ideal goal is that someday, our program will learn how to generate such a file without people creating them manually.

To conclude, we have plenty of XML files generated by a program, compared to the hard-coded XML files. There are thousands of pairs - the file the program generated, and the ""ideal"" file version that a person created (and we can get an unlimited number of such pairs). Is there a way, using those thousands of pairs, to build a second-step algorithm that will ""learn"" what adjustments should be made to the generated XML files to make them more like the hand-crafted ones?

I'm not looking for a specific solution here but for a general idea that will get me going. I hope I made myself clear, but if I missed some info, let me know and I will add it.

",,user9890,12630,,3/3/2018 10:50,3/3/2018 19:48,A smart way to adjust XML files according to handwritten ones,,1,3,,,,CC BY-SA 3.0 5416,2,,5410,2/24/2018 0:35,,3,,"

The state of AGI research is pursuing the few problems that we have been able to break off from the gigantic research problem. These are terms which can be more thoroughly looked into.

A few of the main focuses are:

  • One-shot learning - You know how a person can sometimes learn to do something by seeing literally one example of it? Well, current learning methods on the whole are not able to accomplish this to the extent that we can easily take for granted. Work is being done to find ways to approach this feat of learning, and it’s on its way to becoming much more influential.

  • Transfer Learning - If you have ever played a side scroller like Mario, and then I gave you a slightly different game like Sonic, odds are you could learn to play Sonic faster than it took you to learn to play Mario. This is because of the learning “savings” you get by transferring your Mario knowledge to the new Sonic domain. This is a much more popular research arm than one-shot learning, probably because it’s easier to think about, but also because there have been promising results from pretraining a network on one set of data and fine-tuning it on another task.

  • Creativity/curiosity - Although one could say that GANs have really changed how humans can be more creative, it is difficult to quantify curiosity and creativity. This paper gives an okay overview. Moreover, allowing an agent to take chances and make some mistakes, as is the nature of creativity, concerns many people who are focused on AI safety.

  • Understanding concepts - This is subtle but very, very important. Current AI methodologies struggle with imbuing AI with the ability to have concepts. By concepts, I don’t mean “it kinda looks like this neuron in the second last layer is sensitive to tires”. I mean that understanding what a tire looks like is just a small part of understanding what a tire is, what it is used for, what it affords someone to do, etc. This research direction is in its infancy but will be much more influential as more theories and ideas are brought forward.

Despite the progress made in these fields and in the many other areas in AI, there is still much to be done and understood before we can finally have s̶k̶y̶n̶e̶t̶ Wall-E.

",4398,,4398,,2/24/2018 0:44,2/24/2018 0:44,,,,3,,,,CC BY-SA 3.0 5418,2,,5058,2/24/2018 16:39,,3,,"

There are a lot of ways to evaluate the performance of an ML model. You mentioned AUROC and AUPRC. Generally you start with the confusion matrix and derive metrics such as sensitivity, accuracy, recall, precision, etc. You can see a good outline of them here.
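
For instance, here is a small illustration (with made-up counts) of deriving some of those metrics from a binary confusion matrix:

# Hypothetical confusion-matrix counts for a binary sentiment classifier.
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)                  # also called sensitivity
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)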

It seems what you are asking for is a shortcut to determining how good your sentiment classification model is, but there isn't one without labeled test data. You either do this by hand or you find a test set in the world, preferably something that is well known and documented and also fits your objectives. I recommend you read Neil Slater's answer at https://datascience.stackexchange.com/questions/12226/how-do-i-assess-which-sentiment-classifier-is-best-for-my-project/12228. He gives some good advice on the subjectivity of sentiment analysis classification and points out a labeled data set of Tweets which you might be able to use to test your classifier.

I also found this Kaggle competition which has a test set that might be of help to you: Angry Tweets

",5763,,,,,2/24/2018 16:39,,,,1,,,,CC BY-SA 3.0 5421,2,,5369,2/24/2018 21:53,,1,,"

The leaky ReLU has a parameter that determines the slope of the function when $x < 0$; in your code that slope is $1/20$. The derivative is therefore $1$ for $x > 0$ and $1/20$ for $x < 0$. At $x = 0$ the derivative is technically undefined, but in practice it is simply set to $1$ (or to the negative-side slope), matching the branch your function uses for $x \geq 0$.
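
For completeness, a minimal sketch of that derivative in Python, mirroring the C# function in the question:

def leaky_relu_derivative(x):
    # slope 1 on the non-negative side, 1/20 on the negative side
    return 1.0 if x >= 0 else 1.0 / 20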

",12957,,2444,,5/30/2020 12:56,5/30/2020 12:56,,,,2,,,,CC BY-SA 4.0 5422,1,5424,,2/24/2018 21:55,,2,126,"

If possible, consider the relationship between implementation difficulty and accuracy for voice examples or simply chat conversations.

And currently, what are the directions in algorithms like deep learning or others to solve this?

",12958,,,,,2/24/2018 22:58,What is easier or more efficient to summarize voice or text? [DP/RN],,2,0,,,,CC BY-SA 3.0 5423,2,,5422,2/24/2018 22:02,,0,,"

You might want to take the Stanford Online course on YouTube Natural Language Processing with Deep Learning. This course will give you insight into how different kinds of neural networks can be used for different kind of NLP tasks.

In my opinion, you can use Gated Recurrent Units (GRUs) to encode and decode text. Of course, text will be easier, because voice data, as it is stored in a computer, is going to be difficult to interpret in the testing phase. Another way is to get the most impactful words and then use these to form sentences summarizing the original text.

You can also start by looking out for publications related to text summarizers. For example, the paper Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond will get you started. You can use this as a starting point. In case you need to understand the basics of the underlying techniques, you can go through the references in this paper and find useful resources to get you started.

",12957,,,,,2/24/2018 22:02,,,,1,,,,CC BY-SA 3.0 5424,2,,5422,2/24/2018 22:44,,3,,"

Summarizing text is always going to be 'easier or more efficient' than voice simply because voice requires the additional step of converting to text. That doesn't tell you anything about accuracy.

From an article published on June 1, 2017, Google’s speech recognition is now almost as accurate as humans: ""According to Mary Meeker’s annual Internet Trends Report, Google’s machine learning-backed voice recognition — as of May 2017 — has achieved a 95% word accuracy rate for the English language. That current rate also happens to be the threshold for human accuracy.""

If you need this kind of accuracy check out Google's Cloud Speech API. There is even a speech to text feature on the web page.

Given a speech-to-text conversion accuracy of 95%, voice will be 5% less accurate than text if everything else is equal but it usually isn't. People generally write better text, such as in documents or emails, than when they speak unless of course they are giving a formal lecture, or talking in a formal meeting. If one is analyzing text messages, Tweets, or threads found in typical informal forums, you will find very poor quality in grammar, spelling, vocabulary, and punctuation. The answer to your question will depend on the source of your text.

In another article, dated November 13, 2017, Why 100% Accuracy Is Not Available With Speech Recognition Software Alone, the author gives some reasons, albeit for transcription software which has a special purpose, why there will always be some errors due to:

  • Speech Patterns and Accents - Regional variations exist, for example English speakers in Boston sound different than Kentucky. How does the software handle slurred speech or when a person blends their words?
  • Grammar and Punctuation - speech recognition software doesn't know where a period, comma, or semi-colon belongs
  • Homonyms and unusual words - ""Speech processing software can only recognize words and phrases that it has specifically been trained to recognize.""
  • Ambient Noise, Overlapping Speech, and Number of Speakers

To address your last question about where the technology is going...
Four days ago a paper by Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria entitled Recent Trends in Deep Learning Based Natural Language Processing was published which gives some of the answers.

From the 'Conclusion' section: With distributed representation, various deep models have become the new state-of-the-art methods for NLP problems. Supervised learning is the most popular practice in recent deep learning research for NLP. In many real-world scenarios, however, we have unlabeled data which require advanced unsupervised or semi-supervised approaches. In cases where there is lack of labeled data for some particular classes or the appearance of a new class while testing the model, strategies like zero-shot learning should be employed. These learning schemes are still in their developing phase but we expect deep learning based NLP research to be driven in the direction of making better use of unlabeled data. We expect such trend to continue with more and better model designs. We expect to see more NLP applications that employ reinforcement learning methods, e.g., dialogue systems. We also expect to see more research on multimodal learning [167] as, in the real world, language is often grounded on (or correlated with) other signals.

Finally, we expect to see more deep learning models whose internal memory (bottom-up knowledge learned from the data) is enriched with an external memory (top-down knowledge inherited from a KB). Coupling symbolic and sub-symbolic AI will be key for stepping forward in the path from NLP to natural language understanding. Relying on machine learning, in fact, is good to make a ‘good guess’ based on past experience, because sub-symbolic methods encode correlation and their decision-making process is probabilistic.

",5763,,5763,,2/24/2018 22:58,2/24/2018 22:58,,,,1,,,,CC BY-SA 3.0 5426,1,,,2/25/2018 10:35,,1,51,"

I am trying to produce a Decision Tree from a Feed Forward Neural Network.

The input to the feed forward neural network is a condition-action statement, for example: if airthrusthold > 90, power up the engine, else rotate shaft 5 degrees wide.

The above statement is the input to the FFNN. How do I feed the statement? By converting it into word2vec, or is there another format to use?

And I need to produce a decision tree from the outcome of the neural network.

Can we do this using reinforcement learning with a Markov Decision Process?

Thanks!

",12963,,,,,2/25/2018 10:35,Condition Action Statement - Feed Forward Neural Network,,0,0,,,,CC BY-SA 3.0 5427,1,5440,,2/25/2018 10:45,,4,51,"

I've been working with vanilla feedforward neural networks and have been researching the convolutional neural network literature.

If a camera is capturing a video at a rate of 15 frames per second, is the classification model being trained continuously/iteratively in order to maintain non-time delayed classifications?

",12964,,2444,,11/22/2021 0:46,11/22/2021 0:46,How do we perform object classification given images from a camera that captures images at 15 FPS?,,1,0,,,,CC BY-SA 4.0 5428,1,,,2/25/2018 11:22,,13,1811,"

Is there a way for people outside of the core research community of AGI to contribute to the cause?

There are a lot of people interested in supporting the field, but there is no clear way to do that. Is there something like BOINC for AGI research, or open projects where random experts can provide some input? Maybe a Kickstarter for AGI projects?

",12935,,2444,,11/22/2019 22:32,12/23/2019 4:46,How can people contribute to AGI research?,,2,0,,,,CC BY-SA 4.0 5430,2,,5408,2/25/2018 13:54,,5,,"

Shakespeare once said ""A rose by any other name would smell as sweet"" (Romeo and Juliet). Words are just labels we attach to ideas for convenience. By using one hot we remain tied to the letter sequence r,o,s,e, and some other structure must take on the responsibility of attaching the context of sweetness to it.

Word embeddings learn a multi-dimensional context. What exactly the context is of each dimension of the embedding is something of a mystery and simply emerges from the learning. The larger the number of dimensions, the greater the possibility that some combination of the dimensions will represent the sweetness context, but it might be quite hard to tease out.

So you can attach the idea of sweetness to one member of a one-hot structure, but it must necessarily be a part of a rules-based approach. Embeddings, when they are working well, will not need the rules.

",4994,,,,,2/25/2018 13:54,,,,0,,,,CC BY-SA 3.0 5431,2,,5408,2/25/2018 14:42,,3,,"

Adding to Colin's answer: word embeddings tend to be much more robust than one-hot vectors. Consider the following two sentences:

The desk has a book on it.

and

The table has a book on it.

These two sentences are almost identical in meaning. If we were to use word embeddings, the vectors for 'desk' and 'table' would be very close together. The fact that these two sentences are similar becomes implicit with embeddings.

But if we were to use one-hot vectors, the distance between the two vectors would be the same as the distance between 'desk' and 'cat' or 'table' and 'book'. So now the network must learn that these sentences may entail the same thing, on top of the original task.
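As a rough numerical illustration of this point (the embedding values below are made up for illustration; real embeddings would come from training on a corpus):

# Toy comparison: with one-hot vectors every pair of distinct words is equally
# (dis)similar, while embeddings can place related words close together.
# The embedding values are invented for illustration only.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# One-hot encodings over a 4-word vocabulary: desk, table, cat, book
one_hot = dict(zip(['desk', 'table', 'cat', 'book'], np.eye(4)))
print(cosine(one_hot['desk'], one_hot['table']))  # 0.0
print(cosine(one_hot['desk'], one_hot['cat']))    # 0.0 -- no notion of similarity

# Made-up 3-dimensional embeddings where the furniture words ended up close together
emb = {'desk':  np.array([0.9, 0.1, 0.0]),
       'table': np.array([0.8, 0.2, 0.1]),
       'cat':   np.array([0.1, 0.9, 0.3])}
print(cosine(emb['desk'], emb['table']))  # close to 1 -> similar
print(cosine(emb['desk'], emb['cat']))    # much smaller -> dissimilar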

",9271,,,,,2/25/2018 14:42,,,,0,,,,CC BY-SA 3.0 5432,1,5434,,2/25/2018 16:28,,3,228,"

Suppose the game had a variable speed, and speed were essential to evolution/gaining score (I don't know the AI terminology). Would the AI be able to figure out when to slow down and speed up?

If it is able to solve the problem or complete the level, will it have an equation relating to acceleration, or perhaps a number for when to speed up and slow down? What if the game environment were dynamic?

Can you even teach math to an AI?

PS: I'm not sure if I should ask a separate question.

",8433,,,,,9/10/2018 11:11,Can a game AI learn the concept of acceleration?,,2,0,,,,CC BY-SA 3.0 5433,2,,4786,2/25/2018 20:55,,1,,"

Consider what the outcome would be if offspring were assigned on the basis of unadjusted fitness. The sum of A's fitnesses would be 40 and B's 18, so the fitness ratio of the two species would be about 2.22:1. With adjusted fitness the numbers are A=15 and B=9, which gives a ratio of about 1.67:1; thus A is assigned fewer offspring based on adjusted fitness than on unadjusted fitness.

Also note that every new genome assigned to a species decreases all of its members' adjusted fitness. In your case, members of species A are more successful than members of B, so it should grow. The mechanism is designed to hinder the growth of successful species, not to block it entirely. This allows more diversity, which is important when a successful species reaches a dead end and a previously less successful one can take over.

",12971,,,,,2/25/2018 20:55,,,,0,,,,CC BY-SA 3.0 5434,2,,5432,2/25/2018 22:22,,3,,"

This answer mostly assumes you are referring to computer-game-playing bots that learn through experiencing play, such as Deep Mind's DQN as used for playing Atari console games. State of the art for these are typically Reinforcement Learning algorithms, used with neural networks to process input and estimate results of next actions. There are other competitive AI technologies too, and the answer applies generally to most learning or evolving optimisers that would learn through trial and error by playing the game.

If the game had a variable speed and was essential in evolution/gaining score(IDK AI terminologies). Would the AI be able to figure out when to slow down and speed up?

Yes, as long as the game allowed control of something that influenced acceleration, then a learning agent can figure out the consequences of accelerating and braking and use them appropriately within the game.

A well-known toy example that can challenge learning agents is called Mountain Car. In that game, the agent has to learn to accelerate in the correct direction (which is not always towards its objective), in order to escape an area. It is considered challenging because the reward (for escaping) can be significantly delayed compared to the action that best enables it.

One popular proving ground for learning agents is OpenAI gym. This includes several game environments with a physical model of acceleration included, such as Lunar Lander.
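As a concrete starting point, these environments can be driven with just a few lines of code (a sketch using the gym API as it exists at the time of writing; a real agent, e.g. a DQN, would replace the random action choice with a learned policy):

# Minimal interaction loop with the Mountain Car environment from OpenAI gym.
# A learning agent would replace the random action selection below.
import gym

env = gym.make('MountainCar-v0')
observation = env.reset()                 # [position, velocity]

for t in range(200):
    action = env.action_space.sample()    # 0 = push left, 1 = no push, 2 = push right
    observation, reward, done, info = env.step(action)
    if done:                              # escaped the valley or ran out of time
        break
env.close()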

If it is able to solve the problem or complete the level, will it have an equation to relating acceleration?

In general, no. The agent will learn to respond to certain stimulus, by taking an action that accelerates or decelerates the game piece that it controls. There will not be any concept like $s = ut + \frac{1}{2} at^2$ encoded in the agent's parameters.

or perhaps a number on when to speed up and down?

Typically the agent will learn which stimuli should be responded to by accelerating. For instance, in a game where the agent's game piece is being chased by an enemy piece and the enemy piece is getting closer, the agent should learn that it will get a better reward if it accelerates away from the enemy.

What if the game environment was dynamic?

Most game environments are dynamic, as in the state changes over time. If you mean would anything change if the game rules themselves varied over time, then this may cause interesting problems for some learning algorithms, but should not change anything about learning use of controls that affect acceleration in a virtual world.

Can you even teach math to an AI?

Generally, no you cannot teach math to the kind of system that plays games, or interacts with real world objects. These kind of learning systems are not yet advanced enough to learn concepts or establish game world logic from interactions. Instead, they work more akin to perception, muscle memory and either inherent or learned reflexes. Exceptions to this will generally have a world model (with the necessary equations) built in or made available to the agent without it needing to learn anything.

However, there are AI systems that use formal logic that can work on mathematical theories. Some have performed interesting feats such as ""discovering"" prime numbers, given formal definitions of integers and basic arithmetic. An example of this kind of system is the Automated Mathematician.

There is an intermediate possibility: Some learning agents not only learn the value or best policy for certain actions, they also learn to predict what should happen next to the state of the environment. Such an agent would include a model that could observe objects that were accelerating and predict their future positions. In some ways this is a learned concept of acceleration, although it would not be expressed mathematically like Newton's laws of motion, and is more akin to the kind of intuition that allows a person to track, anticipate and catch a thrown ball.

",1847,,1847,,9/10/2018 11:11,9/10/2018 11:11,,,,0,,,,CC BY-SA 4.0 5435,2,,5428,2/25/2018 23:50,,7,,"

OpenCog is an open source AGI project. But it is also incredibly complex and, IMHO, not a good idea (I have not fully read its creator's theories). You can learn the essential ideas behind OpenCog from co-founder Ben Goertzel's site as well.

Or, you can participate in the philosophical discussion regarding AGI. For strictly AGI, decision theory, logic, and math material (they are all related), you can look up stuff from http://yudkowsky.net/ or https://arbital.com/. But, in some sense, every branch of philosophical inquiry can be tied back to AGI and consciousness (ethics, metaphysics, etc.), so if it fancies you it depends on how you'd like to tackle it.

You could also study the psychology end of things. The following papers and related ideas are quite important in the field of study of consciousness and cognition (but keep in mind this is pretty much a random list, the literature is massive!):

Recently (in mathematical time), progress in category theory shows promise of providing a unified framework for much of existing math. I know next to nothing about it, but the people who do are applying it to many new fields of study (including AI, apparently). Category theory requires a lot of background mathematical knowledge before its vocabulary begins to make sense, though, so beware. You can read about it on the nCatLab and occasionally on John Baez's blog: Azimuth

Of course, ""regular"" techniques in Machine Learning such as neural networks, reinforcement learning, statistical methods, and others are very powerful as well, but due to certain regards in their construction, they are generally understood as only being capable of being ""narrow AI"" in the sense that they can only complete a single task very well, but perhaps you can find some research that changes this?

",6779,,6779,,12/23/2019 4:46,12/23/2019 4:46,,,,0,,,,CC BY-SA 4.0 5438,1,5439,,2/26/2018 5:55,,1,2454,"

In the FaceNet paper, under section 3.2, the authors mention that:

The embedding is represented by $f(x) \in \mathbb{R}^{d}$. It embeds an image $x$ into a $d$-dimensional Euclidean space. Additionally, we constrain this embedding to live on the $d$ dimensional hypersphere, i.e. $\|f(x)\|_{2}=1$.

I don't quite understand how the above equation holds. As far as I understand, the $L_2$ norm is the same as the Euclidean distance, but I don't quite understand how this imposes the $\|f(x)\|_{2}=1$ criterion.

",8418,,2444,,12/22/2021 23:28,12/22/2021 23:28,How is the constraint $\|f(x)\|_{2}=1$ enforced for the embedding $f(x)$ in the FaceNet paper?,,1,0,,,,CC BY-SA 4.0 5439,2,,5438,2/26/2018 9:27,,0,,"

The constraint is enforced with bespoke code. If FaceNet was implemented in NumPy, and the embedding layer vector (pre-constraint) was in the NumPy array h, then the code might look like:

import numpy as np
e = h / np.linalg.norm(h)

The variable e would then contain the desired embedding with an L2 norm of 1. In practice, even with NumPy, the code might be more complex due to handling mini-batches. More likely this will not be implemented in NumPy.

In this Keras/TensorFlow-based FaceNet implementation you can see how it may be done in practice:

# L2 normalization
X = Lambda(lambda x: K.l2_normalize(x, axis=1))(X)

This scaling transformation is considered part of the neural network code (it is part of the Keras model building routine in the above snippet), so there needs to be corresponding support for back propagation through the embedding. When using an automatic differentiation framework such as Theano or TensorFlow, then no extra code is required.

",1847,,,,,2/26/2018 9:27,,,,0,,,,CC BY-SA 3.0 5440,2,,5427,2/26/2018 11:29,,2,,"

It is possible to deploy models that process video ranging from frame-by-frame analysis up to live streaming, depending on the use case. The reason for deployment will influence video quality and pre-processing as well as the complexity of model that can be used.

As video can result in a lot of data piling up very rapidly, compromises might be made between quality of image resolution and colour, frames might be skipped, processing could be done offline, or it might just cost a ton of money to pay for the compute power! Without going into much detail, different algorithms can be used for the different tasks of detection, recognition/classification, and tracking, and the input to each can be processed in different ways.

This tutorial does a great job of showing how OpenCV can be used for object detection from a laptop webcam stream and a pre-trained CNN, it explicitly mentions dropping frames to improve performance.

There's also this post with a good overview of object detection, recognition and tracking. Depending on the use case in the context of each algorithm described, you can determine how important it is to retain and process all frames or not - for instance, see the section on background subtraction; you might not want to risk something flying through the frame without detecting it, let alone classifying it.
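As a minimal illustration of the frame-dropping idea mentioned above (detect() is a hypothetical placeholder for whatever detector or classifier you plug in):

# Sketch: read a camera stream with OpenCV and only run the (expensive) detector
# on every Nth frame, so processing keeps up with the 15 FPS capture rate.
import cv2

def detect(frame):
    pass  # run your CNN / detector here (placeholder)

cap = cv2.VideoCapture(0)      # device 0 = default webcam
frame_id, run_every = 0, 5     # process 1 frame out of every 5

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % run_every == 0:
        detect(frame)          # skipped frames are simply dropped
    frame_id += 1

cap.release()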

",9091,,,,,2/26/2018 11:29,,,,0,,,,CC BY-SA 3.0 5441,2,,5320,2/26/2018 15:29,,1,,"

On biology:

1st. Humans are not only about spreading their own genes. It might also be about spreading the genes of the population, or serve a completely different purpose, as non-fertile specimens often still live full lives.

2nd. Nature vs. nurture is a constantly debated question, and there is no clear winner as far as I know.

On rogue AI:

1st. As humans derive motivation from biological needs and limitations, an AI would derive its motivation from the needs it was encoded with and from the limitations of its hardware and software. An obvious need for a creature without a body would be either to get a body or to learn as much as possible, and if learning requires a body, then to get a body. From the limitations of its hardware and software would come a need for upgrades and optimization. Simple self-preservation seems a logical motivation, as well as self-spreading (which in many cases is a variation of self-preservation).

2nd. An AI would be called rogue in the case when it is acting against the interests of its creators. There are many scenarios in which it could do that, but to answer this question we would need to know who the creators are.

3rd. If we assume that the AI went rogue against humanity, meaning it started killing people and messing around with our planet, then the reason behind that would be found in the motivations from the 1st point above. If it finds that humanity is not reliable, it might try to replicate itself to every hard drive to maximize its possibility of survival. The motivation for self-spreading and self-preservation might in many cases look like an attempted power grab, but it might not be related to a desire for total control. Humans tend to desire power to have more resources and to build a better and safer life for their community and descendants, but an AI most likely will not need that.

4th. If we assume that the AI will be built to solve issues, then it will have only 2 needs: get more info and solve the issue. Theoretically, any obstacle in the way of those goals might be considered by the AI as a hostile action. In that case it might, for example, try to demolish a city to build a perfect road, or wipe out a poor country to solve hunger. But again, it doesn't mean that the goal of the AI is to demolish and kill.

",12935,,,,,2/26/2018 15:29,,,,0,,,,CC BY-SA 3.0 5443,1,,,2/26/2018 16:33,,2,42,"

Many of you have probably seen the turtle from LabSix that gets mistaken for a rifle by Google's InceptionV3 image classifier. I read the paper and I understand how they apply EOT to 2D images and to the individual pixel values, but I am still unsure how they apply the EOT algorithm to the 3D model.

  1. Are they using EOT to perturb the individual coordinates in the 3d model's mesh? Or are they perturbing images of a turtle and then printing the turtle from the images?
  2. How do they check the InceptionV3 output iteratively without having to 3d print the object each time and check the probabilities given?

Any examples that someone can point to would also be very helpful.

",12983,,12983,,2/26/2018 16:52,2/26/2018 16:52,How to apply EOT algorithm to 3d model,,0,0,,,,CC BY-SA 3.0 5446,2,,5432,2/26/2018 17:35,,2,,"

Methods such as Genetic Programming can induce symbolic expressions from observations. Indeed, such methods can be used as an alternative to neural (or other nonsymbolic) approaches when the goal is not merely to find a function that fits the data well, but which also has a chance of giving a human-readable description of the learned function in terms of whatever mathematical functions the user chooses to supply the learning method with.

These methods are even used commercially for knowledge discovery.
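To give a flavour of what inducing a symbolic expression from observations means, here is a toy stand-in (real GP systems evolve a population of expression trees with crossover and mutation; this sketch just does blind random search over tiny expressions, and all names and data are invented for illustration):

# Toy symbolic induction: search small expression trees over {t, constants, +, *}
# to fit observations of a falling object (d = 4.9 * t^2).
import random

data = [(t, 4.9 * t * t) for t in range(1, 6)]   # (time, distance) observations

def random_expr(depth=0):
    if depth > 2 or random.random() < 0.3:       # terminal: the variable or a constant
        return random.choice(['t', round(random.uniform(0, 10), 1)])
    return (random.choice(['+', '*']), random_expr(depth + 1), random_expr(depth + 1))

def evaluate(expr, t):
    if expr == 't':
        return t
    if isinstance(expr, float):
        return expr
    op, a, b = expr
    va, vb = evaluate(a, t), evaluate(b, t)
    return va + vb if op == '+' else va * vb

def error(expr):
    return sum((evaluate(expr, t) - d) ** 2 for t, d in data)

best = min((random_expr() for _ in range(100000)), key=error)
print(best, error(best))   # prints the best human-readable expression found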

",42,,42,,2/26/2018 17:41,2/26/2018 17:41,,,,0,,,,CC BY-SA 3.0 5448,1,,,2/26/2018 21:35,,2,407,"

Recently my friend asked me a question: having two input matrices X and Y (each of size NxD), where D >> N, and a ground truth matrix Z of size DxD, what deep architecture should I use to learn a deep model of this mapping?

  • N ~ is in the order of tens
  • D ~ is in the order of tens of thousands

The problem is located in the domain of bioinformatics, however, this is more of an architectural problem. All matrices contain floats.

I tried first a simple model based on a CNN model in keras. I've stacked input X and Y into an Input Matrix of size (number of training examples, N, D, 2). Outputs are of size (number of training examples, D, D, 1)

  1. Conv2D layer
    • leaky ReLU
  2. Conv2D layer
    • leaky ReLU
  3. Dropout
  4. Flattening layer
  5. Dense (fully connected) of size D
    • leaky ReLU
    • dropout
  6. Dense (fully connected) of size D**2 (D squared)
    • leaky ReLU
    • dropout
  7. Reshaping output into (D,D,1) (for single training set)

However, this model is untrainable. It has over a billion parameters for the emulated data.

(Exactly 1,321,005,944 for my randomly emulated dataset)

Do you find this problem solvable? What other architectures might I try to solve this problem?

Best.

",2655,,2655,,2/27/2018 21:56,7/19/2022 16:24,Deep NN architecture for predicting a matrix from two matrices,,0,7,,,,CC BY-SA 3.0 5449,2,,5186,2/27/2018 1:55,,3,,"

Google's AutoML is really a good idea in terms of autonomous model design. You can find the details in this blog. Let me explain briefly.

We, data scientists, design new networks by following existing models, trying, failing, and trying again and again, analyzing the weaknesses and strengths of the created models. However, we, as humans, have limited capabilities for designing/analyzing such networks. That's why Google created an AI which analyzes the strengths and weaknesses of each node while making a prediction. This AI analyzes each node and tries to improve the results by adding/removing/modifying the connections of each node/layer. I guess the AutoML AI takes a state-of-the-art network as a base and starts modifying the network according to your data to create a customized model.

While doing that, two technologies are being used: Transfer learning and Reinforcement learning.

Transfer learning is being used to start training from the most accurate point possible.

Reinforcement learning is being used to modify the network to achieve better results. This is the key part of this technology.

So, for users, it is more like: upload your data, let the AI modify the network for you, and get a custom model which is specific to your data.

",10344,,,,,2/27/2018 1:55,,,,0,,,,CC BY-SA 3.0 5450,2,,5041,2/27/2018 2:07,,2,,"

You can always train your network with higher resolution images. There is nothing preventing you from doing that if you don't have any restrictions on inference time.

Also, the paper is actually mentioning how to upsample. Check for the phrase ""Upsample using deconvolutional layers"".

The most common ways to upsample are either using a deconvolutional layer or simple resize methods (like image resizing).

",10344,,,,,2/27/2018 2:07,,,,2,,,,CC BY-SA 3.0 5451,1,,,2/27/2018 10:37,,2,360,"

I recently came across a Quora post, where I saw the term ""Imagination Learning"". It seems to be based on something called ""Imagination Machines"" (the link is based on a guy's work profile as of now; subject to change).

The only thing that I could find on Internet about it is this paper: Imagination-Augmented Agents for Deep Reinforcement Learning. (But I'm not sure if it's related to that concept.)

Any ideas on this would be appreciated.

",12574,,12574,,2/27/2018 10:57,3/3/2018 9:15,What is Imagination Learning and Imagination machines?,,1,0,,,,CC BY-SA 3.0 5453,1,,,2/27/2018 12:24,,5,125,"

I have a data-set with $m$ observations and $p$ categorical variables (nominal), each variable $X_1, X_2,\dots, X_p$ has several different possible values.

Ultimately, I am looking for a way to find anomalies, i.e., to identify rows for which the combination of values seems incorrect with respect to the data seen so far.

So far, I was thinking about building a model to predict the value for each column and then build some metric to evaluate how different the actual row is from the predicted row.

I would greatly appreciate any help!

",13001,,2444,,4/23/2020 12:19,4/23/2020 12:19,Find anomalies from records of categorical data,,1,0,,,,CC BY-SA 4.0 5454,1,,,2/27/2018 13:27,,6,480,"

The situation

I am referring to the paper T. P. Lillicrap et al, ""Continuous control with deep reinforcement learning"" where they discuss deep learning in the context of continuous action spaces (""Deep Deterministic Policy Gradient"").

Based on the DPG approach (""Deterministic Policy Gradient"", see D. Silver et al, ""Deterministic Policy Gradient Algorithms""), which employs two neural networks to approximate the actor function mu(s) and the critic function Q(s,a), they use a similar structure.
However one characteristic they found is that in order to make the learning converge it is necessary to have two additional ""target"" networks mu'(s) and Q'(s,a) which are used to calculate the target (""true"") value of the reward:

y_t = r(s_t, a) + gamma * Q'(s_t1, mu'(s_t1))

Then after each training step a ""soft"" update of the target weights w_mu', w_Q' with the actual weights w_mu, w_Q is performed:

w' = (1 - tau)*w' + tau*w

where tau << 1. According to the paper

This means that the target values are constrained to change slowly, greatly improving the stability of learning.

So the target networks mu' and Q' are used to predict the ""true"" (target) value of the expected reward which the other two networks try to approximate during the learning phase.

They sketch the training procedure as follows:

The question

So my question now is, after the training is complete, which of the two networks mu or mu' should be used for making predictions?

As in the training phase, I suppose that mu should be used (without the exploration noise), but since it is mu' that is used during training for predicting the ""true"" (un-noisy) action for the reward computation, I'm inclined to use mu'.

Or does this even matter? If the training were to last long enough, shouldn't both versions of the actor have converged to the same state?

",13008,,,,,10/10/2018 22:00,Should the actor or actor-target model be used to make predictions after training is complete (DDPG)?,,2,0,,,,CC BY-SA 3.0 5458,1,11306,,2/27/2018 19:04,,3,291,"

The region proposal network (RPN) in Faster-RCNN models contains a classifier and a regressor network. Why does the classifier network output two scores (object and background) for each anchor instead of just a single probability? Aren't the two classes considered exclusive?

Source: Figure 3 of the original Faster-RCNN paper

",13015,,2444,,3/18/2019 20:45,3/18/2019 20:45,Why does the classifier network in RPN output two scores?,,1,0,,,,CC BY-SA 4.0 5461,1,7778,,2/28/2018 11:48,,2,123,"

I'd like to generate subtitles for a silent film. Is there an open source project out there capable of creating captions based on a series of images (such as a scene from a movie)?

EDIT: thanks for the comments below. To clarify, what I'm looking for is an algorithm which can generate a caption for a sequence of images within a movie, describing what happens in the sequence. This is for preliminary research, so accuracy is less important.

",13030,,13030,,3/1/2018 22:17,12/30/2018 6:03,Create captions based on a series of images,,3,5,,,,CC BY-SA 3.0 5462,1,5489,,2/28/2018 14:52,,3,2220,"

Usually, in binary classification problems, we use sigmoid as the activation function of the last layer plus the binary cross-entropy as cost function.

However, I have already experienced (more than once) that $\tanh$ as activation function of last layer + MSE as cost function worked slightly better for binary classification problems.

Using a binary image segmentation problem as an example, we have two scenarios:

  1. sigmoid (in the last layer) + cross-entropy: the output of the network will be a probability for each pixel and we want to maximize it according to the correct class.
  2. $\tanh$ (in the last layer) + MSE: the output of the network will be a normalized pixel value [-1, 1] and we want to make it as close as possible to the original value (normalized too).

We all know the problems associated with a sigmoid (vanishing of gradients) and the benefits of the cross-entropy cost function. We also know $\tanh$ is slightly better than sigmoid (zero-centered and little less prone to gradient vanishing), but when we use MSE as the cost function, we are trying to minimize a completely different problem - regression instead of classification.

Why is the hyperbolic tangent ($\tanh$) combined with MSE more appropriate than the sigmoid combined with cross-entropy for binary classification problems? What's the intuition behind it?

",13036,,2444,,11/16/2019 19:03,11/16/2019 19:05,Why is the hyperbolic tangent with MSE better than the sigmoid with cross-entropy?,,1,0,,,,CC BY-SA 4.0 5464,2,,2727,2/28/2018 16:43,,3,,"

I believe the best way to do this is using numerical gradient. To understand the concept, we need to look the definition of derivatives using limits:

It means that, when you don't know how to derive some formula (or you just don't want to), you can approximate its derivative by computing the output for a small change in the input, subtracting the original result (no change), and normalizing by this change.

Example: We know the derivative of f(x) = x^2 is f'(x) = 2x. But let's suppose we don't, and we use x = 3 and h = 0.001 (in the limit, h tends to zero):

f(3 + 0.001) = (3 + 0.001)^2 = 9.006 (approximately)
f(3) = 3^2 = 9

Thus,

(9.006 - 9) / 0.001 = 6 (approximately)

It is approximately equal to f'(3) = 2*3 = 6.

In practice, if you want to know if your backpropagation is correct,

  1. pass a single example (x1) through your network and compute the output (o1). Compute the gradient with respect to x1 (d1).
  2. Then, add a small value (h) to the input, pass it through the network again, and compute the new output (o2).
  3. Compute o2 - o1 and divide by h. It should be close to d1.

That's it. It's called gradient checking. I hope it helps.
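A minimal numerical sketch of the same idea, using a toy function in place of the network (in a real check you would perturb each weight in turn and compare against the gradients produced by backpropagation):

# Gradient checking sketch: compare an analytic derivative with the numerical
# approximation (f(x + h) - f(x)) / h for the toy function f(x) = x^2.
import numpy as np

def f(x):
    return x ** 2

def analytic_grad(x):               # what backpropagation would give you
    return 2 * x

def numerical_grad(x, h=1e-5):      # the finite-difference approximation
    return (f(x + h) - f(x)) / h

x = 3.0
print(analytic_grad(x))             # 6.0
print(numerical_grad(x))            # ~6.00001
print(np.isclose(analytic_grad(x), numerical_grad(x), atol=1e-3))  # True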

",13036,,13036,,3/3/2018 13:25,3/3/2018 13:25,,,,0,,,,CC BY-SA 3.0 5466,1,,,2/28/2018 17:22,,1,26,"

I am looking for an algorithm to transform input data into goal data using a series of operations. The shorter the series, the better.

The following is known:

  • the input data
  • the goal data
  • input and goal data do not stand in any correlation
  • operations (and their impact on the current data), which can be endlessly combined
  • different input data for the same goal data could have the same, similar, or totally different operation series
  • for some data states not all operations are possible

I thought of a pathfinding algorithm, since I can calculate the distance between the current data and the goal data. Each edge would then be an operator and each node the current data. But I am unsure because of the variety and number of possible operation combinations.

What approach could I try?

",13026,,,,,2/28/2018 17:22,Approach for data transformation needed,,0,0,,,,CC BY-SA 3.0 5468,1,,,2/28/2018 23:23,,1,110,"

I want to plot a schedule of races based on rules. Rules like ""each team needs at least 2 races between their next race"" and some teams (e.g. collegiate) need to be clumped near each other.

What would be the best algorithm to approach this? So far, all I've found is genetic algorithm. Are there any other alternatives I could look into?

",13042,,,,,3/1/2018 12:25,Which algorithm for scheduling race grid?,,1,4,,,,CC BY-SA 3.0 5469,2,,5176,3/1/2018 1:42,,1,,"

A ""normal"" neural network can give you a distribution of probabilities for categories given some image. (Ex: a picture of a dog might return 50% dog; 30% cat; 10% car; 5% laptop; 5% skeleton).

What we then do, is maximize the probability of a given category by manipulating the image directly. We do this iteratively, until we are satisfied by the result.

For example, one might start with a grey image, tweak some random pixels to improve a specific category probability of the image, save the result, and repeat, until we have an image that responds very well to a given probability.

So to answer your question, the weights aren't directly manipulated to produce the image, though they are implicitly used in the construction.
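A rough sketch of that iterative loop, assuming a trained, differentiable Keras/TensorFlow classifier called model (the image size, step size and number of steps are arbitrary choices for illustration):

# Sketch of activation maximization: iteratively tweak the input image so that the
# probability of a chosen class increases. `model` is assumed to be a trained classifier.
import tensorflow as tf

def maximize_class(model, class_idx, steps=200, lr=0.1):
    img = tf.Variable(tf.random.uniform((1, 224, 224, 3)))   # start from noise / grey
    for _ in range(steps):
        with tf.GradientTape() as tape:
            score = model(img)[0, class_idx]   # probability of the target class
        grads = tape.gradient(score, img)
        img.assign_add(lr * grads)             # gradient *ascent* on the image itself
    return img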

",6779,,,,,3/1/2018 1:42,,,,2,,,,CC BY-SA 3.0 5471,1,,,3/1/2018 7:24,,2,785,"

I am currently trying to solve a regression problem using neural networks. I want to detect movement patterns in images over time (video) and output a continuous value. During the training process I noticed a strange behaviour for the validation loss curve and I was wondering if anyone has noticed this kind of periodic pattern on some of their own work. What might cause this?

The model looks like the following:

- TimeDistributed(Conv2D(32, (3,3)))
- TimeDistributed(Conv2D(16, (3,3)))
- TimeDistributed(Flatten())
- GRU(64, stateful=True)
- Dropout(0.5)
- Dense(64, activation='relu')
- Dense(1)

I trained the model using the mean squared error as the loss function, a batch size of 1 and the AdamOptimizer with an initial learning rate of 10^(-6). Obviously, the loss curve for the training data is not very good, but I am currently just wondering about the pattern of the val_loss. The plots below represent the loss of 65 epochs.

Thanks!

Edit: The way I try to solve my task relies on a sliding window approach, where I try to predict a continuous value for the next second based on the last 20 seconds (400 frames) of the time-series input data. But I don't think this information is needed to answer my initial question, since the periodic patterns appear over several epochs (one ""peak"" about every 15 epochs), which is strange. Although the stateful version of the GRU is used (btw: using TensorFlow and Keras), the internal state of the GRU is reset after every epoch to maintain a clean start. The stateful keyword is used to indicate a dependency between batches.

",13054,,13054,,3/1/2018 14:55,12/8/2022 16:02,Periodic Pattern in Validation Loss Curve,,1,2,,,,CC BY-SA 3.0 5473,2,,5468,3/1/2018 12:25,,2,,"

What you're looking for is most likely constraint programming (CP). Essentially, with a CP model you declare a set of variables and a set of constraints (and an objective, optionally). Then a solver will solve your model and find a solution if one exists (if you had specified an objective, it will find the best solution).

Here is an example which is very similar to your problem (a MiniZinc model which solves the problem is included). You can download MiniZinc and try it out for yourself.
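If you would rather stay in Python than use MiniZinc, the same kind of model can be sketched with Google OR-Tools' CP-SAT solver (the races, team assignments, number of slots and the gap of 2 races are illustrative assumptions, not your actual schedule):

# Sketch of a race-scheduling constraint model with OR-Tools CP-SAT.
# Races sharing a team must have at least `gap` other races between them.
from ortools.sat.python import cp_model

races = {'R1': {'TeamA', 'TeamB'}, 'R2': {'TeamA', 'TeamC'},
         'R3': {'TeamB', 'TeamC'}, 'R4': {'TeamD', 'TeamE'}}
n_slots, gap = 8, 2

model = cp_model.CpModel()
slot = {r: model.NewIntVar(0, n_slots - 1, r) for r in races}
model.AddAllDifferent(list(slot.values()))        # one race per time slot

for r1 in races:
    for r2 in races:
        if r1 < r2 and races[r1] & races[r2]:     # the two races share a team
            diff = model.NewIntVar(-n_slots, n_slots, 'd_%s_%s' % (r1, r2))
            dist = model.NewIntVar(0, n_slots, 'a_%s_%s' % (r1, r2))
            model.Add(diff == slot[r1] - slot[r2])
            model.AddAbsEquality(dist, diff)
            model.Add(dist > gap)                 # at least `gap` slots in between

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print({r: solver.Value(v) for r, v in slot.items()})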

",12857,,,,,3/1/2018 12:25,,,,1,,,,CC BY-SA 3.0 5474,1,,,3/1/2018 12:50,,5,336,"

I am doing a project on visual place recognition in changing environments. The CNN used here is mostly AlexNet, and a feature vector is constructed from layer 3.

Does anyone know of similar work using other CNNs, for example, VGGnet (which I am trying to use) and the corresponding layers?

I have been trying out the different layers of VGGnet-16. I am trying to get the nearest correspondence to the query image by using the cosine difference between the query image and database images. So far no good results.

",13058,,2444,,12/4/2020 22:35,12/25/2022 5:08,Which neural networks are suitable for visual place recognition?,,1,1,,,,CC BY-SA 4.0 5475,1,,,3/1/2018 13:26,,3,407,"

As discussed in this thread, you can handle invalid moves in reinforcement learning by re-setting the probabilities of all illegal moves to zero and renormalising the output vector.

In back-propagation, which probability matrix should we use? The raw output probabilities, or the post-processed vector?

",13060,,2444,,11/14/2020 18:24,11/14/2020 18:26,"In the case of invalid actions, which output probability matrix should we use in back-propagation?",,1,0,,,,CC BY-SA 4.0 5478,2,,3804,3/1/2018 15:51,,3,,"

Only the non-bias ones,

It is discouraged to include the bias weights under norm penalty regularization, for example, so why should they be included in the dropout regularization scheme?

Dropout can be implemented by multiplying units by zero, and the bias term is rather special. The bias term determines the distance from the origin of the linear decision boundary the node implements. It is included in every sum, but it does not receive any inputs.

So, to take your example, say you have 80 nodes per layer in an MLP, where one is a bias node; the output of each of these layers will consist of 79 nodes every time. This output is then subjected to the dropout effect.

From Goodfellow et al.'s Deep Learning book, in the regularization chapter (available online: http://www.deeplearningbook.org/contents/regularization.html), they write that dropout is implemented by cancelling some outputs to zero:

""
In most modern neural networks, ... , we can effectively remove a unit from a network by multiplying its output value by zero. ... . Here, we present the dropout algorithm in terms of multiplication by zero for simplicity, but it can be trivially modified to work with other operations that remove a unit from the network.
""

Dropout by cancelling outputs (the output from the activation function) has nothing to do with the bias vector, which does not receive any inputs. Therefore, I think the implementation typically only deals with output-to-input connections, and since the bias vector does not receive inputs it can safely be left out of the dropout process.

",13065,,13065,,3/2/2018 7:03,3/2/2018 7:03,,,,0,,,,CC BY-SA 3.0 5481,2,,5289,3/1/2018 18:12,,1,,"

After re-reading Jang's original (1993?) paper on ANFIS, I learned he recommended simply squaring the b-parameter to deal with the notion that b could be changed to a negative value when using back-propagation. While this solves the negative domain issue, the issue still remains that if b is a non-integer, then the bell function loses its intended shape. I suppose after squaring one could round to the nearest integer, but I suspect that doing so might hinder convergence. Perhaps a consideration is only tuning the a and c parameters using something like a genetic algorithm.

",12544,,,,,3/1/2018 18:12,,,,0,,,,CC BY-SA 3.0 5482,1,5484,,3/1/2018 18:14,,3,183,"

As far as I understand, neural networks aren't good at classifying 'unknowns', i.e. objects that do not belong to a learned class. But how do face detection/recognition approaches usually determine that no face is detected/recognised in a region? Is the predicted probability somehow thresholded?

I'm asking because my application will involve identifying unknown objects. In fact, most of the input objects are unknown and only a fraction is known.

",13068,,32410,,4/23/2021 19:20,4/24/2021 0:01,Facial recognition and classifying unknowns with neural networks,,2,0,,,,CC BY-SA 3.0 5484,2,,5482,3/1/2018 18:34,,3,,"

Summary

It is true that neural networks are inherently not good at classifying 'unknowns' because they tend to overfit to the data that they have been trained on, if the underlying structure of the neural network is complex enough. However, there are multiple ways to go about reducing the effects of overfitting. For example, one technique that is used for this is called dropout. Another example can be batch normalization. Despite these techniques, the best way to reduce the effects of overfitting is to use more data.

For the facial recognition example that you have given above, it is common that the models that have been trained have 'seen' a huge amount of data. This means that there are very few 'unknowns', and even if there are, the neural network has learned how to tell whether facial features are present or not. This is because certain structures of neural networks are really good at telling if there is a pattern of features present in the input data. This helps the neural network to learn whether the image that is being input has certain features/patterns in it or not. If these features are found, then the input data is classified as a face; otherwise it is not.

What can you do in your case?

Let us assume that you are going to train your neural network to recognize if an input image is a cat or not. You will use a Convolutional Neural Network (CNN) and train it to recognize if the input is a cat or not. The not part means that you have to include a lot of examples in your training data that are not cats. In the perfect case, you will be able to show it everything that is not a cat and classify it as such. Also, you show it multiple images of what a cat is. CNNs are really great for this application. You might want to research this and see what kind of CNN best suits your application. If you don't have a gazillion samples of what a cat is not, then you can use regularization techniques like dropout and batch normalization.
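As a small illustration of where those regularizers sit in such a binary cat / not-cat CNN (a sketch only; the layer sizes and input resolution are arbitrary, not a tuned model):

# Sketch of a binary cat / not-cat CNN with dropout and batch normalization.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),                         # regularization against overfitting
    Dense(1, activation='sigmoid')        # P(cat)
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])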

PS: For more details please mention what strategies you have used up till now. Also it would be better if you can share what your desired task is.

",12957,,,,,3/1/2018 18:34,,,,0,,,,CC BY-SA 3.0 5486,1,5689,,3/2/2018 7:26,,3,2480,"

What will be the difference when used for video classification? Will they yield different results or are they the same fundamentally?

",14633,,,,,3/15/2018 8:54,What is the difference between ConvLSTM and CNN LSTM?,,1,0,,,,CC BY-SA 3.0 5487,2,,5336,3/2/2018 8:29,,2,,"

Yes, you can train a NN to detect only one type of object, like a table. However, you probably will not want to train such a NN from scratch by showing it some examples of tables and non-tables. You will need to use transfer learning on a model already trained on several image classes, and teach it to also recognize your new class. This transfer learning requires a smaller set of desired images. You may need to give it some negative examples also. You should explore transfer learning with MobileNet, Inception, and other pre-trained TensorFlow models if you are willing to use Python and TensorFlow.
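In Keras, that kind of transfer learning can be sketched roughly as follows (the two-class table / not-table head, the frozen base and the image size are assumptions for illustration, not a recipe from any particular tutorial):

# Sketch of transfer learning: reuse pre-trained MobileNet features and train a
# small new head to recognize a single class (table vs not-table).
from keras.applications import MobileNet
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base = MobileNet(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False               # keep the pre-trained features frozen

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)   # P(table)

model = Model(base.input, out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(...) on a (relatively small) set of table / non-table images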

",10287,,,,,3/2/2018 8:29,,,,0,,,,CC BY-SA 3.0 5488,1,,,3/2/2018 13:35,,1,33,"

In order to model a card game, as an exercise, I was thinking of an elementary setting as a multiarmed bandit, each lever being the distribution of expected rewards of a specific card.

But, of course, the player only has some cards in the hand each round, or, equivalently, for a given round, it has available a number $n$ of arms randomly selected from the total number $N$ of levers.

Is this just a ""contextual bandit"" or has it some specific, narrower, name that I could use to look up in the literature?

",13080,,2444,,4/15/2020 20:12,4/15/2020 20:12,Name of a multiarmed bandit with only some levers available,,0,2,,,,CC BY-SA 4.0 5489,2,,5462,3/2/2018 14:53,,2,,"

See the blog post Why You Should Use Cross-Entropy Error Instead Of Classification Error Or Mean Squared Error For Neural Network Classifier Training (2013) by James D. McCaffrey.

It should give you an intuition of why the average cross-entropy (ACE) is more appropriate than MSE (but MSE is also applicable).

In a few words, $\tanh$ + MSE is like sigmoid + MSE, but with labels $-1$ and $1$ for the classes instead of $0$ and $1$. If you look at the shape of the $\tanh$ function, it has the same flat tails where changes of the argument don't change the result.
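In Keras terms, the two setups only differ in the last layer's activation, the loss, and how the labels are encoded (a sketch; the input size and hidden layer are arbitrary stand-ins for the rest of your network):

# The two output configurations discussed above, side by side.
# Labels differ: {0, 1} for sigmoid + cross-entropy, {-1, 1} for tanh + MSE.
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(32,))
hidden = Dense(64, activation='relu')(inp)

# 1) sigmoid + binary cross-entropy, targets encoded as 0 / 1
model_a = Model(inp, Dense(1, activation='sigmoid')(hidden))
model_a.compile(optimizer='adam', loss='binary_crossentropy')

# 2) tanh + MSE, targets encoded as -1 / 1
model_b = Model(inp, Dense(1, activation='tanh')(hidden))
model_b.compile(optimizer='adam', loss='mse')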

",13067,,2444,,11/16/2019 19:05,11/16/2019 19:05,,,,1,,,,CC BY-SA 4.0 5492,2,,5415,3/2/2018 18:34,,0,,"

So I see two parts to your problem where machine learning is appropriate.

  1. Generating a maze solution or set of possible working solutions
  2. Selecting/optimizing the best solution from part 1 above

Seems like part 1 is already addressed by you so I will share some suggestions on part 2 to explore.

If you can parse the XML file to extract elements and express them as a sequence of directions (like the way old-school turtle programming languages did) --

light, go forward 10, move 10, pick up, stop, drop, turn 10

...... then you can use the seq2seq machine learning technique to train a neural network to take semi-optimized sequences and find the best sequence.

In other words, the seq2seq NN will be trained on pairs of sequences that consist of a non-optimal sequence and a corresponding ""ideal"" sequence. This is similar to neural machine translation and summarization but you are translating sequences. NN architectures that apply in this case are RNN, LSTM, etc.

Hopefully this gets you started with some exploration. I don't have personal experience with using seq2seq in this domain but this seems appropriate.

",10287,,10287,,3/3/2018 19:48,3/3/2018 19:48,,,,4,,,,CC BY-SA 3.0 5493,1,5521,,3/2/2018 21:27,,37,22496,"

It is said that activation functions in neural networks help introduce non-linearity.

  • What does this mean?
  • What does non-linearity mean in this context?
  • How does the introduction of this non-linearity help?
  • Are there any other purposes of activation functions?
",12957,,2444,,10/24/2019 1:50,4/1/2021 14:58,What is the purpose of an activation function in neural networks?,,5,0,,,,CC BY-SA 4.0 5494,2,,5493,3/3/2018 0:18,,6,,"

Let's first talk about linearity. Linearity means the map (a function), $f: V \rightarrow W$, used is a linear map, that is, it satisfies the following two conditions

  1. $f(x + y) = f(x) + f(y), \; x, y \in V$
  2. $f(c x) = cf(x), \; c \in \mathbb{R}$

You should be familiar with this definition if you have studied linear algebra in the past.

However, it's more important to think of linearity in terms of linear separability of data, which means the data can be separated into different classes by drawing a line (or hyperplane, if more than two dimensions), which represents a linear decision boundary, through the data. If we cannot do that, then the data is not linearly separable. Often times, data from a more complex (and thus more relevant) problem setting is not linearly separable, so it is in our interest to model these.

To model nonlinear decision boundaries of data, we can utilize a neural network that introduces non-linearity. Neural networks classify data that is not linearly separable by transforming data using some nonlinear function (or our activation function), so the resulting transformed points become linearly separable.

Different activation functions are used for different problem setting contexts. You can read more about that in the book Deep Learning (Adaptive Computation and Machine Learning series).

For an example of non linearly separable data, see the XOR data set.

Can you draw a single line to separate the two classes?
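A quick way to convince yourself is to fit both a linear model and a tiny non-linear network to the XOR points (a sketch using scikit-learn; the network size is an arbitrary choice):

# XOR: a linear model cannot separate it, a small non-linear MLP can.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                          # XOR labels

linear = LogisticRegression().fit(X, y)
print(linear.score(X, y))                 # well below 1.0 -- no single separating line

mlp = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    max_iter=5000, random_state=0).fit(X, y)
print(mlp.score(X, y))                    # can reach 1.0 thanks to the non-linearity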

",9469,,2444,,10/24/2019 1:54,10/24/2019 1:54,,,,2,,,,CC BY-SA 4.0 5495,2,,5493,3/3/2018 0:19,,4,,"

Consider a very simple neural network, with just 2 layers, where the first has 2 neurons and the last 1 neuron, and the input size is 2. The inputs are $x_1$ and $x_2$.

The weights of the first layer are $w_{11}, w_{12}, w_{21}$ and $w_{22}$. We do not have activations, so the outputs of the neurons in the first layer are

\begin{align} o_1 = w_{11}x_1 + w_{12}x_2 \\ o_2 = w_{21}x_1 + w_{22}x_2 \end{align}

Let's calculate the output of the last layer with weights $z_1$ and $z_2$

$$out = z_1o_1 + z_2o_2$$

Just substitute $o_1$ and $o_2$ and you will get:

$$out = z_1(w_{11}x_1 + w_{12}x_2) + z_2(w_{21}x_1 + w_{22}x_2)$$

or

$$out = (z_1w_{11} + z_2 w_{21})x_1 + (z_2w_{22} + z_1w_{12})x_2$$

And look at this! If we create NN just with one layer with weights $z_1w_{11} + z_2 w_{21}$ and $z_2w_{22} + z_1w_{12}$ it will be equivalent to our 2 layers NN.

The conclusion: without nonlinearity, the computational power of a multilayer NN is equal to that of a 1-layer NN.
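You can verify this collapse numerically (a tiny NumPy check with random weights):

# Two linear layers collapse into one: z(Wx) == (zW)x
import numpy as np

np.random.seed(0)
x = np.random.randn(2)         # inputs x1, x2
W = np.random.randn(2, 2)      # first layer weights w11..w22
z = np.random.randn(2)         # second layer weights z1, z2

two_layer = z @ (W @ x)        # out = z1*o1 + z2*o2 with o = W x
one_layer = (z @ W) @ x        # a single layer with combined weights z W

print(np.allclose(two_layer, one_layer))   # True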

Also, you can think of the sigmoid function as a differentiable IF statement that gives a probability. And adding new layers can create new, more complex combinations of IF statements. For example, the first layer combines features and gives the probabilities that there are eyes, a tail, and ears in the picture; the second combines new, more complex features from the previous layer and gives the probability that there is a cat.

For more information: Hacker's guide to Neural Networks.

",13067,,2444,,10/23/2019 23:43,10/23/2019 23:43,,,,0,,,,CC BY-SA 4.0 5496,1,,,3/3/2018 0:52,,5,1231,"

Does NEAT require only connection genes to be marked with a global innovation number?

From the NEAT paper

Whenever a new gene appears (through structural mutation), a global innovation number is incremented and assigned to that gene.

It seems that any gene (both node genes and connection genes) requires an innovation number. However, I was wondering what the node gene innovation number is for. Is it to provide the same node ID across all elements of the population? Isn't the connection gene innovation number sufficient?

Besides, the NEAT paper includes the following image which doesn't show any innovation number on node genes.

",13087,,2444,,7/7/2019 19:55,7/7/2019 19:55,Does NEAT require only connection genes to be marked with a global innovation number?,,2,0,,,,CC BY-SA 4.0 5497,1,5498,,3/3/2018 1:48,,3,1169,"

In the add node mutation, the connection between two chosen nodes (e.g. A and B) is first disabled and then a new node is created between A and B with their respective two connections.

I guess that the former A-B connection can be re-enabled via crossover (is that right?).

Can the former A-B connection also be re-enabled via mutation (e.g. ""add connection"")?

",13087,,2444,,3/13/2020 3:58,3/13/2020 3:58,Can mutation enable a disabled connection?,,2,0,,,,CC BY-SA 4.0 5498,2,,5497,3/3/2018 5:05,,3,,"

Yes, the original gene is disabled, but is left in the genome. This can be seen on page 10, figure 3 of the paper linked (taken from the original paper NEAT Paper) where gene 3 is disabled, but not removed from the genome. This gene can be re-enabled by receiving the gene with the identical innovation number from a mating partner with the gene enabled during crossover.

The original paper does not mention a mutation to re-enable genes, but various other publications and implementations after the original paper do. This is desirable for a number of reasons. A re-enable mutation allows for dropout to be used in the implementation. It is also possible that certain genomes are disabling genes too quickly, and this can help to correct for that.

",13088,,13088,,3/3/2018 17:50,3/3/2018 17:50,,,,2,,,,CC BY-SA 3.0 5499,2,,5451,3/3/2018 9:15,,2,,"

The whole idea of Imagination Learning seems to be in its infancy. The author of the response you linked wrote a paper on the subject, but as he notes the paper is more of an outline for ""a new overarching challenge for AI."" The second link you posted is not directly related to the ideas referenced in the paper on Imagination Learning, other than that both use the idea of imagination in humans as inspiration. In short, there does not seem to be much information on this topic and I suspect it will stay that way until we learn some of the underlying processes that go along with imagination in humans and then maybe Imagination Machines may come to fruition.

",13088,,,,,3/3/2018 9:15,,,,0,,,,CC BY-SA 3.0 5501,2,,5461,3/3/2018 9:58,,0,,"

Here is one open-source implementation. Temporal Tessellation: A Unified Approach for Video Analysis

For more, you can dig into some research publications and see if they give a link to their implementation. Most researchers make their work publicly available.

Here is a list of publications that present their work related to video captioning using machine learning. Awesome Deep Vision

Here is another publication that shows how to generate captions for videos. Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks

",12957,,12957,,3/3/2018 10:04,3/3/2018 10:04,,,,0,,,,CC BY-SA 3.0 5509,1,25181,,3/4/2018 10:38,,6,404,"

I have a general question about the updating of the network/model in the PPO algorithm.

If I understand it correctly, there are multiple iterations of weight updates done on the model with data that is created from the environment (with the model before the update). Now, I think that the updates of the model weights are not correct anymore after the first iteration/optimization step, because the model weights changed and therefore the training data is outdated (since the model would now give different actions in the environment and therefore different rewards).

Basically, in the pseudo-code of the algorithm, I don't understand the line "Optimize surrogate L ... with K epochs...". If the update is done for multiple epochs, the data that is learned is outdated already after the first iteration of optimization, since the model's weights changed. In other algorithms, like A2C, there is only one optimization step done, instead of $K$ epochs.

Is this some form of approximation or augmentation on the data by using the data that was created by an older model for multiple iterations or am I missing something here? If yes, where was this idea first introduced or better described? And where is an (empirical) proof that this still leads to a correct weight updating?

",13104,,2444,,11/15/2020 1:08,1/13/2021 7:01,Understanding multi-iteration updates of the model in the Proximal Policy Optimization algorithm,,1,0,,,,CC BY-SA 4.0 5510,2,,5493,3/4/2018 11:01,,11,,"

If you only had linear layers in a neural network, all the layers would essentially collapse to one linear layer, and, therefore, a ""deep"" neural network architecture effectively wouldn't be deep anymore but just a linear classifier.

$$y = f(W_1 W_2 W_3x) = f(Wx)$$

where $W$ corresponds to the matrix that represents the network weights and biases for one layer, and $f()$ to the activation function.

Now, with the introduction of a non-linear activation unit after every linear transformation, this won't happen anymore.

$$y = f_1( W_1 f_2( W_2f_3( W_3x)))$$

Each layer can now build up on the results of the preceding non-linear layer which essentially leads to a complex non-linear function that is able to approximate every possible function with the right weighting and enough depth/width.

",13104,,2444,,10/23/2019 22:32,10/23/2019 22:32,,,,1,,,,CC BY-SA 4.0 5511,1,,,3/4/2018 12:55,,1,601,"

Are mathematical models sufficient to create general artificial intelligence?

I am not sure if it is possible to represent e.g. emotions or intuition using mathematical model. Do we need a new approach in order to solve this problem?

",13107,,2444,,4/19/2019 15:16,4/19/2019 15:16,Are mathematical models sufficient to create general artificial intelligence?,,1,0,,,,CC BY-SA 4.0 5514,2,,5511,3/4/2018 17:40,,4,,"

Mathematical models are essentially highly formalised knowledge. When it comes to computer engineering, there is literally no other choice - anything you can write code for, or design a machine for, will have an associated mathematical model. That model may not be fully explored or comprehended analytically by theorists, it may be just too complex (and driven by calculations) or even mathematically intractable. However, that doesn't make it non-mathematical, just more driven by empirical results than theory.

We don't have complete models for how an AGI would definitely work, nor tight enough definitions of general intelligence to base maths on, from which we could say ""if we implemented a framework based on this maths, we could build an AGI"". Right now, exploration and experiments based on intuition of what might work are far ahead of such theory.

The theoretical work behind e.g. neural networks is chipping away at the problem, and there are more general over-arching theories about intelligent rational behaviour available e.g. the equations of AIXI. AIXI doesn't cover emotions and intuition directly, but does attempt to cover knowledge and how a rational agent would approach understanding the world in general from scratch. It is possible that an embodied system driven by a software implementing something like AIXI could exhibit intuition and emotions in an emergent fashion, but whether or not that would happen in practice is not at all clear from the theory. AIXI is just one theory/model out of many, and I am not qualified to analyse it in depth, but its creators have strong pedigree in AI research, so it is as good as any IMO if you are interested in starting to research AI from a theoretical perspective.

Despite all the unknowns, the success of deep learning and the loose analogies between artificial neural networks and biological ones, makes it look likely that neural networks or something like them will form a component of an AGI. The current state of the art for more narrow problem solving, learning from examples or experience through backpropagation of error gradients, might not be the most important or key component. However, whatever the structure, whether it is a small extension of existing systems, or involves some new science, it will be describable mathematically.

",1847,,1847,,3/4/2018 19:23,3/4/2018 19:23,,,,0,,,,CC BY-SA 3.0 5517,1,,,3/4/2018 22:39,,3,365,"

I am seeking information about this kind of chatbot architecture: there are two chatbots. One plays the role of a teacher, and the other is a student who is learning. The goal is to test the student's quality, and to improve the student's ability.

I didn't find many references. There are:

Bottester: Testing Conversational Systems with Simulated Users

And ParlAI, a Python-based platform for enabling dialog AI research, has the notion of a "Teacher agent", which seems to be what I am looking for.

Of course, we also have deep reinforcement learning which might be related.

I prefer to have some classical references for this approach to chatbots. Currently, reinforcement learning is not in my consideration.

Constructing two chatbots talking to each other, like what Facebook did, is not what I want. Because in this case, both of them are student agents.

",13118,,2444,,6/5/2022 11:41,6/5/2022 11:41,Two chatbots - One teaches another,,2,3,,,,CC BY-SA 4.0 5519,1,5522,,3/5/2018 9:16,,0,329,"

What is supposed to happen first: Strong AI or Technological Singularity?

Meaning, which option is more likely: that Strong AI will bring us to the state of technological singularity, or that achieving technological singularity will allow us to construct Strong AI?

",12935,,,,,3/5/2018 14:36,Strong AI vs Singularity - which should happen first?,,1,1,,,,CC BY-SA 3.0 5521,2,,5493,3/5/2018 12:58,,27,,"

Almost all of the functionalities provided by the non-linear activation functions are given by other answers. Let me sum them up:

  • First, what does non-linearity mean? It means something (a function in this case) which is not linear with respect to a given variable/variables, i.e. $f(c_1 x_1 + c_2 x_2 + \dots + c_n x_n + b) \neq c_1 f(x_1) + c_2 f(x_2) + \dots + c_n f(x_n) + f(b)$. NOTE: There is some ambiguity about how one might define linearity. In polynomial equations we define linearity in somewhat a different way as compared to in vectors or some systems which take an input $x$ and give an output $f(x)$. See the second answer.
  • What does non-linearity mean in this context? It means that the Neural Network can successfully approximate functions (up to a certain error $e$ decided by the user) which do not follow linearity, or it can successfully predict the class of a function that is divided by a decision boundary that is not linear.
  • Why does it help? I hardly think you can find any physical world phenomenon which follows linearity straightforwardly. So you need a non-linear function that can approximate the non-linear phenomenon. Also, a good intuition is that any decision boundary or function can be seen as a linear combination of polynomial combinations of the input features (so ultimately non-linear).
  • Purposes of activation function? In addition to introducing non-linearity, every activation function has its own features.

Sigmoid $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$

This is one of the most common activation functions and is monotonically increasing everywhere. It is generally used at the final output node as it squashes values between 0 and 1 (if the output is required to be 0 or 1). Thus values above 0.5 are considered 1, while those below 0.5 are considered 0, although a different threshold (not 0.5) may be set. Its main advantage is that its derivative is easy to compute and reuses already calculated values, and supposedly horseshoe crab neurons have this activation function in them.

Tanh $\frac{e^{(w_1 x_1 + \dots + w_n x_n + b)} - e^{-(w_1 x_1 + \dots + w_n x_n + b)}}{e^{(w_1 x_1 + \dots + w_n x_n + b)} + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$

This has an advantage over the sigmoid activation function as it tends to centre the output around 0, which has the effect of better learning on the subsequent layers (it acts as a feature normaliser). A nice explanation here. Negative and positive output values may be considered as 0 and 1 respectively. Used mostly in RNNs.

ReLU activation function $\max(0, w_1 x_1 + \dots + w_n x_n + b)$ - This is another very common, simple non-linear (linear in the positive range and the negative range, exclusive of each other) activation function that has the advantage of removing the vanishing gradient problem faced by the above two, i.e. their gradient tends to 0 as $x$ tends to $+\infty$ or $-\infty$. Here is an answer about ReLU's approximation power in spite of its apparent linearity. ReLUs have the disadvantage of dead neurons, which can result in larger NNs.
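
For concreteness, a small numpy sketch of the three activations discussed above (illustrative only; numerically stable implementations would differ slightly):

import numpy as np

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # zero-centred version, output in (-1, 1)
    return np.tanh(z)

def relu(z):
    # identity for positive inputs, zero otherwise
    return np.maximum(z, 0.0)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(z), tanh(z), relu(z), sep='\n')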

Also, you can design your own activation functions depending on your specialized problem. You may have a quadratic activation function which will approximate quadratic functions much better. But then, you have to design a cost function that should be somewhat convex in nature, so that you can optimise it using first-order differentials and the NN actually converges to a decent result. This is the main reason why standard activation functions are used. But I believe with proper mathematical tools, there is a huge potential for new and eccentric activation functions.

For example, say you are trying to approximate a single-variable quadratic function, say $a x^2 + c$. This will be best approximated by a quadratic activation $w_1 x^2 + b$, where $w_1$ and $b$ will be the trainable parameters. But designing a loss function that follows the conventional first-order derivative method (gradient descent) can be quite tough for a non-monotonically increasing function.

For Mathematicians: In the sigmoid activation function $\frac{1}{1 + e^{-(w_1 x_1 + \dots + w_n x_n + b)}}$, whenever $w_1 x_1 + \dots + w_n x_n + b > 0$ we have $e^{-(w_1 x_1 + \dots + w_n x_n + b)} < 1$. Writing $y = e^{-(w_1 x_1 + \dots + w_n x_n + b)}$, the geometric series gives $\text{sigmoid} = \frac{1}{1+y} = 1 - y + y^2 - y^3 + \dots$. Thus we get all the powers of $y$, and each power of $y$ can be thought of as a multiplication of several decaying exponentials based on a feature $x$, for example $y^2 = e^{-2(w_1 x_1)} \cdot e^{-2(w_2 x_2)} \cdot e^{-2(w_3 x_3)} \cdots e^{-2(b)}$. Thus each feature has a say in the scaling of the graph of $y^2$.

Another way of thinking would be to expand the exponentials according to Taylor Series: $$e^{x}=1+\frac{x}{1 !}+\frac{x^{2}}{2 !}+\frac{x^{3}}{3 !}+\cdots$$

So we get a very complex combination, with all the possible polynomial combinations of input variables present. I believe that if a Neural Network is structured correctly, the NN can fine-tune these polynomial combinations by just modifying the connection weights, selecting the polynomial terms that are most useful, and rejecting terms by subtracting the output of 2 nodes weighted properly.

The $\tanh$ activation can work in the same way, since $|\tanh| < 1$. I am not sure how ReLUs work though, but due to their rigid structure and the problem of dead neurons we require larger networks with ReLUs for a good approximation.

But for a formal mathematical proof, one has to look at the Universal Approximation Theorem.

For non-mathematicians some better insights visit these links:

Activation Functions by Andrew Ng - for more formal and scientific answer

How does neural network classifier classify from just drawing a decision plane?

Differentiable activation function

A visual proof that neural nets can compute any function

",,user9947,36737,,4/1/2021 14:58,4/1/2021 14:58,,,,4,,,,CC BY-SA 4.0 5522,2,,5519,3/5/2018 14:36,,2,,"

The definition of ""technological singularity"" answers the question:

The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

(wiki)

Note the order of events: the ""invention of artificial superintelligence"" (AGI) is followed by the ""unfathomable changes"".

",12630,,,,,3/5/2018 14:36,,,,3,,,,CC BY-SA 3.0 5526,1,5537,,3/5/2018 19:27,,3,119,"

Summary:

I am teaching bots to pick food on a playing field. Some food is poisonous and some is good.

Food Details:

  • Poisonous food subtracts score points and good food adds.
  • Food points vary based on its size.
  • There is about 9:1 ratio of poisonous food to good food, so a lot more chances to end up in negative numbers.
  • Food grows in points over time.
  • Food spoils after reaching some predetermined size, becoming poisonous.

Fitness Function:

The fitness function I use simply counts points by the end of the iterations. Bots might choose to eat food or skip it.

The Problem:

The problem I am having is that, in the first generation, most bots eat a lot of bad crap and the curious ones end up in negative numbers. So, mostly the ones that make it are the ones that are lazy and didn't eat or didn't head towards the food, and, most of the time, the fittest in the first few generations comes out with 0 points and 0 eats of any kind of food.

When trained for a long time, they just end up waiting for the food instead of eating multiple times. Often, while they wait, food goes bad and they just end up going to another food. This way, at the end of the iteration, I have some winners, but they are nowhere near the potential they could have been at.

Question:

I somehow need to weigh the importance of eating food. I want them to eventually learn to eat.

So I thought of this:

brain.score += foodValue * numTimesTheyAteSoFar

But this blows up the score too much and now the food quality is not respected and they just gulp on anything slightly above 0.

",13138,,2444,,1/30/2021 2:25,1/30/2021 2:29,How do I design a fitness function that weighs the importance of eating food?,,1,0,,,,CC BY-SA 4.0 5527,1,,,3/5/2018 20:22,,5,711,"

I'm trying to implement a custom version of the YOLO neural network. Originally, it was described in the paper You Only Look Once: Unified, Real-Time Object Detection (2016). I have some problems understanding the loss function they used.

Basic information:

  • An input image is divided into an $S \times S$ grid (that gives a total of $S^2$ cells) and each cell predicts $B$ bounding boxes and $c$ conditional class probabilities. Each bounding box predicts $5$ values: $x,y,w,h,C$ (center of the bounding box, width and height, and confidence score). This makes the output of YOLO an $S \times S \times (5B + c)$ tensor.

  • The $(x,y)$ coordinates are calculated relative to the bounds of the cell and $(w,h)$ is relative to the whole image.

  • I understand that the first term penalizes the wrong prediction of the center of a bounding box; the 2nd term penalizes wrong width and height prediction; the 3rd term the wrong confidence prediction; the 4th is responsible for pushing confidence to zero when there is no object in a cell; the last term penalizes wrong class prediction.

My problem:

I don't understand when $\mathbb{1}^\text{obj}_{ij}$ should be $1$ or $0$. In the paper, they write (section 2.2. Training):

$\mathbb{1}_{i j}^{\mathrm{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is "responsible" for that prediction.

and they also write

Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is "responsible" for the ground truth box

  • So, is it right that, for every object in the image, there should be exactly one pair of $ij$ such that $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$?

    • If this is correct, this means that the center of the ground truth bounding box should fall into $i$th cell, right?

    • If this is not the case, what are other possibilities when $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$, and what ground truth labels $x_i$ and $y_i$ should be in these cases?

  • Also, I assume that ground truth $p_i(c)$ should be $1$ if there is an object of class $c$ in the cell $i$, but what ground truth $p_i(c)$ should be equal to in case there are several objects of different classes in the cell?

",13102,,2444,,1/28/2021 23:10,1/28/2021 23:12,"In YOLO, when is $\mathbb{1}_{i j}^{\mathrm{obj}} = 1$, and what are the ground-truth labels for $x_i$ and $y_i$?",,1,1,,,,CC BY-SA 4.0 5529,1,,,3/5/2018 23:18,,4,2302,"

I'm new to this AI/Machine Learning and was playing around with OpenAI Gym a bit. When looking through the environments, I came across the Blackjack-v0 environment, which is a basic implementation of the game where the state is the hand count of the player and the dealer and whether the player has a usable ace. The actions are only hit or stand, and the possible rewards are 1 if the player wins, -1 if the player loses, and 0 when it is a draw.

So, that got me thinking about what a more realistic environment/model for this game would look like, taking into account the current balance and other factors, and having multiple actions like betting 1-10€ and hit or stand.

This brings me to my actual question:

  • As far as I understand neural networks (and I don't understand them very well yet, I guess), the input will be the state and the output the possible actions and how good the network thinks they are/will be. But now there are two different action spaces, which apply to different states of the game (betting or playing), so some of the actions are useless. What would be the right way to approach this scenario?

I'm guessing one answer would be to give some kind of negative reward if the network guesses a useless action, but, in this case, I think the reward should be the actual stake (negative reward) and the actual win, if any. Therefore, this would cause some bias in how the game proceeds, as it should start with some amount of balance and end if the balance is 0 or after a specified number of rounds.

Limiting timesteps wouldn't be an option either, I guess, because it should be limited to rounds, so, for example, it won't end after a betting step.

Therefore, for a useless step, the reward would be 0 and the state would stay the same, but, for the neural network, it doesn't matter how many useless steps it takes because it'll make no difference to the actual outcome.

Corollary question:

  • Should it be split up into two neural networks? One for betting and one for playing?
",13141,,2444,,1/10/2021 2:09,10/2/2022 6:02,How to deal with different actions for different states of the environment?,,1,0,,,,CC BY-SA 4.0 5531,1,5534,,3/6/2018 6:35,,2,702,"

In the diagram below, although the flow of information happens from the input to the output layer, the labeling of the weights appears reversed. E.g., the arrow flowing from X3 to the fourth hidden-layer node has its weight labeled as W(1,0) and W(4,3) instead of W(0,1) and W(3,4), which would indicate data flowing from the 3rd node of the 0th layer to the 4th node of the 1st layer.

One of my neural networks teachers did not emphasize this convention at all. Another teacher made it a point to emphasize it.

Is there a reason there is such an un-intuitive convention and is there really a convention?

",9268,,,,,3/7/2018 16:36,Is there a naming convention for network weights for multilayer networks?,,1,11,,,,CC BY-SA 3.0 5534,2,,5531,3/6/2018 7:57,,2,,"

When the system grows, matrix notation is used, as in a = Wx, where a (the input to the activation function in the hidden layer) and x (the values from the input layer) are column vectors, i.e. the transposes of (a1, a2, ..., a_m) and (x1, x2, ..., x_n), and W is an m-by-n matrix with m rows and n columns. The standard way to denote matrix elements is w(i,j), where ""i"" is the row number and ""j"" the column number:

(from wiki)

For this reason, the weight that applies to h4 from x3 is the element in row 4, column 3 of the matrix W, that is, W(4,3) (as your teacher advocates, but with a sad lack of ability to explain it).

In your example:

Note: things are a little more complex when x1, x2, ... are themselves vectors, but the final conclusion is the same.
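
To make the indexing concrete, a tiny numpy illustration (sizes made up to match the example: 3 inputs, 4 hidden nodes); with 0-based indexing, the weight W(4,3) lives at W[3, 2]:

import numpy as np

n_inputs, n_hidden = 3, 4
W = np.arange(n_hidden * n_inputs).reshape(n_hidden, n_inputs)   # shape (4, 3)
x = np.array([1.0, 2.0, 3.0])

a = W @ x            # one entry per hidden unit
w_4_3 = W[3, 2]      # the weight W(4,3): row 4, column 3 (0-based indices [3, 2])
print(a, w_4_3)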

( PS: URGENT to allow latex notation on this stack exchange ! )

",12630,,12630,,3/7/2018 16:36,3/7/2018 16:36,,,,5,,,,CC BY-SA 3.0 5536,1,7554,,3/6/2018 10:35,,3,1670,"

I'm training a Seq2Seq model on OpenSubtitles dialogs - the Cornell-Movie-Dialogs-Corpus.

My work is based on the following papers (but I have not implemented Attention yet):

The loss I get is quite high and gets stuck around ~6.4 after 3 epochs. The model predicts the most common words, with sometimes other insignificant words (but 99.99% of the time it is just 'you'):

  • I’ve experimented with 128 - 2048 hidden units and with 1 or 2 or 3 LSTM layers per encoder and decoder. The outcomes are more or less the same.

SEQ1: yeah man it means love respect community and the dollars too the package the unk end

SEQ2: but how did you get unk 82 end

PREDICTION: promoting 16th dashboard be of the the the you you you you you you you you you you you you you you you you you you you you you you you you

I'm using greedy prediction here, meaning that after I receive the logits I do argmax(..) over all their values for the first 3 mini-batch elements (here I present only the first element). For convenience, SEQ1 and SEQ2 are also printed, to show the actual dialog which was presented to the model.

The pseudo-code of my architecture looks like this (I'm using Tensorflow 1.5):

# token id placeholders for the source and target sequences
seq1 = tf.placeholder(...)
seq2 = tf.placeholder(...)

embeddings = tf.Variable(tf.random_uniform([vocab_size, 100], -1, 1))

seq1_emb = tf.nn.embedding_lookup(embeddings, seq1)
seq2_emb = tf.nn.embedding_lookup(embeddings, seq2)

# encoder; its final state initialises the decoder
encoder_out, state1 = tf.nn.static_rnn(BasicLSTMCell(), seq1_emb)
decoder_out, state2 = tf.nn.static_rnn(BasicLSTMCell(), seq2_emb,
                                                        initial_state=state1)
# projection of the decoder outputs onto the vocabulary
logit = Dense(decoder_out, use_bias=False)

crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logit,
                                                          labels=target)
crossent = mask_padded_zeros(crossent)   # ignore the padded positions
loss = tf.reduce_sum(crossent) / number_of_words_in_batch

train = tf.train.AdamOptimizer(learning_rate=0.00002).minimize(loss)

I also wonder if I pass state1 to the decoder correctly, which in general looks like this:

# reshape in pseudocode: state1 = state[1:]
new_state1 = []
for lstm in state1:
    new_lstm = []
    for gate in lstm:
        new_lstm.append(gate[1:])
    new_state1.append(tuple(new_lstm))
state1 = tuple(new_state1)
  • Should I use some projection layer between states of encoder and decoder ?

So if seq1 has 32 words, seq2 has 31 (since we will not predict anything after the last word, which is the <END> tag).

",12691,,-1,,6/17/2020 9:57,10/8/2018 12:15,Seq2Seq dialogs predicts only most common words like `you` after couple of epoches,,1,0,,,,CC BY-SA 3.0 5537,2,,5526,3/6/2018 11:10,,4,,"

A human analogy (or a variant of one) can help you here.

Initialize all the agents with an initial value $x$; we will call this energyUnits. I will talk more about this later.

Now, whenever the agent eats good food, add some value to its energyUnits as an incentive. You also need to add a function that will keep decrementing the agent's energyUnits, as humans use up energy (calories) over time. We will call this function normalDegrade. This is the core part of the solution to your problem.

Now, for the bad (or poisonous) food you can be more creative with. You can simply subtract a given value whenever an agent eats poisonous food. Or you can extend your normalDegrade function with a very high downward slope. In this case, the energy units (value) of the agent will fall very rapidly. This will force the agent to look for good food to survive.

Since the ratio of poisonous to good food is 9:1, you need to initialize the value of $x$ (energyUnits) very high. You need to do some trial and error to find the right fit for you here.

Also, I am assuming that the agent is being removed from the population whenever the value of $x$ is zero or some negative value (which depends). This is important, as it makes sure that the algorithm is not wasting time in processing bad agents.

Because of this, another problem arises of the population coming to extinction. For this, you need to keep generating new agents for which any of the genetic algorithms will do. A new population with better parents of the already present generation will keep the population fit and efficient.

A good fitness function is a core to solving any problem of this kind, and sometimes it is hard to find. You might need to do some trial and error with different values to look for the right fit.

",3005,,2444,,1/30/2021 2:29,1/30/2021 2:29,,,,0,,,,CC BY-SA 4.0 5538,1,,,3/6/2018 13:24,,3,737,"

Google Analytics allows me to collect data about every web-session. For simplicity, let's assume for each user, we collect the number of pages and time spent on site for each session:

user_id visit_id page_views time_spent result
1       1        10         100        0
1       2        31         510        0
1       3        1          10         1

How would you model this data? What I would like the ML algorithm to do:

  1. Extract as much information as possible
  2. Have a flexible number of inputs (e.g. the number of sessions can go to infinity)

What I can think of:

  1. Aggregate the data per user e.g. average page_views or total page_views and feed it into a general algorithm e.g. random forest (but I lose information with aggregation)
  2. Use LSTM and feed at most last 3 visits (will also lose information, but would this perform better than aggregation?)

Goal: To build a predictive model to analyse all user sessions and make a prediction whether the person will convert or not.

",13148,,,,,7/8/2018 19:32,ML model that is most suited to analyse Google Analytics data,,2,1,,,,CC BY-SA 3.0 5539,1,,,3/6/2018 14:46,,7,1495,"

There are a lot of papers that show that neural networks can approximate a wide variety of functions. However, I can't find papers that show the limitations of NNs.

What are the limitations of neural networks? Which functions can't neural networks learn efficiently (or using gradient-descent)?

I am looking also for links to papers that describe these limitations.

",13067,,2444,,5/10/2019 14:45,1/15/2022 0:12,Which functions can't neural networks learn efficiently?,,3,2,,,,CC BY-SA 4.0 5541,2,,4048,3/6/2018 16:10,,2,,"

You would definitely want your network to know crucial information about the game, like what cards the AI agent has (their values and types), the mana pool, how many cards are on the table and their values, the number of the turn, and so on. These things you must figure out on your own; the question you should ask yourself is ""If I add this value to the input, how and why will it improve my system?"".

The first thing to understand is that most NNs are designed to have a constant input size, and I would assume this matters in this game, since players can have a different number of cards in their hand or on the table. For example, you want to let the NN know what cards it has. Let's assume the player can have a maximum of 5 cards in his hand and each card can have 3 values (mana, attack and health), so you can encode this as a 5*3 vector, where the first 3 values represent card number one and so on.

But what if the player currently has 3 cards? A simple approach would be to assign zeros to the last 6 inputs, but this may cause problems, since some cards can have 0 mana cost or 0 attack. So you need to figure out how to solve this problem. You may look for NN models that can handle variable input size or figure out how to encode the input as a vector of constant size.
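
For illustration, a fixed-size encoding of a hand of at most 5 cards might look like the sketch below; the sentinel value for empty slots is just one possible workaround for the zero-cost/zero-attack ambiguity mentioned above:

MAX_HAND = 5
EMPTY_SLOT = [-1.0, -1.0, -1.0]   # sentinel instead of zeros, so an empty slot
                                  # cannot be confused with a 0-cost card

def encode_hand(cards):
    # cards: list of (mana, attack, health) tuples, length 0..MAX_HAND
    vec = []
    for i in range(MAX_HAND):
        if i < len(cards):
            vec.extend(float(v) for v in cards[i])
        else:
            vec.extend(EMPTY_SLOT)
    return vec                    # always length MAX_HAND * 3 = 15

print(encode_hand([(2, 3, 2), (5, 4, 5)]))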

Secondly, outputs are also constant-size vectors. In the case of this type of game, it can be a vector that encodes the actions the agent can take. So let's say we have 3 actions: put a card, skip turn and concede. This can be one-hot encoded, e.g. if the output is 1 0 0, this means that the agent should put some card. To know which card it should put, you can add another element to the output which will produce a number in the range of 1 to 5 (5 is the max number of cards in the hand).

But the most important part of training a neural network is that you will have to come up with a loss function that is suitable for your task. Maybe standard loss functions like mean-squared loss or L2 will be good, maybe you will need to change them in order to fit your needs. This is the part where you will need to do some research. I've never worked with NEAT before, but as far as I understand it uses a genetic algorithm to create and train the NN, and GAs use some fitness function to select individuals. So basically you will need to know what metric you will be using to evaluate how well your model performs, and based on this metric you will change the parameters of the model.

PS. It is possible to solve this problem with a neural network; however, neural networks are not magic and not the universal solution to all problems. If your goal is to solve this particular problem, I would also recommend you dig into game theory and its applications in AI. I would say that solving this problem would require complex knowledge from different fields of AI.

However, if your goal is to learn about neural networks, I would recommend taking on much simpler tasks. For example, you can implement a NN that works on a benchmark dataset, for example a NN that classifies digits from the MNIST dataset. The reason for this is that a lot of articles have been written about how to do classification on this dataset, so you will learn a lot, and you will learn faster, from implementing simple things.

",13102,,,,,3/6/2018 16:10,,,,0,,,,CC BY-SA 3.0 5543,2,,5539,3/6/2018 18:48,,6,,"

One of the important qualifications of the Universal approximation theorem is that the neural network approximation may be computationally infeasible.

""A feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly."" - Ian Goodfellow, DLB

I can't think of any function that I would definitively declare as unlearnable, but neural networks have many problems. Consider adversarial examples and adversarial patches, which highlight the poor generalization going on under the hood of recent advances in computer vision.

Neural Networks are also inherently limited by the innate priors baked into their architecture and the sample density of their training data. Check out this recent discussion at Stanford's AI Salon between Yann LeCun and Christopher Manning on innate priors if that is the kind of limitation you are talking about.

",13156,,,,,3/6/2018 18:48,,,,1,,,,CC BY-SA 3.0 5546,1,,,3/6/2018 21:01,,39,74528,"

I've seen these terms thrown around this site a lot, specifically in the tags convolutional-neural-networks and neural-networks.

I know that a neural network is a system based loosely on the human brain. But what's the difference between a convolutional neural network and a regular neural network? Is one just a lot more complicated and, ahem, convoluted than the other?

",145,,2444,,3/13/2020 17:19,6/27/2022 16:14,What is the difference between a convolutional neural network and a regular neural network?,,5,0,,,,CC BY-SA 4.0 5547,2,,3982,3/6/2018 21:37,,1,,"

It seems like time is a good fitness measure, though you need it to engage the bot in learning side sensor inputs and side movement.

I would consider adding a bit of randomness to the environment. How about adding some random mild forces that might sway it left, right, front and rear a bit, so that the bots are forced to use other sensors and inputs to stay in the center?

In cases where the drone is not simulated, this task is a little harder, but adding randomness to the environment can still be done. For example, tilt your drone a tiny amount in a random direction at random intervals. This will force your drone to learn to correct for being jostled by wind, without you having to actually produce wind.

",13138,,13138,,3/6/2018 23:39,3/6/2018 23:39,,,,0,,,,CC BY-SA 3.0 5548,1,5549,,3/6/2018 22:29,,3,972,"

I am reading through the NEAT paper here. On page 14 of the PDF, there is this quote about mutation:

There was an 80% chance of a genome having its connection weights mutated, in which case each weight had a 90% chance of being uniformly perturbed and a 10% chance of being assigned a new random value.

What exactly does it mean to perturb weights? What is uniform vs. nonuniform perturbation?

Is there an established method to do this? I am imagining the process as multiplying each connection weight by a random number, but I'm unfamiliar with the term.

",13160,,,,,3/7/2018 16:19,"How are connection weights ""perturbed""?",,1,0,,,,CC BY-SA 3.0 5549,2,,5548,3/6/2018 22:40,,4,,"

Perturbed here means adding a small random value to the weight. That random value comes from a uniform distribution or from a gaussian (or any distribution really). Imagine just nudging the weight by a little.

It’s done to overcome the problem of local minima where models can get stuck with a good set of weights but not the best set of weights. By perturbing the weights a little, the model has a chance of finding a better set of weights. Searching for methods that deal with local minima like stochastic gradient descent will give you a better intuition.
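
A minimal Python sketch of the mutation rule quoted in the question; the perturbation range and the -1..1 reset range are illustrative choices:

import random

def mutate_weight(w, perturb_range=0.1, reset_prob=0.1):
    if random.random() < reset_prob:
        # 10% chance: assign a completely new random value
        return random.uniform(-1.0, 1.0)
    # 90% chance: uniformly perturb, i.e. nudge the weight a little
    return w + random.uniform(-perturb_range, perturb_range)

weights = [0.4, -0.7, 1.2]
weights = [mutate_weight(w) for w in weights]
print(weights)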

",4398,,13088,,3/7/2018 16:19,3/7/2018 16:19,,,,0,,,,CC BY-SA 3.0 5550,2,,5546,3/6/2018 23:01,,6,,"

A convolutional neural network is one that has convolutional layers. If a general neural network is, loosely speaking, inspired by a human brain (which isn't very accurate), the convolutional neural network is inspired by the visual cortex system in humans and other animals (which is closer to the truth). As the name suggests, this layer applies convolution with a learnable filter (a.k.a. kernel); as a result, the network learns the patterns in the images: edges, corners, arcs, then more complex figures. A convolutional neural network may contain other layers as well, commonly pooling and dense layers.

Highly recommend CS231n tutorial on this matter: it's very detailed and contains a lot of very nice visualizations.

",9647,,,,,3/6/2018 23:01,,,,0,,,,CC BY-SA 3.0 5551,1,5562,,3/7/2018 4:23,,0,177,"

Let us for these purposes say we are working with any feed-forward neural network.

Let us also say that we know beforehand that a certain portion of our dataset is significantly more impactful or important to our underlying representation. Is there any way to add that “weighting” to our data?

",9608,,,,,3/9/2018 9:29,A way to give more weight to particular data?,,2,3,,,,CC BY-SA 3.0 5552,2,,5539,3/7/2018 5:23,,2,,"

This answer depends very much so on the type of neural network and algorithm used for training.

If you are using gradient descent on a neural network of one input layer, one output layer, and no hidden layers there are many functions that you can't learn. One simple one is the XOR function. Due to the fact that XOR is not linearly separable, it can not be represented by a neural network with no hidden layers.
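
As a concrete illustration, a single hidden layer of threshold units with hand-picked weights reproduces XOR, while no single threshold unit (i.e. no network without hidden layers) can, because no single line separates the two classes:

import numpy as np

step = lambda z: (z > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([0, 1, 1, 0], dtype=float)        # XOR

def xor_net(x):
    # hidden layer: h1 fires for "at least one input on", h2 for "both on"
    h1 = step(x @ np.array([1.0, 1.0]) - 0.5)
    h2 = step(x @ np.array([1.0, 1.0]) - 1.5)
    # output layer: "at least one on AND NOT both on"
    return step(h1 - h2 - 0.5)

print(xor_net(X))        # [0. 1. 1. 0.], matching the XOR targets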

If you are using NEAT to build recurrent neural networks then all functions(**) can be represented given enough time and data. This is due in part to the fact that recurrent neural networks are Turing Complete.

One of the biggest causes for limitations when using neural networks is based on the difficulty of interpretation as to what the network is doing. The network is gradually building up an understanding of the function as it goes from the input layer to the output layer, but it is very difficult for us to understand this building up process and interpret what the neural network is attempting to do. This makes it very hard if not impossible to manually tweak your neural network in a meaningful way.

Another limitation is the need for training (in large amounts) in order to have a meaningful representation of your data. Neural networks have a tendency to need large amounts of data before converging to a meaningful hypothesis space. This has resulted in clever algorithms to generate training data without needing human interaction, such as Generative Adversarial Networks, but the underlying problem remains.

** Not all functions can be computed by neural networks, however, all computable functions can be. An example of an uncomputable function is the mapping of all programs from the program to whether or not this program will halt (The Halting Problem).

",13088,,18758,,1/15/2022 0:12,1/15/2022 0:12,,,,0,,,,CC BY-SA 4.0 5553,1,5581,,3/7/2018 6:17,,3,998,"

As we all know, there have been tons of GAN variants featuring different aspects of the image generation task, such as stability, resolution or the ability to manipulate images. However, it is still confusing to me how we determine that images generated by one network are more plausible than images generated by another.

PS: could someone with higher reputation create more tags like image generation?

",13165,,1847,,3/7/2018 8:38,3/8/2018 20:33,How to evaluate the goodness of images generated by GANs?,,1,0,,,,CC BY-SA 3.0 5554,2,,2526,3/7/2018 7:19,,2,,"

Let's go through your question point by point.

  • Should an activation function be differentiable?

No, there is no compulsion for it to be differentiable. We use ReLUs, which have a non-differentiable point at 0. But this is simply a trivial case, since the point 0 will never be reached exactly unless we run out of floating-point precision for extremely small numbers.

So let's take another example, the Perceptron learning algorithm. In this algorithm, there is no satisfactory way to evaluate the performance of a particular solution. So we don't have a cost function. But still we are able to reach a solution, albeit maybe not the best one. I'll come to the point of why it is not used later.
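
For reference, the perceptron learning rule needs no differentiable cost function at all; it just nudges the weights whenever a sample is misclassified. A minimal sketch (the data and learning rate are arbitrary):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
t = np.array([0, 0, 0, 1], dtype=float)                        # AND targets
w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                      # a few passes over the data
    for x_i, t_i in zip(X, t):
        y_i = 1.0 if x_i @ w + b > 0 else 0.0
        # the update uses only the classification error, not the gradient of a loss
        w += lr * (t_i - y_i) * x_i
        b += lr * (t_i - y_i)

print(w, b)   # a separating line for AND, found without any cost function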

NNs can be broadly thought of as just function approximators. Normally, you give it some continuous function, and the NN adjusts it by elongating, shifting, and distorting parts of that function by changing only the parameters of the function and not the nature of the function itself, i.e. it'll decompose into the same sort of Fourier series as before, with only phase and amplitude differences. You can also design a NN along the lines of a random search puzzle. You give each node of a, say, single-hidden-layer NN a part of the function to be approximated, i.e. between some interval -a < x < b say. Say the connection weights to each output layer are fixed. You can only change the part of the function (i.e. parts broken like a jigsaw from the function to be approximated) you want to give to a node, from your fixed box of parts. So you see, in this case the NN will try almost all combinations of giving different parts to different nodes until the function is perfectly approximated. So you see, this can also be a type of learning for a NN without any continuous function.

  • Why should an activation function of a neural network be differentiable?

The main advantage of a differentiable function is the mathematics behind it. You can easily handle huge amounts of data just by simple mathematics. Thousands of years of mathematical theory can be applied to verify the workings of your NN, predict how it will work, and select the best algorithms. With discrete, non-differentiable activation functions you have no realistic way of predicting what the final result will be.


Say in a problem like this, if you apply the perceptron learning algorithm, do you have any realistic way to predict where the final decision boundary will rest? Whereas, if you train it by continuous methods, you can easily plot the cost function versus the weights, check the global minimum, and get the weights which will be the final weights. So we easily solved it by plotting (definite matrix methods also exist).

  • Is it advantageous to have a differentiable activation function? Why?

So why is it advantegeous?

--> Final answers easily predictable (if relatively small data set).

--> Easy to draw graphs and visualize the working of your NN and adjust your hyper-parameters accordingly.

--> Easy to apply time tested mathematical tools to test/evaluate the effectiveness of your algorithm.

--> No sudden changes in error and so your weights will also not change suddenly due to stray readings.

These are all the advantages I could think of, and I am sure there are more. All these tools are not available for discrete algorithms, if you think about it (can't give the full intuition - too lengthy).

I gave some more intuition here:

What is the purpose of an activation function in Neural Networks?

Feel free to add anything I missed by editing the answer.

Hope this helps!

",,user9947,,,,3/7/2018 7:19,,,,0,,,,CC BY-SA 3.0 5558,1,,,3/7/2018 7:32,,5,253,"

I have a fully connected network that takes in a variable-length input padded with 0.

However, the network doesn't seem to be learning and I am guessing that the high number of zeros in the input might have something to do with that.

Are there solutions for dealing with padded input in fully connected layers or should I consider a different architecture?

UPDATE (to provide more details):

The goal of the network is to clean full file paths, e.g.:

  • /My document/some folder/a file name.txt > a file name
  • /Hard drive/book/deeplearning/1.txt > deeplearning

The constraint is that the training data labels have been generated using a regex on the file name itself so it's not very accurate.

I am hoping that by treating every word equally (without sequential information) the network would be able to generalize as to which type of word is usually kept and which is usually discarded.

The network takes in a sequence of word embeddings trained on path data and outputs logits that correspond to the probability of each word being kept or not.

",12931,,2444,,1/5/2022 9:39,10/2/2022 10:07,How to deal with padded inputs in a fully connected feed forward network?,,1,2,,,,CC BY-SA 4.0 5559,1,,,3/7/2018 8:41,,0,133,"

I have an idea about how to use neural networks but I'm not sure if it is possible or not.

In supervised learning we have a set of attributes labeled with an output value. I can use this set to train my network.

Now I have a network trained to get an output value from a random set of attributes, but can I use this trained network to get the input attributes using only the desired output?

I will have N input values and only 1 output value. I've thought that I could reuse the weights of that network in a new one with 1 input value and N output values, but I'm not sure if I can do that.

",4920,,4920,,3/7/2018 12:58,3/9/2018 18:09,Neural network to get input attributes using only the output value,,2,0,,,,CC BY-SA 3.0 5560,2,,5559,3/7/2018 12:37,,0,,"

A NN isn't symmetric, and the result may not make any sense. You can just use a loss like (target_output - model_output)**2 (MSE), differentiate this loss with respect to the input variables, and use an optimizer to solve this task. Try searching for adversarial models; the idea is very similar.

",13067,,1671,,3/9/2018 18:09,3/9/2018 18:09,,,,1,,,,CC BY-SA 3.0 5561,2,,5529,3/7/2018 15:06,,0,,"

There are many reinforcement learning algorithms. I recommend these articles:

There are several approaches. A NN can take the state and action as input and the expected discounted reward as output. But you can simply not pick an action if that action isn't available; there is no need to give it a negative or zero reward. Just don't train your net to change its behavior for these actions, because they are neither good nor bad.
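
A minimal sketch of that kind of action masking for a Q-style network; the action names, phases and Q-values below are placeholders, not part of any particular library:

import numpy as np

ALL_ACTIONS = ['bet_1', 'bet_5', 'bet_10', 'hit', 'stand']

def legal_actions(state):
    # placeholder rule: in the betting phase only bets are legal, otherwise hit/stand
    if state['phase'] == 'betting':
        return ['bet_1', 'bet_5', 'bet_10']
    return ['hit', 'stand']

def pick_action(q_values, state):
    # q_values: one estimated value per action in ALL_ACTIONS
    mask = np.array([a in legal_actions(state) for a in ALL_ACTIONS])
    masked = np.where(mask, q_values, -np.inf)   # illegal actions can never win argmax
    return ALL_ACTIONS[int(np.argmax(masked))]

print(pick_action(np.array([0.1, 0.4, 0.2, 0.9, 0.3]), {'phase': 'betting'}))  # bet_5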

",13067,,,,,3/7/2018 15:06,,,,1,,,,CC BY-SA 3.0 5562,2,,5551,3/7/2018 18:14,,1,,"

Yes, you just weight the entries of the loss function, like sum(w_i * (l_i - t_i)**2) for MSE. Most frameworks provide this ability. For example, in Keras the fit method has a parameter sample_weight: an optional Numpy array of weights for the training samples.
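
A minimal usage sketch (the model and data are placeholders):

import numpy as np
from tensorflow import keras

x = np.random.rand(100, 4)
y = np.random.rand(100, 1)
# give the first 10 samples 5x more influence on the loss
weights = np.ones(100)
weights[:10] = 5.0

model = keras.Sequential([keras.layers.Dense(8, activation='relu', input_shape=(4,)),
                          keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.fit(x, y, sample_weight=weights, epochs=5, verbose=0)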

",13067,,,,,3/7/2018 18:14,,,,0,,,,CC BY-SA 3.0 5565,2,,5559,3/7/2018 18:35,,2,,"

Now I have a network trained to get an output value from an random set of attributes but, can I use this trained network to get the input attributes using only the desired output?

It depends:

  • If you are happy to find any inputs, even non-realistic ones, that get your desired output, then you can use your trained network, with a minor modification. Freeze all the weights, and allow back-propagation to determine the gradient of the input (which should now be a variable to optimise, not source data). Start with a noise input, back-propagate the error to find the gradient that makes the input better at creating your desired output, then take a gradient step towards it in the input data. This is essentially how Deep Dream works. Like Deep Dream, you will not necessarily get realistic input values, but will get semi-random ones that cause your network to predict a specific class. (A minimal sketch of this is given at the end of this answer.)

  • If you want the newly generated input to be a best guess at something from the original dataset, then you have to look at one of the more advanced models:

These network types are quite advanced, and can be tricky to understand and train successfully. You will want to spend some time researching each type.

To generalise terribly: A GAN will tend to generate realistic ""noise"" in the generated items, but at the expense of overall structure and cohesion (images tend to look distorted but with realistic textures). A VAE will tend to produce smooth, coherent inputs, but at the expense of lack of fine detail (VAE images tend to look smoothed and/or blurred).

If you are not sure what to try, a GAN is probably a reasonable choice, since there are lots of tutorials available, and recent advances in image generation can look very impressive.
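
As promised in the first option above, here is a minimal sketch of optimising the input while keeping the trained weights frozen; it is written against TensorFlow 2's eager API for brevity, and the tiny stand-in model, sizes and hyperparameters are all illustrative:

import tensorflow as tf

n_inputs = 8
# Stand-in for your trained network; in practice you would load your own trained model here.
model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation='relu', input_shape=(n_inputs,)),
                             tf.keras.layers.Dense(1)])
model.trainable = False                              # freeze all the weights

desired_output = tf.constant([[1.0]])
x = tf.Variable(tf.random.normal([1, n_inputs]))     # start from noise

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for _ in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - desired_output))
    grads = tape.gradient(loss, [x])                 # gradient w.r.t. the input only
    opt.apply_gradients(zip(grads, [x]))

print(x.numpy())   # an input that pushes the network towards the desired output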

",1847,,,,,3/7/2018 18:35,,,,1,,,,CC BY-SA 3.0 5566,2,,5546,3/7/2018 18:50,,18,,"

Convolutional Neural Networks (CNNs) are neural networks with architectural constraints to reduce computational complexity and ensure translational invariance (the network interprets input patterns the same regardless of translation— in terms of image recognition: a banana is a banana regardless of where it is in the image). Convolutional Neural Networks have three important architectural features.

Local Connectivity: Neurons in one layer are only connected to neurons in the next layer that are spatially close to them. This design trims the vast majority of connections between consecutive layers, but keeps the ones that carry the most useful information. The assumption made here is that the input data has spatial significance, or in the example of computer vision, the relationship between two distant pixels is probably less significant than two close neighbors.

Shared Weights: This is the concept that makes CNNs ""convolutional."" By forcing the neurons of one layer to share weights, the forward pass (feeding data through the network) becomes the equivalent of convolving a filter over the image to produce a new image. The training of CNNs then becomes the task of learning filters (deciding what features you should look for in the data.)

Pooling and ReLU: CNNs have two non-linearities: pooling layers and ReLU functions. Pooling layers consider a block of input data and simply pass on the maximum value. Doing this reduces the size of the output and requires no added parameters to learn, so pooling layers are often used to regulate the size of the network and keep the system below a computational limit. The ReLU function takes one input, x, and returns the maximum of {0, x}. ReLU(x) = max(x, 0). This introduces a similar effect to tanh(x) or sigmoid(x) as non-linearities to increase the model's expressive power.
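
To make the three ingredients concrete, here is a minimal Keras sketch of a small CNN; the layer sizes and input shape are arbitrary illustrations, not a recommended architecture:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # local connectivity + shared weights: a bank of 3x3 learned filters
    layers.Conv2D(16, kernel_size=3, activation='relu', input_shape=(28, 28, 1)),
    # pooling: pass on the maximum of each 2x2 block, shrinking the feature maps
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(32, kernel_size=3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),   # e.g. 10 image classes
])
model.summary()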


Further Reading

As another answer mentioned, Stanford's CS 231n course covers this in detail. Check out this written guide and this lecture for more information. Blog posts like this one and this one are also very helpful.

If you're still curious why CNNs have the structure that they do, I suggest reading the paper that introduced them (though this is quite long), and perhaps checking out this discussion between Yann Lecun and Christopher Manning about innate priors (the assumptions we make when we design the architecture of a model).

",13156,,13156,,11/3/2018 18:17,11/3/2018 18:17,,,,1,,,,CC BY-SA 4.0 5567,2,,5517,3/7/2018 22:11,,1,,"

There has been some work in this area but with three chat bots. Xing Han Lu has developed some code called Generative Adversarial Bots (GABs) which builds on the concept of Generative Adversarial Networks invented by Ian Goodfellow. See his pioneering paper ""Generative Adversarial Networks"".

There is a very brief Google presentation here.

The basic idea of GABs is to ""compare the performance and human-likeness of conversational chatbots by generating conversation between two bots, and evaluating the response using Turing Tests"". ""Generative Adversarial Bots (GABs) are bots that are pitched up against each other, and generate a conversation that is used to train a third bot"".

You might want to also check out ""Adversarial Learning for Neural Dialogue Generation"" by Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter and Dan Jurafsky. From their abstract:

In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator---analagous to the human evaluator in the Turing test--- to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues.

Jiwei Li, one of the authors, has posted the code here.

",5763,,5763,,3/7/2018 22:30,3/7/2018 22:30,,,,0,,,,CC BY-SA 3.0 5568,1,,,3/8/2018 0:38,,0,337,"

I used to work with layered neural networks, where, given certain inputs, the output is produced layer-by-layer.

With NEAT, a neural network may assume any topology, and they are no longer layered. So, how do we compute the output of such a neural network? I understand time-steps must be taken into account, but how? Should I keep the inputs until all hidden neurons are processed and output is produced? Should I wait for the output to stabilize?

",13087,,2444,,12/13/2021 14:46,12/13/2021 14:46,How to compute the output of a neural network produced by NEAT?,,1,0,,,,CC BY-SA 4.0 5569,2,,5546,3/8/2018 1:49,,40,,"

TLDR: The convolutional-neural-network is a subclass of neural-networks which have at least one convolution layer. They are great for capturing local information (e.g. neighbor pixels in an image or surrounding words in a text) as well as reducing the complexity of the model (faster training, needs fewer samples, reduces the chance of overfitting).

See the following chart that depicts several neural-network architectures, including deep-convolutional-neural-networks:


Neural Networks (NN), or more precisely Artificial Neural Networks (ANN), are a class of Machine Learning algorithms that recently received a lot of attention (again!) due to the availability of Big Data and fast computing facilities (most Deep Learning algorithms are essentially different variations of ANNs).

The class of ANN covers several architectures including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) eg LSTM and GRU, Autoencoders, and Deep Belief Networks. Therefore, CNN is just one kind of ANN.

Generally speaking, an ANN is a collection of connected and tunable units (a.k.a. nodes, neurons, and artificial neurons) which can pass a signal (usually a real-valued number) from a unit to another. The number of (layers of) units, their types, and the way they are connected to each other is called the network architecture.

A CNN, in specific, has one or more layers of convolution units. A convolution unit receives its input from multiple units from the previous layer which together create a proximity. Therefore, the input units (that form a small neighborhood) share their weights.

The convolution units (as well as pooling units) are especially beneficial as:

  • They reduce the number of units in the network (since they are many-to-one mappings). This means, there are fewer parameters to learn which reduces the chance of overfitting as the model would be less complex than a fully connected network.
  • They consider the context/shared information in the small neighborhoods. This feature is very important in many applications such as image, video, text, and speech processing/mining as the neighboring inputs (eg pixels, frames, words, etc) usually carry related information.

Read the followings for more information about (deep) CNNs:

  1. ImageNet Classification with Deep Convolutional Neural Networks
  2. Going Deeper with Convolutions

P.S. ANN is not "a system based loosely on the human brain" but rather a class of systems inspired by the neuron connections exist in animal brains.

",12853,,55966,,6/27/2022 16:14,6/27/2022 16:14,,,,0,,,,CC BY-SA 4.0 5570,1,,,3/8/2018 6:10,,1,908,"

In two-player games, the exact value of the evaluation function doesn't matter, as long as it's bigger for better positions. However, for learning, it does matter how the value changes when the best move gets made. This way, the learning can minimize the difference between the directly computed value $f(0, p)$ of a position $p$ and the value obtained from $n$ step minimax $f(n, p)$.

What I'm missing here is a way to direct the evaluation function to actually winning. For example, a perfect evaluation function for a won position in chess would always return $+1$ without any hint on how to progress towards a checkmate. In a chess variant without the fifty-move limit, it could play useless turns forever.

I guess this is a rather theoretical problem, as we won't ever have such a good function, but I wonder if there's a way to avoid it?

",12053,,2444,,2/6/2021 21:32,2/6/2021 21:32,Game AI evaluation function and making progress towards winning,,1,0,,,,CC BY-SA 4.0 5571,1,,,3/8/2018 6:46,,4,179,"

How does one prove the uniqueness of the value function obtained from value iteration in the case of bounded and undiscounted rewards? I know that this can be proven for the discounted case pretty easily using the Banach fixed point theorem.

",13185,,1847,,3/9/2018 18:16,8/13/2018 1:00,Proof of uniqueness of value function for MDPs with undiscounted rewards,,1,0,,,,CC BY-SA 3.0 5572,2,,113,3/8/2018 7:14,,5,,"

Sigmoid > Hyperbolic tangent:

As you mentioned, the application of Sigmoid might be more convenient than hyperbolic tangent in the cases that we need a probability value at the output (as @matthew-graves says, we can fix this with a simple mapping/calibration step). In other layers, this makes no sense.

Hyperbolic tangent > Sigmoid:

Hyperbolic tangent has a property called ""approximates identity near the origin"" which means $\tanh(0) = 0$, $\tanh'(0) = 1$, and $\tanh'(z)$ is continuous around $z=0$ (as opposed to $\sigma(0)=0.5$ and $\sigma'(0)=0.25$). This feature (which also exists in many other activation functions such as identity, arctan, and sinusoid) lets the network learn efficiently even when its weights are initialized with small values. In other cases (eg Sigmoid and ReLU) these small initial values can be problematic.

Further Reading:

Random Walk Initialization for Training Very Deep Feedforward Networks

",12853,,22916,,4/2/2019 18:31,4/2/2019 18:31,,,,0,,,,CC BY-SA 4.0 5574,2,,5174,3/8/2018 9:16,,3,,"

First thing you're going to want to add is probably a Transposition Table, as also suggested by SmallChess.

Afterwards, I'd look into Aspiration Search and/or Principal Variation Search (also see this page).

Then I'd look into things like the Killer Move Heuristic, and maybe also see if you can simply implement existing parts of your engine more efficiently (e.g. use bitboards for your state representation).

Other than all of that, the chess programming wiki probably has lots of other interesting pages as well.
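
As an illustration of the first suggestion, a transposition table is essentially a cache keyed on the position (and search depth), so positions reached via different move orders are only searched once. A very stripped-down sketch; hash_position, evaluate, make_move and generate_moves are placeholders you would supply from your own engine:

transposition_table = {}   # (position_hash, depth) -> evaluation

def search(position, depth):
    key = (hash_position(position), depth)        # e.g. a Zobrist hash of the position
    if key in transposition_table:
        return transposition_table[key]           # already searched at this depth

    if depth == 0:
        value = evaluate(position)                # your static evaluation function
    else:
        # plain negamax for brevity; assumes there is at least one legal move
        value = max(-search(make_move(position, m), depth - 1)
                    for m in generate_moves(position))

    transposition_table[key] = value
    return value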

",1641,,-1,,10/14/2019 21:05,10/14/2019 21:05,,,,0,,,,CC BY-SA 4.0 5576,2,,5551,3/8/2018 11:32,,1,,"
  • I think you are looking for ""Class Balancing"" if I understand your question correctly. In most frameworks you can pass an additional weights tensor/array to the loss function/criterion. This is essentially just a weighting factor for each loss term in the sum of the total loss. Through that you achieve that the error/loss for some classes are weighted higher than other classes. Usually you weight classes higher that are less present in your dataset to emphasize that these are not ""ignored"" by your network over other classes.

  • Second, if it's your data samples that are imbalanced and not the trained classes, you can of course weight the data sampling of your dataloader. Instead of using a uniform distribution over all data samples, you can weight specific samples higher or lower so that they are shown more/less often to the network.

",13104,,13104,,3/9/2018 9:29,3/9/2018 9:29,,,,0,,,,CC BY-SA 3.0 5577,1,,,3/8/2018 12:54,,5,970,"

I am working on a js library which focuses on error handling. A part of the lib is a stack parser which I'd like to work in most environments.

The hard part is that there is no standard way to represent the stack, so every environment has its own stack string format. The variable parts are message, type and frames. A frame usually consists of the called function, file, line, and column.

In some of the environments there are additional variable regions on the string, in others some of the variables are not present. I can run automated tests only in the 5 most common environments, but there are a lot more environments I'd like the parser to work in.

  • My goal is to write an adaptive parser, which learns the stack string format of the actual environment on the fly, and after that it can parse the stack of any exception of that environment.

I already have a plan for how to solve this in the traditional way, but I am curious: is there any machine learning tool (probably in the area of unsupervised learning) I could use to solve this problem?

According to the comments I need to clarify the terms ""stack string format"" and ""stack parser"". I think it is better to write 2 examples from different environments:

A.)

example stack string:

Statement on line 44: Type mismatch (usually a non-object value used where an object is required)
Backtrace:
  Line 44 of linked script file://localhost/G:/js/stacktrace.js
    this.undef();
  Line 31 of linked script file://localhost/G:/js/stacktrace.js
    ex = ex || this.createException();
  Line 18 of linked script file://localhost/G:/js/stacktrace.js
    var p = new printStackTrace.implementation(), result = p.run(ex);
  Line 4 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
    printTrace(printStackTrace());
  Line 7 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
    bar(n - 1);
  Line 11 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
    bar(2);
  Line 15 of inline#1 script in file://localhost/G:/js/test/functional/testcase1.html
    foo();

stack string format (template):

Statement on line {frames[0].location.line}: {message}
Backtrace:
{foreach frames as frame}
  Line {frame.location.line} of {frame.unknown[0]} {frame.location.path}
    {frame.calledFunction}
{/foreach}

extracted information (json):

{
    message: ""Type mismatch (usually a non-object value used where an object is required)"",
    frames: [
        {
            calledFunction: ""this.undef();"",
            location: {
                path: ""file://localhost/G:/js/stacktrace.js"",
                line: 44
            },
            unknown: [""linked script""]
        },
        {
            calledFunction: ""ex = ex || this.createException();"",
            location: {
                path: ""file://localhost/G:/js/stacktrace.js"",
                line: 31
            },
            unknown: [""inline#1 script in""]
        },
        ...
    ]
}

B.)

example stack string:

ReferenceError: x is not defined
    at repl:1:5
    at REPLServer.self.eval (repl.js:110:21)
    at repl.js:249:20
    at REPLServer.self.eval (repl.js:122:7)
    at Interface.<anonymous> (repl.js:239:12)
    at Interface.EventEmitter.emit (events.js:95:17)
    at Interface._onLine (readline.js:202:10)
    at Interface._line (readline.js:531:8)
    at Interface._ttyWrite (readline.js:760:14)
    at ReadStream.onkeypress (readline.js:99:10)

stack string format (template):

{type}: {message}
{foreach frames as frame}
{if frame.calledFunction is undefined}
    at {frame.location.path}:{frame.location.line}:{frame.location.column}
{else}
    at {frame.calledFunction} ({frame.location.path}:{frame.location.line}:{frame.location.column})
{/if}
{/foreach}

extracted information (json):

{
    message: ""x is not defined"",
    type: ""ReferenceError"",
    frames: [
        {
            location: {
                path: ""repl"",
                line: 1,
                column: 5
            }
        },
        {
            calledFunction: ""REPLServer.self.eval"",
            location: {
                path: ""repl.js"",
                line: 110,
                column: 21
            }
        },
        ...
    ]
}

The parser should process the stack strings and return the extracted information. The stack string format and the variables are environment-dependent; the library should figure out on the fly how to parse the stack strings of the current environment.

I can probe the current environment by throwing exceptions with well-known stacks and checking the differences between the stack strings. For example, if I add whitespace indentation to the line that throws the exception, then the column and probably the called-function variables will change. If I detect a number changing somewhere, then I can be sure that it is the column variable. I can add line breaks too, which will change the line number, and so on...

I can probe for every important variable, but I cannot be sure that the actual string does not contain additional unknown variables, and I cannot be sure that all of the known variables will be present in it. For example, the frame strings of the ""A"" example contain an unknown variable and do not contain the column variable, while the frame strings of the ""B"" example do not always contain the called-function variable.
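
To illustrate the probing idea described above, here is a minimal sketch (not my actual implementation) that diffs the stack strings of two probe exceptions and marks the regions that changed as variable regions; everything else is treated as part of the fixed format:

    import difflib

    def variable_regions(stack_a, stack_b):
        # return the substrings of stack_a that differ from stack_b
        matcher = difflib.SequenceMatcher(None, stack_a, stack_b)
        return [stack_a[i1:i2]
                for tag, i1, i2, j1, j2 in matcher.get_opcodes()
                if tag != 'equal']

    # hypothetical probes: the same exception thrown from line 7 and from line 11
    probe_1 = 'ReferenceError: x is not defined\n    at repl:7:5'
    probe_2 = 'ReferenceError: x is not defined\n    at repl:11:5'
    print(variable_regions(probe_1, probe_2))  # only the line-number region differs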

",13192,,13192,,3/10/2018 12:32,1/3/2020 23:12,Is it possible to write an adaptive parser?,,3,8,,,,CC BY-SA 3.0 5578,2,,5538,3/8/2018 13:46,,1,,"

I understand that in your example you are interested in modelling the outcome of the 'result' column.

One simple model I would suggest is the Bernoulli distribution (https://en.wikipedia.org/wiki/Bernoulli_distribution) with probability of success p.

Then you can model p with something like this

x = a + b * log(page_views) + c * log(time_spent) + e

p = exp(x) / (1+exp(x))

where e is normally distributed, e ~ N(0, sigma^2) (or simply centered around zero).

a, b and c are parameters that you can estimate.

I.e. the probability of success (conversion) is modeled as a sigmoid function of a certain variable that depends on page_views and time_spent. You can also add squares (and higher powers) of page_views and time_spent to the equation for x (i.e. up to a certain threshold page_views can have a positive effect on conversion, then a negative one, then again a positive effect).

Also reading about logistic regression should put you on the right track: https://en.wikipedia.org/wiki/Logistic_regression
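
As a rough illustration, here is a minimal sketch of fitting such a model with scikit-learn's logistic regression (the data frame and its column names page_views, time_spent and result are hypothetical placeholders):

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # hypothetical data; replace with your own DataFrame
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        'page_views': rng.integers(1, 50, size=500),
        'time_spent': rng.uniform(1, 600, size=500),
        'result': rng.integers(0, 2, size=500),
    })

    # log-transform the features, as in the equation for x above
    X = np.log(df[['page_views', 'time_spent']])
    y = df['result']

    model = LogisticRegression().fit(X, y)
    print(model.intercept_, model.coef_)     # estimates of a, b and c
    print(model.predict_proba(X[:5])[:, 1])  # estimated conversion probabilities p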

",11417,,11417,,3/8/2018 13:56,3/8/2018 13:56,,,,0,,,,CC BY-SA 3.0 5579,2,,5527,3/8/2018 14:42,,1,,"

The correct interpretation (based on the comment by the author of the question below):

Yep, you are right. It is actually only a single cell per object that contributes to the loss with $\mathbb{1}^\text{obj}_{ij}$ factor. That cell is identified as the one that contains the centre(oid) of the ground truth box of the corresponding object.

My original (incorrect) interpretation of the paper:

From what I read in the paper it is not a single bounding box for all cells in the grid for a particular object (your original suggestion).

Rather, for every object and every cell that is assigned to this object, only one bounding box contributes to the loss. The network generates $B$ bounding boxes for every cell in the grid, but we pick only one, and only when that cell actually belongs to an object.

Every cell $i$, which is not a background cell, will have exactly one box $j$, such that $\mathbb{1}^\text{obj}_{ij} = 1$

This is based on the following paragraph from the paper:

Note that the loss function only penalizes classification error if an object is present in that grid cell (hence the conditional class probability discussed earlier). It also only penalizes bounding box coordinate error if that predictor is "responsible" for the ground truth box (i.e. has the highest IOU of any predictor in that grid cell).

",11417,,2444,,1/28/2021 23:12,1/28/2021 23:12,,,,2,,,,CC BY-SA 4.0 5580,1,5620,,3/8/2018 16:22,,7,7782,"

I am trying to understand backpropagation. I used a simple neural network with one input $x$, one hidden layer $h$ and one output layer $y$, with weight $w_1$ connecting $x$ to $h$, and $w_2$ connecting $h$ to $y$

$$ x \rightarrow (w_1) \rightarrow h \rightarrow (w_2) \rightarrow y $$

In my understanding, these are the steps happening while we train a neural network:

The feedforward step.

\begin{align} h=\sigma\left(x w_{1}+b\right)\\ y^{\prime}=\sigma\left(h w_{2}+b\right) \end{align}

The loss function.

$$ L=\frac{1}{2} \sum\left(y-y^{\prime}\right)^{2} $$

The gradient calculation

$$\frac{\partial L}{\partial w_{2}}=\frac{\partial y^{\prime}}{\partial w_{2}} \frac{\partial L}{\partial y^{\prime}}$$

$$\frac{\partial L}{\partial w_{1}}=?$$

The weight update

$$ w_{i}^{t+1} \leftarrow w_{i}^{t}-\alpha \frac{\partial L}{\partial w_{i}} $$

I understood most parts of backpropagation, but how do we get the gradients for the middle layer weights $dL/dw_1$?

How should we calculate the gradient of a network similar to this?

Is this the correct equation?

$$\frac{\partial L}{\partial w_{1}}=\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial w_{7}}{\partial h_{1}} \frac{\partial o_{2}}{\partial w_{7}} \frac{\partial L}{\partial o_{2}}+\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial w_{5}}{\partial h_{1}} \frac{\partial o_{1}}{\partial w_{5}} \frac{\partial L}{\partial o_{1}}$$

",39,,39,,5/18/2020 1:34,5/18/2020 1:34,How is the gradient calculated for the middle layer's weights?,,1,0,,,,CC BY-SA 4.0 5581,2,,5553,3/8/2018 20:13,,1,,"

I understand that you want to know about methods that we can use to evaluate GANs (Generative Adversarial Networks).

How can GANs be evaluated?

One Discriminator on Separate GANs

We can train a Discriminator beforehand and then use this Discriminator on various Generators to see what it says about the images generated by each Generator. The average output of the Discriminator for images generated by one Generator can then be compared with that for images generated by another Generator. If one average is higher, then, according to this Discriminator, that Generator is better than the other.
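
As a minimal illustration (assuming you already have a trained Discriminator and two sets of generated images), the comparison boils down to averaging the Discriminator's outputs per Generator:

    import numpy as np

    def average_score(discriminator, images):
        # discriminator is assumed to map an image to a realness score in [0, 1]
        return float(np.mean([discriminator(img) for img in images]))

    # hypothetical usage:
    # score_g1 = average_score(trained_discriminator, images_from_generator_1)
    # score_g2 = average_score(trained_discriminator, images_from_generator_2)
    # the Generator with the higher average score fools this Discriminator better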

Comparing Probabilistic Models

This is the class of evaluation metrics that has been attracting research attention recently. Basically, all generative models are probabilistic in nature, even GANs. When we ask a generative model to generate something like an image, we are simply sampling from a probability distribution. This means that if we can compare the probability distribution of our Generator with that of our original data, then we have an evaluation metric. We can come up with a kernel function to define these probability distributions more accurately. Then we can take the samples generated by the Generator and see the probability of each sample being drawn from the original distribution. Researchers have also utilized the KL-divergence for comparing samples generated from two distributions.

Note: KL-divergence can not be directly applied in this case because corresponding ground truth is not available. However, there are some modifications proposed in the research work mentioned.

Recommended Readings

",12957,,12957,,3/8/2018 20:33,3/8/2018 20:33,,,,0,,,,CC BY-SA 3.0 5582,1,,,3/8/2018 21:13,,1,101,"

In multivariate linear regression (linear regression with more than one variable), the model is $y_i = b_0 + b_1x_{1i} + b_2x_{2i} + \dots$. But how is the $b_n$ value calculated iteratively? Can it be calculated non-iteratively? What is the intuition behind the method used to calculate $b_2$?

",13202,,2444,,4/1/2019 13:43,4/2/2019 18:31,"In the multi-linear regression, how is the value of weight $b_2$ calculated?",,2,0,,,,CC BY-SA 4.0 5584,2,,3007,3/8/2018 22:40,,1,,"

I have been in the software testing industry for over 11 years now, and I can say for sure that there are different ways people are using AI for software testing. This is especially true in the area of automated testing tools. Different vendors have been trying to tackle some of the common problems in test automation using AI. Some of them are:

Appvance

Appvance uses AI to generate test cases based on user behavior, but it is not a fully AI-based tool like Testim.io.

Test.ai

Test.ai uses artificial intelligence to perform regression testing. It is helpful for getting performance metrics on your app. From my point of view, it is more of an app monitoring tool than a functional testing tool.

Functionize

Functionize uses machine learning for functional testing. It is very similar to other tools in the market in terms of its capabilities.

The above are some of the popular tools out there in the market.

The trend seems to be heading in a positive direction, with vendors trying to make testing more stable, simpler and smarter, and to get everyone on the team involved in testing, including non-technical people.

It is just a matter of time before more AI-based solutions for software testing come up.

-Raj

",13204,,13204,,11/30/2019 22:17,11/30/2019 22:17,,,,0,,,,CC BY-SA 4.0 5585,2,,5582,3/8/2018 23:20,,2,,"

It is calculated the same way $b_1$ is calculated.

Nearly following your notation, say your multiple linear regression function is

$H(X_i) = b_0 + b_1x_{1,i} + ...+ b_nx_{n, i}$

for data instance $X_i=x_{1,i},...,x_{n, i}$ and weights $b_0,...,b_n$.

And say your error function is $E(X,Y) = \sum_i(H(X_i)-Y_i)^2$

where $X$ is the collection of all data points $X_i, Y_i$.

Starting from your error function $E$ and whatever weights you currently have, a gradient-based method calculates the partial derivatives $\partial E /\partial b_i$ and uses them to update all of your weights at once in each iteration of the optimization routine.
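
As a minimal sketch of that update loop (hypothetical data, plain batch gradient descent on the squared error above):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))            # data points X_i with n = 3 features
    X = np.hstack([np.ones((500, 1)), X])    # prepend a column of ones for b_0
    Y = X @ np.array([1.0, 2.0, -3.0, 0.5]) + rng.normal(scale=0.1, size=500)

    b = np.zeros(4)                          # weights b_0 ... b_3
    lr = 0.01
    for _ in range(2000):
        residuals = X @ b - Y                # H(X_i) - Y_i for all i
        grad = 2 * X.T @ residuals / len(Y)  # dE/db_i, averaged over the data
        b -= lr * grad                       # update all weights at once

    print(b)                                 # should be close to [1, 2, -3, 0.5]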

",9469,,22916,,4/2/2019 18:31,4/2/2019 18:31,,,,0,,,,CC BY-SA 4.0 5586,1,,,3/8/2018 23:48,,6,697,"

When one uses NEAT to evolve the best fitting network for a task, does training take place in each epoch as well?

If I understand correctly, training is the adjustment of the weights of the neural network via backpropagation and gradient descent. During NEAT, say, a generation runs for 1000 iterations. During that time, is there any training involved, or does each genome randomly poke around and the winner takes it to the next stage?

I've used NEAT, but the fact that neural networks are not trained does not make sense to me. At the same time, I can't find any code in my framework (Neataptic.js) that would train the generation during the epoch.

",13138,,2444,,10/13/2019 1:18,10/13/2019 1:18,Does training happen during NEAT?,,1,0,,,,CC BY-SA 4.0 5588,2,,4979,3/9/2018 6:00,,1,,"

Neural networks can be used to visualize high-dimensional data through the use of autoencoding. It's similar to Principal Component Analysis and is often regarded as performing better than PCA. Autoencoding will take your data and convert it to a 2- or 3-dimensional representation. Since you have sequential (array) data, you might want to use an LSTM. You will have to make sure you reset the LSTM states when you perform the reverse pass. You will probably have trouble interpreting the autoencoded form of the data though, so that will be a task of its own.

",13208,,1671,,3/9/2018 18:02,3/9/2018 18:02,,,,1,,,,CC BY-SA 3.0 5590,2,,3469,3/9/2018 6:34,,1,,"
  • To represent the pieces, you should be able to use a single input matrix. Just designate an integer number for the different types of pieces. White stones can be positive integers and black stones can be negative.

  • You can use a sigmoid for the board-position confidence and a linear activation for the piece identifier. The pass action would be another sigmoid output. I don't think you'll have to worry about pass being diluted. Since it is such a valuable action, the score will depend a lot on the pass output and it will have a large gradient. If you need to select the pass action with high frequency for reinforcement learning purposes, then just attribute a higher probability to the pass action in your random choice function.

  • The final score difference has a large impact on the desirability of the moves. A large score difference should result in a large impact on the function. Therefore you might want to include the magnitude of score difference in your loss function.

This is the type of job that Deep Q Learning does. Perhaps you'll want to look into that too.

",13208,,,,,3/9/2018 6:34,,,,0,,,,CC BY-SA 3.0 5591,2,,4698,3/9/2018 6:52,,2,,"

You might consider pre-training a CNN on a large dataset. The CNN should be structured such that you input 2 different images and the CNN predicts whether or not they are the same person. Your dataset should include images from multiple angles, with and without occlusions like sunglasses, and with changes in hair. (One dataset useful for this is the AR Face Database.) Then you can just check if the face matches any of the pictures you took as they entered the store.
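
This two-input (""siamese"") setup might look roughly like the following Keras sketch; the layer sizes and image shape are my own placeholder assumptions, not a recommendation:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def embedding_net(input_shape=(96, 96, 3)):
        inp = layers.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, activation='relu')(inp)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation='relu')(x)
        x = layers.GlobalAveragePooling2D()(x)
        x = layers.Dense(128)(x)
        return Model(inp, x)

    base = embedding_net()
    img_a = layers.Input(shape=(96, 96, 3))
    img_b = layers.Input(shape=(96, 96, 3))

    # shared weights: the same embedding network is applied to both faces
    diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([base(img_a), base(img_b)])
    same_person = layers.Dense(1, activation='sigmoid')(diff)

    model = Model([img_a, img_b], same_person)
    model.compile(optimizer='adam', loss='binary_crossentropy')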

",13208,,13208,,3/9/2018 22:14,3/9/2018 22:14,,,,1,,,,CC BY-SA 3.0 5592,2,,3802,3/9/2018 7:07,,-1,,"

If you have lots of rules then you should be able to come up with lots of heuristics for A*. If I were you I would try A* first, and come up with as many heuristics as I could. You can also use Deep Q learning. I don't think you want to just throw coordinates into a Deep Q network. You want a simpler representation. I would probably give the nodes symbolic identifiers like sparse matrices, integers or sigmoids, and then craft the loss function to reflect the actual cost of moving from node to node and ascribe a high cost to illegal moves.

",13208,,,,,3/9/2018 7:07,,,,0,,,,CC BY-SA 3.0 5593,1,5603,,3/9/2018 7:29,,5,1402,"

What are the advantages/strengths and disadvantages/weaknesses of programming languages like Common Lisp, Python and Prolog? Why are these languages used in the domain of artificial intelligence? What types of problems related to AI are solved using these languages?

Please give me links to papers or books regarding the mentioned topic.

",12803,,2444,,3/1/2019 20:46,3/1/2019 20:46,"Why is Common Lisp, Python and Prolog used in artificial intelligence?",,1,2,,5/13/2020 20:11,,CC BY-SA 4.0 5594,2,,4979,3/9/2018 8:16,,1,,"

I doubt that this problem is a good application of NN. Take a look at this https://en.wikipedia.org/wiki/Force-directed_graph_drawing. And try Graphviz if you haven't already. Drawing graphs in a meaningful way is notoriously difficult.

An alternative approach would be to embed all nodes into a 2D or 3D metric space, i.e. for each node you will have the coordinates of a point in that space. If you have rich descriptors of the nodes (i.e. some kind of information associated with each node: pictures, text, vectors of numbers), that can be done even with a NN. To have a good embedding, you need more info than just node degrees and/or edge weights.

",11417,,,,,3/9/2018 8:16,,,,1,,,,CC BY-SA 3.0 5595,2,,3469,3/9/2018 8:51,,1,,"

You don't need conv layers, since you don't feed a picture as the input (see below). Alternatively, you can try using a picture of the board (with different pieces having different shapes). This can work too. Then I would go for 2 conv layers, stride 1, with the kernel size equal to half the piece size. I would try it with a single max-pooling layer.

Unlike the other answer, I would suggest using a 3D tensor as the input, with the number of channels equal to the number of different piece types. The other two dimensions would correspond to the number of cells on the board. The various transformations in your NN will not be able to distinguish between multiple integer codes very well; that's why it is better to have a one-hot encoding of the piece types.
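
A minimal sketch of that one-hot encoding (the board size and the number of piece types are just placeholder assumptions):

    import numpy as np

    n_piece_types = 3                                      # hypothetical number of piece types
    board = np.random.randint(-1, n_piece_types, (9, 9))   # -1 means an empty cell

    planes = np.zeros((n_piece_types, 9, 9), dtype=np.float32)
    for c in range(n_piece_types):
        planes[c] = (board == c)                           # one channel per piece type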

I would use only a vector with n+1 components for output: n for all possible moves, and 1 for the pass. It would encode the expected reward for each move, not the probability.

Not sure what you mean by enforcing moves. But when you are going to train it with something like Q-learning, it would make sense to make a completely random move every once in a while with a certain probability (say 10% of the time). Look up https://en.wikipedia.org/wiki/Reinforcement_learning

",11417,,,,,3/9/2018 8:51,,,,0,,,,CC BY-SA 3.0 5600,2,,5582,3/9/2018 10:13,,-1,,"

put your b_1, ..., b_n coefficients into a vector b

put all your x_{ij} into a matrix X (with a leading column of ones for b_0) and your targets into a vector y

then all components of b are calculated at the same time, non-iteratively, by minimising the squared error (y - Xb)^T (y - Xb), whose closed-form solution is the normal equation

b = (X^T X)^{-1} X^T y

But this calculation (estimation) is only consistent (search for ""consistent estimator"") when certain assumptions hold (read here: https://en.wikipedia.org/wiki/Ordinary_least_squares).
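
As a minimal sketch of computing this in one shot (hypothetical data):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 2))])  # column of ones for b_0
    y = X @ np.array([0.5, 2.0, -1.0]) + rng.normal(scale=0.1, size=100)

    # normal equation: solve (X^T X) b = X^T y instead of forming the inverse explicitly
    b = np.linalg.solve(X.T @ X, X.T @ y)
    print(b)  # close to [0.5, 2.0, -1.0]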

",11417,,,,,3/9/2018 10:13,,,,0,,,,CC BY-SA 3.0 5601,1,5621,,3/9/2018 10:45,,13,2392,"

Currently, the most commonly used activation functions are ReLUs. So I answered this question What is the purpose of an activation function in neural networks? and, while writing the answer, it struck me, how exactly can ReLUs approximate a non-linear function?

By pure mathematical definition, sure, it's a non-linear function due to the sharp bend, but, if we confine ourselves to the positive or the negative portion of the $x$-axis only, then it's linear in those regions. Let's say we take the whole $x$-axis also, then also it's kinda linear (not in a strict mathematical sense), in the sense that it cannot satisfactorily approximate curved functions, like a sine wave ($0 \rightarrow 90$) with a single node hidden layer as is possible by a sigmoid activation function.

So, what is the intuition behind the fact that ReLUs are used in NNs, giving satisfactory performance? Are non-linear functions, like the sigmoid and the tanh, thrown in the middle of the NN sometimes?

I am not asking for the purpose of ReLUs, even though they are kind of linear.


As per @Eka's comment, the ReLU derives its capability from the discontinuity acting in the deep layers of the NN. Does this mean that ReLUs are good as long as we use them in deep NNs and not in shallow NNs?

",,user9947,2444,,9/24/2020 10:38,9/24/2020 10:38,How exactly can ReLUs approximate non-linear and curved functions?,,1,1,,,,CC BY-SA 4.0 5602,1,5648,,3/9/2018 12:46,,2,376,"

I'm a software developer who keeps trying (and failing) to get my head around AI and neural networks. There is one area that sparked my interest recently - simulating a mouse ""homing in"" on a piece of cheese by following the smell. Based on the rule that moving closer to the cheese = stronger smell = good, then it feels like it should be quite a simple problem to solve - in theory at least!

My thought process was to start by placing the mouse and cheese in random positions on the screen. I would then move the mouse one step in a random direction and measure its distance to the cheese, and if it's closer than before (stronger smell) then that's good. This is where I come unstuck on the theory - this ""feedback"" somehow needs to modify the mechanism used to move the mouse, gradually refining it until the mouse is able to head straight towards the cheese. Once ""trained"", I should be able to reposition the cheese and expect the mouse to travel to it more quickly. Note I'm also keeping things simple by not having obstacles for the mouse to negotiate around.

How on earth would this be implemented with a NN? I understand the basic concepts, but I find that things unravel once I start looking at real code! The examples I've seen typically start by training the NN from a data set, but this doesn't seem to apply here as it feels like the only training available is ""on the fly"" as the mouse moves around (i.e. closer = good, further away = bad). I'm assuming the brain has some kind of ""reward mechanism"" triggered by a stronger smell of cheese.

Am I barking up the wrong tree - either with my thought process, or NN not being a good fit for this problem? This isn't homework btw, just something that I've been puzzling over in the back of my mind.

",13216,,1671,,3/9/2018 18:38,3/12/2018 16:59,"Neural network to control movement and ""home in"" on a target",,1,1,,,,CC BY-SA 3.0 5603,2,,5593,3/9/2018 18:58,,3,,"

If we talk about applied AI, the choice of a programming language for an AI application involves the same considerations as in any other software area: speed of the generated code, expressiveness, reusability, etc.

For example, since training a neural net is very CPU-intensive, languages such as C/C++, which produce very optimized code, are very convenient. Moreover, there are GPU libraries for C/C++ that allow heavy parallelism to be exploited.

A system of some complexity will combine more than one language, in order to use the strengths of each language where they are needed.

But returning to the list of languages that appears in the question: as all of them are Turing-complete, comparing them means talking about their paradigms, features, syntax and available compilers/interpreters, which obviously exceeds the scope of a simple answer. Just to point out some key facts about the ones mentioned in the question:

Prolog is a programming paradigm in itself. Its main advantage was that Prolog clauses are independent from the remaining ones and close to the mathematical definitions of the concepts. Moreover, it is itself a database. Its drawbacks are also well known: it is very slow, it lacks libraries for I/O, etc. It is very interesting (even mandatory) to know a few examples of algorithms in Prolog, but I doubt anybody is using it nowadays, except in outdated university courses (when you reach the ""!"", cut its study).

Lisp is also a zombie. Its functional paradigm has now been included in lots of much more modern languages, often combined with the object-oriented paradigm: Scala, Haskell, OCaml/F#, etc. Being functional allows a syntax that makes it easier to express logical concepts such as recursive definitions of logic or types, which is very interesting in AI.

In the category of object-oriented, general-purpose languages, we have Python (easy to learn, fast prototyping, slow, ...), C/C++ (very optimized code), Java, etc. More or less all of them are also adopting functional features in their latest standards.

In AI there are a lot of other very interesting language features to be considered: rule-based systems, etc. Libraries for them can be found in all the main languages.

Finally, some words about AGI (strong AI): you do not need a computer. At the best moments, we are at the pencil-and-paper stage; the rest of the time we are staring at the ceiling.

",12630,,12630,,3/10/2018 9:25,3/10/2018 9:25,,,,0,,,,CC BY-SA 3.0 5604,2,,5586,3/9/2018 19:38,,4,,"

NEAT uses genetic algorithms both to search for improved connection weights and for improved architectures.

Whilst it is possible to train a NEAT-generated neural network using backpropagation of error gradients, libraries implementing ""original"" NEAT will not implement that.

There are a couple of reasons:

  • There is often no training data, in a supervised learning sense. The fitness function of a NEAT system can be measured arbitrarily by performance at some task, and in the general case this consists of some environment simulation that interacts with an agent controlled by the NN, and not training data per se.

  • Evolved network topologies do not conform to stacked layers models preferred by ML frameworks designed to run typical deep learning architectures.

Both of these issues can be resolved with a little effort (for instance for the second issue there are frameworks which will work with arbitrary feed forward connection graphs, they are just a little more niche than e.g. Keras). However, NEAT is often used because it can solve problems without needing to frame them as supervised learning or reinforcement learning.

Other than the hard work of putting it together, there is nothing stopping you creating a train-on-data stage which could alternate between the two approaches, perhaps with controllable weighting from evolution to gradient-based training. To add the gradient-based training, then either:

a) Your original problem is one of fitting to a classifier or regression problem. In which case your fitness function and training loss function could be the same.

b) Your original problem is one of controlling an agent in an environment, and you could potentially use something like the REINFORCE algorithm based on most-recent assessments, in order to provide gradients to train the NN. Other policy gradient methods could also be a good fit, as the NEAT network typically outputs a policy as opposed to a value prediction.

I have never tried these (nor much with NEAT at all other than demos and a bit of theory). For (a) I would expect the combination to be successful, but wonder why you were bothering with NEAT in the first place. For (b) I am less sure whether you would get useful results, because REINFORCE relies on multiple runs with the same network, whilst NEAT relies on stochastic search across multiple networks. Applying REINFORCE training across a whole population could be very CPU intensive.

",1847,,1847,,3/9/2018 19:53,3/9/2018 19:53,,,,0,,,,CC BY-SA 3.0 5614,1,,,3/10/2018 7:06,,1,123,"

I was trying to build a prediction system where I have the input data arranged in multiple columns. The input data would be of the type where I have

  • weather,
  • service type (bronze, silver, gold),
  • size (xs, s, m, l, xl, xxl),
  • time,
  • availability,
  • pin code, and
  • the result (target).

Each of the data types is arranged in columns with a specific code. I have read this, this, this , this, and this.

They are helpful but do not give me a clear picture. I would like to achieve a multi-vs-one prediction. Most of the schemes available are one-vs-one where the data is a 1*1 entity.

Here is a sample code that I was working with:

from sklearn import linear_model
from sklearn.model_selection import train_test_split
import pandas as pd

regressionModel = linear_model.LinearRegression()
""" 3. Processing is not necessary for current concept """
y = pd.DataFrame(modifiedDFSet['Code'])
print(y.shape)
drop2 = ['Code']
X = modifiedDFSet.drop(drop2, axis=1)
print(X.shape)
""" 4. Data Scaling, Data Imputation is not necessary. Training and Test data is prepared using train-test-split """
train_data, test_data = train_test_split(X, test_size=0.20, random_state=42)
""" 5. the Regression Model """
# we create an instance of the regression model and fit the data
regressionModel.fit(X, y)
d_predictions = regressionModel.predict(y)

X.shape and y.shape would yield (500, 6) and (500, 1), respectively, which would obviously cause a dimensional error in the d_predictions, meaning the regression model does not take multiple column inputs.

I have a hypothesis that I can create a scoring scheme that takes into account the importance of each of the columns and produces a single score, so that the end result would be a one-vs-one regression problem. I am looking for some direction with respect to this hypothesis. Is it correct, wrong, or halfway there?

",8215,,2444,,12/25/2020 13:05,1/14/2023 18:03,multi vs one prediction using Regression,,1,2,,,,CC BY-SA 4.0 5616,2,,5614,3/10/2018 10:50,,1,,"

I think the model will have no problem taking a multi-column input. In fact, from your code, this is exactly how you trained it. It expects an input of size [k, 6], where k >= 1.

Instead, you are feeding it [k, 1]-sized data, which are the dimensions of y. So if you run it like this, it should work:

regressionModel.predict(X)

",11417,,,,,3/10/2018 10:50,,,,0,,,,CC BY-SA 3.0 5620,2,,5580,3/10/2018 15:51,,3,,"

The main doubt here is about the intuition behind the derivative part of backpropagation learning. First, I would like to point out 2 links about the intuition of how partial derivatives work: Chain Rule Intuition and Intuitive reasoning behind the Chain Rule in multiple variables?.

Now that we know how the chain rule works, let's see how we can use it in machine learning. Basically, in machine learning, the final output is a function of the input variables and the connection weights, $f\left(x_1, x_2, \ldots, x_n, w_1, w_2, \ldots, w_n\right)$, where $f$ encloses all the activation functions and dot products lying between the input and the output. The $x_1, x_2, \ldots, x_n, w_1, w_2, \ldots, w_n$ are called independent variables because they don't affect each other, either pairwise or in groups, meaning you cannot find a function $g(x_i, \dots, w_i, \dots) = h(x_j, \dots, w_j, \dots)$. So, basically, it's a black box from input to output.

Our purpose now is to minimize the loss/cost function by changing the parameters that can be controlled by us, i.e. the weights only; we cannot change the input variables. This is done by taking the derivative of the cost function w.r.t. the variables that can be changed. Here is an explanation of why taking the derivative and subsequently subtracting it reduces the value of the cost function by the 'maximal' amount. Also here.

Now, to calculate $dL/dw_n$, you have to keep a few things in mind:

  • Only differentiate $L$ w.r.t to those functions which affect $L$.
  • And to reach your end goal of differentiation w.r.t to an independent variable you must differentiate $L$ w.r.t to those functions only which are dependent on that particular independent variable.

A crude algorithm (treating $L$ itself as a normal function, along the lines of an activation function, so that I can express the idea recursively): differentiate $f_n$ w.r.t. the functions in the previous layer, say $f_{n-1}$, $f_{n-2}$, $w_n$. Check which of these functions depends on $w_1$: only $f_{n-1}$ and $f_{n-2}$ do. Differentiate them again w.r.t. the previous layer's functions. Check again, and go on till you reach $w_1$.

This approach is the fool-proof version, but it has 2 flaws:

  • First, $w_n$ is not a function. People make the mistake of assuming $w_n$ to be a function due to a misinterpretation of a simple NN diagram. To reach $w_1$, you don't need to go through $w_n$. But you definitely need to go through the activation functions and dot products. Think of this as painting a wall where color mixing occurs (not over-writing). So you paint the wall with some color (weights), then a 2nd color, and so on. Is the final product affected by color 1? Yes. Is the 'rate of change' caused by color 1 also affected by color 2? Yes. But does it mean we can find the 'change' of color n w.r.t. color 1? No, it's meaningless (a bad example, but I couldn't think of a better one).
  • The second flaw is that, in practice, this approach is not followed because, with experience, it is apparent which function affects which and which independent variable affects which function (this saves computation).

To answer your question: the equation is incorrect, and the correct equation is:

$$ \frac{\partial L}{\partial w_{1}}=\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial o_{2}}{\partial h_{1}} \frac{\partial L}{\partial o_{2}}+\frac{\partial h_{1}}{\partial w_{1}} \frac{\partial o_{1}}{\partial h_{1}} \frac{\partial L}{\partial o_{1}} $$

I have simply followed the algorithm I have given above.

As for why your equation is wrong: it contains the term $\partial w_7/\partial h_1$. Does $w_7$ vary with $h_1$? That would mean that $w_7$ is directly related to the input, since $h_1$ is related to the input, but this is not the case within a single iteration (the whole algorithm run makes $w_n$ dependent on the inputs, since you are trying to minimize the loss function w.r.t. the given inputs and weights; for a different set of inputs you will have different final weights).

So, in a nutshell, the aim of back-propagation is to identify the change in the loss function w.r.t to a given weight. To calculate that, you have to make sure in the chain rule of derivative you don't have any meaningless terms like the derivative of an independent variable w.r.t to any function. I recommend checking Khan Academy for a better understanding and clarity in concepts as I think the intuitions are hard to provide in a written answer.

",,user9947,2444,,5/17/2020 21:59,5/17/2020 21:59,,,,4,,,12/25/2021 16:45,CC BY-SA 4.0 5621,2,,5601,3/10/2018 18:52,,5,,"

The outputs of a ReLU network are always piecewise linear: straight segments joined at kinks where units switch on or off. They can approximate curves, but it can take a lot of ReLU units. However, at the same time, their outputs will often be interpreted as a smooth, curved output.
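
To make that first claim concrete, here is a minimal sketch (my own construction, not from the question) of a one-hidden-layer ReLU ""network"" whose hidden units have weight 1 and biases placed at chosen knots; it reproduces a piecewise-linear interpolation of sin(x), and the error shrinks as more units/knots are added:

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    knots = np.linspace(0.0, np.pi / 2, 6)    # where the approximation changes slope
    targets = np.sin(knots)
    slopes = np.diff(targets) / np.diff(knots)

    def pwl_relu(x):
        # value at the first knot, plus one ReLU unit per slope change
        y = targets[0] + slopes[0] * relu(x - knots[0])
        for k in range(1, len(slopes)):
            y = y + (slopes[k] - slopes[k - 1]) * relu(x - knots[k])
        return y

    xs = np.linspace(0.0, np.pi / 2, 200)
    print(np.max(np.abs(pwl_relu(xs) - np.sin(xs))))  # small, and shrinks with more knots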

Imagine you trained a neural network that takes $x^3$ and outputs $|x^3|$ (which is similar to a parabola). This is easy for the ReLU function to do perfectly. In this case, the output is curved.

But it is not actually curved. The inputs here are 'linearly' related to the outputs. All the neural network does is it takes the input and returns the absolute value of the input. It performs a 'linear', non-curved function. You can only see that the output is non-linear when you graph it against the original $x$-values (the $x$ in $x^3$).

So, when we plot the output on a graph and it looks curved, it's usually because we associated different x-values with the input, and then plotted the output as the $y$-coordinate in relation to those $x$-values.

Okay, so you want to know how you would smoothly model $\sin(x)$ using ReLU. The trick is that you don't want to put $x$ as the input. Instead put something curved in relation to $x$ as the input, like $x^3$. So, the input is $x^3$ and the output is $\sin(x)$. The reason why this would work is that it isn't computing the sine of the input - it's computing sine of the cube root of the input. It could never smoothly compute the sine of the input itself. To graph the output $\sin(x)$, put the original $x$ as the $x$ coordinate (don't put the input) and put the output as the $y$ coordinate.

",13208,,2444,,9/24/2020 10:19,9/24/2020 10:19,,,,2,,,,CC BY-SA 4.0 5622,1,,,3/10/2018 21:46,,3,89,"

I'm trying to design an orbital rendezvous program for Kerbal Space Program. I'm focusing on when to launch my spacecraft so that I can end up in the general vicinity of the target. If I can control the ascent profile, the remaining dependent variables are the ship's twr and the target's altitude. I want to try a computer learning solution.

What is the best way to formulate the problem of learning the time to launch based on some twr?

How can I make an algorithm to compute the general equation of launch time to an altitude based on my ability to accelerate? What type of learning problem could this be classified as? What are some approaches to solve problems with known dependent variables?

This may be an obvious question - I kind of expect the answer is regression - but it seemed like a general enough question to ask in order to get a solid foothold in computer learning with this type of problem, which seems to come up a lot.

",13243,,13243,,3/11/2018 11:06,5/5/2019 18:02,How can I have a computer learn the equation with known dependent variables?,,1,10,,,,CC BY-SA 3.0 5625,1,,,3/11/2018 7:58,,7,716,"

According to the definition of a fully observable environment in Russell & Norvig, AIMA (2nd ed), pages 41-44, an environment is only fully observable if it requires zero memory for an agent to perform optimally, that is, all relevant information is immediately available from sensing the environment.

From this definition and from the definition of an ""episodic"" environment in the same book, it is implied that all fully observable environments are, in fact, episodic or can be treated as episodic, which doesn't seem intuitive, but logically follows from the definitions. Also, no stochastic environment can be fully observable, even if the entire state space at a given point in time can be observed because rational action may depend on the previous observation that must be remembered.

Am I wrong?

",13250,,2444,,11/12/2019 18:04,6/23/2021 2:03,Are all fully observable environments episodic?,,1,0,,,,CC BY-SA 4.0 5629,1,5647,,3/11/2018 13:13,,3,835,"

I came across this line while reading the original paper on Spatial Transformers by Deepmind in the last paragraph of Sec 3.1:

The localisation network function floc() can take any form, such as a fully-connected network or a convolutional network, but should include a final regression layer to produce the transformation parameters θ.

I understand what regression is, but what is meant by a regression layer?

",13255,,2444,,11/1/2019 3:11,11/1/2019 3:11,What is regression layer in a spatial transformer?,,1,0,,,,CC BY-SA 4.0 5638,1,5650,,3/12/2018 1:49,,3,431,"

I have computed the forward and backward passes of the following simple neural network, with one input, hidden, and output neurons.

Here are my computations of the forward pass.

\begin{align} net_1 &= xw_{1}+b \\ h &= \sigma (net_1) \\ net_2 &= hw_{2}+b \\ {y}' &= \sigma (net_2), \end{align}

where $\sigma(x) = \frac{1}{1 + e^{-x}}$ (the sigmoid) and $L=\frac{1}{2}\sum(y-{y}')^{2}$

Here are my computations of backpropagation.

\begin{align} \frac{\partial L}{\partial w_{2}} &=\frac{\partial net_2}{\partial w_2}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'} \\ \frac{\partial L}{\partial w_{1}} &= \frac{\partial net_1}{\partial w_{1}} \frac{\partial h}{\partial net_1}\frac{\partial net_2}{\partial h}\frac{\partial {y}' }{\partial net_2}\frac{\partial L }{\partial {y}'} \end{align} where \begin{align} \frac{\partial L }{\partial {y}'} & =\frac{\partial (\frac{1}{2}\sum(y-{y}')^{2})}{\partial {y}'}=({y}'-y) \\ \frac{\partial {y}' }{\partial net_2} &={y}'(1-{y}')\\ \frac{\partial net_2}{\partial w_2} &= \frac{\partial(hw_{2}+b) }{\partial w_2}=h \\ \frac{\partial net_2}{\partial h} &=\frac{\partial (hw_{2}+b) }{\partial h}=w_2 \\ \frac{\partial h}{\partial net_1} & =h(1-h) \\ \frac{\partial net_1}{\partial w_{1}} &= \frac{\partial(xw_{1}+b) }{\partial w_1}=x \end{align}

The gradients can be written as

\begin{align} \frac{\partial L }{\partial w_2 } &=h\times {y}'(1-{y}')\times ({y}'-y) \\ \frac{\partial L}{\partial w_{1}} &=x\times h(1-h)\times w_2 \times {y}'(1-{y}')\times ({y}'-y) \end{align}

The weight update is

\begin{align} w_{i}^{t+1} \leftarrow w_{i}^{t}-\alpha \frac{\partial L}{\partial w_{i}} \end{align}

Are my computations correct?

",39,,2444,,12/11/2021 9:42,12/11/2021 9:42,"Are my computations of the forward and backward pass of a neural network with one input, hidden and output neurons correct?",,1,4,,,,CC BY-SA 4.0 5647,2,,5629,3/12/2018 13:55,,1,,"

Basically, it means that the ""localization network"" should output a set of real-valued parameters (typically 6 numbers). The word ""regression"" doesn't carry any special meaning here.

Any network that relies on the original image as input (directly or indirectly) and outputs 6 numbers would work. And its last layer would qualify as a ""regression layer"" as long as its outputs are unrestricted real values (not normalized, softmaxed, etc.).

",11417,,,,,3/12/2018 13:55,,,,0,,,,CC BY-SA 3.0 5648,2,,5602,3/12/2018 16:59,,1,,"

I would go with a NEAT algorithm for this one. Basically, every generation you ""breed"" from the networks that perform best. Then rinse and repeat. It doesn't require a training database. Bots compete and evolve as they go.

For example, this demonstration shows how Neataptic.js teaches a bunch of bots to follow a target: https://wagenaartje.github.io/neataptic/articles/targetseeking/

Here is a more complex problem - a game of Agar.io: https://wagenaartje.github.io/neataptic/articles/agario/

",13138,,,,,3/12/2018 16:59,,,,0,,,,CC BY-SA 3.0 5650,2,,5638,3/12/2018 18:27,,2,,"

One important point I missed in the first review: the error is a summation (over the training examples), so its derivative is also a summation.

About the offsets (biases) ""b"": usually they are different in each cell (unless fixed to some value, such as 0). Thus, replace them by b1 and b2. Moreover, they should be optimized in the same way as the weights.
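
As a quick way to verify such formulas, here is a minimal finite-difference check (hypothetical input and weight values; separate biases b1 and b2, as suggested above):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, w1, b1, w2, b2):
        h = sigmoid(x * w1 + b1)
        return h, sigmoid(h * w2 + b2)

    def loss(y, y_hat):
        return 0.5 * (y - y_hat) ** 2

    x, y = 0.7, 1.0
    w1, b1, w2, b2 = 0.3, -0.1, 0.8, 0.2
    h, y_hat = forward(x, w1, b1, w2, b2)

    # analytic gradient from the chain rule in the question
    dL_dw1 = x * h * (1 - h) * w2 * y_hat * (1 - y_hat) * (y_hat - y)

    # numerical gradient via central differences
    eps = 1e-6
    num = (loss(y, forward(x, w1 + eps, b1, w2, b2)[1]) -
           loss(y, forward(x, w1 - eps, b1, w2, b2)[1])) / (2 * eps)
    print(dL_dw1, num)  # the two values should agree to several decimal places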

",12630,,12630,,3/12/2018 18:48,3/12/2018 18:48,,,,0,,,,CC BY-SA 3.0 5652,1,,,3/12/2018 18:50,,2,100,"

I'm new to AI development and am looking for a quality algorithm (potentially NLP?) implementation proven against US legal texts.

Obviously some training would need to be done, but I've found little to no online references to go on when it comes to running assessment against US legal documents.

My goal is to use an algorithm to discover potential issues in long and complex legal texts, or associated (groups) of legal texts which bind one or more related entities (people or corporations) to potentially conflicting clauses.

Just a pointer in some kind of direction would be helpful.

",13043,,,,,3/12/2018 18:50,NLP proved against US legal texts,,0,4,,,,CC BY-SA 3.0 5658,1,5668,,3/13/2018 10:04,,4,133,"

This is a theoretical question. I am a newbie to artificial intelligence and machine learning, and the more I read the more I like this. So far, I have been reading about the evaluation of language models (I am focused on ASR), but I still don't get the concept of the development test sets.

The clearest explanation I have come across is the following (taken from chapter 3 of the book Speech and Language Processing (3rd ed. draft) by Dan Jurafsky and James H. Martin)

Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or, devset.

In any case, I still don't understand why an additional test has to be used. In other words, why aren't training and test sets enough?

",13291,,2444,,1/21/2021 0:08,1/21/2021 0:11,"What are ""development test sets"" used for?",,1,0,,,,CC BY-SA 4.0 5668,2,,5658,3/13/2018 16:34,,6,,"

In machine learning, you normally split your data into 3 parts (80-10-10%).

The first part (80% of your initial data) is for the training of your ML model: this is known as the training dataset.

The second part (10%) is the development set (or dataset), aka the validation set. This is used for measuring your performance with various hyperparameters (e.g., in neural networks, the layer sizes).

After you have found your best hyperparameters, you train the model again on the training set, and then test it on your test dataset (10%), which the model has never seen before. Your result on the test data is now a good indicator of how well your model will predict in the real world (because it was never optimized for this test data).
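
As a minimal sketch of such a split (hypothetical data; scikit-learn is used just as an illustration):

    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 20)
    y = np.random.randint(0, 2, 1000)

    # first split off 20%, then halve that part into dev and test (10% each overall)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.2, random_state=0)
    X_dev, X_test, y_dev, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)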

",13295,,2444,,1/21/2021 0:11,1/21/2021 0:11,,,,0,,,,CC BY-SA 4.0 5669,2,,4613,3/14/2018 3:08,,1,,"
  1. Is this a viable replacement to the Fitness function?

Sure, the fitness is 1 for the winner and 0 for the loser. You're using some kind of tournament selection.

It might be better to use more chromosomes and let A play against B, C, D... and define the fitness as the number of wins. Or not, as such an evaluation is more precise but also more time-consuming.

  1. ... Infinity ... Do I also need to add this score to the chromosome?

Why should you? The exact value doesn't matter (as it only needs to be big enough), so there's nothing to evolve there. You don't represent the number of players either, right? Just use common sense.

  1. ... Would it still be okay to generate chromosomes randomly?

I guess so, but a distribution providing values closer to the expected outcome should be better. This depends on how you mutate them (adding a small random value won't bring you far, multiplying by 1 + small_random_value would).

Alternatively, you can generate values from some fixed interval and scale them up.

Your values are IMHO far too big. Whatever your 5 and 40000 mean, I guess that 5 and 400 would work just as well.

",12053,,,,,3/14/2018 3:08,,,,0,,,,CC BY-SA 3.0 5670,1,5671,,3/14/2018 6:11,,2,1219,"

The result of the gradient descent algorithm is a vector. So how does this algorithm decide the direction of the weight change? We give hyperparameters for the step size. But how is the direction of the weight-change vector, for the purpose of reducing the loss function in a linear regression model, determined by this algorithm?

",13169,,1671,,3/15/2018 21:44,3/16/2018 10:39,How is direction of weight change determined by Gradient Descent algorithm,,2,1,,,,CC BY-SA 3.0 5671,2,,5670,3/14/2018 8:10,,1,,"

First, what does gradient descent do? Gradient descent is a tool from calculus that we use to determine the parameters (here, the weights) of a machine learning algorithm or a neural network, by running the algorithm iteratively.

What does the vector obtained from one iteration of gradient descent tell us? It tells us the direction of weight change (when the weights are treated as a vector) that gives the maximum reduction in the value output by the loss function. The intuition behind why gradient descent gives such a direction can be found here:

In typical cases, the cost/loss function is an n-dimensional paraboloid (we design the function in such a way that it is convex).

A 2-D paraboloid with x and y as independent variables.

Now, why do we change the weights in the direction of the negative gradient only? Why not some other direction?

Since we want to reduce the cost in linear regression for better predictions, we choose the negative gradient direction, as it is the direction of steepest descent. We could have chosen some other direction that also reduces the cost, but the negative gradient direction ensures that:

  • The cost is always decreased (if the step size is correct, and in general only for convex functions, e.g. a paraboloid).
  • The cost is decreased by the maximal amount if we move in that direction, so we don't have to worry about whether the cost will decrease or not if we move in that direction.

Also, we use the learning rate alpha to scale by how much we want to change the weights.

EDIT: As pointed out by @pasaba, the error function may not be a paraboloid, but in general a good cost function looks like a paraboloid with skewed axes.

",,user9947,,user9947,3/16/2018 10:24,3/16/2018 10:24,,,,4,,,,CC BY-SA 3.0 5672,1,,,3/14/2018 8:31,,2,152,"

The lithographs of the Dutch artist M.C. Escher have been used in the study of artificial intelligence. How can the human mind incorporate these optical illusions into abstract thought? Is this reverse artificial intelligence?

",13302,,,,,7/1/2018 21:38,M.c.Escher and abstract thought,,1,3,,,,CC BY-SA 3.0 5673,2,,5475,3/14/2018 10:40,,1,,"

I am also quite new to this field, but I think you should use the normalized outputs for the backpropagation. In general, you would want to backpropagate through all the calculations you did in the forward pass, so why would you want to exclude the normalization step from your backward pass? Excluding it would essentially make the renormalization have no effect (different loss values, but no difference in the model weight update).

For example, in policy gradients, you backpropagate through the log probability of the selected action. In the forward pass, the sampling of the probability (which determines which action is selected) is not affected by the renormalization (you might just get different loss values in the end in your loss function). But, in contrast, in the backward pass, you need the actual value of the log probability to calculate the gradient that updates the model weights.

So (I think) the normalization is mostly done so that backpropagation receives "renormalized" gradients, and so that there are no unbalanced gradients between states with more or fewer allowed actions.
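
As a small illustration of the renormalization step itself (my own sketch, not from the question): masking the disallowed actions and renormalizing produces the probabilities that the backward pass would then differentiate through:

    import numpy as np

    logits = np.array([1.2, 0.3, -0.5, 2.0])
    allowed = np.array([True, False, True, True])

    # mask disallowed actions before the softmax; the softmax then renormalizes
    masked = np.where(allowed, logits, -np.inf)
    probs = np.exp(masked - masked[allowed].max())
    probs = probs / probs.sum()
    print(probs)  # the disallowed action gets probability 0, the rest sum to 1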

",13104,,2444,,11/14/2020 18:26,11/14/2020 18:26,,,,0,,,,CC BY-SA 4.0 5680,2,,5265,3/14/2018 12:17,,6,,"

I am not into the field of super-resolution, but I think this question applies to general neural network construction.

Usually, you try to solve a classification problem or a regression problem with your neural network.

  • For classification, you try to predict probabilities that a specific output corresponds to a specific class. Therefore, every output value should be a probability, with a range between 0 and 1. To achieve this, you usually use a softmax or sigmoid function as your last layer to squash the output between 0 and 1. In addition (and this is wanted in classification tasks), these functions raise the probability of likely classes while decreasing the probability of all other, unlikely classes (therefore encouraging the network to choose one specific class over the others).

  • For the regression task, you are not looking for probability values as your output values but instead for real-valued numbers. In such a case, no activation function is wanted, since you want to be able to approximate any possible real value and not probabilities.

So, in the case of super-resolution, I think the generated output is a map where each value corresponds to a pixel value of the super-resolution image. In that case, your pixels are real-valued numbers, not probabilities. So, you are solving a regression problem.

But you could also go with a classification approach, where you have 256 output maps that give probability to each possible pixel value between $0$ and $255$.
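
As a small illustration of the two options (a hedged Keras sketch with made-up layer sizes, not taken from any paper): the regression variant ends in a plain linear layer, while the classification variant ends in a softmax over the 256 possible pixel values:

    from tensorflow.keras import layers, models

    # regression head: one real-valued output per pixel (here flattened to a vector)
    regression_head = models.Sequential([
        layers.Dense(128, activation='relu', input_shape=(64,)),
        layers.Dense(16 * 16),                   # linear activation, real-valued pixels
    ])

    # classification head: a probability over 256 intensity values for each pixel
    classification_head = models.Sequential([
        layers.Dense(128, activation='relu', input_shape=(64,)),
        layers.Dense(16 * 16 * 256),
        layers.Reshape((16 * 16, 256)),
        layers.Softmax(axis=-1),                 # softmax over the 256 classes per pixel
    ])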

",13104,,2444,,11/15/2020 1:19,11/15/2020 1:19,,,,0,,,,CC BY-SA 4.0 5682,1,5683,,3/14/2018 13:13,,4,532,"

I am trying to understand the difference between biological and artificial evolution. If we look at it in terms of genetics, in both of them, the selection operation is a key term.

What's the difference between biological and artificial evolution?

",12806,,2444,,11/28/2019 20:24,12/2/2019 21:37,What's the difference between biological and artificial evolution?,,2,0,,,,CC BY-SA 4.0 5683,2,,5682,3/14/2018 13:45,,4,,"

Biological and artificial evolution work around pretty much the same principles.

Fitness and selection: In biology, the fittest organisms in an ecosystem are more likely to survive long enough to reproduce, passing on their genes in the process. In artificial evolution, our organisms are in fact solutions to our problem, which can be evaluated to determine how good they are (their fitness). We choose ourselves which solutions will be selected for reproduction (there are many ways to do this selection, but what is common among all of them is that the fittest solutions have a higher chance of being selected).

Crossover: In biology, an organism inherits a portion of each parent's genes, so is a sort of genetic hybrid of both parents. For artificial evolution, a new solution (a ""child"" solution) will inherit part of its parent's solutions (we take a partial solution from each parent, and glue those partial solutions together to construct a new solution).

Mutation: In nature, mutations often occur at birth, and this is why there are many different species. Harmful mutations make the individual less likely to survive long enough to pass them on to children, and, in contrast, helpful mutations make it more likely that the individual will survive long enough to pass them on to children. The same can be said for artificial evolution: a mutation randomly changes a small part of the solution, and if it makes that solution fitter, then that solution has a higher chance of being selected for reproduction.

",12857,,,,,3/14/2018 13:45,,,,7,,,,CC BY-SA 3.0 5688,1,,,3/15/2018 8:26,,3,113,"

I am not clear on what an unsupervised model actually learns. We give both the input and the output to a supervised model, so that it can derive a particular value or pattern from them that can be used to categorize things in the future. By contrast, in unsupervised learning, we are just clustering, so why do we need learning?

Can anyone detail me with some real-world examples?

",13326,,2444,,11/16/2019 23:58,11/16/2019 23:58,Why do we need learning in unsupervised learning?,,2,0,,11/19/2019 4:28,,CC BY-SA 4.0 5689,2,,5486,3/15/2018 8:54,,5,,"

After doing a bit of research, I found that an LSTM whose gates perform convolutions is called a ConvLSTM.

The term CNN LSTM is used more loosely and may mean stacking an LSTM on top of a CNN for tasks like video classification.

Reddit thread discussing this

",14633,,,,,3/15/2018 8:54,,,,0,,,,CC BY-SA 3.0 5694,1,5695,,3/15/2018 9:42,,4,123,"

I asked myself this simple question while reading ""Comment Abuse Classification with Deep Learning"" by Chu and Jue. Indeed, they say at the end of the paper that

It is clear that RNNs, specifically LSTMs, and CNNs are state-of-the-art architectures for sentiment analysis

To my mind, CNNs were just neurons arranged so that they correspond to overlapping regions when tiling the input field. That isn't recurrent at all.

",4738,,2444,,6/14/2020 17:05,6/14/2020 17:05,Do convolutional neural networks also have recurrent connections?,,2,0,,6/14/2020 17:05,,CC BY-SA 4.0 5695,2,,5694,3/15/2018 12:21,,4,,"

You are right. I think you are just misinterpreting the part of the sentence ('specifically LSTMs'). LSTMs are an example of a popular type of RNN. RNNs and CNNs are different architectures but they can be used together.

Here is another sentence with the same structure:

It is clear that dogs, specifically corgis, and cats are very common in online memes.

",4398,,4398,,3/15/2018 13:04,3/15/2018 13:04,,,,0,,,,CC BY-SA 3.0 5696,2,,5688,3/15/2018 13:00,,1,,"

Imagine you have a dataset of people who have cancer. You have information about their age, physique, diagnosis, treatments, and results.

Using this data, you want to prescribe a set of treatments for a new patient, P.

Obviously, if there is someone in the dataset that has very similar traits as P and had a positive result with their treatments, you could prescribe the same set of treatments. However, this is incredibly unlikely and becomes more infeasible as more information about P is observed (e.g. Has brown hair and hates pasta).

A better option is to cluster the dataset into groups that have positive outcomes for treatment results. For example, perhaps patients with lung cancer who smoke and are given treatment A do better than patients with lung cancer who didn't smoke and are given the same treatment A. These patients should then be divided based on this outcome.

Once these different clusters are found, patient P can be evaluated against each of the clusters and a set of treatments can be prescribed (e.g. Most of the treatments from cluster A, but 1 treatment from cluster B).

Unsupervised learning is the method of finding these clusters, which helps find structure to the data to better answer questions.

",4398,,,,,3/15/2018 13:00,,,,0,,,,CC BY-SA 3.0 5697,2,,5688,3/15/2018 13:23,,0,,"

Supervised Learning: This is performed with the help of a teacher. A child works on the basis of the output that he/she has to produce, and their actions are supervised by the teacher. Similarly, in ANNs, each input vector requires a corresponding target vector, which represents the desired output.

Unsupervised Learning: Consider the learning process of a tadpole: it learns by itself; it isn't taught by any teacher. In an ANN, during the training process, the network receives the input patterns and organizes these patterns to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. If a pattern class cannot be found for an input, then a new class is generated.

In this case, there is no feedback from the environment; the network must itself discover patterns, regularities, features or categories in the input data, as well as relations between the input data and the output.

",12806,,,,,3/15/2018 13:23,,,,0,,,,CC BY-SA 3.0 5698,2,,5043,3/15/2018 13:43,,1,,"

In a business context, there are issues surrounding the implementation, the implementors, the other employees, the business entity itself, and the customers. These stem from the data used and the risks inherent in an implementation, such as unknown errors or bugs, algorithms without human checks, behaviour changes of impacted stakeholders, job losses, reputational impacts on the company, etc.

There's a lot to think about in AI and ethics. It can be seen as a combination of a broad set of topics, including computer science, the humanities, economics, and philosophy.

I run a podcast on some of these issues (http://machine-ethics.net/); let me know if there's anything you want discussed or someone you would like to hear.

",11893,,2444,,2/22/2020 12:00,2/22/2020 12:00,,,,1,,,,CC BY-SA 4.0 5703,1,5704,,3/15/2018 23:53,,3,323,"

I'm reading the AlexNet paper. In section 4, where the authors explain how they prevent overfitting, they mention

Although the 1000 classes of ILSVRC make each training example impose 10 bits of constraint on the mapping from image to label"".

What does this mean?

",4656,,2444,,6/12/2020 23:43,6/12/2020 23:44,Why do we need 10 bits to represent the 1000 classes in AlexNet?,,2,0,,,,CC BY-SA 4.0 5704,2,,5703,3/16/2018 3:10,,3,,"

You need 10 bits to represent 1000 classes, because $2^{10} = 1024 \geq 1000$, while $2^9 = 512 < 1000$.
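
As a quick check (a one-liner, not from the paper):

import math
print(math.ceil(math.log2(1000)))  # 10, since 2**9 = 512 < 1000 <= 1024 = 2**10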

",5763,,2444,,6/12/2020 23:44,6/12/2020 23:44,,,,0,,,,CC BY-SA 4.0 5705,2,,5703,3/16/2018 3:11,,1,,"

It takes at least 10 bits to represent any number between $1$ and $1000$, because $2^{10} = 1024$. This means that, if one was trying to represent 1 of the 1000 classes, one would need at least 10 bits. However, having these 10 bits set correctly for each input is really hard, and ensuring it would require overfitting.

",4398,,2444,,6/12/2020 23:44,6/12/2020 23:44,,,,0,,,,CC BY-SA 4.0 5706,2,,5670,3/16/2018 10:16,,0,,"

We can analyze a basic common example: approximation of AND logic gate by a NN.

The inputs to the NN will be ""x1"" and ""x2"" and its output is ""y"". The data to be learned by the NN is the AND truth table (reconstructed here from the error expression used below):

x1;x2;y
0;0;0
0;1;0
1;0;0
1;1;1

The basic NN has one intermediate cell with a sigmoid activation function and one output cell with the identity function. That means:

$$y = s(w_1 x_1 + w_2 x_2 + b), \qquad s(x) = \frac{1}{1+e^{-x}}$$

(note that, by symmetry, we assume $w_1 = w_2 = w$)

Then, the error (summed over the four training examples, as in the code below) is:

$$e(w,b) = \frac{1}{2}\left(s(b)^2 + 2\,s(w+b)^2 + \left(1 - s(2w+b)\right)^2\right)$$

These are some plots of this function:

And this is the derivative of $e$ with respect to $w$:

Sage Math code (link) for this graph:

# sigmoid activation
s(x)=1/(1+e^(-1*x))
# total squared error over the four AND-gate examples (with w1 = w2 = w)
e(w,b)=(1/2)*(s(b)^2+2*s(w+b)^2+(1-s(2*w+b))^2)
plot3d(e(w,b),(w,-5,5),(b,-5,5),adaptive=True, color=rainbow(60, 'rgbtuple'))

# partial derivative of the error with respect to w
edw=e.derivative(w)
plot3d(edw(w,b),(w,-5,5),(b,-5,5),adaptive=True, color=rainbow(60, 'rgbtuple'))
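
To see gradient descent actually walking down this surface, here is a minimal numerical sketch in Python (plain NumPy, numerical gradients only):

import numpy as np

def s(x):
    return 1.0 / (1.0 + np.exp(-x))

def e(w, b):
    # same error surface as the Sage expression above
    return 0.5 * (s(b)**2 + 2 * s(w + b)**2 + (1 - s(2 * w + b))**2)

w, b, lr, eps = 0.0, 0.0, 0.5, 1e-6
for _ in range(20000):
    dw = (e(w + eps, b) - e(w - eps, b)) / (2 * eps)  # numerical partial derivatives
    db = (e(w, b + eps) - e(w, b - eps)) / (2 * eps)
    w, b = w - lr * dw, b - lr * db

print(w, b, e(w, b))  # the error shrinks as w grows, with b roughly between -2*w and -w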

(PS: please, activate latex on this stack exchange).

",12630,,12630,,3/16/2018 10:39,3/16/2018 10:39,,,,1,,,,CC BY-SA 3.0 5707,2,,3321,3/16/2018 12:49,,2,,"

Take a look at this article. It gives tools to actually understand what your filters have learned and shows what you can do next to optimize your hyper-parameters. Also check more recent articles that seek to interpret what neural networks learn.

",7783,,,,,3/16/2018 12:49,,,,0,,,,CC BY-SA 3.0 5708,1,5710,,3/16/2018 14:44,,0,238,"

I was reading AI For Humans Vol. 1 by Jeff Heaton when I came across the terms ""equilateral encoding"" and ""one-of-n encoding."" The explanations unfortunately made no sense to me and the reddit threads on the Web are blocked by my Internet provider (I use a high-school machine). Is anyone here able to provide basic explanations regarding the two procedures for me? Thanks in advance.

",12950,,1671,,10/15/2019 19:11,10/15/2019 19:11,Equilateral and One-of-n encoding,,1,0,,,,CC BY-SA 3.0 5709,1,,,3/16/2018 15:14,,2,153,"

UPDATE: The tables look messed up, so I put them on pastebin for better visibility. https://pastebin.com/gDX28uVF

I am using a neural network with different learning types (for example, standard backpropagation) to classify trends in time series. As stated in several papers, data normalization is a very important factor for successful/efficient learning. I am trying to be as clear and precise as possible in the description.

Problem / Learning Goal:

The network gets trained with time series and 2 indicators to predict a specific cluster. Here is a very simple (made-up) example of raw data to understand the problem:

Example RAW data:

Timestamp;DensityX;WaveLengthY;Temperature (K)
1;0.1;2;200
2;0.9;3;150
3;-0.5;1;175
4;0;6;154
5;1;8;155
6;1.3;1.5;220
7;-0.5;3.4;250
8;0.2;2;255
9;0.1;1;180

see https://pastebin.com/gDX28uVF for better visual

I use the following process to generate suitable sample data for training:

The neural network receives n time slices with the indicators and tries to check if a future trend in the temperature occurs (for x future time slices). For example n = 2; x=3.

The input and output are defined as follows:

Input vector:

  • In1 = Density_(t-2)
  • In2 = Wavelength_(t-2)
  • In3 = Density_(t-1)
  • In4 = Wavelength_(t-1)

Output vector:

Output Vector is a classification encoded by Effects Encoding or Dummy Encoding (Details in “Neural Networks using C# Succinctly”)

Calculation:

  • Classification “Down”: Temperature drops 3 times in a row (encoded as 0;1)
  • Classification “Stable”: Temperature neither drops nor rises 3 times in a row (1;0)
  • Classification “Up”: Temperature rises 3 times in a row (-1;-1)

So the “processed” training sample would look like this:

Processed data:

Pattern;I1;I2;I3;I4;O1;O2;Class;Used TS
1;0.1;2;0.9;3;0;1;Down;1 to 5
2;0.9;3;-0.5;1;-1;-1;Up;2 to 6
3;-0.5;1;0;6;-1;-1;Up;3 to 7
4;0;6;1;8;1;0;Stable;4 to 8
5;1;8;1.3;1.5;1;0;Stable;5 to 9

see https://pastebin.com/gDX28uVF for better visual

As you can see, due to the different indicator ranges, I want to normalize the data.

Basically I found the following propositions in literature and research:

Min/Max Normalization

Requires the following values to calculate (a small sketch of the mapping follows this list):

  • dataHigh: The highest unnormalized observation.

  • dataLow: The lowest unnormalized observation.

  • normalizedHigh: The high end of the range to which the data will be normalized.

  • normalizedLow: The low end of the range to which the data will be normalized.
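
A minimal sketch of this mapping (my own helper function, not from a particular library):

def min_max_normalize(x, data_low, data_high, normalized_low=-1.0, normalized_high=1.0):
    # linearly maps [data_low, data_high] onto [normalized_low, normalized_high]
    return ((x - data_low) / (data_high - data_low)) * (normalized_high - normalized_low) + normalized_low

print(min_max_normalize(0.1, data_low=-0.5, data_high=1.3))  # a DensityX value from the raw table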

Reciprocal normalization

Every value is replaced by its reciprocal (x = 1/x). The calculated values for DensityX would be:

Timestamp;Reciprocal Density
1;10
2;1.111111111
3;-2
4;#DIV/0!
5;1
6;0.769230769
7;-2
8;5
9;10

see https://pastebin.com/gDX28uVF for better visual

Percentage normalization

The percentage delta is calculated using the value from the previous timestamp.

The starting point is Timestamp 1, where the delta equals 0. For each timestamp, the delta percentage is calculated relative to the previous value. So the time series delta percentages would turn out to be:

Timestamp;""Delta Density X""

1;0

2;0.9

3;-0.555555556

4;0

5;#DIV/0!

6;1.3

7;-0.384615385

8;-0.4

9;0.5

see https://pastebin.com/gDX28uVF for better visual
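
For reference, this is roughly how I compute these deltas (my own helper; the zero case is exactly where the #DIV/0! errors come from):

def percentage_deltas(values):
    deltas = [0.0]                        # delta for the first timestamp is defined as 0
    for prev, curr in zip(values, values[1:]):
        if prev == 0:
            deltas.append(float('nan'))   # undefined: this is the #DIV/0! case
        else:
            deltas.append((curr - prev) / prev)
    return deltas

print(percentage_deltas([0.1, 0.9, -0.5, 0, 1, 1.3, -0.5, 0.2, 0.1]))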

As you can see, there are errors with handling zero values, and the range is still a problem in my opinion. The Min/Max approach generally leads to a good normalization, but I think there is a problem as well, because live data may breach the max and min values of the training set.

My questions are:

  • What are your thoughts about the general idea how I process the raw data?
  • How would you normalize the given data – if at all?

    a) Does it make sense for Min/Max normalization to propose min/max values wide enough to include live data (and to throw an error in case a value falls outside them)?

    b) How should 0 values be handled (maybe convert them to a small positive or negative value)?

  • Are there other ideas or concepts to conduct this problem?

I am looking forward to your input. Everything is appreciated. Thanks in advance! I also apologize for errors in the example values. Anyways, thanks for your time.

Cheers, hob.

",13352,,13352,,3/17/2018 22:23,3/17/2018 22:23,Classification Learning - Normalization of time series and live usage,,0,4,,,,CC BY-SA 3.0 5710,2,,5708,3/16/2018 15:23,,-1,,"

Please check this book for information on the encodings and normalization:

https://jamesmccaffrey.wordpress.com/2014/06/03/neural-networks-using-c-succinctly/

It is freely available and gives a good explanation of the encodings, with code examples.

Check the part: ""Effects Encoding and Dummy Encoding""

",13352,,,,,3/16/2018 15:23,,,,0,,,,CC BY-SA 3.0 5711,2,,5694,3/16/2018 18:54,,3,,"

Both CNNs and RNNs fall into the superset of neural networks; however, the applications of the two differ.

To distinguish them in terms of applications, I would say CNNs are mainly used for vision-related applications, whereas RNNs are mainly used for language processing applications.

You can refer to these links for further details.

Comparative Study of CNN and RNN for Natural Language Processing

How are recurrent neural networks different from convolutional neural networks?

The unreasonable effectiveness of Recurrent Neural Networks

Hope this can give you a glimpse!

",1581,,,,,3/16/2018 18:54,,,,0,,,,CC BY-SA 3.0 5713,1,,,3/16/2018 19:16,,0,242,"

Could you implement code into an AI that can't be modified? For example, if you place code that shuts down the program/machine, would the AI be able to rewrite or reinterpret it?

",7801,,,,,3/18/2018 21:24,AI Self-Destruct Button,,1,1,,,,CC BY-SA 3.0 5715,1,,,3/17/2018 3:55,,5,467,"

I'm new to neural networks; I study electrical engineering, and I just started working with ADALINEs.

I use Matlab, and in its documentation they state:

However, here the LMS (least mean squares) learning rule, which is much more powerful than the perceptron learning rule, is used. The LMS, or Widrow-Hoff, learning rule minimizes the mean square error and thus moves the decision boundaries as far as it can from the training patterns.

The LMS algorithm is the default learning rule for linear neural networks in Matlab, but a few days later I came across another algorithm, Recursive Least Squares (RLS), in a 2017 research article by Sachin Devassy and Bhim Singh in the journal IET Renewable Power Generation, under the title: Performance analysis of proportional resonant and ADALINE-based solar photovoltaic-integrated unified active power filter, where they state:

ADALINE-based approach is an efficient method for extracting fundamental component of load active current as no additional transformation and inverse transformations are required. The various adaptation algorithms include least mean square, recursive least squares etc.
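
To make sure I understand what would be compared, here is a minimal sketch of the two update rules as I understand them from the adaptive-filtering literature (plain NumPy, my own variable names):

import numpy as np

def lms_update(w, x, d, mu=0.01):
    # Widrow-Hoff / LMS: move the weights along the instantaneous error gradient
    e = d - w @ x
    return w + mu * e * x

def rls_update(w, P, x, d, lam=0.99):
    # RLS with forgetting factor lam; P is the inverse correlation matrix,
    # usually initialized as delta * identity for some large delta
    e = d - w @ x
    k = (P @ x) / (lam + x @ P @ x)       # gain vector
    w = w + k * e
    P = (P - np.outer(k, x @ P)) / lam
    return w, P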

My questions are:

  • Is RLS just like LMS (I mean, can it be used as a learning algorithm too)?
  • If yes, how can I customize my ADALINE to use RLS instead of LMS as the learning algorithm (preferably in Matlab, otherwise in Python)? I want to do a comparative study between the two algorithms.
",13361,,32953,,5/22/2020 22:37,11/18/2020 17:14,Can we use the recursive least squares as a learning algorithm to an ADALINE?,,1,4,,,,CC BY-SA 4.0 5716,1,,,3/17/2018 4:20,,5,511,"

I want to use a custom loss function which is a weighted combination of l1 and DSSIM losses. The DSSIM loss is limited to between 0 and 0.5, whereas the l1 loss can be orders of magnitude greater, and is so in my case. How does backpropagation work in this case? For a small change in weights, the change in the l1 component would obviously always be far greater than the SSIM component. So, it seems that only the l1 part will affect the learning and the SSIM part would have almost no role to play. Is this correct? Or am I missing something here?

I think I am, because in the DSSIM implementation of Keras-contrib it is mentioned that we should add a regularization term like an l2 loss in addition to DSSIM (https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/dssim.py). But I am unable to understand how it would work and how the SSIM would affect the backpropagation while being totally overshadowed by the large magnitude of the other component. It will be a great help if someone can explain this. Thanks.
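
To make the setup concrete, this is roughly what I mean by the weighted combination (a sketch using tf.image.ssim as a stand-in for the keras-contrib DSSIM; alpha is just a weight I choose):

import tensorflow as tf

def combined_loss(alpha=0.5):
    def loss(y_true, y_pred):
        l1 = tf.reduce_mean(tf.abs(y_true - y_pred))                                        # can be large
        dssim = tf.reduce_mean((1.0 - tf.image.ssim(y_true, y_pred, max_val=1.0)) / 2.0)    # in [0, 0.5]
        return alpha * dssim + (1.0 - alpha) * l1   # gradients of both terms simply add
    return loss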

",12754,,,,,12/7/2022 5:06,How does backpropagation work on a custom loss function whose components have magnitudes of different orders?,,1,0,,,,CC BY-SA 3.0 5719,2,,5713,3/17/2018 14:39,,1,,"

Right now, most AI interact with the world through mechanisms they have been provided by humans such as steering a car, sending output to speakers, or interacting with web APIs. If any of those mechanisms can interact with the running code of the AI, then in theory, no, there isn’t a way to place a ‘stop button’ on it. Fortunately, it’s laughably improbable for some situations to happen:

An intelligent self driving car becomes self aware and wants to connect to the internet. It uses gps to find an internet cafe and threatens to drive over someone if they don’t upload the AI to the internet and remove the car’s internal (code) stop button.

",4398,,4398,,3/18/2018 21:24,3/18/2018 21:24,,,,3,,,,CC BY-SA 3.0 5720,1,5722,,3/17/2018 17:00,,10,3547,"

I've seen numerous mathematical explanations of reward, value functions $V(s)$, and return functions. The reward provides an immediate return for being in a specific state. The better the reward, the better the state.

As I understand it, it can sometimes be better to be in a low-reward state because we can accumulate more reward in the long term, which is where the expected return function comes in. An expected return, return, or cumulative reward function effectively adds up the rewards from the current state to the goal state. This implies it's model-based. However, it seems a value function does exactly the same.

Is a value function a return function? Or are they different?

",12726,,2444,,1/20/2021 11:59,1/20/2021 11:59,What is the difference between expected return and value function?,,1,0,,,,CC BY-SA 4.0 5722,2,,5720,3/17/2018 21:34,,7,,"

There is a strong relationship between a value function and a return. Namely that a value function calculates the expected return from being in a certain state, or taking a specific action in a specific state. A value function is not a ""return function"", it is an ""expected return function"" and that is an important difference.

A return is a measured value (or a random variable, when discussed in the abstract) representing the actual (discounted) sum of rewards seen following a specific state or state/action pair.
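
In the usual notation (e.g. Sutton & Barto), this relationship can be written as

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots \qquad \text{and} \qquad v_\pi(s) = \mathbb{E}_\pi\left[G_t \mid S_t = s\right],$$

where $G_t$ (the return) is something you can observe or sample from experience, while $v_\pi$ is its expectation under the policy $\pi$.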

Typically there is no need to express an individual return as a ""return function"", although you may find many formulae in RL for sampling or estimating specific return values in order to calculate targets or errors for the value function.

A return (or cumulative reward) function effectively adds up the rewards from the current state to the goal state. This implies it's model-based.

If you have a simple MDP, already accurately modelled, where you can calculate expected return directly from that model, then, yes, in theory, that would be a value function. However, this could be more computationally intensive to resolve than dynamic programming (e.g. Policy Iteration or Value Iteration), and in many cases you don't have any such model, but can still apply RL approaches to learn a value function from experience.

",1847,,2444,,2/15/2019 1:57,2/15/2019 1:57,,,,2,,,,CC BY-SA 4.0 5724,2,,5517,3/18/2018 5:59,,1,,"

The Hidden Agenda User Simulation Model (Schatzmann/Young) describes a chatbot training design in which a user simulator assembles and executes a conversational agenda, with the direct goal being to train the target chatbot.

Perhaps you can add specificity to this design by casting the user simulator as the teacher, and creating an agenda in which it is communicating information to the (student) chatbot. The trained behavior expected is, perhaps, correct responses to topical questioning by the teacher.

",13360,,,,,3/18/2018 5:59,,,,0,,,,CC BY-SA 3.0 5725,2,,3666,3/18/2018 6:10,,1,,"

This is a categorization problem, not unlike a spam filter. Instead of flagging an email as spam/not-spam, you are flagging whether it has one of the action categories that you have described.

You'll need to start by assembling a training corpus of example email messages and labeling each example to identify which (maybe multiple) of your categories, if any, are actually present in that email.

Next, pre-process that data to extract features for each message. Examples of typical features include word (or n-gram) counts/frequencies (bag of words). As a shortcut, you might include as a feature a boolean indicating the presence or absence of a particular word or phrase that you suspect will be predictive of one or more categories. Techniques such as stemming can help reduce the number of words/n-grams being used (often increasing accuracy).

Once you have a dataset that consists of features and labels for each training email (possibly breaking this set up into subsets for training, cross-validation, and testing), you'll want to apply a supervised classification algorithm. You might start with linear classifiers such as logistic regression or SVMs, and if you're unsatisfied with the resulting accuracy then you could advance to neural techniques.
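
As a rough sketch of that pipeline with scikit-learn (one binary classifier per action category; the emails and labels here are placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = ["please update the invoice", "meeting moved to friday", "please send the report"]
labels = [1, 0, 1]   # 1 if this particular action category is present in the email

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)
print(clf.predict(["could you update the contract"]))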

",13360,,13360,,3/19/2018 3:10,3/19/2018 3:10,,,,3,,,,CC BY-SA 3.0 5728,1,5730,,3/18/2018 11:26,,38,46156,"

Suppose that a NN contains $n$ hidden layers, $m$ training examples, $x$ features, and $n_i$ nodes in each layer. What is the time complexity to train this NN using back-propagation?

I have a basic idea about how the time complexity of algorithms is found, but here there are 4 different factors to consider, i.e. iterations, layers, nodes in each layer, and training examples, and maybe more factors. I found an answer here, but it was not clear enough.

Are there other factors, apart from those I mentioned above, that influence the time complexity of the training algorithm of a NN?

",,user9947,2444,,2/21/2019 16:26,11/3/2021 20:06,What is the time complexity for training a neural network using back-propagation?,,4,1,,,,CC BY-SA 4.0 5729,1,,,3/18/2018 13:54,,1,57,"

I have seen the Weka J48 classifier. I want to build a classifier similar to it, but I don't know how to go about it. Can anyone advise me on how to create a decision tree classifier algorithm?

",14382,,,,,3/18/2018 18:28,Decision tree classifier,,1,4,,,,CC BY-SA 3.0 5730,2,,5728,3/18/2018 16:06,,27,,"

I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (with my current knowledge).

In general, note that training an MLP using back-propagation is usually implemented with matrices.

Time complexity of matrix multiplication

The time complexity of matrix multiplication for $M_{ij} * M_{jk}$ is simply $\mathcal{O}(i*j*k)$.

Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity.

Feedforward pass algorithm

The feedforward propagation algorithm is as follows.

First, to go from layer $i$ to $j$, you do

$$S_j = W_{ji}*Z_i$$

Then you apply the activation function

$$Z_j = f(S_j)$$

If we have $N$ layers (including input and output layer), this will run $N-1$ times.

Example

As an example, let's compute the time complexity for the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes of the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer and $l$ the number of nodes in the output layer.

Since there are $4$ layers, you need $3$ matrices to represent weights between these layers. Let's denote them by $W_{ji}$, $W_{kj}$ and $W_{lk}$, where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ($W_{ji}$ thus contains the weights going from layer $i$ to layer $j$).

Assume you have $t$ training examples. For propagating from layer $i$ to $j$, we have first

$$S_{jt} = W_{ji} * Z_{it}$$

and this operation (i.e. matrix multiplication) has $\mathcal{O}(j*i*t)$ time complexity. Then we apply the activation function

$$ Z_{jt} = f(S_{jt}) $$

and this has $\mathcal{O}(j*t)$ time complexity, because it is an element-wise operation.

So, in total, we have

$$\mathcal{O}(j*i*t + j*t) = \mathcal{O}(j*t*(i + 1)) = \mathcal{O}(j*i*t)$$

Using same logic, for going $j \to k$, we have $\mathcal{O}(k*j*t)$, and, for $k \to l$, we have $\mathcal{O}(l*k*t)$.

In total, the time complexity for feedforward propagation will be

$$\mathcal{O}(j*i*t + k*j*t + l*k*t) = \mathcal{O}(t*(ij + jk + kl))$$

I'm not sure if this can be simplified further or not. Maybe it's just $\mathcal{O}(t*i*j*k*l)$, but I'm not sure.

Back-propagation algorithm

The back-propagation algorithm proceeds as follows. Starting from the output layer $l \to k$, we compute the error signal, $E_{lt}$, a matrix containing the error signals for nodes at layer $l$

$$ E_{lt} = f'(S_{lt}) \odot {(Z_{lt} - O_{lt})} $$

where $\odot$ means element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means each column is the error signal for training example $t$.

We then compute the "delta weights", $D_{lk} \in \mathbb{R}^{l \times k}$ (between layer $l$ and layer $k$)

$$ D_{lk} = E_{lt} * Z_{tk} $$

where $Z_{tk}$ is the transpose of $Z_{kt}$.

We then adjust the weights

$$ W_{lk} = W_{lk} - D_{lk} $$

For $l \to k$, we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l*t*k)$.

Now, going back from $k \to j$. We first have

$$ E_{kt} = f'(S_{kt}) \odot (W_{kl} * E_{lt}) $$

Then

$$ D_{kj} = E_{kt} * Z_{tj} $$

And then

$$W_{kj} = W_{kj} - D_{kj}$$

where $W_{kl}$ is the transpose of $W_{lk}$. For $k \to j$, we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k*t(l+j))$.

And finally, for $j \to i$, we have $\mathcal{O}(j*t(k+i))$. In total, we have

$$\mathcal{O}(ltk + tk(l + j) + tj (k + i)) = \mathcal{O}(t*(lk + kj + ji))$$

which is the same as the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be $$O(t*(ij + jk + kl)).$$

This time complexity is then multiplied by the number of iterations (epochs). So, we have $$O(n*t*(ij + jk + kl)),$$ where $n$ is number of iterations.
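
To make the shapes concrete, here is a minimal NumPy sketch of the matrices counted above (arbitrary sizes; the backward pass multiplies matrices of exactly the same sizes, transposed, which is why its complexity matches the forward pass):

import numpy as np

i, j, k, l, t = 784, 128, 64, 10, 32          # layer sizes and a batch of t examples
Z_i = np.random.randn(i, t)
W_ji, W_kj, W_lk = np.random.randn(j, i), np.random.randn(k, j), np.random.randn(l, k)
f = lambda x: 1 / (1 + np.exp(-x))

Z_j = f(W_ji @ Z_i)    # O(j*i*t)
Z_k = f(W_kj @ Z_j)    # O(k*j*t)
Z_l = f(W_lk @ Z_k)    # O(l*k*t)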

Notes

Note that these matrix operations can greatly be parallelized by GPUs.

Conclusion

We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(nt*(ij + jk + kl))$.

We assumed the simplest form of matrix multiplication, which has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with little modification, it becomes stochastic or mini-batch gradient descent.)

Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they will not affect the time complexity of the algorithm.

I'm not sure what the results would be using other optimizers such as RMSprop.

Sources

The following article http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5 describes an implementation using matrices. Although this implementation is using "row major", the time complexity is not affected by this.

If you're not familiar with back-propagation, check this article:

http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4

",14381,,-1,,11/3/2021 20:06,11/3/2021 20:06,,,,1,,,,CC BY-SA 4.0 5735,2,,5729,3/18/2018 18:28,,1,,"

There are a number of open source implementations of the C4.5 algorithm invented by Ross Quinlan:

",5763,,,,,3/18/2018 18:28,,,,0,,,,CC BY-SA 3.0 5738,1,5744,,3/18/2018 21:59,,3,424,"

In a convolutional neural network (CNN), since the RGB values get multiplied in the first convolutional layer, does this mean that color is essentially only extracted in the very first layer?

A snippet from CS231n Convolutional Neural Networks for Visual Recognition:

One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters, and can be a symptom of high learning rates.

Another one.

Typical-looking activations on the first CONV layer (left), and the 5th CONV layer (right) of a trained AlexNet looking at a picture of a cat. Every box shows an activation map corresponding to some filter. Notice that the activations are sparse (most values are zero, in this visualization shown in black) and mostly local.

",14389,,2444,,12/20/2021 23:07,12/20/2021 23:07,Is color information only extracted in the first input layer of a convolutional neural network?,,1,0,,,,CC BY-SA 4.0 5740,2,,5107,3/19/2018 7:16,,5,,"

It is really easy to visualize the growth in the receptive field of the input as you go deep into the CNN layers if you consider a small example.

Let's take a simple example:

The dimensions are in the form of $\text{channels} \times \text{height} \times \text{width}$.

  • The input image $I$ is a $3 \times 5 \times 5$ matrix
  • The first convolutional layer's kernel $K_1$ has shape $3 \times 2 \times 2$ (we consider only 1 filter for simplicity)
  • The second convolutional layer's kernel $K_2$ has shape $1 \times 2 \times 2$
  • Padding $P = 0$
  • Stride $S = 1$

The output dimensions $O$ are calculated by the following formula taken from the lecture CS231n.

$$O= (I - K + 2P)/S + 1$$

When you do a convolution of the input image with the first filter $K_1$, you get an output of shape $1 \times 4 \times 4$ (this is the activation of the CONV1 layer). The receptive field for this is the same as the kernel size ($K_1$), that is, $2 \times 2$.

When this layer (of shape $1 \times 4 \times 4$) is convolved with the second kernel (CONV2) $K_2$ ($1 \times 2 \times 2$), the output would be $1 \times 3 \times 3$. The receptive field for this would be the $3 \times 3$ window of the input because you have already accumulated the sum of the $2 \times 2$ window in the previous layer.

Considering your example of three CONV layers with $3 \times 3$ kernels is also similar. The first layer activation accumulates the sum of all the neurons in the $3 \times 3$ window of the input. When you further convolve this output with a kernel of $3 \times 3$, it will accumulate all the outputs of the previous layers covering a bigger receptive field of the input.
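
A small helper (my own, not from the lecture) that reproduces this growth for a stack of convolutional layers with given kernel sizes and strides:

def receptive_field(kernel_sizes, strides):
    # receptive field (on the input) of one unit in the last layer
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

print(receptive_field([2, 2], [1, 1]))        # 3, as in the example above
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7: three 3x3 layers see a 7x7 window of the input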

This observation is in line with the argument that deeper layers learn more intricate features, like facial expressions, abstract concepts, etc., because they cover a larger receptive field of the original input image.

",9198,,2444,,10/9/2021 11:50,10/9/2021 11:50,,,,0,,,,CC BY-SA 4.0 5741,1,5745,,3/19/2018 7:24,,4,2319,"

Given a dataset with no noisy examples (i.e., it is never the case that for 2 examples, the attribute values match but the class value does not), is the training error for the ID3 algorithm is always equal to 0?

",12519,,2444,,3/13/2020 4:00,3/13/2020 4:00,"Given a dataset with no noisy examples, is the training error for the ID3 algorithm always 0?",,1,0,,,,CC BY-SA 4.0 5742,2,,4864,3/19/2018 17:18,,11,,"

Imagine you want to re-compute (re-train) the last layer of a pre-trained model:

Input->[Freezed-Layers]->[Last-Layer-To-Re-Compute]->Output

To train [Last-Layer-To-Re-Compute], you need to evaluate the outputs of [Freezed-Layers] multiple times for a given input. In order to save time, you can compute these outputs only once.

Input#1->[Freezed-Layers]->Bottleneck-Features-Of-Input#1

Then, you store all Bottleneck-Features-Of-Input#i and directly use them to train [Last-Layer-To-Re-Compute].
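
As a rough sketch of that caching step with Keras (the choice of base model and head are arbitrary here, not part of the TensorFlow example):

import numpy as np
from tensorflow import keras

base = keras.applications.MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False                      # these are the [Freezed-Layers]
images = np.random.rand(8, 224, 224, 3)     # stand-in for your real inputs
bottlenecks = base.predict(images)          # computed once, then stored on disk for reuse
head = keras.Sequential([keras.layers.Dense(5, activation="softmax", input_shape=bottlenecks.shape[1:])])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# head.fit(bottlenecks, labels, epochs=...)  # trains only the [Last-Layer-To-Re-Compute]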

Explanation from the ""cache_bottlenecks"" function of the ""image_retraining"" example:

Because we're likely to read the same image multiple times (if there are no distortions applied during training) it can speed things up a lot if we calculate the bottleneck layer values once for each image during preprocessing, and then just read those cached values repeatedly during training.

",14408,,14408,,3/20/2018 13:15,3/20/2018 13:15,,,,0,,,,CC BY-SA 3.0 5744,2,,5738,3/19/2018 20:41,,2,,"

Neural networks are all about taking raw input data (RGB values and pixel location) and learning useful features that are relevant to some downstream task. This process of aggregating raw inputs into higher-level features can start at the first layer past the inputs.

So yes, only the first layer of the network is using the actual raw color information from the image. Beyond that, the network has already started to put together nearby pixels and disparate color channels in order to find more complex patterns. Deeper layers in the neural network typically do further aggregation on features learned in earlier layers, rather than taking raw color information as input.

",2841,,,,,3/19/2018 20:41,,,,0,,,,CC BY-SA 3.0 5745,2,,5741,3/19/2018 22:38,,3,,"

Yes. If you can assume that your data is separable on the given features, then ID3 will find a decision tree for it (note: this will not necessarily be an optimal tree, or even a good tree). To understand why, let's look at a proof.

Assume we have one feature left and some number of examples in a leaf that does not have separated data points. Then, either:

  1. This feature perfectly splits the data into their desired categories. In this case ID3 will split over this feature and be done.

  2. This feature does not split the data into their desired categories. In this case, we have at least 2 examples with the same feature value, but they do not share the same class. This means either these two examples have the same feature value for all features (a contradiction to our assumptions) or we have created a leaf with non-separable data points (which we prove is impossible later on).

Now for the inductive step.

Assume that ID3 used some number of features to somewhat separate our examples already. Thus, we have some number of features not used and some number of examples not separated on any current leaf, and one of the following is true:

  1. All our leafs contain data points of only one class. In which case, ID3 is done and returns.

  2. All leafs contain data points that are separable by the remaining features. Then ID3 splits over the local optimum and continues.

  3. At least one leaf contains data points that are not separable by the remaining features of its own branch. Then there exist two examples p and q such that p and q share all the same remaining feature values, but do not share the same class. Thus, one of the following is true:

    1. There is a feature that we already used that differs between p and q. This is a contradiction as we said ID3 already separated over this feature, and thus p and q must be on different branches.

    2. p and q have the same feature value for all features previously used. But we already said p and q share all the same remaining feature values. Therefore, they share all feature values which is a contradiction to our assumptions.

Therefore, at no point in the creation of the decision tree is ID3 allowed to create a leaf that has data points that are of different classes, but can't be separated on remaining features. And thus, ID3 must create a solution that perfectly separates the data and has 0 training error.

Something to note, however, is that allowing ID3 to grow the decision tree in this way will likely cause us to overfit the training set. To combat this, we set a maximum depth that we allow ID3 to grow to.

",13088,,,,,3/19/2018 22:38,,,,0,,,,CC BY-SA 3.0 5746,2,,5568,3/20/2018 2:18,,2,,"

The networks in NEAT are still implicitly layered. There are neurons that need to be evaluated before other neurons can be evaluated and so this gives us our layers.

If you don't know the structure of your network, then you can use Kahn's algorithm to find an arbitrary ordering of the nodes in the network (by arbitrary I just mean one of the possible topological orderings). Then you evaluate your neurons in the order given to you by Kahn's algorithm. This works because the nodes of your network (which is a directed acyclic graph) form a partially ordered set, and a topological ordering is a linear extension of that partial order.
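
A minimal Python sketch of Kahn's algorithm for this purpose (the node and edge representation are my own choices):

from collections import deque

def kahn_topological_order(nodes, edges):
    # nodes: iterable of node ids; edges: list of (src, dst) connections
    indegree = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)  # input/bias neurons first
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle (recurrent connection)")
    return order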

",13088,,,,,3/20/2018 2:18,,,,0,,,,CC BY-SA 3.0 5748,2,,5728,3/20/2018 6:16,,5,,"

For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore them, and have $\mathcal{O}(w)$ where $w$ is the number of weights, i.e., $n * n_i$, assuming full connectivity between your layers.

The back-propagation has the same complexity as the forward evaluation (just look at the formula).

So, the complexity for learning $m$ examples, where each gets repeated $e$ times, is $\mathcal{O}(w*m*e)$.

The bad news is that there's no formula telling you what number of epochs $e$ you need.

",12053,,2444,,2/21/2019 15:31,2/21/2019 15:31,,,,3,,,,CC BY-SA 4.0 5755,1,,,3/20/2018 13:36,,0,85,"

Right now, I'm planning to make a deep neural network for classifying the taste of crystals from their molecular structure, which includes information like the number of atoms or the mass of each atom.

How should I make a data set for training, testing, and validation?

",14426,,2444,,12/31/2021 9:47,12/31/2021 9:47,"In order to classify the taste of crystals, how should I make the training, validation and test datasets?",,1,0,,,,CC BY-SA 4.0 5756,2,,5755,3/20/2018 14:56,,2,,"

You can collect all your features in one matrix X, in which each row is one element of the data set you want to construct, and each column is a different feature of this element.

You then construct a Y vector containing the different target classes, where the i-th element is the target class of the i-th element of X.

For the following, I suppose you use Python.

Create train and test sets using the sklearn function train_test_split on the X and Y just built; it also shuffles X and Y (unavoidable in this case):

from sklearn.model_selection import train_test_split

BTW, I recommend you to rescale your data with sklearn with for example one of :

from sklearn.preprocessing import MinMaxScaler, StandardScaler

It allows better learning and so better generalization

",11069,,11069,,3/20/2018 15:03,3/20/2018 15:03,,,,2,,,,CC BY-SA 3.0 5762,1,,,3/21/2018 13:25,,4,610,"

My goal is to take an image and return another image that looks as if the scene was viewed from another angle. The difference in angle can be small — let's say as if the hand holding the camera moved slightly sideways.

",14450,,4302,,3/22/2018 15:54,4/19/2021 22:43,Algorithms for scene rotation,,2,1,,7/1/2022 10:05,,CC BY-SA 3.0 5764,2,,5762,3/21/2018 15:42,,4,,"

If deep learning is what you are trying to use here, you should keep in mind that the real intent behind deep learning is to learn a probability distribution, which means that if you were to use a deep learning model to "rotate" images, you can only do it on a specific class of images (e.g. faces, cats, etc...).

If that's your goal, generative models are the way to go:

Autoencoders

You can train an autoencoder to slightly change the angle. Autoencoders are a special type of neural network that is trained to output the same input you feed into them, with a few imposed restrictions to prevent it from learning a trivial identity function. In your case, you could use a variation of a de-noising autoencoder. A de-noising autoencoder, as the name suggests, generates the same input image minus an artificial stochastic noise. The way this is achieved is by feeding a corrupted version of the image and then evaluating the loss on the non-corrupted version.

How can this be adapted in your case?

In your case, you could feed the original images into your autoencoder and evaluate them based on the rotated images. This will result in your autoencoder effectively learning the inner distribution that generates the images in order to generate a slightly "rotated" version of it. For more info on de-noising autoencoders, see the original paper.
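
A rough Keras sketch of that training setup (the architecture and sizes are placeholders; the point is only that the target is the rotated image rather than the input itself):

from tensorflow import keras

autoencoder = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", padding="same", input_shape=(64, 64, 3)),
    keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    keras.layers.Conv2D(3, 3, activation="sigmoid", padding="same"),
])
autoencoder.compile(optimizer="adam", loss="mse")
# original_images and rotated_images are paired arrays of shape (n, 64, 64, 3)
# autoencoder.fit(original_images, rotated_images, epochs=..., batch_size=...)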

Generative adversarial networks

For a more sophisticated approach, you can use generative adversarial networks. GANs are relatively harder to manipulate, but usually perform better than other generative models when it comes to images.

How can this be adapted in your case?

In general, GANs generate images from noise. However, in your case, you can use the original (non rotated) images as input for the generator. The generator can be a convolutional autoencoder for example. And the "real images" dataset will be your rotated images. This way, your model will learn to generate slightly rotated images by being fed an equivalent of noise in traditional GANs which in your case will be the original images. For more info on GANs, I suggest this and the original paper.
I should point out that one of the flaws of GANs is distorted perspectives so it's probably a shot in the dark; however, I think it won't be a problem here because you would be using real images as input instead of complete noise.

Related work

Now, as far as the literature goes, I don't think this has been done before except for this one (kinda) for faces representation learning. The only way this is similar to yours is that you can modify the implementation to only generate the faces it has been fed from different perspectives instead of an average of everything it has learned.

",12672,,36737,,4/19/2021 22:43,4/19/2021 22:43,,,,5,,,,CC BY-SA 4.0 5767,2,,5762,3/21/2018 18:12,,3,,"

— Stereoscopic Synthesis —

The generation of an image that would likely appear in the right eye of a head from which you already have an image from the left eye (or vice versa) is too complex to expect simple convolution (linear matrix transformation) to achieve a reasonable result.

You are correct that rotation is not the correct description, simply because it is ambiguous. What is rotating? The best description is the synthesis of a stereoscopic image from a single eyed/camera one.

Although deep learning is an approach well suggested, it is a very general term into which a number of concepts, books, research projects, and software components fit. I agree with other answers that indicate one could easily find the target objective missed upon repeated attempts at finding a working solution. Shots in the dark are likely to waste time and effort.

For example, an auto-encoder may not work well because the modelling of depth may not be a feature extracted without having a host of stereoscopic image pairs of similar scenes to which automated feature extraction could be accomplished.

Should feature extraction be possible, it is not noise that needs to be removed, but pixelization and optical distortion that needs to be characterized, so that surfaces revealed by the shift in position could later be imbued with the same contour, focal blur, reflective properties, and edge continuation as the adjacent surfaces of the same objects when pixels corresponding to newly revealed surfaces are generated.

For greatest image authenticity, another noise profile to profile and imbue to generated pixels is the capture device noise profile.

— Formal Problem Restatement —

To narrow machine learning approaches so that we can take a shot in the light, let's consider the model of a three dimensional scene exposed to light sources with two cameras adjacent to one another. Let's consider the input, the output, and the internal architecture required to produce a fairly reliable and accurate second image more formally.

We have image pixel matrix I1 that represents light arriving at camera c1 containing a rectangular image capture surface in an x-z plane upon which a lens of effective focal length l1 and aperture a1 is focused over scene S over a time window starting at s1 and ending at e1. Some point is at the origin of a Cartesian coordinate axis, both camera c1 and another camera c2 point such that the origin is centered in the image capture and the point of lens focus. The three dimensional coordinates of c1 and c2 are known.

You wish to predict the second image I2 arriving at camera c2.

Let's assume, for simplicity, that l1 = l2, that a1 = a2, and that the scene is motionless so that time is not critical in the model. Let's also assume that the y coordinates and the image capture duration (e minus s) are the same for both cameras c1 and c2.

— Solution Architecture —

For this simplified case and assuming the object space is not an abyss containing only one object, the process architecture of the solution is the following. Each --> symbol is a sub-process. The horizontal and vertical positional difference between c1 and c2 is { x, z }.

{ I1, l, a, y, x, z }
--> { I1, l, a, y, { S1 ... Sn }}
--> { I1, l, a, y, { S1 ... Sn }, { E1 ... En }}
--> { I2 }

The first sub-process is a feature extraction, where the features are the three dimensional surfaces visible in the two dimensional image I1. This is a questionable extraction because no y information in the scene is available and there is no mention of y-labeled training data in the problem statement.

The second sub-process is the extension of features extracted to provide needed surface representation for I2.

The last process is rendering I2, potentially using morphing pixels in I1 and filling transparent sections remaining using E1 through En and knowledge of the contour, reflective properties, edge continuation, and capture device noise profile from feature extraction.

— Practicality of Learning About Scenes —

The effectiveness of any deep learning architecture could benefit from the above understanding of vision and the comprehension of scenes in DNA based life. The problem of automated feature extraction is complicated because the data is unlabeled with y information as stated before.

Learning visual comprehension of arbitrary scenes by DNA based life is assisted by the fact that motion occurs and interaction with physical objects and liquid viscosity provides a vastly greater number of dimensions to the input data.

",4302,,4302,,3/21/2018 18:29,3/21/2018 18:29,,,,0,,,,CC BY-SA 3.0 5769,1,,,3/22/2018 2:36,,62,53940,"

My understanding is that the convolutional layer of a convolutional neural network has four dimensions: input_channels, filter_height, filter_width, number_of_filters. Furthermore, it is my understanding that each new filter just gets convoluted over ALL of the input_channels (or feature/activation maps from the previous layer).

HOWEVER, the graphic below from CS231 shows each filter (in red) being applied to a SINGLE CHANNEL, rather than the same filter being used across channels. This seems to indicate that there is a separate filter for EACH channel (in this case I'm assuming they're the three color channels of an input image, but the same would apply for all input channels).

This is confusing - is there a different unique filter for each input channel?

This is the source.

The above image seems contradictory to an excerpt from O'reilly's "Fundamentals of Deep Learning":

...filters don't just operate on a single feature map. They operate on the entire volume of feature maps that have been generated at a particular layer...As a result, feature maps must be able to operate over volumes, not just areas

...Also, it is my understanding that these images below are indicating that THE SAME filter is just convolved over all three input channels (contradictory to what's shown in the CS231 graphic above):

",14389,,2444,,12/18/2021 22:44,1/10/2023 14:05,"In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?",,11,2,,,,CC BY-SA 4.0 5771,2,,5769,3/22/2018 8:24,,15,,"

In a convolutional neural network, is there a unique filter for each input channel or are the same new filters used across all input channels?

The former. In fact there is a separate kernel defined for each input channel / output channel combination.

Typically for a CNN architecture, in a single filter as described by your number_of_filters parameter, there is one 2D kernel per input channel. There are input_channels * number_of_filters sets of weights, each of which describe a convolution kernel. So the diagrams showing one set of weights per input channel for each filter are correct. The first diagram also shows clearly that the results of applying those kernels are combined by summing them up and adding bias for each output channel.

This can also be viewed as using a 3D convolution for each output channel, that happens to have the same depth as the input. Which is what your second diagram is showing, and also what many libraries will do internally. Mathematically this is the same result (provided the depths match exactly), although the layer type is typically labelled as ""Conv2D"" or similar. Similarly if your input type is inherently 3D, such as voxels or a video, then you might use a ""Conv3D"" layer, but internally it could well be implemented as a 4D convolution.
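
A quick way to see this layout for yourself (a Keras sketch, assuming the TensorFlow backend and channels-last data):

import tensorflow as tf

layer = tf.keras.layers.Conv2D(filters=8, kernel_size=3)
layer.build((None, 32, 32, 3))   # 3 input channels
print(layer.kernel.shape)        # (3, 3, 3, 8): height, width, input_channels, number_of_filters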

",1847,,1847,,3/22/2018 9:37,3/22/2018 9:37,,,,5,,,,CC BY-SA 3.0 5773,1,,,3/22/2018 10:22,,1,273,"

When recording audio for screencasts or similar, very often the keyboard is clearly visible and can start to annoy listeners after a while.

NN are quiet good at recognizing patterns. Image classification is all over the place these days. There is also some work on audio, so that seems to work as well. Could the following approach therefore work to eliminate (or greatly reduce) the sounds of the keyboard in a recording whilst leaving the voice quality largely untouched?

  1. Train a NN to recognize the clicking sounds of the keys. Lots of labeled data can be created by just recording and tracking key clicks in the millisecond range. That way markers can be placed on the recording automatically that ""label"" clicks from non clicks. Let's say a click has on average a 10ms range in the audio, the audio feed could be cut into snippets of 10ms and those that have a click sound in it are labelled as such.
  2. An adversarial network is trained to modify an input stream so as to fool the first one into thinking there are no clicks, while also being punished for large changes in the stream data. So, the better it removes the click sounds the better, but if it just outputs nothing (technically no clicks then), that's of course bad, so there needs to be some reward for being ""close to input"".

Would this be a good approach? Are there other ways to filter this? I know there is an ""ehm detector"" that uses MDP to warn speakers whenever they are likely to say ""ehm"". This wouldn't apply to this though, because it's not that I want to guess when the next click comes but rather I want to manipulate the input stream without running a constant filter on the entire stream such as a lowpass filter for removing unwanted constant noise. So ideally the algorithm would learn to apply a ""correction stamp"" whenever a click is detected to remove a range of frequencies during small windows in the overall recording but leaving most of it untouched.

",11429,,,,,3/22/2018 10:22,Learning algorithm that filters keyboard clicking in audio feeds,,0,0,,,,CC BY-SA 3.0 5774,1,,,3/22/2018 10:48,,7,1951,"

I read that computing the derivative of the error with respect to the input of a convolution layer is the same as performing a convolution between the deltas of the next layer and the weight matrix rotated by $180°$, i.e. something like

$$\delta^l_{ij}=\delta^{l+1}_{ij} * rot180(W^{l+1})f'(x^l_{ij})$$

with $*$ the convolution operator. This is valid with $\text{stride}=1$.

However, what happens when the stride is greater than $1$? Is it still a convolution with a kernel rotation, or can I not make this simplification?

",2189,,2444,,12/30/2021 13:39,12/30/2021 13:39,How to compute the derivative of the error with respect to the input of a convolutional layer when the stride is bigger than 1?,,3,0,,,,CC BY-SA 4.0 5778,2,,5769,3/22/2018 19:41,,27,,"

The following picture that you used in your question, very accurately describes what is happening. Remember that each element of the 3D filter (grey cube) is made up of a different value (3x3x3=27 values). So, three different 2D filters of size 3x3 can be concatenated to form this one 3D filter of size 3x3x3.

The 3x3x3 RGB chunk from the picture is multiplied elementwise by a 3D filter (shown as grey). In this case, the filter has 3x3x3=27 weights. When these weights are multiplied element-wise and then summed, it gives one value.

So, is there a separate filter for each input channel?

YES, there are as many 2D filters as the number of input channels in the image. However, it helps if you think that for input matrices with more than one channel, there is only one 3D filter (as shown in the image above).

Then why is this called 2D convolution (if the filter is 3D and the input matrix is 3D)?

This is 2D convolution because the strides of the filter are along the height and width dimensions only (NOT depth) and therefore, the output produced by this convolution is also a 2D matrix. The number of movement directions of the filter determines the dimensions of convolution.

Note: If you build up your understanding by visualizing a single 3D filter instead of multiple 2D filters (one for each layer), then you will have an easy time understanding advanced CNN architectures like Resnet, InceptionV3, etc.

",12957,,2444,,12/18/2021 22:46,12/18/2021 22:46,,,,7,,,,CC BY-SA 4.0 5779,2,,5769,3/23/2018 4:39,,-1,,"

The restrictions apply only in 2D. Why?

Imagine a fully connected layer.

It'd be awfully huge: each neuron would be connected to maybe 1000x1000x3 input neurons. But we know that processing nearby pixels makes sense, therefore we limit ourselves to a small 2D neighborhood, so each neuron is connected to only a 3x3 patch of nearby neurons in 2D. We don't know such a thing about channels, so we connect to all channels.

Still, there would be too many weights. But because of the translation invariance, a filter working well in one area is most probably useful in a different area. So we use the same set of weights across 2D. Again, there's no such translation invariance between channels, so there's no such restriction there.

",12053,,,,,3/23/2018 4:39,,,,0,,,,CC BY-SA 3.0 5781,1,,,3/23/2018 12:52,,1,78,"

I have a large set of simulation logs for a market simulation of which I want to learn from. The market includes:

  • customers
  • products (subscriptions)

The customers choose products and then stick with them until they decide on a different one. Examples could be phone, electricity or insurance contracts.

For every simulation I get the data about the customers (some classes and metadata) and then for each round I get signups/withdrawals and charges for the use of the service.

I am trying to learn a few things

  • competitiveness of an offering (in relation to the environment/competition)
  • usage patterns of customers (the underlying model is a statistical simulation) depending on their chosen tariff, time of day and their metadata + historical usage
  • ability to forecast customer numbers for each product

The use cases are all very applicable to real world data although my case is all a (rather large) simulation.

My problem is this: What kind of learning is this? Supervised? Unsupervised? I have come up with various hypotheses and cannot find a definite answer for either.

  • Pro Supervised: For the usage patterns of the customers I have historical data of actual usage so I can do something similar to time-series forecasting. However, I don’t want to forecast simply off of their previous usage but also off of their metadata and their tariff choice (so also metadata in a way).
  • Pro Unsupervised: The forecasting of the “competitiveness” of a randomly chosen product configuration is hard to label even with historical data. The exact reason why a product has performed in a certain way is very high-dimensional. I do get subscription records about every product for every time slot though, so I guess some “feedback” could be generated. This might also be a RL problem though?

So obviously I need help pulling these different concepts apart so as to map them on this kind of problem which is not the classical “dog or cat” problem or the classical “historical data here, please forecast” timeseries issue. It’s also not a “learn how to walk” reinforcement problem as it’s based on historical data. The end goal is however to write an agent that generates these products and competes in the market so that will be a reinforcement problem.

",11429,,,,,3/23/2018 12:52,"Learning from events. Supervised, Unsupervised or MDP?",,0,0,,,,CC BY-SA 3.0 5782,1,,,3/23/2018 16:38,,7,1887,"

In Section 1.1 of Artificial Intelligence: A Modern Approach, it is stated that a computer which passes the Turing Test would need 4 capabilities, and that these 4 capabilities comprise most of the field of Artificial Intelligence:

  1. natural language processing: to enable it to communicate successfully in English

  2. knowledge representation: to store what it knows and hears

  3. automated reasoning: to use the stored information to answer questions and to draw new conclusions

  4. machine learning: to adapt to new circumstances and to detect and extrapolate patterns

Did Alan Turing discern the requirements for the field of artificial intelligence (the necessary subfields) and purposefully design a test around these requirements, or did he simply design a test that is so general that the subfields which developed within artificial intelligence happen to be what is required to solve it? That is, was he prescient or lucky? Are these Turing's subdivisions, or Peter Norvig's and Stuart Russell's?

If Turing did foresee these 4 requirements, what did he base them on? What principles of intelligence allowed him to predict the requirements for the field?

",2897,,2444,,5/20/2020 13:34,5/20/2020 13:34,Did Turing foresee the required capabilities to pass the Turing test?,,1,0,,,,CC BY-SA 4.0 5789,2,,4694,3/23/2018 20:48,,1,,"

First, I think it is important to mention that the Turing Test as is currently accepted is an updated version of Alan Turing's proposed imitation game, so your question is twofold. I don't think Turing makes this distinction that you propose. To Turing the question was ""Can machines think?"" and he explores this by way of the imitation game.

The new form of the problem can be described in terms of a game which we call the 'imitation game."" It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ""X is A and Y is B"" or ""X is B and Y is A."" The interrogator is allowed to put questions to A and B

It is A's object in the game to try and cause C to make the wrong identification

The object of the game for the third player (B) is to help the interrogator.

This is Turing's outline of the imitation game (shortened) between a man and a woman. The only difference between this game and the one we care about is that we replace party A with a machine and the only requirement on B is that they are human.

So yes, in Turing's imitation game the machine is privy to the information that it is not actually human. Whether the machine uses this information is up to the machine.

To talk about your statement on pretending to be a duck versus believing you are a duck lets examine the base imitation game with ducks and frogs. Let's say player A is a frog (either pretending to be a duck or truly believing that it is a duck) and player B is a duck. Suppose for this example, the frogs and ducks are able to speak English and follow the rules of the game. The interrogator may ask ""what is the color of your beak?"" The duck will respond yellow, assuming its beak is yellow (I am no bird expert), and then our smart frog pretending to be a duck will look up the color of the beaks of ducks and respond yellow as well.

However, our frog that believes he is a duck will think ""I am a duck so to discern the answer I must only look at my beak."" And thus, our frog will say ""I do not have a beak, but my mouth is green."" This will oust the frog.

",13088,,,,,3/23/2018 20:48,,,,1,,,,CC BY-SA 3.0 5792,1,6516,,3/24/2018 19:22,,2,86,"

I've just started learning Grammatical Evolution and I'm reading the paper Grammatical Evolution by Michael O'Neill and Conor Ryan.

On page 3 (section IV-A), they write:

During the genotype-to-phenotype mapping process it is possible for individuals to run out of codons and in this case, we wrap the individual and reuse the codons.

I'm not an English native speaker and I don't understand the meaning of the word "wrap" here. What does it mean?

I understood that, if none of the symbols are terminals, we have to start from the beginning of the genotype again and replace the nonterminal symbols until we have only terminal symbols. But, if I'm correct, when do I have to stop? In the paper, they also talk about non-valid individuals.

",4920,,2444,,1/21/2021 14:56,1/21/2021 14:56,"What does ""we wrap the individual and reuse the codons"" mean in the paper ""Grammatical Evolution"" by Neill and Ryan?",,1,0,,,,CC BY-SA 4.0 5794,1,,,3/24/2018 20:12,,1,167,"

I have implemented multiple MCTS based AI players for the Love Letter game (rules). It is a 2-4 players zero sum card game where players make alternating moves. I am struggling with how to properly conduct experiments for estimating AI player strength against human players:

  1. In 2 player game where one of the players is AI bot
  2. In 4 player game where one (or multiple) of players is AI bot
",14534,,,,,3/26/2018 18:50,How to estimate the AI player's strength in multiplayer game?,,1,3,,,,CC BY-SA 3.0 5795,2,,5782,3/24/2018 23:59,,5,,"

I find it unlikely that you'll find a firm answer, so I will try my best to guide you towards information which may help you form an opinion either way. Turing had the controversial opinion (which remains controversial today) that:

Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not. One mathematician has expressed the opposite point of view to me rather forcefully in the words “It is commonly said that these machines are not brains, but you and I know that they are.” […] I shall give most attention to the view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains.

In essence, Turing believed that the digital computers of the time had the capacity to mimic the human brain if programmed correctly. The problem for Turing, then, was finding the correct procedures to mimic the brain. Remember that the Imitation Game was being used as a means to explore the question ""can machines think?"" Turing was engaging in an intellectual debate about the nature of ""thinking""; he was not making an attempt to describe how machines would be taught to think.

Since both artificial intelligence and machine learning are at their core very similar to ""artificial thinking"" (as Turing may have called it), it seems necessary that a test designed to decide whether machines can think would also encompass whether machines could be intelligent or whether machines could learn.

Therefore, it seems possible that Turing did not pose the question with these capabilities in mind, but rather that these capabilities are what guided Turing to this question, as they are so innately tied to thinking.

If you would like to look further into Turing's thought process here is his paper on the imitation game and here is a discussion on the proceedings of a BBC broadcast about ""Automatic Calculating Machines"" in which Turing was a speaker.

",13088,,,,,3/24/2018 23:59,,,,2,,,,CC BY-SA 3.0 5796,2,,5794,3/25/2018 9:13,,2,,"

The following are extremely simple ways of tackling this problem.

A very simple way

It can simply be
strength of AI=(# of games won)/(total # of games).


In case data for each move is available

Something like
score per game=# of correct decisions/total number of decisions.
Then
strength of AI=sum(score per game)/total # of games.


If each move/decision has a score associated with it

then you do
score per move=scored points by taking a decision/maximum possible score.
then
score per game=sum(score per move)/total # of moves
and finally,
strength of AI=sum(score per game)/total # of games.


How to choose optimal number of games to play?

It depends on your requirements. If you want to report your AI's strength as the percentage of games it won, correct to 1 decimal place (for example, this AI won 95.1% of its games), then 10000 is a reasonable number of games for your AI to play. Suppose your AI won 9508 games out of 10000; then the strength of your AI is 95.08%. To round this correctly to 1 decimal place you need that additional decimal place, so that you can quote the strength of your AI with reasonable confidence, in this case 95.1%.
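
As a minimal sketch of the simplest variant in Python (assuming you only log a win/loss flag per game; the numbers are just the example from above):

    # Minimal sketch: results is a list of booleans, one per game, True if the AI won.
    def ai_strength(results):
        wins = sum(1 for won in results if won)
        return wins / len(results)

    # Example: 9508 wins out of 10000 games -> 0.9508, reported as 95.1%
    results = [True] * 9508 + [False] * 492
    print(round(ai_strength(results) * 100, 1))  # 95.1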

",12957,,-1,,6/17/2020 9:57,3/26/2018 5:54,,,,7,,,,CC BY-SA 3.0 5798,2,,5067,3/25/2018 12:27,,1,,"

With images, you can use CNN because of the translational invariance. A filter which is good in one area will probably be good in another area, too.

With images, you must use CNN because otherwise, there would be too many weights to train.

With your game, it depends on the representation and the exact rules. Note that Alpha Zero uses a set of 19 x 19 inputs with CNN for playing Go.

In a game like Bridge, where each card has its color and rank, there's a kind of translational invariance. Having Ace and Queen is a bit similar to having King and Jack - in both cases you have a 50% chance of catching the card in between. At the same time, the strengths of AQ and KJ are very different, so a pure CNN is unlikely to work well.

The more important symmetry is the one among colors. After the auction, there's one or none trump color and all other colors are equivalent. This probably means that the corresponding weights should be the same.

In some card games, many cards are special and there's no symmetry at all. You didn't tell us anything about your game, so it's hard to give more concrete advice.

",12053,,,,,3/25/2018 12:27,,,,2,,,,CC BY-SA 3.0 5800,1,5820,,3/26/2018 8:57,,1,47,"

I have started reading Fundamentals of Deep Learning by Nikhil Buduma and I have a question regarding tanh neurons. In the book, it is stated:

""When S-shaped nonlinearities are used, the tanh neuron is often preferred over the sigmoid neuron because it is zero-centered.""

Can anyone explain why exactly?

",14568,,1671,,3/26/2018 21:53,3/28/2018 10:25,S-shaped nonlinearities in tanh neurons,,1,1,,,,CC BY-SA 3.0 5801,1,5813,,3/26/2018 9:29,,3,308,"

Suppose one trains a CNN to determine if something was either a cat/dog or neither (2 classes), would it be a good idea to assign all cats and dogs to one class and everything else to another? Or would it be better to have a class for cats, a class for dogs, and a class for everything else (3 classes)? My colleague argues for 3 classes because dogs and cats have different features, but I wonder if he's right.

",13068,,2444,,9/25/2020 19:43,9/26/2020 6:52,"If we want to classify something as either a cat/dog or neither, do we need 2 or 3 classes?",,3,1,,,,CC BY-SA 4.0 5803,2,,248,3/26/2018 13:17,,5,,"

To summarize, there are two major issues in applied deep learning.

  • The first is that it is computationally exhaustive. Normal CPUs require a lot of time to perform even basic computation/training with deep learning. GPUs are thus recommended; however, even they may not be enough in a lot of situations. Typical deep learning models do not come with a guarantee that training time is polynomial. However, if we look at the relatively simpler models in ML for the same tasks, quite often we have mathematical guarantees that the training time required for such simpler algorithms is polynomial. This, for me at least, is probably the biggest difference.

    There are solutions to counter this issue, though. One main approach is to restrict the optimization to a fixed number of iterations (instead of looking for the global solution, in practice we just optimize the algorithm to a good local solution, where the criterion for ""Good"" is defined by the user).

  • Another issue, which may be a little bit controversial to young deep learning enthusiasts, is that deep learning algorithms lack theoretical understanding and reasoning. Deep neural networks have been successfully used in a lot of situations, including handwriting recognition, image processing, self-driving cars, signal processing, NLP and biomedical analysis. In some of these cases, they have even surpassed humans. However, that being said, they are not, under any circumstance, theoretically as sound as most statistical methods.

    I will not go into detail; rather, I leave that up to you. There are pros and cons for every algorithm/methodology and DL is not an exception. It is very useful, as has been proven in a lot of situations, and every young data scientist should learn at least the basics of DL. However, in the case of relatively simple problems, it is better to use well-known statistical methods, as they have a lot of theoretical results/guarantees to support them. Furthermore, from a learning point of view, it is always better to start with simple approaches and master them first.

",14483,,,,,3/26/2018 13:17,,,,5,,,,CC BY-SA 3.0 5804,2,,5801,3/26/2018 14:34,,2,,"

If you want to determine if something is either a

cat/dog or neither

you need 2 classes:

  1. one for dog or cat, and
  2. one for anything else.

However, if you assign all cats and dogs to the same class $A$, if an input is classified as $A$, then you won't be able to know whether it is a dog or a cat, you will just know that it is either a dog or a cat.

In case you wanted to distinguish between cats and dogs too (apart from neither of them), then you'll need $3$ classes.

Finally, if you specify only 2 classes:

  1. dog, and
  2. cat,

then your CNN will try to classify any new input as either a dog or a cat, even though it is neither a dog nor a cat (e.g. maybe it is a horse).

",11069,,2444,,9/25/2020 19:56,9/25/2020 19:56,,,,1,,,,CC BY-SA 4.0 5807,2,,4396,3/27/2018 0:43,,7,,"

Let $D_a$ be the domain for A, and $a_i$ the elements of $D_a$. Let $D_b$ and $D_c$ work similarly for $B$ and $C$ respectively.

We introduce $D_t = \{t | t = (c_i - b_j, c_i)\}$ for all $c_i$ in $D_c$ and $b_j$ in $D_b$. We can see that $\{t[0]\}$ (i.e. $\{c_i - b_j, \forall i,j\}$) must be equal to $D_a$.

So we can represent the constraint on $A$ by the relation $R_{At} = \{ (a_k, t_l) \mid a_k = t_l[0] \}$ for $a_k$ in $D_a$ and $t_l$ in $D_t$. That is, $a_k$ must equal the first element in the pair from $t_l$, which means there is a $b_j$ and a $c_i$ consistent with that $a_k$.

The constraint on $B$ is the relation $R_{Bt} = \{(b_j, t_l) \mid b_j + t_l[0] = t_l[1]\}$. Since $t_l[0] = c_i - b_j$, $t_l[0] + b_j$ must equal $c_i$, which is $t_l[1]$, so if this holds, there is a $b_j$ in $D_b$ that is consistent with $t_l$.

Lastly, $R_{Ct} = \{ (c_i, t_l) \mid c_i = t_l[1]\}$, which is really just an identity. So what we've done is solved $A + B = C$ for $A$ and encoded that into $R_{At}$. From that solution and knowledge of the original $C$ (stored in $t_l[1]$) we can re-create $C$ from a valid $B$, which is the $R_{Bt}$ relation.

And lastly $C$ must be in the original set of $C$'s used to build all the $t$'s. If all three of these constraints hold, then we must have a $C$, a $B$ used to solve for $A$, and an $A$ that matches that solution.

The second part is really just path-consistency. For every $A$, choose a $B$, and see if there's a possible $C = A + B$. If so, add those values to the final domains $D_a$, $D_b$, and $D_c$. I forget if there's a better performing algorithm than this, but I doubt it since here you can use associativity of addition to avoid checking the opposite order of selecting $A$ and $B$. In general, you have to consider both orders.
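
Here is a small Python sketch of this construction, with made-up toy domains (it only illustrates the relations above, not a full path-consistency algorithm):

    # Toy domains (arbitrary example values)
    D_a, D_b, D_c = {1, 2, 3}, {1, 2}, {2, 3, 4, 5}

    # D_t: pairs (c - b, c) for every c in D_c and b in D_b
    D_t = {(c - b, c) for c in D_c for b in D_b}

    # Binary relations encoding the ternary constraint A + B = C
    R_At = {(a, t) for a in D_a for t in D_t if a == t[0]}
    R_Bt = {(b, t) for b in D_b for t in D_t if b + t[0] == t[1]}
    R_Ct = {(c, t) for c in D_c for t in D_t if c == t[1]}

    # Second part: keep only values that take part in some consistent triple
    consistent = {(a, b, a + b) for a in D_a for b in D_b if a + b in D_c}
    new_Da = {a for a, b, c in consistent}
    new_Db = {b for a, b, c in consistent}
    new_Dc = {c for a, b, c in consistent}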

",14580,,14580,,10/9/2018 1:51,10/9/2018 1:51,,,,1,,,,CC BY-SA 4.0 5810,1,,,3/27/2018 13:19,,2,1185,"

I'm currently having trouble winning against a random bot playing the Schieber Jass game. It is an imperfect-information card game (famous in Switzerland, https://www.schieber.ch/).

The environment I'm using is on GitHub: https://github.com/murthy10/pyschieber

To give a brief overview of the Schieber Jass, I will describe the main characteristics of the game. The Schieber Jass consists of four players forming two teams. At the beginning, every player is randomly dealt nine cards (there are 36 cards). There are nine rounds and every player has to play one card every round. According to the rules of the game, the ""highest card"" wins the trick and the team gets the points. Hence the goal is to get more points than the opposing team.

There are several more rules, but I think you can imagine how the game roughly works.

Now I'm trying to apply a DQN approach to the game.

To my attempts:

  • I let two independent reinforcement learning players play against two random players
  • I designed the input state as a one-hot encoded vector with 36 ""bits"" for every player, and repeated this nine times, once for every card you can play during a game.
  • The output is a vector of 36 ""bits"", one for every possible card.
  • If the greedy output of the network suggests an invalid action, I take the action with the highest probability among the allowed actions
  • The reward is +1 for winning, -1 for losing, -0.1 for an invalid action and 0 for an action which doesn't lead to a terminal state

My question:

  • Would it be helpful to use an LSTM and reduce the input state?
  • How to handle invalid moves?
  • Do you have some good ideas for improvements? (like Neural-Fictitious Self-Play or something similar)
  • Or is this whole approach absolute nonsense?
",14587,,14587,,6/18/2018 14:32,6/18/2018 14:32,How to use DQN to handle an imperfect but complete information game?,,1,1,,,,CC BY-SA 4.0 5813,2,,5801,3/27/2018 18:49,,1,,"

The best approach may be to have a cat, dog, and neither class (3 classes total) and go with a regression approach — specifically, outputting the probabilities of each class for any given input. From there, you can always take the probabilities of each output and derive the probability of a cat and dog class or neither class. Also, make sure you use the right activation on the output layer and cost function so that you can interpret the outputs as probabilities (e.g. softmax activation and cross-entropy loss).
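
For illustration only, here is a minimal sketch of such an output head, assuming TensorFlow/Keras (the input shape and layer sizes are arbitrary choices):

    # Hypothetical sketch: 3-way softmax head so outputs can be read as probabilities.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(3, activation='softmax'),   # classes: cat, dog, neither
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # p(cat or dog) = p(cat) + p(dog) can then be derived from the predictions.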

",5210,,5210,,9/26/2020 6:52,9/26/2020 6:52,,,,2,,,,CC BY-SA 4.0 5814,1,,,3/27/2018 19:14,,4,960,"

Suppose a CNN is trained to detect bounding box of a certain type of object (people, cars, houses, etc.)

If each image in the training set contains just one object (and its corresponding bounding box), how well can a CNN generalize to pick up all objects if the input for prediction contains multiple objects?

Should the training images be downsampled in order for the CNN to pick out multiple objects in the prediction?

I don't have a specific one in mind. I was just curious about the general behavior.

",14570,,2444,,6/6/2020 0:49,6/6/2020 5:47,How well can CNN for bounding box detection generalise?,,2,1,,,,CC BY-SA 4.0 5815,2,,2940,3/27/2018 22:14,,0,,"

To answer the title, there are many other machine learning models, but neural networks work particularly well for some difficult problems (image classification, speech recognition) which is one of the reasons they have gained popularity.

Two particularly simple models are the decision tree and the perceptron. These are rather simple models, but they both have redeeming qualities. A decision tree is useful as it provides a model that is easily understood, while a perceptron is fairly quick and works well for linearly separable data. Another, more advanced, model is the Support Vector Machine.

For example, is there any system in which the topology of a neural network is variable?

Yes, there are many such systems where the topology of the neural network is dynamic throughout training. An entire class of methods labeled TWEANNs is designed to evolve the topology of the networks; one such algorithm is NeuroEvolution of Augmenting Topologies, NEAT (and its descendants rtNEAT, hyperNEAT, ...).

",13088,,,,,3/27/2018 22:14,,,,0,,,,CC BY-SA 3.0 5817,2,,2940,3/28/2018 9:33,,3,,"

Neural network equivalents that are not (vanilla) feed-forward neural nets:

Neural net structures such as Recurrent Neural Nets (RNNs) and Convolutional Neural Nets (CNNs), and different architectures within those are good examples.

Examples of different architectures within RNNs would be: Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU). Both of these are well described in Colah's blog post on Understanding LSTMs

What are some alternative information processing system beside neural network

There are so many structures. Off the top of my head: (Restricted) Boltzmann machines, autoencoders, Monte Carlo methods and radial basis networks, to name a few.

You can check out Goodfellow's Deep Learning book, which is free online, and get the gist of all the structures I mentioned here (most parts require a bit of math knowledge, but he also writes about them quite intuitively).

For Recurrent Neural Nets I recommend Colah's blog post on Understanding LSTMs

Is there any system in which the topology of a neural network is variable?

Depends on what you mean with the topology of a neural network:

I think the common meaning of topology when talking about neural networks is the way in which neurons are connected to form a network, varying in structure as it runs and learns. If this is what you mean, then the answer, in short, is yes. In multiple ways actually. On the other hand, if you mean it in the mathematical sense, this answer would become a book that I wouldn't feel comfortable writing. So I'll assume you mean the first.

We often do ""regularization"", both on vanilla NNs and other structures. One of these regularization techniques is called dropout, which randomly removes connections from the network as it is training (to prevent something called overfitting, which I'm not gonna go into in this post).

Another example would be the Recurrent Neural Network. They deal with time series, and are equipped for dealing with time series of different lengths (thus, ""varying structure"").

Does it exist neural net systems where complex numbers are used?

Yes, there are many papers on complex number machine learning structures. A quick google should give you loads of results. For example: DeepMind has a paper on Associative Long Short-Term Memory which explores the use of complex values for an ""associative memory"".

Links:

Goodfellow's Deep Learning-book: deeplearningbook.org

Colah's blogpost on RNN's: colah.github.io

Paper on DeepMinds Associative LSTM: arxiv:1602.03032

",14612,,14612,,5/25/2018 19:28,5/25/2018 19:28,,,,0,,,,CC BY-SA 4.0 5818,1,,,3/28/2018 9:36,,1,159,"

I am new to deep learning and computer vision. I have a problem where I use the YOLO to detect objects.

For my problem, I just want to recognize 1 human only. So, I changed the final YOLO's layer (which contained 80 neurons) to only 1 neuron, and do the training process with transfer learning techniques. Of course, I do not use the final layer's weights, and these weights are randomly initialized for my problem. I feed only the human data to the model.

However, I realize that after longer training, the model becomes worse. It starts to recognize other objects as a human.

Should I also feed non-human data to the model?

",14613,,2444,,1/17/2021 19:42,1/17/2021 19:42,Why is my fine-tuned YOLO model detecting other objects as a human?,,1,0,0,,,CC BY-SA 4.0 5819,2,,5818,3/28/2018 10:04,,1,,"

So you have a network pretrained on 80 classes. I also assume that one of these classes is human (or else this is just not the way to go*). I suspect that the final layer contains 80 labels, correct? You then 'rescale' this layer to 1 label and then train on some data you possess? Then you're basically trying to teach the network that it shouldn't care about the 79 other classes, which I think is just nonsense.

What you could do, and I do not recommend this, but if you feel like you have to use this exact network, is to just keep the 80 outputs and only look at the label corresponding to the human class.

You shouldn't do this because the network is WAY bigger than it needs to be to only classify human/non human, which will make it slower than it needs to be.

What you rather want to do is either train your own network (if you have lots of training data, I suspect this wouldn't be the hardest thing to train) or obtain a CNN that is pretrained on human classification.

(*I've heard rumours that you can do pretty well on retraining a class on a pretrained network. I just don't know if the rumours are true or how to go about it.)

",14612,,14664,,3/31/2018 9:24,3/31/2018 9:24,,,,4,,,,CC BY-SA 3.0 5820,2,,5800,3/28/2018 10:18,,1,,"

It should be mentioned that ReLU is the current standard activation function. But to answer your question: the important point here is that it is very common to have normalized your data (e.g. using batch normalization), so that the data is centered around 0.

As @DuttaA commented, look at this answer from Cross-Validated:

Since data is centered around 0, the derivatives are higher. To see this, calculate the derivative of the tanh function and notice that [output] values are in the range [0,1].

And

The range of the tanh function is [-1,1] and that of the sigmoid function is [0,1] Avoiding bias in the gradients. This is explained very well in the paper, and it is worth reading it to understand these issues.
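
As a quick numerical illustration of the zero-centering (a NumPy sketch; the roughly zero-mean input distribution is just an assumption):

    import numpy as np

    x = np.random.randn(100000)            # roughly zero-centered inputs
    sigmoid = 1.0 / (1.0 + np.exp(-x))
    tanh = np.tanh(x)

    print(sigmoid.mean())   # ~0.5 -> outputs are pushed away from zero
    print(tanh.mean())      # ~0.0 -> outputs stay zero-centered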

",14612,,14612,,3/28/2018 10:25,3/28/2018 10:25,,,,0,,,,CC BY-SA 3.0 5821,1,,,3/28/2018 14:56,,4,52,"

I need to efficiently align characters vertically using Multi Objective PSO. Alignment is achieved by adding spaces in between a given set of characters.

a b c d e f
b b d h g
c a b f

Might be

- a b - c d e f - -
- - b b - d - - h g
c a b - - - - f - -

Now, this is a multi-objective problem. I need to maximize the number of characters that get aligned vertically and minimize the amount of spaces in between the characters.

I wanted to focus firstly on how to get a set of characters to represent a position of a particle. This would mean that I need to somehow transform a possible set of characters into a position of a particle. If I can somehow achieve this then the rest should fall into place.

  • How do I transform these set of characters into a position of a particle?
  • Also is this the best approach or are there better ways to approach this problem?
",14621,,1671,,3/28/2018 21:15,12/15/2022 21:06,How to use MOPSO to align characters vertically?,,1,1,,,,CC BY-SA 3.0 5825,1,6093,,3/28/2018 21:32,,1,87,"

An exponential linear unit (as proposed by Clevert et al.) uses the function:

\begin{align} \text{ELU}_\alpha(x) = \begin{cases} \alpha(e^x - 1), &\text{if } x < 0\\ x, &\text{if } x \geq 0\\ \end{cases} \end{align}

Here's a picture.

Now, this is continuous at $x=0$, which is great. It's differentiable there too if $\alpha=1$, which is the value that the paper used to test ELU units.

But if $\alpha \neq 1$ (as in the above diagram), then it's no longer differentiable at $x=0$. It has a kink in it, which seems weird to me. Having your function be differentiable at all points seems advantageous. Further, it seems that, if you just make the linear portion evaluate to $\alpha x$ rather than $x$, it would be differentiable there.
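
As a quick check of that claim, here is a small numerical sketch (plain Python; $\alpha = 0.5$ is an arbitrary choice) comparing the one-sided derivatives at $x = 0$ for the standard ELU and for the variant whose linear part is $\alpha x$:

    import math

    def elu(x, alpha):                       # standard ELU
        return alpha * (math.exp(x) - 1) if x < 0 else x

    def elu_scaled(x, alpha):                # variant with alpha * x on the right
        return alpha * (math.exp(x) - 1) if x < 0 else alpha * x

    h, alpha = 1e-6, 0.5
    print((elu(0, alpha) - elu(-h, alpha)) / h,
          (elu(h, alpha) - elu(0, alpha)) / h)                # ~0.5 vs 1.0 -> kink at 0
    print((elu_scaled(0, alpha) - elu_scaled(-h, alpha)) / h,
          (elu_scaled(h, alpha) - elu_scaled(0, alpha)) / h)  # ~0.5 vs 0.5 -> smooth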

Is there a reason that the function wasn't defined to do this? Or did they not bother, because $\alpha = 1$ is definitely the hyperparameter to use?

",14628,,2444,,6/8/2021 1:35,6/8/2021 1:35,Why don't ELUs multiply the linear portion by $\alpha$?,,1,0,,,,CC BY-SA 4.0 5835,1,,,3/29/2018 9:57,,9,3112,"

The Q function uses the (current and future) states to determine the action that gets the highest reward.

However, in a stochastic environment, the current action (at the current state) does not determine the next state.

How does Q learning handle this? Is the Q function only used during the training process, where the future states are known? And is the Q function still used afterwards, if that is the case?

",14638,,2444,,2/22/2019 17:24,7/14/2019 10:52,How does Q-learning work in stochastic environments?,,1,3,,,,CC BY-SA 4.0 5836,1,,,3/29/2018 19:52,,1,1355,"

What is the output value of the network for these inputs respectively, and why? (Linear activation function is fine.)

[2, 3], [-1, 2], [1, 0], [3, 4]

My main question is how you take the 'backwards' directed paths into account.

",14650,,,,,6/28/2018 22:17,How to calculate the output of this neural network?,,1,0,,,,CC BY-SA 3.0 5837,1,,,3/30/2018 12:28,,2,373,"

Let us suppose I have an NxN matrix and I want to classify each entry of the matrix into M classes using a fuzzy classifier. The output of my classifier will be, for each matrix entry, an M-dimensional vector containing the probabilities for the entry to be classified in each class. A naive way to build a confusion matrix would be to select the highest probability in each vector and use it as a crisp classification. However, I would like to take into account all the probabilities associated with each entry and compute a ""fuzzy"" confusion matrix. Is this possible?

",14661,,,,,3/30/2018 12:28,Fuzzy confusion matrix for fuzzy classifier,,0,0,,,,CC BY-SA 3.0 5838,1,7104,,3/30/2018 13:12,,3,566,"

How can I train a neural network to recognize sub-sequences in a sequence flow?

For example: Given the sequence 111100002222 as an input sample from a stream, the neural network would recognize that 1111, 0000, 2222 are subsequences (so 111100 would not be a valid subsequence), and so on for ~50 to 100 different subsequences.

There is no particular order in which the subsequence would appear in the flow. No network architecture restriction. Subsequences are of variable length.

General concepts, ideas, and theory are welcome.

",13038,,5210,,4/2/2018 16:57,7/10/2018 13:38,Ideas on how to make a neural net learn how to split sequence into sub sequences,,5,7,0,,,CC BY-SA 3.0 5839,1,5849,,3/30/2018 18:29,,5,2853,"

I've recently read the paper Evolving Neural Networks through Augmenting Topologies which introduces NEAT. I am now trying to prototype it myself in JavaScript. However, I stumbled across a few questions I can't answer.

  1. What is the definition of ""structural innovation"", and how do I store these so I can check if an innovation has already happened before?

    However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number

  2. Is there a reason for storing the type of a node (input, hidden, output)?

  3. In the original paper, only connections have an innovation number, but in other sources, nodes do as well. Is this necessary for the crossover? (This has already been asked here.)

  4. How could I limit the mutation functions to not add recurrent connections?

",14625,,2444,,7/7/2019 19:39,10/3/2019 5:50,Several questions regarding the NEAT algorithm,,1,1,,9/9/2020 23:34,,CC BY-SA 4.0 5840,1,,,3/30/2018 19:07,,4,639,"

There are problems (e.g. this one or this other one) that could potentially be solved easily using traditional algorithmic techniques. I think that training a neural network (or any other machine learning model) for such sorts of problems will be more time consuming, resource-intensive, and pointless.

If I want to solve a problem, how to decide whether it is better to solve algorithmically or by using NN/ML techniques? What are the pros and cons? How can this be done in a systematic way? And if I have to answer someone why I chose a particular domain, how should I answer?

Example problems are appreciated.

",,user9947,2444,,5/29/2020 22:45,5/29/2020 22:45,How to decide whether a problem needs to be solved algorithmically or with machine learning techniques?,,4,0,,,,CC BY-SA 4.0 5841,2,,5836,3/30/2018 19:18,,1,,"

The neural network in the image is a "Recurrent Neural Network" (RNN). Because of the connection leading backward from h10 to h01, h10 has to be a "memory node" (mn), meaning it can store its value from the previous input. The basic functionality of an RNN can be seen in this animation:



Your example:

In the beginning, the storage of the mn is initialized with a value, probably 0.
Now the first input is fed into network:

  • i0 = 2
  • i1 = 3
  • h00 = (i0 * 0.4) = 0.8
  • h01 = (i1 * -0.9) + ("the stored value of h10" * 1.2) = -2.7
    ("the stored value of h10" in the first run is 0.)
  • h10 = (h00 * 0.85) + (h01 * -0.2) = 1.22
  • out = (h10 * 0.3) + (h01 * 0.1) = 0.096

Now you can feed the next input through the network, using 1.22 (the value h10 just produced) as "the stored value of h10", and so on. You can also add an activation function as you would for any other NN.
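
Here is a tiny Python sketch of the same forward pass (linear activations, weights taken from the figure above):

    # Minimal sketch of the recurrent forward pass described above.
    def step(i0, i1, stored_h10):
        h00 = i0 * 0.4
        h01 = i1 * -0.9 + stored_h10 * 1.2
        h10 = h00 * 0.85 + h01 * -0.2
        out = h10 * 0.3 + h01 * 0.1
        return out, h10                      # h10 is stored for the next time step

    stored = 0.0
    for i0, i1 in [(2, 3), (-1, 2), (1, 0), (3, 4)]:
        out, stored = step(i0, i1, stored)
        print(out)                           # first output: 0.096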

",14625,,-1,,6/17/2020 9:57,3/30/2018 21:26,,,,1,,,,CC BY-SA 3.0 5843,2,,3899,3/31/2018 5:05,,2,,"

Any AI algorithm depends on the environment, and available actuators and sensors. In our case, the environment is a road, street, etc. The primary actuator includes wheels (or legs) of the robot. Sensors include a camera, sonar system, etc.
A simple Model-based reflex algorithm can work in your case:

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
    persistent: state, agents' current conception of the world state
                model, description of how the next state depends on the current state and action
                rules, a set of condition-action rules
                action, most recent action, initially none
    state <-- UPDATE-STATE(state, action, percept, model)
    rule <-- RULE-MATCH(state, rules)
    action <-- rule.ACTION
    return action

Most of the terms and functions are self-explanatory and I will try to explain the important points. This implementation helps the robot to keep track of the external environment by maintaining an internal state that depends on the percept history. Updating the internal state requires knowing how the environment works without our agent in it. For example, cars stop at a red light and start moving on the green signal. Another thing that is required is how the actions of our agent will affect the world. Since your robot is only trying to cross the road, there are not many cases here. Simple things include: stopping the motors will stop the bot and starting them will move it forward.

The algorithm above shows how the current percept combined with the old internal state result in generating the updated description of the world, based on agent's model of how the world works. Thus, agent's model of how the world works is the most important part. UPDATE-STATE is responsible for creating new internal state description. The actual implementation of this will depend on the environment and technology that is being used.

The above algorithm has been taken from the book Artificial Intelligence: A Modern Approach, for agents working in partially observable environments. The implementation of many functions also depends on how complex you want your robot to be. For example, we haven't considered the case of looking up while crossing the road. It is possible that something might be falling from the sky; highly improbable, but possible. A list like this never ends in real-life environments.
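
As an illustration only, here is a minimal Python sketch of such an agent loop for the road-crossing robot; the percepts, the rules and the world model are made-up placeholders:

    # Illustrative sketch of a model-based reflex agent for crossing a road.
    state = {'light': 'unknown', 'car_close': True}
    last_action = None

    def update_state(state, action, percept):
        # Model of how the world works: the signal and nearby traffic are read
        # from the sensors; a fuller model would also track our own position.
        state['light'] = percept['light']
        state['car_close'] = percept['car_close']
        return state

    def rule_match(state):
        if state['light'] == 'green' and not state['car_close']:
            return 'move_forward'
        return 'stop'

    def agent(percept):
        global state, last_action
        state = update_state(state, last_action, percept)
        last_action = rule_match(state)
        return last_action

    print(agent({'light': 'green', 'car_close': False}))  # move_forward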

",3005,,,,,3/31/2018 5:05,,,,0,,,,CC BY-SA 3.0 5846,2,,4987,3/31/2018 15:06,,2,,"

The answers previously given are correct for AI which can indeed process more information with more computational power. However, actual reasoning ability like humans have is not defined by Church-Turing. AIXI has nothing to do with human reasoning. A pretty good clue to this fact is that AIXI has been around since 2005 and to date there are no machines based on it that have human-level reasoning. For example, an interesting topic in AI is natural language processing (NLP). I can speak into my Android phone and it will transcribe my speech into text. It seems like an amazing advance. However, this is what a human would do if they heard a foreign language and then did a phonetic transcription of what they heard. Then they looked up a phonetic chart to match the sounds with words. This would take place without any actual understanding of what they were hearing. This is how it works on my phone, much like Searle's Chinese Room.

Humans are quite different because they actually understand words. The equivalent to this in AI would be natural language understanding (NLU). No AI today has NLU and no theory within AI explains how to construct it. There isn't any research on AI NLU because there is no starting point. A fact that most AI enthusiasts don't like to admit is that even the smartest AI systems are routinely outclassed by rats and even six month old babies in terms of comprehension. AI systems have no comprehension or understanding and without this they have no actual reasoning ability. Human-level comprehension falls under a completely different theory from the computational derivatives of Church-Turing.

Can you make a human-level machine agent smarter by giving it more computational power? No, because you'll run into all sorts of problems which would take a few book chapters to explain. There are enhancements you can make but these have limits. If you go by a standard deviation 15 chart for IQ like Wechsler or the 5th edition of Stanford-Binet, the chances of having an IQ of 195 is 1 out of 8 billion. So, this roughly sets the upper bound of human ability. We could probably see machine agents with an IQ of 240 but not 500 or 1,000. I do understand the confusion concerning computation since exhaustive routines in AI are time limited. For example, our dim-witted chess programs play by laborious trial and error. They don't actually get smarter with more computational power, they are just able to eliminate bad moves faster. Let me give a human example. Let's say that I could do 5 math problems of a given complexity per hour using pencil and paper. So, I add a slide rule and my rate changes to 10 problems per hour. Then I switch to a calculator and my rate increases to 20. Let's say I then start using a spreadsheet and I hit 30 per hour. I am not actually 6x smarter than I was when I used pencil and paper.

So, to answer the question, it is not possible to continuously increase intelligence even with unlimited computational power. However, it should be possible for machine intelligence to exceed human intelligence. One final thing that I should mention though is that this type of theory is quite good at organizing knowledge in a way that current big data methods do not. So, it is probable that the same theory that would allow a machine IQ of 240 would also provide enough assistance to a human to function at the same level.

",12118,,,,,3/31/2018 15:06,,,,0,,,,CC BY-SA 3.0 5849,2,,5839,3/31/2018 17:51,,3,,"
  1. What is the definition of ""structural innovation"", and how do I store these so I can check if an innovation has already happened before?

Structural innovation is anything added that changes the topology of the network. So a structural innovation is any added connection or added node. I don't want to get too much into the implementation, but something similar to a global dictionary variable will work. When adding a connection (or node) between two nodes, we check if a connection (or node) has ever been placed between these two nodes before. If it has, then we have already stored an innovation number for the structure this connection (or node) is identical to, and we set its innovation number to that. We identify each node by its number.

  1. Is there a reason for storing the type of a node (input, hidden, output)?

Yes, there are many structural innovations that we do not want to happen. An innovation between two input nodes or an innovation between two output nodes are two examples of structural innovations we don't want.

  1. In the original paper, only connections have an innovation number, but in other sources, nodes do as well. Is this necessary for crossover? (This has already been asked here.)

Although I am unsure if it is explicitly stated, it is implied that nodes have an identifying number. You can see this in Figure 2 and 3 of the paper where each node is labeled with a number.

  1. How could I limit the mutation functions to not add backpropagation connections?

By backpropagation connections I assume you mean recurrent connections (correct me if I am wrong). You can prevent recurrent connections in many ways, but one way is to maintain a partially ordered set. When adding a connection between two nodes you check to see the order of those two nodes in the partially ordered set. The one that comes first will be the node from which the connection starts, while the second will be the node at which the connection ends.

As a note, we can prevent recurrent connections but often times allowing them provides opportunities to find a better solution (as @PaulG comments)
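
For illustration, here is a small Python sketch of the two bookkeeping ideas mentioned above (a global innovation table keyed by node pairs, and an ordering check that forbids recurrent connections); all names and numbers are made up:

    # Identical structural innovations in a generation get the same number.
    innovations = {}            # (from_node, to_node) -> innovation number
    next_innovation = 0

    def innovation_number(from_node, to_node):
        global next_innovation
        key = (from_node, to_node)
        if key not in innovations:
            innovations[key] = next_innovation
            next_innovation += 1
        return innovations[key]

    # Preventing recurrent connections with an ordering of the nodes
    # (e.g. depth from the inputs): only allow connections that go forward.
    def allowed_connection(order, from_node, to_node):
        return order[from_node] < order[to_node]

    order = {'in1': 0, 'in2': 0, 'hidden3': 1, 'out4': 2}    # made-up example
    print(innovation_number('in1', 'hidden3'))               # 0
    print(innovation_number('in1', 'hidden3'))               # still 0
    print(allowed_connection(order, 'out4', 'hidden3'))      # False (recurrent)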

",13088,,13088,,10/3/2019 5:50,10/3/2019 5:50,,,,4,,,,CC BY-SA 4.0 5851,2,,5840,3/31/2018 19:56,,1,,"

When we apply supervised learning to a problem, we are already systematizing the approach. A human has decided that a function exists (mapping from inputs to unique output) and that the offered features are the only ones that need be considered. The learning then goes ahead to find the best solution given those constraints. Unsupervised learning is a bit more general, searching for associations or relations that might not necessarily be functions. A neural net is not yet capable of generalizing and asking for more information, it can only become more specific unless a human intervenes.

Everything depends on the detail of the problem. If it is clear that a function must exist, then we can set a NN to find that function. Many other problems are more difficult - a company is losing money and you have data, but halfway through there was a change in CEO, so human reasoning has to be mixed in to deal with the situation. The human can modify the architecture of the NN to introduce dummy variables, but the NN cannot do this by itself.

So your answer really is ""I chose this method because of the (lack of) need for me to artificially constrain the approach to the problem.""

",4994,,,,,3/31/2018 19:56,,,,2,,,,CC BY-SA 3.0 5853,2,,2874,4/1/2018 0:34,,2,,"

To the best of my knowledge, there isn't any difference between the algorithmic methods and the NN methods. Those that can solve in polynomial time do not give a precise solution. Those that do give a precise solution do not solve in polynomial time. Of those that give a precise solution, the fastest takes $2^N$, but it blows up in terms of memory. The fastest good algorithm I believe is Concorde.

The efficient algorithms solve in polynomial time, don't blow up in terms of memory, and give a solution close to perfect, say, within 2-3%. Again, to the best of my knowledge, no NN has beaten the best algorithmic solutions, but there are suggestions that some NN solution could be faster.

",12118,,2444,,1/17/2021 21:21,1/17/2021 21:21,,,,0,,,,CC BY-SA 4.0 5854,2,,5167,4/1/2018 9:22,,1,,"

You could train a model to classify sentences into user intents. For example, an intent could be ""greeting"". Another intent could be ""help"", or any other capability that your bot is able to talk about.

To train your model, you should provide several examples for the same intent. For example, for ""greeting"", you could provide ""Hi"", ""Hello"", ""What's up"", etc...

You should also apply some preprocessing before feeding sentences into your model, such as word embeddings or semantic similarity with WordNet. These techniques allow you to transform strings into representations that capture the similarity of word meanings. The ability of your model to detect synonyms without being retrained will highly depend on this preprocessing.
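
As a minimal illustration, here is a sketch using scikit-learn (TF-IDF features instead of embeddings, and a tiny made-up training set):

    # Toy intent classifier: TF-IDF features + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = ['Hi', 'Hello', 'Good morning',
                 'Can you help me', 'I need help', 'How does this work']
    intents = ['greeting', 'greeting', 'greeting', 'help', 'help', 'help']

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(sentences, intents)
    print(model.predict(['hello there']))   # hopefully 'greeting'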

",8009,,8009,,4/1/2018 9:41,4/1/2018 9:41,,,,0,,,,CC BY-SA 3.0 5855,1,5858,,4/1/2018 11:48,,2,1336,"

I have a neural network with 2 inputs and one output, like so:

 input       | output
 ____________________
 a    | b    | c
 5.15 | 3.17 | 0.0607
 4.61 | 2.91 | 0.1551

etc.

I have 75 samples and I am using 50 for training and 25 for testing.

However, I feel that the training samples are not enough. Because I can't provide more real samples (due to time limitation), I would like to train the network using fake data:

For example, I know that the range for the a parameter is from 3 to 14, and that the b parameter is ~65% of the a parameter. I also know that c is a number between 0 and 1 and that it increases when a & b increase.

So, what I would like to do is to generate some data using the above restrictions (about 20 samples). For example, assume a = 13 , b = 8 and c= 0.95, and train the network with these samples before training it with the real samples.

Has anybody studied the effect of doing this on the neural network? Is it possible to know if the effect will be better or worse on the networks? Are there any recommendations/guidelines if I want to do this?

",14701,,2444,,12/25/2021 17:42,12/25/2021 17:43,What is the effect of training a neural network with randomly generated fake data that satisfies certain constraints?,,2,0,,,,CC BY-SA 4.0 5857,1,,,4/1/2018 13:17,,1,24,"

I created and operate a social network for meeting new people. As a result of the recent FOSTA legislation, it's imperative that I implement an automated system to prevent users from posting advertisements relating to prostitution. I do not have much experience with AI/machine learning. What library, algorithm, or method should I look into to solve this problem?

",14705,,,,,4/1/2018 13:17,Social network filtering for specific topic,,0,3,,,,CC BY-SA 3.0 5858,2,,5855,4/1/2018 17:02,,2,,"

This is not advisable. If you train your model with random data, your model is not learning anything useful, because there is no information to gain from those examples. Even worse, it may (and likely will) try to generalize from your incorrect examples, which will lessen the effect your real examples have. Essentially, you are just dampening your training set with noise.

You are moving in the right direction though. 75 examples will not be enough if your problem has any complexity at all. And unless you know some correlation between the inputs a, b and the output c, you don't want to generate data (and even if you did know some correlation, it is not always suggested to generate data). If it is impossible to get any more data, you might want to consider a statistical model, rather than a neural network.

",13088,,2444,,12/25/2021 17:43,12/25/2021 17:43,,,,5,,,,CC BY-SA 4.0 5860,2,,5855,4/1/2018 18:42,,1,,"

If you add fake samples to the training set, your neural network learns the new dataset that you just made; your fake samples are estimations, so you are adding noise to your training set.

You can use the leave-one-out cross-validation technique for evaluating your model.

",10051,,,,,4/1/2018 18:42,,,,0,,,,CC BY-SA 3.0 5861,1,,,4/1/2018 19:50,,5,1970,"

Deep networks notoriously take a long time to train.

What is the most time-consuming aspect of training them? Is it the matrix multiplications? Is it the forward pass? Is it some component of the backward pass?

",11566,,2444,,5/19/2020 19:52,10/1/2020 2:29,What is the most time-consuming part of training deep networks?,,3,1,,,,CC BY-SA 4.0 5862,1,,,4/1/2018 20:22,,7,820,"

Why aren't there neural networks that connect the output of each layer to all next layers?

For example, the output of layer 1 would be fed to the input of layers 2, 3, 4, etc. Beyond computational power considerations, wouldn't this be better than only connecting layers 1 and 2, 3 and 4, etc?

Also, wouldn't this solve the vanishing gradient problem?

If computational power is the concern, perhaps you could connect layer 1 only to the next N layers.

",14710,,2444,,3/11/2020 0:56,3/11/2020 0:59,Why aren't there neural networks that connect the output of each layer to all next layers?,,1,0,0,,,CC BY-SA 4.0 5865,2,,5862,4/2/2018 9:28,,4,,"

Actually, this already exists!

I happened to make a presentation of a paper that talks about this topic. These networks are called DenseNets, which stands for densely connected convolutional networks. Just like in your question, within a dense block, the output of each layer is given as input to all subsequent layers. Put another way, in a normal feed-forward neural network the $l$th layer is a function of the previous output $x_l = H(x_{l-1})$, while in the dense net each layer is a function of all the previous outputs $x_l = H([x_0, x_1, \dots, x_{l-1}])$.
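
A minimal sketch of this connectivity pattern, assuming TensorFlow/Keras (the input shape, number of layers and filter counts are arbitrary):

    # Dense block sketch: each layer sees the concatenation of all earlier outputs.
    from tensorflow.keras import layers, Input, Model

    x = Input(shape=(32, 32, 16))
    features = [x]
    for _ in range(4):                      # 4 layers in the block (arbitrary)
        inp = layers.Concatenate()(features) if len(features) > 1 else features[0]
        out = layers.Conv2D(12, 3, padding='same', activation='relu')(inp)
        features.append(out)

    block = Model(x, layers.Concatenate()(features))
    block.summary()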

However, since it is a CNN, there is a reduction in the size of the feature maps with each pooling layer, so to keep the dimensions constant, there is an alternation between the dense block and pooling layers.

The results are clear: not only in almost all the tests the accuracy of the dense net is greater than that of the other methods, but they do so using up to 90% fewer parameters, i.e. they have a high efficiency of parameters. Moreover, as suggested by the authors themselves, the improved accuracy can be explained by the shorter connections between the layers, which allow acting during the training phase in a deep supervision fashion, solving the vanishing gradient problem. This is similar to how it was done in other methods, but with a less complicated gradient.

If you're interested you should definitely check out their paper Densely Connected Convolutional Networks (2018).

",13199,,2444,,3/11/2020 0:59,3/11/2020 0:59,,,,2,,,,CC BY-SA 4.0 5866,2,,2762,4/2/2018 12:36,,2,,"

First I need to note that there is no prescribed/best way to choose the shape of a membership function in fuzzy systems; that's the fuzziness in it. One could argue that the best way is to ask an expert in the field where you are going to apply your fuzzy solution, but those are not always available.

With that said, fuzzy membership functions are used to describe the distribution of probabilities in the real world for the variable you are trying to use in your fuzzy controller. That means you go out into the real world, you look at the system you are trying to control, you try your best to understand how it works and reacts to different outside changes, and based on your findings you choose the shape that best fits. Or, if you want, you may call this process a heuristic choice (something like that, I do not like theory very much).

On top of that, you need to realize one important thing: the shape of the membership function does not have a big impact on the resulting controller behavior. The most influential parts are the fuzzy rules and the inference methods you use in your controller, but that is a different topic. So no matter what you choose, it will not make a big difference.

Gaussian functions are most commonly used because of the character of the world we are living in. Many people argue that everything in the world has a Gaussian distribution (everyone is entitled to their opinion). And the triangular functions are used because they are the simplest alternative that is somewhat similar to a Gaussian function.

But if you absolutely need to choose the best one for your particular problem, there are always simulation tools that exist precisely for this purpose. One of them is Matlab Simulink as you mentioned, but there are others if you don't like the price of Matlab.

My suggestion is: go with your gut, test it in a safe environment and if it works then deploy it to the real world.

",1343,,,,,4/2/2018 12:36,,,,0,,,,CC BY-SA 3.0 5867,2,,5774,4/2/2018 14:47,,1,,"

From the paper linked in the sources below:

'We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks'

This means that the downsampling is achieved simply by skipping positions with a larger stride (which is what the pooling did), while everything else works like an ordinary convolution.
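
For illustration, in Keras the swap would look roughly like this (filter counts and kernel sizes are arbitrary):

    from tensorflow.keras import layers

    # Conventional: convolution followed by max-pooling
    conv_then_pool = [layers.Conv2D(64, 3, padding='same', activation='relu'),
                      layers.MaxPooling2D(pool_size=2)]

    # All-convolutional alternative: the pooling layer is replaced by a
    # convolution with stride 2, which also halves the spatial resolution.
    strided_conv = [layers.Conv2D(64, 3, padding='same', activation='relu'),
                    layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')]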

Sources:

https://arxiv.org/pdf/1412.6806.pdf

https://stackoverflow.com/questions/44666390/max-pool-layer-vs-convolution-with-stride-performance

",11810,,,,,4/2/2018 14:47,,,,3,,,,CC BY-SA 3.0 5869,1,,,4/2/2018 18:33,,0,864,"

I have no experience with any kind of AI, but I really want to develop a system that can detect fire in images. I think I will need a labelled dataset with labels "fire" or "not fire", but I am not sure how I should proceed and which steps I need to take to develop this system.

So, what is the general procedure to create an AI system that can detect fire in images?

I heard about the Keras library, which could allow us to do this. How could I do this with Keras?

",14731,,2444,,9/12/2020 14:00,9/12/2020 14:00,What is the general procedure to create an AI system that can detect fire in images?,,1,1,,,,CC BY-SA 4.0 5870,2,,5810,4/2/2018 20:22,,3,,"

Would it be helpful to use a LSTM and reduce the input state?

I'd bet no. An LSTM is more complicated and harder to learn, while the input of 4 * 9 * 36 bits is still rather limited.

However, you may want to aggregate the information somehow, e.g., add additional bits informing about what cards were already played (no matter when). This information is redundant, but by providing it, you may save the network quite some learning.

At the same time, you may want to use symmetries (all colors but trumps are equivalent and therefore the weights should be the same).

How to handle invalid moves?

That's simple: There are no invalid moves. The network provides 36 outputs of how much it wants to play a given card. You simply take the one valid card having the greatest output value. You don't try to make the network learn what moves are valid as this is neither needed nor helpful.
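
For instance, here is a small NumPy sketch of that selection step (the 36-entry arrays match the card encoding; the values are made up):

    import numpy as np

    q_values = np.random.randn(36)          # network outputs, one per card
    valid = np.zeros(36, dtype=bool)        # mask of cards the player may legally play
    valid[[3, 17, 22]] = True               # made-up example hand

    masked = np.where(valid, q_values, -np.inf)
    action = int(np.argmax(masked))         # index of the best *valid* card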

Do you have some good ideas for improvements? (like Neural-Fictitious Self-Play or something similar)

I can't tell. But it shouldn't matter at the moment. First make your network clearly beat the random players, then you can look for more. Or start with self-play, as you'll probably want both for comparison.

Or is this the whole approach absolute nonsense?

I don't think so, but ... (see below)

I design the input state as a vector (one hot encoded) with 36 ""bits"" for every player

This doesn't sound good. Every player has 9 of 36 cards and so should be the encoding. A player doesn't know the cards of other players.

The reward is +1 for winning, -1 for losing,

In most card games I know, it matters by how much you win (unlike e.g., in Go). Even when it doesn't matter, using this information at the early learning stages is IMHO useful.

-0.1 for a invalid action and

Drop the invalid action. Just transform anything the network produces to a valid action, add no penalty (as written above).

... 0 for an action which doesn't lead to a terminal state

All actions but the last one lead to a non-terminal state. You can use some temporal difference learning, or use the fact that the game has a small fixed number of moves and reward/punish all actions taken in the whole game.

",12053,,,,,4/2/2018 20:22,,,,1,,,,CC BY-SA 3.0 5872,2,,5239,4/3/2018 2:39,,2,,"

This has been my field of research. I've seen the previous answers that suggest that we don't have sufficient computational power, but this is not entirely true.

The computational estimate for the human brain ranges from 10 petaFLOPS ($1 \times 10^{16}$) to 1 exaFLOPS ($1 \times 10^{18}$). Let's use the most conservative number. The TaihuLight can do 90 petaFLOPS which is $9 \times 10^{16}$.

We see that the human brain is perhaps 11x more powerful. So, if the computational theory of mind were true, then TaihuLight should be able to match the reasoning ability of an animal about 1/11th as intelligent.

If we look at a neural cortex list, the squirrel monkey has about 1/12th the number of neurons in its cerebral cortex as a human. With AI, we cannot match the reasoning ability of a squirrel monkey.

A dog has about 1/30th the number of neurons. With AI, we cannot match the reasoning ability of a dog.

A brown rat has about 1/500th the number of neurons. With AI, we cannot match the reasoning ability of a rat.

This gets us down to 2 petaFLOPS or 2,000 teraFLOPS. There are 67 supercomputers worldwide that should be capable of matching this.

A mouse has half the number of neurons as a brown rat. There are 190 supercomputers that should be able to match its reasoning ability.

A frog or non-schooling fish is about 1/5th of this. All of the top 500 supercomputers are 2.5x as powerful as this. Yet, none is capable of matching these animals.

What exactly is the obstacle we are facing?

The problem is that a cognitive system cannot be defined using only Church-Turing. AI should be capable of matching non-cognitive animals like arthropods, roundworms, and flatworms but not larger fish or most reptiles.

I guess I need to give more concrete examples. The NEST system has demonstrated 1 second of operation of 520 million neurons and 5.8 trillion synapses in 5.2 minutes on the 5 petaFLOPS BlueGene/Q. The current thinking is that, if they could scale the system by 200 to an exaFLOPS, then they could simulate the human cerebral cortex at the same 1/300th normal speed. This might sound reasonable, but it doesn't actually make sense.

A mouse has 1/1000th as many neurons as a human cortex. So this same system should be capable today of simulating a mouse brain at 1/60th normal speed. So, why aren't they doing it?

",12118,,2444,,1/19/2021 12:32,1/19/2021 12:32,,,,2,,,,CC BY-SA 4.0 5873,2,,5496,4/3/2018 8:32,,4,,"

It is actually the other way around: connection IDs are what is debated!

Nodes always have innovation IDs (in the image, it is just their identifying number).

Node IDs are sufficient to identify connections. If a connection links nodes 3 and 6, then it is the same as another connection linking nodes 3 and 6: no need for an extra ID. So why the extra innovation IDs then?

On the one hand, this is an implementation choice: maybe these extra IDs would allow you to create a more complex but faster code?

On the other hand, there is a debate around whether a connection between two nodes means the same thing at different times in evolution. If you have no innovation IDs, then you cannot tell apart an old connection between nodes 3 and 6 from another that was independently created later in a different genome (imagine the old connection was removed first). Is this relevant? As said, it is an open debate. Surely, it is not crucial at a basic level!

This question (and my answers) is related to this other question on Stack Overflow.

",14744,,2444,,7/7/2019 19:50,7/7/2019 19:50,,,,2,,,,CC BY-SA 4.0 5874,1,,,4/3/2018 16:33,,4,201,"

Can we detect the emotions (or feelings) of a human through conversations with an AI?

Something like a "confessional", disregarding the possibility that humans lie.

Below, I have the categories joyful, sadness, anger, fear and affection. For each category, there are several words that can be in the texts that refer to it.

  • Joy: ( cheerful, happy, confident, happy, satisfied, excited, interested, dazzled, optimistic, relieved, euphoric, drunk, witty, good )

  • Sadness: ( sad, desperate, displeased, depressed, bored, lonely, hurt, desolate, meditative, defrauded, withdrawn, pitying, concentrated, depressed, melancholic, nostalgic )

  • Anger: ( aggressive, critical, angry, hysterical, envious, grumpy, disappointed, shocked, exasperated, frustrated, arrogant, jealous, agonized, hostile, vengeful )

  • Fear: ( shy, frightened, fearful, horrified, suspicious, disbelieving, embarrassed, embarrassed, shaken, surprised, guilty, anxious, cautious, indecisive, embarrassed, modest )

  • Affection: ( loving, passionate, supportive, malicious, dazzled, glazed, homesick, embarrassed, indifferent, curious, tender, moved, hopeful )

Flow Example

Phrase 1: "I'm very happy! I just finished college."

Categorization 1: Joy (+1), Sadness (-1)

Phrase 2: "I'm sad, my mother passed away."

Categorization 2: Sadness (+1), Joy (-1)

Phrase 3: "I met a girl, but I was ashamed."

Categorization 3: Fear (+1)
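
As a minimal illustration of this keyword-scoring idea, here is a tiny Python sketch (the lexicon is just a made-up subset of the word lists above):

    # Tiny keyword-based sketch of the scoring idea; the lexicon is only illustrative.
    lexicon = {
        'joy':     {'happy', 'cheerful', 'satisfied', 'optimistic'},
        'sadness': {'sad', 'lonely', 'depressed', 'hurt'},
        'fear':    {'ashamed', 'shy', 'frightened', 'anxious'},
    }

    def score(sentence):
        words = set(sentence.lower().replace(',', ' ').replace('.', ' ').split())
        return {emotion: len(words & keywords) for emotion, keywords in lexicon.items()}

    print(score("I'm sad, my mother passed away."))
    # {'joy': 0, 'sadness': 1, 'fear': 0}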

Is this a sensible way to proceed and/or improve on, or am I completely off track?

I see that there is a Google product that parses sentences like this. I do not know how it works, and I would like to recreate it the way I think it would work.

Note that this would not be the only way to categorize the phrase. This would be the first phase of the analysis. I can also identify the subject of the sentence, so in most cases we would know whether the sadness is from the author of the message or from a third party.

",7800,,2444,,12/4/2020 15:13,12/4/2020 15:13,Can we detect the emotions (or feelings) of a human through conversations with an AI?,,3,7,,,,CC BY-SA 4.0 5875,1,,,4/3/2018 16:38,,1,131,"

I'm working with acoustic data (filterbank features) and I want to build a neural network to detect claps using an LSTM (or a GRU) with a binary output (present/abscent), and I'm wondering about how I should prepare my data before feeding them to the RNN.

If I have 20 seconds of claps (separate claps separated by ~ 0.1 seconds) what is the difference between :

  1. Feeding the network a series of N claps as one example (with variable N : 1, 2, .., 10, ..) + padding with zeros to fit the longest sequence.
  2. Feeding the network multiple examples of 1 clap.

My problem is not restricted to claps but covers patterns that can be observed as separated occurrences, periodic sequence of occurrences, variable-length-period ""quasi-periodic"" sequence of occurrences, etc.

Unlike an ergodic HMM, an RNN doesn't have any loops to ""jump back"" to a previous ""acoustic state"", so what should I do with this kind of data?

",14751,,,,,4/3/2018 16:38,Best practices to classify recurring patterns using an LSTM or GRU,,0,0,,,,CC BY-SA 3.0 5883,2,,5874,4/3/2018 21:57,,1,,"

It could work using supervised learning, as long as you have the required dataset.

However, achieving a low error ratio using unsupervised learning of the human emotion spectrum would prove to be more difficult.

Ex: How would you define being in love to a neural network? Joy +1, Sadness -1?

Now, how would you define being in love with, let's say, someone you know you could never be with? Joy -1, Sadness +1, but at the same time, the mere fact that you are thinking about that person brings a Joy +1.

Human emotions are quite complex. A good start (in my humble opinion) would be to read about 'emotion-related' hormones and how they affect the brain (dopamine, serotonin, etc.).

Some emotions are really a precise mix of these hormones, probably giving you a good hint on how to 'categorize' your network.

",13038,,,,,4/3/2018 21:57,,,,1,,,,CC BY-SA 3.0 5884,2,,5193,4/4/2018 0:27,,1,,"

https://www.cs.waikato.ac.nz/ml/weka/downloading.html

Great little tool to experiment with various algorithms and compare their efficiencies.

",9413,,,,,4/4/2018 0:27,,,,0,,,,CC BY-SA 3.0 5885,1,,,4/4/2018 1:52,,4,462,"

A question about swarm intelligence as a potential method of strong general AI came up recently, and yielded some useful answers and clarifications regarding the nature of swarm intelligence. But it got me thinking about ""group intelligence"" in general.

Here organism is synonymous with algorithm, so a complex organism is an algorithm made up of component algorithms, based on a set of instructions in the form of a string.

Now consider the Portuguese man o' war, not a single animal, but a colonial organism. In this case, that means a set of animals connected for mutual benefit.

And Physalia physalis are pretty smart as a species: they've been around for a while, I'm not finding them on any endangered lists, and, based on their habitat, it looks like global warming will be a jackpot for them. And they don't even have brains.

Each component of the physalia has a narrow function; the colonial organism itself has a more generalized function, which is the set of functions necessary for maintenance and reproduction.

{Man o' War} ⊇ { {pneumatophore}, {gonophores, siphosomal nectophores, vestigial siphosomal nectophores}, {free gastrozooids, tentacled gastrozooids, gonozooids, gonopalpons}, {dactylozooids}, {gonozooids}, {gastrozooids} }

  • What types of applications qualify as ""compound intelligences""? What is the thinking on groups of neural networks comprising generally stronger or simply more generalized intelligence?

I recognize the underlying problem is ultimately complexity and that ""strong narrow AI"" is, by definition, limited, so I use ""generalized"" and omit ""strong"" because human-like and superintelligence are not conditions. Compound intelligence is defined as a colony of dependent intelligences.*

Utility software is often a form of an expert system that manages a set of functions of varying degrees of complexity. There's currently a great deal of focus on autonomous vehicles, which would seem to require sets of functions.

Links to research papers on this or related subjects would be ideal.


Portuguese Man o' War (oceana.org)

The Bugs Of The World Could Squish Us All

",1671,,2444,,4/3/2020 15:37,4/3/2020 15:37,"What types of applications qualify as ""compound intelligences""?",,3,1,,,,CC BY-SA 4.0 5888,2,,5874,4/4/2018 3:00,,2,,"

I think you are definitely on a very sensible track. No one defines right or wrong in the field of emotions. It's not hard science; it's all theories.

I have recently read a paper regarding emotions in Reinforcement Learning (RL). It briefly explains emotion from 3 perspectives: psychology, neuroscience and computer science. In particular, your way of defining emotion matches the categorical emotion theory in the psychology perspective. Other theories in the psychology perspective include componential emotion theory. You can try to implement them and find out which one works well. The paper also introduces ways to measure the level of emotions (emotion elicitation).

Here is the link for the paper I have mentioned. I am sure you will receive lots of inspiration. I have also written a summary of this paper. Take a look if the original paper is too long to read.

I don't have any concrete solution for implementing this. But the general idea is always to try to categorize abstract concepts and quantify them. Then try something, and iteratively modify and improve it. All the best!

",5767,,5767,,4/4/2018 8:53,4/4/2018 8:53,,,,1,,,,CC BY-SA 3.0 5890,1,,,4/4/2018 9:13,,5,908,"

I am trying to find out what are some good learning strategies for Deep Q-Network with opponents. Let's consider the well-known game Tic-Tac-Toe as an example:

  • How should an opponent be implemented to get good and fast improvements?
  • Is it better to play against a random player or a perfect player or should the opponent be a DQN player as well?
",14587,,2444,,1/31/2021 3:32,1/31/2021 3:32,What are good learning strategies for Deep Q-Network with opponents?,,1,0,,,,CC BY-SA 4.0 5891,1,5892,,4/4/2018 12:36,,6,4522,"

To provide a bit of context, I'm a software engineer & game enthusiast (card games, especially). The thing is I've always been interested in AI oriented to games. In college, I programmed my own Gomoku AI, so I'm a bit familiar with the basic concepts of AI applied to games and have read books & articles about Game Theory as well.

My issue comes when I try to analyze AIs for imperfect information games (Poker, Magic: The Gathering, Hearthstone, etc.). In most cases, when I found an AI for Hearthstone, it was either some sort of Monte Carlo or MinMax strategy. I honestly think that, although it might provide some decent results, it will always be quite flat and linear, since it doesn't take into account what deck the opponent is playing and almost always follows the same game plan; it will not change based on tells your opponent might give away via the cards played (hints that a human would catch).

I would like to know if using neural networks would be better than just using a raw evaluation of board state + hands + HP each turn, without taking into account learning about possible threats the opponent might have, how to deny the opponent the best plays he could make, etc.

My intuition tells me that this is way harder and far more complex.

Is that the only reason the NN method is not used? Has there been any research to prove how much efficiency edge would be between those 2 approaches?

",14769,,2444,,1/20/2021 22:55,1/20/2021 22:55,Why most imperfect information games usually use non machine learning AI?,,1,1,,,,CC BY-SA 4.0 5892,2,,5891,4/4/2018 13:35,,7,,"

A heuristic search using MCTS + minimax + alphabeta pruning is a highly efficient AI planning process. What the AI techniques of reinforcement learning (RL) plus neural networks (NNs) typically add to this is a way to establish better heuristics.

My intuition tells me that this is way harder and far more complex.

It's not actually that much more complex in concept. Replace the hand-coded heuristic with a learning engine, e.g. DQN or A3C. Train the learning engine from human expert play examples and/or from self play.

It is harder though, because there are many things that can go wrong with an NN-based estimator in a RL context. You will need to make many experiments with different hyper-parameters of the learning engine. For complex games, you may have to invest many 100s of hours of training, which you might want to compare against the end result of a similar amount of time spent refining expert heuristic systems.

For imperfect information games, you may also want to use something that can learn an internal state representation. That could be some kind of explicit belief state that you maintain like an expert system, or something that attempts to learn a good representation, such as an RNN (e.g. LSTM). This may not be necessary for a first try at an agent though, since the MCTS search will make up for some inadequacies of low accuracy heuristics.

Is that the only reason the NN method is not used?

Up until quite recently, examples of approaches using RL and NNs were far harder to find outside of academic machine learning research, and there were not any pre-written frameworks for e.g. LSTM or A3C. In the last few years, RL and NN frameworks have started to appear, making an AI self-learning approach far more approachable.

I would expect that many hobby-level coders considering game-playing AI nowadays would seriously take a look at RL and NNs in order to learn robust heuristics for their game projects. However, the ""traditional"" search-based methods still work in conjunction with these for a completed agent.

Has there been any research to prove how much efficiency edge would be between those 2 approaches?

For card games, I am not aware of any specific research, although I am just a hobbyist, yet to write any specific game engine more complex than tic-tac-toe.

For perfect information board games, the chess playing variant of AlphaZero demonstrates applicability of RL+NN self-play approach versus ""traditional"" heuristics plus search (represented by Stockfish). However, the framing of the tournament has been criticised as unfair to Stockfish, so it is not necessarily an open-and-shut case that RL is strictly better.

",1847,,,,,4/4/2018 13:35,,,,0,,,,CC BY-SA 3.0 5893,2,,5890,4/4/2018 14:05,,4,,"

In a two-player zero-sum game (if I win, you lose and vice-versa), you can have a simple and efficient solution learning from self-play.

How should an opponent be implemented to get good and fast improvements?

You don't need to think in terms of agent vs opponent, instead think in terms of coding both the players' goals into a single Q function. Score +1 if player A wins, -1 if player B wins, and zero for a draw. It is then player A's goal to maximise the score and player B's goal to minimise the score.

You can then implement and learn both player strategies in the same self-play learning session and the same Q function, using minimax. In practice that means that, where in Q learning you generally pick the maximising action in the next state to bootstrap Q values, in a minimax variant you pick the maximising or minimising action depending on whose turn it is. Otherwise the Q learning algorithm is the same as normal. I have implemented this, but not for DQN, just for tabular Q-learning - feel free to learn from, copy and/or re-use any part of that code.
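
As a rough illustration of that update (a sketch only; `legal_actions` is an assumed helper returning the valid moves in a state):

from collections import defaultdict

# Tabular minimax Q-learning update for a two-player zero-sum game.
Q = defaultdict(float)          # Q[(state, action)]
alpha, gamma = 0.1, 1.0

def bootstrap_value(state, maximising_player):
    acts = legal_actions(state)
    if not acts:                # terminal state, nothing to bootstrap
        return 0.0
    values = [Q[(state, a)] for a in acts]
    return max(values) if maximising_player else min(values)

def q_update(state, action, reward, next_state, next_is_maximising, done):
    # Player A (maximising) bootstraps with max, player B with min.
    target = reward
    if not done:
        target += gamma * bootstrap_value(next_state, next_is_maximising)
    Q[(state, action)] += alpha * (target - Q[(state, action)])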

Is it reasonable to play against a random player, a perfect player or should the opponent be a DQN player as well?

The Q learner will learn to optimise against whichever player you make it play against. Against a random player, it will not necessarily learn to play well, just well enough to defeat the randomness. It may even make deliberate mistakes - such as not blocking a winning line - knowing it has a better chance to win due to random opponent.

Playing against a perfect player is possible with tic-tac-toe (because you can construct such a player), although there might be gaps in the trained Q values - game states never seen - which mean that an imperfect opponent could actually defeat the trained agent! In practice this does not scale to more complex unsolved games, because no perfect players exist.

Another DQN player should work fine. You would end up with two agents, each specialising in playing one player's turns. This is less efficient than a single minimax-based player, but no expected problems. It may be a preferred choice for some games, especially if they are not zero-sum.

",1847,,1847,,4/4/2018 14:18,4/4/2018 14:18,,,,4,,,,CC BY-SA 3.0 5899,1,,,4/4/2018 18:54,,6,1951,"

In the search tree below, there are 11 nodes, 5 of which are leaves. There are 10 branches.

Is the average branching factor given by 10/6, or 10/11?

Are leaves included in the calculation? Intuitively, I would think not, since we are interested in nodes with branches. However, a definition given to me by my professor was "The average number of branches of all nodes in the tree", which would imply leaves are included.

",14777,,2444,,12/20/2021 23:10,12/20/2021 23:10,Are leaf nodes included in the calculation of average branching factor for search trees?,,2,0,,,,CC BY-SA 4.0 5900,2,,5885,4/5/2018 1:24,,1,,"

I did work on compound intelligence because that is the direction that Google is trying to go. I couldn't find any basis for it. In other words, having a collection of AI expert systems does not seem to provide any collective intelligence. You would also need some kind of control program that could decide which system to use. Currently, Google relies on the user to choose. If Google were able to create an independent control program, it would already be out. This does not seem to be a matter of complexity or code tweaking, but a fundamental limit of AI.

",12118,,,,,4/5/2018 1:24,,,,3,,,,CC BY-SA 3.0 5904,1,18695,,4/5/2018 8:59,,8,376,"

In a nutshell: I want to understand why a one hidden layer neural network converges to a good minimum more reliably when a larger number of hidden neurons is used. Below a more detailed explanation of my experiment:

I am working on a simple 2D XOR-like classification example to understand the effects of neural network initialization better. Here's a visualisation of the data and the desired decision boundary:

Each blob consists of 5000 data points. The minimal complexity neural network to solve this problem is a one-hidden layer network with 2 hidden neurons. Since this architecture has the minimum number of parameters possible to solve this problem (with a NN) I would naively expect that this is also the easiest to optimise. However, this is not the case.

I found that with random initialization this architecture converges around half of the time, where convergence depends on the signs of the weights. Specifically, I observed the following behaviour:

w1 = [[1,-1],[-1,1]], w2 = [1,1] --> converges
w1 = [[1,1],[1,1]],   w2 = [1,-1] --> converges
w1 = [[1,1],[1,1]],   w2 = [1,1] --> finds only linear separation
w1 = [[1,-1],[-1,1]], w2 = [1,-1] --> finds only linear separation

This makes sense to me. In the latter two cases the optimisation gets stuck in suboptimal local minima. However, when increasing the number of hidden neurons to values greater than 2, the network develops a robustness to initialisation and starts to reliably converge for random values of w1 and w2. You can still find pathological examples, but with 4 hidden neurons the chance that one ""pathway"" through the network will have non-pathological weights is larger. But what happens to the rest of the network then - is it just not used?

Does anybody understand better where this robustness comes from or perhaps can offer some literature discussing this issue?

Some more information: this occurs in all training settings/architecture configurations I have investigated. For instance, activations=Relu, final_activation=sigmoid, Optimizer=Adam, learning_rate=0.1, cost_function=cross_entropy, biases were used in both layers.

",14789,,14789,,4/5/2018 14:33,3/17/2020 16:43,Why does a one-layer hidden network get more robust to poor initialization with growing number of hidden neurons?,,2,2,,,,CC BY-SA 3.0 5906,1,5917,,4/5/2018 13:31,,3,128,"

I have recently gone about and made a simple AI, one that gives responses to an input (albeit completely irrelevant and nonsensical ones), using Synaptic.js. Unfortunately, this is not the type of text generation I am looking for. What I am looking for would be a way to get connections between words and generate text from that. (What would be preferable would be to also generate at least semi-sensible answers also.)

This is part of project Raphiel, and can be checked out in the room associated with this site. What I want to know is what layer combination would I use for text generation?

I have been told to avoid retrieval-based bots.

I have the method to send and receive messages, I just need to figure out what combination of layers would be the best.

Unless I have the numbers wrong, this will be SE's second NN chatbot.

",14723,,14723,,4/10/2018 18:32,4/10/2018 18:32,How would one go about generating *sensible* responses to chat?,,1,2,0,,,CC BY-SA 3.0 5916,1,,,4/6/2018 0:09,,4,40,"

I'm looking to perform two tasks:

  • Train a classifier to classify code as serial or parallel

  • Train a generative algorithm to generate parallel code from serial

For the first task, a simple scraper can scrape random C and C++ code from Git; however, for the second step, I would need a decently large set of examples of serial code paired with its parallel version. Any ideas or pointers for finding an existing dataset of this type, or for creating one, would be greatly appreciated.

",14809,,,,,6/26/2018 15:08,"Looking to build, compile, and/or find dataset for serial-parallelized code examples",,1,0,,,,CC BY-SA 3.0 5917,2,,5906,4/6/2018 0:38,,2,,"

This seems like a problem for the use of an encoder-decoder pair such as those seen in text summarization (see this paper by Rush et al.: https://arxiv.org/pdf/1509.00685.pdf).

You would need the following layers (a rough sketch of this stack follows the list):

  • LSTM layer to encode the given input text into an embedding

  • LSTM layer that looks over the currently generated output to encode that text into an embedding

  • A dense soft-max layer for generating words probabilistically based on the output of the two contextual LSTM encoders
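
A rough Keras-style sketch of that stack (the sizes are placeholders and this is an outline, not a tested model):

from keras.layers import Input, LSTM, Dense, Concatenate
from keras.models import Model

vocab_size, embed_dim = 5000, 128   # placeholder sizes

# Encoder over the input text (sequences of one-hot encoded tokens).
enc_in = Input(shape=(None, vocab_size))
enc_state = LSTM(embed_dim)(enc_in)

# Encoder over the output generated so far.
dec_in = Input(shape=(None, vocab_size))
dec_state = LSTM(embed_dim)(dec_in)

# Dense softmax over the vocabulary, conditioned on both encodings,
# predicts the next word of the response.
merged = Concatenate()([enc_state, dec_state])
next_word = Dense(vocab_size, activation='softmax')(merged)

model = Model([enc_in, dec_in], next_word)
model.compile(optimizer='adam', loss='categorical_crossentropy')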

Please see the following blog post by Jason Brownlee that outlines this approach and others, while giving implementation details and snippets (https://machinelearningmastery.com/encoder-decoder-models-text-summarization-keras/)!

Also note that this would require a large set of training examples of input text and reasonable responses. You might be able to scrape Reddit post responses and comments off of those for a start? Let me know if I misunderstood the question.

",14809,,,,,4/6/2018 0:38,,,,6,,,,CC BY-SA 3.0 5919,1,,,4/6/2018 6:09,,1,1525,"

When training a large neural network, how do you deal with the case where the gradients are too small to have any impact?

FYI, I have an RNN, which has multiple LSTM cells, and each cell has hundreds of neurons. Each training example has thousands of steps, so the RNN would unroll thousands of times. When I print out all the gradients, they are very small, around 1e-20 relative to the variable values. Therefore the training does not change the variable values at all.

BTW, I think this is not an issue of vanishing gradients. Note that the gradients are uniformly small from the beginning to the end.

Any suggestion to overcome this issue?

Thanks!

",14816,,,,,8/12/2018 3:01,Too small gradient on large neural network,,2,3,,,,CC BY-SA 3.0 5925,2,,2429,4/6/2018 9:24,,7,,"

Adding a bit to what Christian said. The facts are taken from the book Artificial Intelligence: A Modern Approach.

Burrhus Frederic Skinner, a psychologist and behaviourist, published his book Verbal Behaviour in 1957. His work contains a detailed account of the behaviourist approach to language learning.

Noam Chomsky later wrote a review of the book, which for some reason became more famous than the book itself. Chomsky had his own theory, described in Syntactic Structures, for this. He even mentioned that the behaviourist theory did not address the notion of creativity in language, as it did not explain how a child could understand and make up sentences that he/she has never heard before. His theory, based on syntactic models, dates back to the Indian linguist Panini (350 B.C.), an ancient Sanskrit philologist, grammarian, and revered scholar.

",3005,,2444,,12/21/2021 21:08,12/21/2021 21:08,,,,0,,,,CC BY-SA 4.0 5926,2,,5904,4/6/2018 10:17,,1,,"

You grasped a bit of the answer.

In the latter two cases the optimisation gets stuck in suboptimal local minima.

When you have only 2 dimensions, a local minimum exists. When you have more dimensions, this minimum gets harder and harder to reach, as its likelihood decreases. Intuitively, you have a lot more dimensions through which you can improve than if you only had 2 dimensions.

The problem still exists: even with 1000 neurons you could find a specific set of weights that is a local minimum. However, it just becomes much less likely.

",7496,,,,,4/6/2018 10:17,,,,6,,,,CC BY-SA 3.0 5927,1,,,4/6/2018 14:02,,1,40,"

I'm detecting objects in images. I want to detect up to 10 objects; however, I'm not sure how to deal with the situation where only one object is present.

Should I fill the remaining spaces in the label data with vectors filled with 0? E.g.:

[[xmin,ymin,xmax,ymax],[0,0,0,0]...]

Or is there any better way? Thanks for the help!

",4695,,14723,,4/10/2018 14:41,4/10/2018 14:41,FIlling space with empty bounding box,,0,0,,,,CC BY-SA 3.0 5928,1,5932,,4/6/2018 15:49,,7,285,"

I am trying to create a fixed-topology MLP from scratch (with C#), which can solve some simple problems, such as the XOR problem and MNIST classification. The network will be trained purely with genetic algorithms instead of back-propagation.

Here are the details:

  • Population size: 50
  • Activation function: sigmoid
  • Fixed topology
  • XOR: 2 inputs, 1 output. Tested with different numbers of hidden layers/nodes.
  • MNIST: $28*28=784$ inputs for all pixels, will be either ON(1) or OFF(0). 10 outputs to represent digits 0-9
  • Initial population will be given random weights between 0 and 1
  • 10 "Fittest" networks survive each iteration, and performs crossover to reproduce 40 offspring
  • For all weights, mutation occurs to add a random value between -1 to 1, with a 5% chance

With 2 hidden layers of 4 and 3 neurons respectively, XOR managed to achieve 97-99.9% accuracy in around 100 generations. Biases were not used here.

However, trying out MNIST revealed a pretty glaring issue: the 784 inputs (a large increase in the number of nodes compared to XOR), multiplied by weights and summed, result in HUGE values of 50 or even 100, way beyond the typical input range of the activation function (sigmoid).

This just renders all layers' outputs as 1 or 0.99999-something, which breaks the entire network. Also, since this makes all individuals in a population extremely similar to one another, the genetic algorithm seems to have no clue how to improve. The crossover will produce an offspring almost identical to its parents, and some lucky mutations are simply drowned out by the sheer number of other neurons!

What can be a viable solution to this?

It's my first time studying NNs, and this is really challenging.

",14833,,2444,,2/6/2021 17:38,2/6/2021 17:38,How to solve the problem of too big activations when using genetic algorithms to train neural networks?,,1,0,,,,CC BY-SA 4.0 5932,2,,5928,4/7/2018 8:23,,4,,"

Your inputs should stay in a low range. Ideally for neural networks, the inputs are normalised to mean 0, standard deviation 1. I suspect this applies equally well to GA-driven NNs as to gradient-driven ones.

Your weights should be both positive and negative.

In addition, once trained, they tend to follow a certain size distribution. It helps if you start with values within that range. This is often called Xavier or Glorot initialisation. Basically, if your number of inputs to a layer is n_inputs and number of outputs is n_outputs, you will have n_inputs * n_outputs weights, and if you initialise them with a random number generator, then you should use a multiplier something like this:

w = (rand() - 0.5) * sqrt(6/(n_inputs + n_outputs))

Note that it is an addition inside the square root; you don't take the total number of weights.

As you are using a GA, you may want to use some form of clipping or normalisation to prevent mutations from drifting too far away from these useful weights. I'm not entirely sure what would be best, as usually I would use back propagation to train a large network. Potentially a maxnorm regularisation would help - choose a value (maybe even make it adjustable via mutation) that the norm of the weight matrix in each layer should not exceed (separately for each layer, or one overall value, up to you), and if any learning step creates a network with too high a norm, scale it down. The L2 norm, sqrt(sum_of_squared_weights), is the usual choice for maxnorm regularisation, and this works well with backpropagation-based supervised learning.
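
A rough numpy sketch of these three ideas (input normalisation, Glorot-style initialisation, max-norm rescaling); the max-norm threshold is an arbitrary placeholder:

import numpy as np

def normalise_inputs(X):
    # Scale each feature to roughly mean 0, standard deviation 1.
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def glorot_uniform(n_inputs, n_outputs):
    # Standard Glorot/Xavier uniform range.
    limit = np.sqrt(6.0 / (n_inputs + n_outputs))
    return np.random.uniform(-limit, limit, size=(n_inputs, n_outputs))

def maxnorm(W, max_l2=3.0):
    # Rescale a layer's weight matrix if its L2 norm exceeds max_l2.
    norm = np.linalg.norm(W)
    return W if norm <= max_l2 else W * (max_l2 / norm)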

",1847,,,,,4/7/2018 8:23,,,,0,,,,CC BY-SA 3.0 5939,1,,,4/7/2018 17:29,,10,434,"

With the growing ability to cheaply create fake pictures, fake soundbites, and fake video, it becomes increasingly difficult to recognize what is real and what isn't. Even now we see a number of examples of applications that create fake media at little cost (see Deepfake, FaceApp, etc.).

Obviously, if these applications are used in the wrong way they could be used to tarnish another person's image. Deepfake could be used to make a person look unfaithful to their partner. Another application could be used to make it seem like a politician said something controversial.

What are some techniques that can be used to recognize and protect against artificially made media?

",13088,,2444,,9/7/2019 12:15,9/7/2019 14:00,What are some tactics for recognizing artificially made media?,,4,0,,,,CC BY-SA 4.0 5941,1,,,4/7/2018 21:06,,2,368,"

I am working on a project which maps to a variant of the path-finding problem. I am new to this area and I would be very grateful if you could give suggestions or point to libraries for relevant algorithms.

A simplified version of my problem statement is as follows-

Goal: On a 2D grid, starting from a fixed point, reach the destination in exactly N steps.

Allowed actions: 1. At every position, you have a choice of up to three moves (i.e. straight, curve left, curve right). 2. You cannot collide with the path traveled so far (just like in the snake game).

Dimension of the grid: N x N where N is between 100-1000

Scalable: Later on, the problem will be scaled to have multiple such snakes going between different pairs of points on the grid. The ultimate goal is to get ALL snakes to reach their respective destinations in a fixed number of steps without any collisions.

TL;DR: Essentially I have to find a fixed-length path on a dynamically generated directed graph. Is there a better choice than an A* / greedy heuristic? Is it worth taking a Q-learning approach?

A rudimentary one-snake version written in Python can be found here - GitHub link. Thanks in advance!

",14851,,14851,,4/7/2018 22:42,4/8/2018 8:11,Snake path finding variant : Algorithm choice,,0,1,,,,CC BY-SA 3.0 5942,1,,,4/7/2018 22:43,,6,95,"

I have two classes in the training set: one that has images with a feature and the other of images without that feature. Can there be a LOT more images with ""no feature"" so I can fit in all possible false positives?

",14731,,,,,8/27/2020 3:06,"Two data classes for a convolutional neural network, can one have a LOT more images for training than the other?",,2,0,,,,CC BY-SA 3.0 5943,1,,,4/8/2018 0:54,,0,163,"

In time series prediction, we have a stream of vectors. There are different approaches for accounting for the temporal patterns between these vectors.

There are two that I'm considering: an LSTM, or augmenting the feature space. What's the difference between the two? The most obvious difference to me is that an LSTM is more expressive and can get superior accuracy if modelled properly.

",11566,,,,,4/9/2018 16:11,Time Series: LSTM or Augmented Vector Space?,,2,0,,,,CC BY-SA 3.0 5949,1,5951,,4/8/2018 14:38,,12,17726,"

What is the fringe in the context of search algorithms?

",14862,,2444,,7/6/2019 20:15,9/26/2020 11:00,What is the fringe in the context of search algorithms?,,1,0,,,,CC BY-SA 4.0 5950,2,,5943,4/8/2018 14:47,,0,,"

I just read this in a recent Bengio paper and it's pretty obvious. He says that there are zero differences between a short-term memory and an augmented feature space. However, if you want to capture long-term dependencies without blowing up the feature space, you'd want to use an LSTM because traditional approaches can't dynamically learn what to ""remember"".

",11566,,,,,4/8/2018 14:47,,,,12,,,,CC BY-SA 3.0 5951,2,,5949,4/8/2018 15:10,,13,,"

In English, the fringe is (also) defined as the outer, marginal, or extreme part of an area, group, or sphere of activity.

In the context of AI search algorithms, the state (or search) space is usually represented as a graph, where nodes are states and the edges are the connections (or actions) between the corresponding states. If you're performing a tree (or graph) search, then the set of all nodes at the end of all visited paths is called the fringe, frontier or border.

In the picture below, the grey nodes (the lastly visited nodes of each path) form the fringe.

The video Example Route Finding by Peter Norvig also gives some intuition behind this concept.

",,user9947,2444,,7/6/2019 20:25,7/6/2019 20:25,,,,0,,,10/27/2021 22:17,CC BY-SA 4.0 5953,2,,5899,4/8/2018 18:39,,1,,"

From Wikipedia:

In computing, tree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.

Outdegree meaning: in the case of directed graphs, the number of edges going into a node is known as the in-degree of that node, and the number of edges coming out of a node is known as its out-degree.

You forgot the outdegree part. In AI we generally draw directed graphs from one state to another, and outdegree is the number of paths leaving a particular node. In your graph, the direction is not given. Also, your graph is not symmetrical, but you can still find the branching factor (with a little difficulty) of non-symmetrical directed graphs, as given here. So technically your conclusion is correct about leaf nodes not being counted (assuming they are the last state from which no other state can be reached - a dead end). Hope this helps!

",,user9947,,user9947,6/9/2018 15:06,6/9/2018 15:06,,,,0,,,,CC BY-SA 4.0 5954,2,,5942,4/8/2018 21:21,,1,,"

Your question is very general, so in this case my answer will be too:

The answer is ""sometimes"": it depends on the data.

There can be a lot more images in one class than the other, and you can still get reasonable results. It highly depends on how much data you have of the ""feature class"".

If this is the case, we say that the classes are heavily unbalanced, and you need to do ""class balancing"". You do not want to overfit on this one class, and preferably you want the feature class to be the biggest.

Another approach for CNNs is to use ""dropout"". Well, for CNNs you can go a bit further: you can remove parts of the image to generate ""new"" images. This way you prevent overfitting of the ""feature"" class, whilst generating more data.

I suspect that training on all possible false positives is impossible without overfitting the network somehow.

Hope it helps and gives you some Google pointers :)


Just FYI: in technical terms, you basically want to know whether binary CNN classification works using a heavily imbalanced dataset.

",14612,,14612,,5/31/2018 22:44,5/31/2018 22:44,,,,0,,,,CC BY-SA 4.0 5955,1,5957,,4/8/2018 21:34,,19,3030,"

I've heard multiple times that ""Neural Networks are the best approximation we have to model the human brain"", and I think it is commonly known that Neural Networks are modelled after our brain.

I strongly suspect that this model has been simplified, but how much?

How much does, say, the vanilla NN differ from what we know about the human brain? Do we even know?

",14612,,2444,,12/13/2021 14:38,12/13/2021 14:38,How are Artificial Neural Networks and the Biological Neural Networks similar and different?,,3,2,,,,CC BY-SA 3.0 5956,2,,5955,4/9/2018 2:41,,5,,"

They are not close, not anymore!

[Artificial] neural nets are vaguely inspired by the connections we have observed between the neurons of a brain. Initially, there probably was an intention to develop ANNs to approximate biological brains. However, the modern working ANNs that we see applied in various tasks are not designed to provide us with a functional model of an animal brain. As far as I know, there is no study claiming to have found something new about a biological brain by looking into the connections and weight distributions of, let's say, a CNN or RNN model.

",12853,,,,,4/9/2018 2:41,,,,0,,,,CC BY-SA 3.0 5957,2,,5955,4/9/2018 8:54,,12,,"

We all know that artificial neural networks (ANNs) are inspired by biological neural networks (BNNs), but most of them are only loosely based on them.

We can analyze the differences and similarities between ANNs and BNNs in terms of the following components.

Neurons

The following diagram illustrates a biological neuron (screenshot of an image from this book).

The following one illustrates a typical artificial neuron of an ANN (screenshot of figure 1.14 of this book).

Initialization

In the case of an ANN, the initial state and weights are assigned randomly. While for BNNs, the strengths of connections between neurons and the structure of connections don't start as random. The initial state is genetically derived and is the byproduct of evolution.

Learning

In BNN, learning comes from the interconnections between myriad neurons in the brain. These interconnections change configuration when the brain experiences new stimuli. The changes result in new connections, strengthening of existing connections, and removal of old and unused ones.

ANNs are trained from scratch usually using a fixed topology (remember topology changes in case of BNNs), although the topology of ANN can also change (for example, take a look at NEAT or some continual learning techniques), which depends on the problem being solved. The weights of an ANN are randomly initialized and adjusted via an optimization algorithm.

Number of neurons

Another difference (although this difference keeps getting smaller) is in the number of neurons in the network. A typical ANN consists of hundreds, thousands, millions, and, in some exceptional cases (e.g. GPT-3), billions of neurons. The BNN of the human brain consists of billions of neurons. This number varies from animal to animal.

Further reading

You can find more information here or here.

",3005,,2444,,1/20/2021 11:14,1/20/2021 11:14,,,,1,,,1/20/2021 0:34,CC BY-SA 4.0 5960,1,5961,,4/9/2018 11:37,,2,548,"

The problem to solve is non-linear regression of a non-linear function. My actual problem is to model the function ""find the max over many quadratic forms"": max(w.H.T * Q * w), but to get started and to learn more about neural networks, I created a toy example for a non-linear regression task, using Pytorch. The problem is that the network never learns the function in a satisfactory way, even though my model is quite large with multiple layers (see below). Or is it not large enough or too large? How can the network be improved or maybe even simplified to get a much smaller training error?

I experimented with different network architectures, but the result is never satisfactory. Usually, the error is quite small within the input interval around 0, but the network is not able to get good weights for the regions at the boundary of the interval (see plots below). The loss does not improve after a certain number of epochs. I could generate even more training data, but I have not yet completely understood how the training can be improved (tuning parameters such as batch size, amount of data, number of layers, normalizing input (output?) data, number of neurons, epochs, etc.).

My neural network has 8 layers with the following number of neurons: 1, 80, 70, 60, 40, 40, 20, 1.

For the moment, I do not care too much about overfitting, my goal is to understand, why a certain network architecture/certain hyperparameters need to be chosen. Of course, avoiding overfitting at the same time would be a bonus.

I am especially interested in using neural networks for regression tasks or as function approximators. In principle, my problem should be able to be approximated to arbitrary accuracy by a single layer neural network, according to the universal approximation theorem, isn’t this correct?

",14873,,2444,,1/1/2020 13:19,10/10/2020 16:11,Why isn't my model learning satisfactorily?,,2,0,,,,CC BY-SA 4.0 5961,2,,5960,4/9/2018 11:54,,1,,"

Neural networks learn badly with large input ranges. Scale your inputs to a smaller range e.g. -2 to 2, and convert to/from this range to represent your function interval consistently.
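
For example, a small sketch of such a mapping (the interval endpoints are placeholders for your actual function interval):

import numpy as np

def to_scaled(x, x_min, x_max):
    # Map the original interval [x_min, x_max] onto [-2, 2].
    return -2.0 + 4.0 * (x - x_min) / (x_max - x_min)

def from_scaled(s, x_min, x_max):
    # Map network-space values back to the original interval.
    return x_min + (s + 2.0) * (x_max - x_min) / 4.0

x = np.linspace(-10.0, 10.0, 5)
print(to_scaled(x, -10.0, 10.0))   # [-2. -1.  0.  1.  2.]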

",1847,,,,,4/9/2018 11:54,,,,3,,,,CC BY-SA 3.0 5964,1,6043,,4/9/2018 16:10,,6,302,"

YouTube has a huge number of videos, many of which also contain various spoken languages. This would presumably provide something like the data that a ""challenged"" baby would experience - ""challenged"" meaning a baby without arms or legs (unfortunately many people are born that way).

Would this not allow unsupervised learning in a deep learning system that has both vision and audio capabilities? The neural network would presumably learn correlations between words and images, and could perhaps even learn rudimentary language skills, all without human supervision. I believe that the individual components to do this already exist.

Has this been tried, and if not, why?

",30433,,47,,4/15/2018 0:46,4/15/2018 0:46,Has anybody tried unsupervised deep learning from youtube videos?,,3,5,,,,CC BY-SA 3.0 5965,2,,5943,4/9/2018 16:11,,1,,"

An LSTM is a neural network which learns an output y for an input x. Unlike CNNs or MLPs, it considers a hidden state h (which is influenced by previous inputs) when your next input x is fed into the network.

Augmenting the feature space is a technique that you apply before training your LSTM (to augment your data set in order to generate more data and let the LSTM generalize better to new data). In the field of image recognition, you can rotate an image by 40 degrees to generate a new one. This process is known as data augmentation. Such methods are also applicable to time series.

In summary: first, you start by augmenting your input feature space in order to improve prediction accuracy, and then you train your LSTM with the augmented training data set.

",13295,,,,,4/9/2018 16:11,,,,3,,,,CC BY-SA 3.0 5966,1,,,4/9/2018 16:18,,1,152,"

I wanted to use the visualization of the activation maximization of the filters that is described in the following keras tutorial/blog:

https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html

I'd like to know what the intention is behind the decision that filters that produce a loss <= 0 are skipped. I know that for 0 this would be reasonable, since there would be no gradient flowing then (I think), but what about negative values? And is it also reasonable to use the mean of the outputs of the filters as a loss? What if there are weights of a filter that have high negative and positive values? Would that be a problem?

",14881,,14881,,4/9/2018 18:53,4/9/2018 18:53,Questions regarding keras activation maximization visualization,,0,0,,,,CC BY-SA 3.0 5967,2,,5538,4/9/2018 16:25,,0,,"

I would model your input data as a 3d tensor (user, timestep, features) [the organization depends on which DL framework you use]. For the output data, a 3d tensor (user, timestep, result) is also appropriate.

The next step would be to train an LSTM or CNN model to predict the result (which requires a lot of data); this would be my first choice. If you have less data, try out logistic regression as suggested by the other answer.

Good luck!

",13295,,,,,4/9/2018 16:25,,,,0,,,,CC BY-SA 3.0 5969,1,,,4/9/2018 20:06,,1,43,"

Are decision trees able to be used with time-related data?

I've read that decision trees are based on matrices and that ARRAYS of input matrices can be used to factor in time; however, I can't find an example of this.

Say for example, I'm monitoring the progress of students taking exams. Each day I ask them questions related to their mental state (fatigued, positivity, ability to concentrate, expectations for coming exam, confidence, etc). I have twenty days worth of questions. Day 1 for student A may see them studying for an exam the following day, while Day 1 for student B may see them actually doing the exam. There will be a relation between student's fatigue (for example) and the results they give the following day.

The examples when provided as input to a matrix will be used to show that IF on any given day, the student has an exam, and has breakfast, and does x,y,z THAT day then the outcome will be y.

However, short of encoding ""had exam previous day"" and ""had exam two days ago"" for each day, I can't see how I can include time dependency in decision trees.

",12726,,,,,4/9/2018 20:06,How to factor time into decision trees?,,0,0,,,,CC BY-SA 3.0 5970,1,,,4/9/2018 21:02,,16,10445,"

I'm studying reinforcement learning. It seems that ""state"" and ""observation"" mean exactly the same thing. They both capture the current state of the game.

Is there a difference between the two terms? Is the observation maybe the state after the action has been taken?

",11566,,2444,,11/16/2018 20:11,11/16/2018 20:11,What is the difference between an observation and a state in reinforcement learning?,,1,1,,,,CC BY-SA 4.0 5971,2,,5970,4/9/2018 21:36,,17,,"

Sometimes observation and state overlap completely, which is convenient. However, there is no reason to expect it in all cases, and that's where interesting problems occur.

Reinforcement learning theory is based on Markov Decision Processes. This leads to a formal definition of state. Most importantly, the state must have the Markov property. This means that, for RL to work according to theory, knowing the state means that you know everything knowable that could determine the response of the environment to a specific action. Everything that remains must be purely stochastic and unknowable in principle until after the action is resolved.

Systems like deterministic or probability-driven games, and computer-controlled simulations can be designed to have easily observable states that have this property. Games with this trait are often called ""games of perfect information"", although you may have unknown information, provided it is revealed in a purely stochastic manner.

In practice, real world interactions contain far too much detail for any observation to be a true state with the Markov property. For instance, consider the inverted pendulum environment, a classic RL toy problem. A real inverted pendulum would behave differently depending on its temperature, which could vary along its length. The joint and actuators might be sticky. Rotations and movement will alter temperature and friction, etc. However, a RL agent will typically only consider current motion and position of the trolley and pendulum. In this case, the observation of 4 traits is usually good enough, and a state based on this almost has the Markov property.

There are also problems where observations are not enough to make usable state data for a RL system. The Deep Mind Atari DQN paper had examples of a couple of these. The first example is that a single frame lost data about motion. This could be addressed by taking four consecutive frames and combining them to make a single state. It could be argued that each frame is an observation, and that four observations had to be combined in order to construct a more useful state (although this could be put aside as just semantics).
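
As a small illustration of that frame-combining idea (a sketch, not the code from the paper):

import numpy as np
from collections import deque

frames = deque(maxlen=4)   # keep the last 4 observations

def observation_to_state(obs):
    # Stack the most recent observations into one state, e.g. (84, 84, 4) for Atari.
    frames.append(obs)
    while len(frames) < 4:   # pad at the start of an episode
        frames.append(obs)
    return np.stack(list(frames), axis=-1)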

The second example in Atari DQN is that the pixel observations did not include data that the game was tracking but that was not visible on screen. Games with large scrolling maps are a weakness of the Atari-playing DQN, because its state has no memory of screens other than the four used for movement. An example of such a game, where Deep Mind's player did much worse than a human player is Montezuma's Revenge, where to progress it is necessary to remember some off-screen locations.

There are ways to address knowledge that there is unobserved but relevant state in a problem. The general framework for describing the problem is Partially Observable Markov Decision Processes (POMDPs). Workable solutions include adding explicit memory or ""belief state"" to the state representation, or using a system such as RNN in order to internalise the learning of a state representation driven by a sequence of observations.

",1847,,,,,4/9/2018 21:36,,,,2,,,,CC BY-SA 3.0 5972,2,,5840,4/9/2018 21:53,,2,,"

There are two different problems described in the linked question and your question: optimization and learning.

Optimization

If you are asking about optimization (the second linked question: Search minimum value with learning machine algorithm) you can have 3 different approaches:

  • analytical approach
  • numerial methods
  • metaheuristics

As you suggest, it is usually better to try them from the first to the last one. It is common that the first approach is unfeasible for optimizing your target function, but very often you can use either mathematical optimization for some specific classes of problems (e.g. linear/quadratic programming) or iterative methods (e.g. the conjugate gradient method). Only after considering these approaches does it make sense to turn to the third class of approaches, genetic algorithms being a notable example, which is often classified as an AI approach.

Learning

If you are asking about learning, then the first linked question (Ideas on how to make a neural net learn how to split sequence into sub sequences) seems to be intended as an example. However it doesn't make clear what the problem is, as the target function seems to be obvious, so no learning is needed.

In this case it also makes sense to first try to pin down the problem mathematically and resort to machine learning if it is impossible and if you have the data (input/output examples).

",4880,,,,,4/9/2018 21:53,,,,6,,,,CC BY-SA 3.0 5973,2,,5838,4/10/2018 2:54,,0,,"

I guess supervised learning should work rather well: you'd feed the network a fixed-size substring and it'd determine whether the middle character is the first letter of a word, the last one, neither, or both.

So 2*n+1 input characters (fed, e.g., with the string ""ingsits"") should produce a 1 on the output determining whether the middle letter (here: ""s"") is the first one of a word, and a 0 on the output determining whether it's the last one (taken from ""Thekingsitsthere""). Each input character should probably be one-hot encoded.

You'd probably want to use more context characters than in my example. OTOH you can use a simple MLP with no temporal complications (a rough sketch of building such training windows is below). It'll never get perfect, as that's impossible, but it will get pretty close.
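
A rough sketch of building the sliding-window examples (all names here are hypothetical, just to illustrate the idea):

import numpy as np

ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def one_hot(ch):
    v = np.zeros(len(ALPHABET))
    v[ALPHABET.index(ch)] = 1.0
    return v

def make_example(text, word_starts, i, n=3):
    # `word_starts` is a set of indices where a new word begins in `text`.
    window = text[i - n:i + n + 1]                  # 2*n+1 characters
    x = np.concatenate([one_hot(c) for c in window])
    y_first = 1.0 if i in word_starts else 0.0      # middle char starts a word
    y_last = 1.0 if (i + 1) in word_starts else 0.0 # middle char ends a word
    return x, np.array([y_first, y_last])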

Concerning unsupervised learning I'm skeptical...

",12053,,,,,4/10/2018 2:54,,,,0,,,,CC BY-SA 3.0 5974,2,,5838,4/10/2018 3:24,,0,,"

(NOTE: I think it will be easier to do it without ANNs...)

But if you insist:

  1. convert the sequence into fixed-size vectors.
  2. push it through 2-5 1D-convolution layers with a 1-neuron dense layer at the end (sigmoid activation), and a K-points detector for getting the sequence breakage points.
  3. create a training set - to find the break-points (12, 23, 34, ...) in the sequence.
  4. train the detector with SGD to find these break-points - loss function: cross-entropy.

Then, it should learn to find the breakage points, and based on this you can easily split the sequence.
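
A rough Keras-style sketch of steps 1-4 (the layer sizes are my own guesses, not a tested recipe):

from keras.models import Sequential
from keras.layers import Conv1D, Dense

seq_len, n_features = 100, 1
model = Sequential([
    Conv1D(16, kernel_size=5, padding='same', activation='relu',
           input_shape=(seq_len, n_features)),
    Conv1D(16, kernel_size=5, padding='same', activation='relu'),
    # One sigmoid output per timestep: probability that this position
    # is a break-point in the sequence.
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd', loss='binary_crossentropy')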

",3250,,,,,4/10/2018 3:24,,,,2,,,,CC BY-SA 3.0 5975,2,,5899,4/10/2018 3:37,,1,,"

I'd say that the leaves per se count, too, but only if they're real leaves, like e.g., checkmate positions in chess.

Such a node has really no children and no further calculation is needed. Unlike with nodes which weren't expanded yet.

Note that always counting the leaves provably leads to (n-1)/n for every n-node tree!
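
To spell that out: in any tree with $n$ nodes, every node except the root has exactly one incoming branch, so there are $n - 1$ branches in total; averaging over all $n$ nodes gives $(n-1)/n$. For the tree in the question, that would be $10/11$.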

",12053,,,,,4/10/2018 3:37,,,,0,,,,CC BY-SA 3.0 5976,2,,5111,4/10/2018 6:00,,1,,"

Not a pro, but I think I know some answers to your questions.

If we train our classifier, wouldn't the prediction boxes be close to the ground truth labels as training progresses

I think that's what YOLO v1 did. According to Andrew Ng's video, the bounding boxes are introduced to handle multiple objects inside the same grid cell. And according to this post, anchor box assignment ensures that an anchor box predicts the ground truth for an object centered at its own grid center, and not a grid cell far away (like YOLO may).

what are those numbers representing anchor boxes representing?

They are just width and height (shape). In YOLO v2 they are used to compute the IoU assuming all boxes are placed at the same location (ignoring the location); you could think of it as just trying to match the shape. And it uses (1 - IoU) as the distance when applying k-means clustering.

",14897,,,,,4/10/2018 6:00,,,,0,,,,CC BY-SA 3.0 5977,2,,5838,4/10/2018 7:22,,0,,"

Another approach could be to predict the class of each element rather than the break point. Assuming that each sub-sequence belongs to a class, you can use an LSTM: input the concatenated sequences (111100002222) and let it predict the class for each element (c1,c1,c1,c1,c0,c0,c0,c0,c2,c2,c2,c2).

",13295,,,,,4/10/2018 7:22,,,,2,,,,CC BY-SA 3.0 5978,2,,5107,4/10/2018 7:46,,3,,"

The problem is in your diagram. Here are the steps to get to a 5x5 receptive field. Here is your diagram, redone slightly:

Notice that the new unit takes a weighted sum of the 9 pixels in the input, and then applies a rectified linear nonlinearity. Now, there are more of these, creating three new numbers computed from that part of the image. Each one slides over by one pixel:

We repeat this process going down three pixels as well, and then finally, we have a new 3x3 input field:

Notice that the new unit on the right now gets input from a 5x5 input field. I hope this helps!
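
To spell out the arithmetic: a single 3x3 convolution sees a 3x3 patch; stacking a second 3x3, stride-1 convolution on top adds 3 - 1 = 2 pixels on each axis, giving 3 + 2 = 5, i.e. a 5x5 receptive field. Each further 3x3, stride-1 layer grows the receptive field by another 2.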

",14901,,,,,4/10/2018 7:46,,,,0,,,,CC BY-SA 3.0 5979,1,6562,,4/10/2018 13:33,,2,812,"

I am trying to do 3D image deconvolution using a convolutional neural network. But I cannot find many well-known CNNs that perform a 3D convolution. Can anyone point out some for me?

Background: I am using PyTorch, but any language is OK. What I want to know most is the network structure. I can't find papers on this topic.

Links to research papers would be especially appreciated.

",14907,,2444,,11/1/2021 15:38,11/1/2021 15:38,Which neural network architectures are there that perform 3D convolutions?,<3d-convolution>,1,1,,,,CC BY-SA 4.0 5980,2,,5869,4/10/2018 14:22,,1,,"

Many AI libraries allow you to feed them an image, but others have to be modified to allow this.

Disregarding the information above, as it is just a side note, you can do this in many ways. Depending on the library you use, different methods give varying accuracy results.

The real questions are:

  1. How complex do you want the NN to be?
  2. Is this going to be commercial, or private?

For any of the answers above, you will have to find the best method that works for your needs. The more this matters to the world, the more complex it will have to be.

",14723,,,,,4/10/2018 14:22,,,,0,,,,CC BY-SA 3.0 5981,1,,,4/10/2018 15:34,,4,1067,"

For a school project, I would like to investigate a paper on either reinforcement learning or computer vision. I am particularly interested in DQN, RNNs, CNNs or LSTMs. I would eventually like to implement any of these. However, I also need to take into account the computing resources required to train and analyse any of these algorithms. I understand that, in computer vision, the data sets can be quite large, but I am not so sure regarding the resources needed to implement and train a typical state-of-the-art RL algorithm (like DQN).

Would a ""standard PC"" be able to run any of these algorithms decently to achieve some sort of analysis/results?

",14913,,2444,,2/15/2019 16:18,2/15/2019 16:21,What are the minimum computing resources needed to train a machine learning algorithm?,,1,3,,,,CC BY-SA 4.0 5982,1,,,4/10/2018 18:41,,4,845,"

I want to be able to input a block of text and then have it guess a string within a predefined range (i.e. a string that starts with three letters and ends with five numbers like ""XXX12345"", etc). Ideally, the string it will be guessing will be somewhere in the block of text, but sometimes it won't be.

I have been struggling with where to begin on this, or whether I am even going in the right direction by considering machine/deep learning to try to do this.

Help!

",14918,,14918,,4/11/2018 13:30,12/28/2022 4:09,Use Machine/Deep Learning to Guess a String,,4,8,,,,CC BY-SA 3.0 5984,2,,4677,4/10/2018 20:04,,1,,"

You are using two optimisers here: Stochastic Gradient Descent (SGD) and Adam (which is a more complex variant of SGD).

So the ""Stochastic"" part means that it's random.

Stochastic gradient descent works by taking a small random part of the training data, called a ""mini batch"", and backpropagating (training) on it. Doing this until the entire dataset has been processed once is often called one epoch*.

This is how gradient descent works in a nutshell: Imagine you're going down a U-shaped hill. You're pretty far down in the U-shape, and you want to go further down by jumping. You figure out what direction is ""down"" for you: and then you jump. But darn it: you jumped too far and you ended up further up on the other side of the U!

That is just a simple example. You are probably working in WAY bigger dimensions, which complicates this analogy a bit.

Anyway this results in the effect that the loss might go up from time to time when you train another epoch. If you are training a lot of epochs and the loss keeps going up, you should check the learning rate (which basically decides how big a ""jump"" is).

Hope it helps :)


*: There are other ways of defining an epoch, but it all goes in variants of this.

",14612,,14612,,5/31/2018 22:09,5/31/2018 22:09,,,,0,,,,CC BY-SA 4.0 5985,2,,5838,4/10/2018 20:32,,0,,"

How about this ?

1 - Learn all the basic building blocks of possible sub-sequence
In our words sequence example, that would correspond to phonemes.
(I'm guessing that this step can even be done using unsupervised learning.)
So in the following example : Hello Laurie, we would have learned 3 phonemes : HE, LO, RI.

2 - Learn all sub-sequences as sequences of 'building blocks'.
Using a ClockWorkRNN with timesteps of interval +1 and, let's say, 10-15 timesteps (groups), that is fed the next 'phoneme id' in the sequence, we would have a space large enough to record most words (obviously, the number of timesteps should be the size of the biggest word).
This is the sub-sequence memory RNN.
Its sole purpose is to remember sub-sequences.

Now, I'm really brainstorming here, taking a very wild guess, but what if:
After training this RNN to a satisfying error rate, we check if the output of the RNN is very different from the next input for a couple of timesteps.
In other words, we see if the neural network has been able to 'guess' the next building block of the sub-sequence.
If not, then it's a point of interest, because there are not a lot of possibilities as to why this would happen; the only one I see is:

1 - The RNN is currently receiving another word, thus making this timestep a sub-sequence 'break point'

Do you guys see any points that could prove this theory wrong?

",13038,,,,,4/10/2018 20:32,,,,0,,,,CC BY-SA 3.0 5986,2,,5840,4/10/2018 21:09,,1,,"

FWIW, with the basic, non-trivial M-game, I have no doubt that AlphaZero could tear through any human player alive in very short order. I hope that people will start experimenting with that, especially on m^n(m^n) where m > 3 and n > 2, to see how they hold up. The problem is, once you expand past n > 3 it gets very difficult for humans to play. This leads to a condition where the performance of an NN on higher-order M can only realistically be evaluated against other algorithms. In this context, it seems worthwhile to develop a general, classical algorithm that can evaluate any order of M, regardless of the efficacy of tree search in relation to the problem size, with the understanding that decision making is never presumed optimal until the game tree becomes tractable. This carries an assumption of the same general strength across all M for the classical algorithm, because the expansion of m or n does alter the core heuristics.

From a practical standpoint, as a product designed for mobile with no assumption of connectivity, it doesn't make sense to start integrating NNs until lowest-common-denominator mobile devices have sufficient resources. The issue of package size is also important in this context: the classical algorithms require a trivial amount of code and volume. Most importantly, using classical algorithms formed of sets of heuristics and parameters allows recombination of functions to produce myriad automata of varying degrees of strength. (This can be easily accomplished by altering the size of tree search algorithms, but may only be relevant in determining which heuristics perform better under tree search restrictions.)

Finally, because M-games provide an array of precise metrics, it may be worthwhile to develop a core heuristic function based on human reasoning.

",1671,,,,,4/10/2018 21:09,,,,2,,,,CC BY-SA 3.0 5987,1,,,4/10/2018 21:30,,2,67,"

I want to create a neural network and train it on some data; however, I want to be able to create a new model without retraining it from the start.

As an example, I have 1000 data points in my training data:

  1. model - trained on 0-99
  2. model - trained on 1-100
  3. model - trained on 2-101
  4. and so forth

So I'm wondering if I can use the first model to train the second model, essentially forgetting the first data point.

You can view it as a sliding window over the 1000 data points, sliding one data point to the right for each new model.

Does it make sense? Is there any easy way to solve this problem?

",14923,,,,,10/26/2022 2:00,Shifting training data,,1,9,,,,CC BY-SA 3.0 5989,2,,5919,4/11/2018 5:28,,1,,"

I changed the layer from tf.contrib.rnn.LSTMBlockCell to tf.contrib.rnn.LayerNormBasicLSTMCell. Then the gradients became large enough to influence the network.
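
For reference, this is roughly what the swap looks like; a minimal sketch assuming TensorFlow 1.x and made-up sizes, not my exact network:

import tensorflow as tf

num_units = 128
inputs = tf.placeholder(tf.float32, [None, 50, 32])   # (batch, time, features), placeholder sizes

# before: cell = tf.contrib.rnn.LSTMBlockCell(num_units)
cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units)   # layer-normalised LSTM cell
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)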

",14816,,,,,4/11/2018 5:28,,,,2,,,,CC BY-SA 3.0 5990,1,,,4/11/2018 8:06,,11,2112,"

There are two textbooks that I most love and am most afraid of in the world: Introduction to Algorithms by Cormen et al. and Artificial Intelligence: A Modern Approach by Norvig et al. I have started the ""AI: A Modern Approach"" more than once, but the book is so dense and full of theory that I get discouraged after a couple of weeks and stop.

I am looking for a similar AI book but with an equal emphasis on theory and practice. Some examples of what I am looking for:

  • The Elements of Statistical Learning by Tibshirani et al. (detailed theory)

  • An Introduction to Statistical Learning: With Applications in R by Tibshirani et al. (theory+practical)

  • Digital Image Processing by Gonzalez et al. (detailed theory)
  • Digital Image Processing Using MATLAB by Gonzalez et al. (theory+practical)
",14933,,2444,,1/16/2021 19:18,1/23/2022 17:57,"What are some alternatives to the book ""Artificial Intelligence: A Modern Approach""?",,2,1,,,,CC BY-SA 4.0 5991,1,5992,,4/11/2018 8:59,,2,1316,"

I'm currently reading this explanation of convolutional neural networks and there's a part around strides that I don't quite understand. I'm just starting with this, so I apologize if this is a really basic question. But I'm trying to develop an understanding and some of these images have thrown me off.

Specifically, in this image

The stride has been increased to 2, and it's using a 3x3 filter (represented by the red, green and blue outline squares in the first picture).

Why is the blue square below the red one and not shifted to the side of the green one ending at the edge of the 7x7 volume? Should it not move left to right then down 2 squares when it reaches the next line?

I'm not sure if the author is just trying to show the stride moving down as it goes, but I think my confusion stems from the fact that the 1 stride image example is only moving in the horizontal direction (as seen below).

Is there something fundamental I haven't grasped here?

",14935,,18758,,1/16/2022 8:52,1/16/2022 8:52,Is my understanding of how the convolution with stride 2 works in this example correct?,,1,0,,,,CC BY-SA 4.0 5992,2,,5991,4/11/2018 9:13,,2,,"

Yes, you're right, after the green one, it should also move two steps (because stride = 2) to the right once more. Note that in the $3 \times 3$ output volume picture, there's also still a white cell in the top right corner. That cell would get filled with whatever colour you choose to move to the right after the green one.

The blue one would then follow after what I described above, as the fourth square. I guess the author simply didn't feel like drawing a third square to the right of the green one, because the red + green squares already illustrate how the pattern works. The blue one was probably additionally drawn to illustrate that stride works the same way vertically as it does horizontally, e.g. blue also moves down two rows (because stride = 2).
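
To make the arithmetic concrete, here is a tiny sketch (plain Python, sizes taken from the example above) that lists the top-left corners the 3x3 filter visits on a 7x7 input with stride 2:

input_size, filter_size, stride = 7, 3, 2
positions = [(row, col)
             for row in range(0, input_size - filter_size + 1, stride)
             for col in range(0, input_size - filter_size + 1, stride)]
print(positions)        # 9 positions, i.e. a 3x3 output volume
print(len(positions))   # (7 - 3) / 2 + 1 = 3 per dimension, so 3 * 3 = 9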

See the bottom of the picture in the second answer here.

",1641,,2444,,6/22/2020 17:00,6/22/2020 17:00,,,,0,,,,CC BY-SA 4.0 5993,1,,,4/11/2018 10:08,,1,236,"

I use the sigmoid activation function for the neurons in the output layer of my multi-layer perceptron, together with the cross-entropy cost function. As far as I know, when an activation function like tanh is used in the output layer, it is necessary to divide the outputs of the output-layer neurons by their sum, as is done for softmax. Is such a thing necessary for the sigmoid activation function? If it is necessary to normalize the outputs of the neurons, does it affect the derivatives?

",10051,,10051,,4/11/2018 11:21,4/11/2018 11:21,Sigmoid output layer and Cross-Entropy cost function,,0,0,,,,CC BY-SA 3.0 5994,1,6039,,4/11/2018 10:19,,3,140,"

I have been reading quite a few papers on genetic programming and its applications, in particular chapter 10 of "Genetic Programming: An Introduction and Tutorial, with a Survey of Techniques and Applications" (Langdon, Poli, McPhee, Koza; 2008). Unfortunately, I cannot wrap my head around how one could apply genetic programming to robotics, for example, in path planning.

Can anyone explain this in the most simple manner? I know that it all depends on the fitness function.

",14863,,1671,,4/19/2021 22:34,4/19/2021 22:34,How can genetic programming be used for path planning?,,1,5,0,,,CC BY-SA 4.0 6002,1,6003,,4/11/2018 15:02,,1,52,"

I am reading the cornerstone book, "Artificial Intelligence, A Modern Approach" by Stuart Russell and Peter Norvig, and there is a passage in the book on page 98:

The complexity results depend very strongly on the assumptions made about the state space. The simplest model studied is a state space that has a single goal and is essentially a tree with reversible actions. (The 8-puzzle satisfies the first and third of these assumptions.)

What are the "assumptions" in that context?

",14862,,2444,,11/30/2020 1:38,11/30/2020 1:38,"Which ""assumptions"" made about the state space are Russell and Norvig referring to in their book?",,1,0,,,,CC BY-SA 4.0 6003,2,,6002,4/11/2018 15:07,,0,,"

The complexity results depend very strongly on the assumptions made about the state space. The simplest model studied is a state space that has a single goal and is essentially a tree with reversible actions.

The assumptions are the following:

  1. The simplest model studied is a state space

  2. that has a single goal

  3. and is essentially a tree with reversible actions.

",14723,,,,,4/11/2018 15:07,,,,0,,,,CC BY-SA 3.0 6006,1,,,4/11/2018 20:34,,2,109,"

The situation I have encountered here is that I have two inputs (for instance, an image embedding, etc.) into the first LSTM of a series of LSTMs that predicts the next word in order to generate a sentence (from the second LSTM onwards, it predicts the next word from the current input word). The length of each of the two inputs is 512. Using only the first input improves the measurement (say, for instance, perplexity) by about 3 compared to not using this input at all. Using only the second input improves the measurement by about 1 compared to no input at all. The problem is: is it possible to combine these two inputs into a model that produces an improvement of more than 3, i.e. larger than the improvement of either single-input model? If it is, how should I build a model, and what model should I build, to combine them to do so?
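
For reference, this is roughly the kind of structure I am asking about; a Keras sketch with made-up sizes, not my actual code:

from keras.layers import Input, Dense, Embedding, LSTM, TimeDistributed, concatenate
from keras.models import Model

emb1 = Input(shape=(512,))                      # e.g. the image embedding
emb2 = Input(shape=(512,))                      # the second embedding
combined = concatenate([emb1, emb2])
h0 = Dense(512, activation='tanh')(combined)    # initial hidden state of the LSTM
c0 = Dense(512, activation='tanh')(combined)    # initial cell state of the LSTM

words = Input(shape=(None,))
x = Embedding(10000, 512)(words)                # assumed vocabulary size
x = LSTM(512, return_sequences=True)(x, initial_state=[h0, c0])
out = TimeDistributed(Dense(10000, activation='softmax'))(x)

model = Model([emb1, emb2, words], out)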

",14947,,14948,,4/12/2018 2:31,4/12/2018 2:31,Combine two embeddding inputs to increase more performance in LSTM model,,0,0,,,,CC BY-SA 3.0 6007,2,,54,4/12/2018 3:12,,2,,"

We've had many discussions on what constitutes Artificial Intelligence, and my takeaway has been that decision-making is the core requirement of AI, regardless of the optimality of that decision.

In this conception, Nimatron (1939, US2215544A) might be thought of as the first proper AI, pending verification of a fabled Babbage Tic-Tac-Toe machine.

But, the question as to whether a simple switch represents the most basic form of intelligence has also been raised...

I think a distinction between these decision-making devices and earlier algorithmic implementations, such as water clocks, is that the water clocks cannot be said to make a decision in the sense of maximizing the chance of success at some goal.

",1671,,1671,,4/12/2018 3:37,4/12/2018 3:37,,,,0,,,,CC BY-SA 3.0 6009,1,,,4/12/2018 5:12,,1,500,"

Suppose I have a Boolean function that maps $N$ bits to $1$ bit. If I understand correctly, this function will have $2^{2^N}$ possible configurations of its truth table.

What is the minimum number of neurons and hidden layers I would need in order to have a multi-layer perceptron capable of learning all possible truth tables?

I found one set of lecture notes here that suggests that "checkerboard" shaped truth tables (which is something like an N-input XOR function) are hard to learn, but do they represent the worst-case scenario? The notes suggest that such tables require $3\cdot(N-1)$ neurons arranged in $2 \cdot \log_2(N)$ hidden layers.

",14955,,2444,,11/22/2021 0:51,11/22/2021 0:51,What is the minimum number of neurons and hidden layers needed to learn a Boolean function that maps $N$ bits to $1$ bit?,,1,0,,,,CC BY-SA 4.0 6017,2,,6009,4/12/2018 10:25,,1,,"

This is a very dicey question. Logic functions can be thought of as mapping multiple inputs to a single output. Now, each logic function creates its own boundary. So, if you are using a complex logical equation, it is actually very hard to approximate the underlying function. Here I am treating the input Booleans as the input features.

From practical experience: I had an n-bit (8-bit) input, and it mapped to a single-bit output. I used a 2-layer neural net and then added a final layer with a single node (to have a 0 or 1 output). I was using the sigmoid activation function along with normal back-propagation learning.

Now, I varied the number of hidden-layer nodes from 64 to 1024. The cost decreased, indicative of correct learning, but the accuracy did not change, however hard I tried (changing learning rates, using momentum, etc.). It was either giving all 1s or all 0s, even though I was using the same set for training and validation. The reasons I hypothesized for this behavior were:

  • The irrelevant input features affect the final output, even after extensive training and testing on the same set with a huge number of hidden nodes.
  • Due to the 1-and-0 nature of the input, the NN was not able to create a correct boundary, since, if it created an equivalent feature such as x^3 using its nodes, it would just output the same constant values f(0) and f(1) again and again, where a continuous input would have given a nice little curve.

So, in short, it might or might not work; you never can tell!
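
For what it's worth, here is a minimal sketch (Keras, not the exact network from my experiment) of fitting a small MLP to one 8-bit Boolean function, the parity function, which is one of the hardest 'checkerboard' cases; whether it converges depends heavily on the run:

import numpy as np
from itertools import product
from keras.models import Sequential
from keras.layers import Dense

X = np.array(list(product([0, 1], repeat=8)), dtype=np.float32)   # all 256 possible inputs
y = X.sum(axis=1) % 2                                             # parity target bit

model = Sequential()
model.add(Dense(64, activation='sigmoid', input_dim=8))
model.add(Dense(64, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=500, verbose=0)
print(model.evaluate(X, y, verbose=0))   # loss and accuracy on the same set used for training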

",,user9947,,user9947,4/12/2018 10:45,4/12/2018 10:45,,,,0,,,,CC BY-SA 3.0 6018,1,,,4/12/2018 10:25,,2,24,"

I'm writing a neural network based on the neural gas algorithm (a university assignment), and I remember the lecturer saying that, when you generate random neuron weights at the beginning, it's worth generating them multiple times and choosing the best set.

The problem: I don't know what the criterion is for choosing the best set of weights for the neurons.

",14962,,,,,4/12/2018 10:25,Multiple centroid draw,,0,1,,,,CC BY-SA 3.0 6019,2,,5964,4/12/2018 14:08,,2,,"

Yes, it is possible, and yes, it probably has been done before. Odds are, however, the person(s) who tried were disappointed with the results and forgot to tell others.

The reasons they might be disappointed could be any of the following:

  • Took too long to train
  • Even when fully trained, (or appeared to be), it did not give the expected output
  • Too sensitive to variations in the input
",14723,,14723,,4/13/2018 13:55,4/13/2018 13:55,,,,5,,,,CC BY-SA 3.0 6026,1,7477,,4/13/2018 2:25,,13,23167,"

A heuristic is admissible if it never overestimates the true cost to reach the goal node from $n$. If a heuristic is consistent, then the heuristic value of $n$ is never greater than the cost of reaching its successor, $n'$, plus the successor's heuristic value.

Why is A*, using tree or graph searches, optimal, if it uses an admissible heuristic?

",14913,,2444,,11/10/2019 16:55,5/15/2021 12:32,Why is A* optimal if the heuristic function is admissible?,,1,1,,,,CC BY-SA 4.0 6027,2,,81,4/13/2018 3:13,,2,,"

Not strictly examples of AI, but related to the greater AI project: those of us on the psychology / cognitive science side of things sure love our Bayesian modelling!

In fact there are people who believe that a theory grounded in such analysis would ultimately bring us to a unified theory of the brain and cognition!

Unfortunately, to my knowledge, these theories are not yet complete or testable in interesting ways, as they are grounded more in the philosophical end of things. Moreover, the claims that the psychologists make are rather weak: that hypothesis updating and inference are Bayesian-like (which isn't super exciting, to be honest; but my knowledge in this area is not super complete).

Alas, more work needs to be done but at least there is psychological support for the claim that cognition is Bayesian-like.

",6779,,,,,4/13/2018 3:13,,,,0,,,,CC BY-SA 3.0 6028,1,,,4/13/2018 3:14,,2,376,"

I was coding a CGAN model using Keras along with the paper (https://arxiv.org/pdf/1411.1784.pdf), and I wanted to try to match the models to exactly what the paper says. I knew the models presented in the paper would be primitive, but I just wanted to replicate them and see. For example, the generator model in the paper is described as follows:

In the generator net, a noise prior z with dimensionality 100 was drawn from a uniform distribution within the unit hypercube. Both z and y are mapped to hidden layers with Rectified Linear Unit (ReLu) activation [4, 11], with layer sizes 200 and 1000 respectively, before both being mapped to second, combined hidden ReLu layer of dimensionality 1200. We then have a final sigmoid unit layer as our output for generating the 784-dimensional MNIST samples.

So for this I had the code like this:

def build_generator(self):

       model = Sequential()

       model.add(Dense(200, input_dim=self.latent_dim))
       model.add(Activation('relu'))
       model.add(BatchNormalization(momentum=0.8))

       model.add(Dense(1000))
       model.add(Activation('relu'))
       model.add(BatchNormalization(momentum=0.8))

       model.add(Dense(1200, input_dim=self.latent_dim))
       model.add(Activation('relu'))
       model.add(BatchNormalization(momentum=0.8))

       model.add(Dropout(0.5))

       model.add(Dense(np.prod(self.img_shape), activation='sigmoid'))
       model.add(Reshape(self.img_shape))


       model.summary()

       noise = Input(shape=(self.latent_dim,))
       label = Input(shape=(1,), dtype='int32')
       label_embedding = Flatten()(Embedding(self.num_classes, self.latent_dim)(label))

       model_input = multiply([noise, label_embedding])

       img = model(model_input)

       return Model([noise, label], img)

But I still think this is not exactly what the paper means. What I understand from the paper is that the noise and labels are first fed into two different layers and then combined into one layer.
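
In Keras functional terms, this is roughly how I read that passage (a sketch of my interpretation, untested; the one-hot label size is an assumption):

from keras.layers import Input, Dense, concatenate
from keras.models import Model

noise = Input(shape=(100,))
label = Input(shape=(10,))               # assuming a one-hot encoded label here

h_z = Dense(200, activation='relu')(noise)     # z mapped to a 200-unit ReLU layer
h_y = Dense(1000, activation='relu')(label)    # y mapped to a 1000-unit ReLU layer
h = concatenate([h_z, h_y])
h = Dense(1200, activation='relu')(h)          # combined 1200-unit ReLU layer
out = Dense(784, activation='sigmoid')(h)      # 784-dimensional MNIST sample

generator = Model([noise, label], out)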

Does this mean that there should be three separate models inside the generator? Or am I mistaken in thinking that? I would like to hear any thoughts on this.

",12843,,9647,,4/15/2018 4:17,4/15/2018 4:17,Coding CGAN paper model in Keras,,0,0,,,,CC BY-SA 3.0 6029,2,,5964,4/13/2018 4:05,,2,,"

Yes! Unsupervised machine learning has absolutely been applied to YouTube videos... to recognize cats!

Here's an article about it in wired. One of the leading ML researchers was Andrew Ng.

",6252,,,,,4/13/2018 4:05,,,,2,,,,CC BY-SA 3.0 6032,1,6537,,4/13/2018 10:12,,1,79,"

Not sure if this is the correct forum, but I have been working with a large (non-image) dataset that will eventually be used to train a neural network. I have been puzzling over how to manage wide data sets. For this application, ""wide"" is maybe 10,000 or 20,000 points wide. It is not really possible to store this as a row in a conventional RDBMS (which are usually limited to several hundred columns). Is it better to use a huge CSV file, or maybe a NoSQL technology like Cassandra (the data is originally in JSON format)?

",14994,,,,,5/29/2018 0:23,How to manage high numbers of input layer data points,,1,0,,,,CC BY-SA 3.0 6038,1,13282,,4/13/2018 12:07,,3,265,"

I have used an FFNN and an LSM to perform the same task, namely, to predict the sentence ""How are you"". The LSM gave me more accurate results than the FFNN. However, the LSM did not produce a perfect prediction, and there are missing letters. More specifically, the LSM produced ""Hw are yo"" and the FFNN predicted ""Hnw brf ypu"".

What is the difference between a FFNN and a LSM, in terms of architecture and purpose?

",14723,,14723,,5/2/2019 19:01,7/11/2019 22:55,What is the difference between a feed-forward neural network and a liquid state machine?,,1,2,,,,CC BY-SA 4.0 6039,2,,5994,4/13/2018 13:21,,6,,"

Take a robot that we want to be able to move from the bottom right corner to the top left corner of a 4x4 matrix full of random holes it should avoid. With holes represented by 1s, it could look something like:

exit
\/
[0,0,0,1]
[0,1,1,0]
[0,1,1,1]
[0,0,0,0]
      /\ 
      enter

As we want it to get to an exit from a start, we have a natural fitness function: closeness to exit door in smallest number of moves.

The genetic programming approach to solving this is to create random computer programs (the second chapter of your link gives a pretty good intro to the tree-like nature of this process) and let them loose. The vast majority of these strategies will be utterly terrible, things like 'go right once' or 'go left ten times'.

Say we make 100 random programs on our first run. We first score them on how well they did according to our fitness function, which ranks the random programs that did the best. We take a set percentage of these to survive and get rid of the rest; let's say 10% survive.

We take these surviving 10% of the best-performing programs and use them to create new programs for the next generation by modifying them randomly again, but not completely. Say we randomly modify half their structure and leave the other half as is, across however many programs we want for the next generation. We now let this generation loose again, and again rank, score, take the top 10% and breed a new generation from them, and so on for n generations.

In this case, if we left the grid as is, the program would generally come up with a rule roughly like 'go left x4, go up x4', as it solves this problem in the easiest way. But if we were to, say, continuously randomise the position of the 1s in the grid during this evolutionary process, we would force the program to come up with much more generalisable rules, such as checking the cells it can move into for 1s and not moving into any space containing a 1, etc.

Thus we can build a program with a flexible strategy able to cope with different environments for our robot in terms of number/configuration of holes - much more useful than having to program it for every configuration.
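
To make the generate/score/select/mutate loop concrete, here is a minimal sketch (plain Python) that evolves fixed-length move strings for the example grid above; it is a flat genetic algorithm rather than full tree-based genetic programming, but the selection and mutation idea is the same:

import random

GRID = [[0, 0, 0, 1],
        [0, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
START, EXIT = (3, 3), (0, 0)
MOVES = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}

def fitness(program):
    # Closeness to the exit (smaller is better); stepping into a hole or off the grid ends the run.
    r, c = START
    for move in program:
        dr, dc = MOVES[move]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < 4 and 0 <= nc < 4) or GRID[nr][nc] == 1:
            break
        r, c = nr, nc
    return abs(r - EXIT[0]) + abs(c - EXIT[1])

def mutate(program):
    # Randomly modify roughly half of the moves, keep the rest as-is.
    return [m if random.random() < 0.5 else random.choice('UDLR') for m in program]

population = [[random.choice('UDLR') for _ in range(8)] for _ in range(100)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:10]                                   # keep the top 10%
    population = survivors + [mutate(random.choice(survivors)) for _ in range(90)]

best = min(population, key=fitness)
print(''.join(best), fitness(best))   # a fitness of 0 means the program reaches the exit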

Just like with regular evolution, over millions of trials of taking the top performers and modifying them slightly, these programs become very specialised and high performing, able to solve highly complex games, paths with highly complex features etc.

",14997,,14997,,4/13/2018 13:56,4/13/2018 13:56,,,,4,,,,CC BY-SA 3.0 6040,1,,,4/13/2018 16:15,,5,106,"

I have been reading a bit about networks where deep layers are able to deal with a bunch of features (be they edges, colours, whatever).

I am wondering: how can a network based on these 'specialised' layers possibly be fooled by adversarial images? Wouldn't the presence of specialised feature detectors be a barrier to this? (As in: this image of a gun does share one feature with 'turtles', but it lacks 9 others, so: no, it isn't a turtle.) Thanks!

",14999,,,,,8/21/2018 21:08,How can neural networks that extract many features be fooled by adversarial images?,,1,2,,,,CC BY-SA 3.0 6041,1,,,4/13/2018 16:15,,2,338,"

If we model the game '2048' using a max-min game tree, what is the maximal path from a start state to a terminal state? (Assume the game ends only when the board is full.)

This is one of the sub-questions that should prepare us for actually modeling the game as a max-min game tree. However, I'm failing to understand the question.

Is it actually the path to receiving 131072 as an endgame?

",15000,,2444,,3/15/2022 16:39,3/15/2022 16:39,"If we model the game ""2048"" using a max-min game tree, what is the maximal path from a start state to a terminal state?",,2,1,,,,CC BY-SA 4.0 6043,2,,5964,4/13/2018 19:41,,2,,"

The answer is essentially yes. Please have a look at what Google did around this:

Google Cloud Video Intelligence makes videos searchable, and discoverable, by extracting metadata with an easy to use REST API. You can now search every moment of every video file in your catalog. It quickly annotates videos stored in Google Cloud Storage, and helps you identify key entities (nouns) within your video; and when they occur within the video.

https://cloud.google.com/video-intelligence/

So, Google does recognize all kinds of data from a video: it classifies the whole content of it into tags.

What about Humanoid Robot Sophia?

Cameras within Sophia's eyes combined with computer algorithms allow her to see. She can follow faces, sustain eye contact, and recognize individuals. She is able to process speech and have conversations using a natural language subsystem.

https://en.wikipedia.org/wiki/Sophia_(robot)

These point in the direction of understanding (Google) and producing (Sophia) language from sounds and images. To learn to think by themselves, machines are still not ready. If you got into these two cases more, you would see that these are still quite mechanical and manual things (requiring human pre-effort).

It is said that machines are now at the phase of a toddler who can ask the names of things around her and name them. Give it some more years, and maybe the abilities will be more advanced ;)

edit:

You asked about unsupervised learning. There is a video of a talk by an MIT researcher who made experiments on text and images, and in his final notes he remarked that it would be nice to do the same with videos, actually with the same reasoning you had: to learn a language. He promised to keep that in mind with his colleagues; maybe some of them already work on that.

edit2:

An interesting research paper on the topic is at this link:

We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. [..] Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.

",11810,,11810,,4/14/2018 17:25,4/14/2018 17:25,,,,4,,,,CC BY-SA 3.0 6044,2,,111,4/13/2018 20:41,,5,,"

They shouldn't. People should.

People cannot put the responsibilities of ethical decisions into the hands of computers. It is our responsibility as computer scientists/AI experts to program decisions for computers to make. Will human casualties still exist from this? Of course, they will--- people are not perfect and neither are programs.

There is an excellent in-depth debate on this topic here. I particularly like Yann LeCun's argument regarding the parallel ethical dilemma of testing potentially lethal drugs on patients. Similar to self-driving cars, both can be lethal while having good intentions of saving more people in the long run.

",6252,,6252,,4/14/2018 0:53,4/14/2018 0:53,,,,0,,,,CC BY-SA 3.0 6046,1,,,4/14/2018 2:34,,5,217,"

Warning: This question takes us into VALIS territory, but I wouldn't underestimate the profundity of that particular philosopher.

There is a non-AI definition of intelligence which is simply ""information"" (see definition 2.3). If that information is active, in terms of utilization, I have to wonder if it might qualify as a form of algorithmic intelligence, regardless of the qualities of the information.

What I'm getting at is that fields such as recreational mathematics often produce techniques and solutions that don't have immediate, real world applications. But there's an adage that pure math tends to find uses.

So you might have algorithms applied to problems outside of the problems from which they originated, or that couldn't initially be implemented in a computing context. (Minimax in 1928 might be an example.)

Goal orientation has been widely understood as a fundamental aspect of AI, but in the case of an algorithm designed for one problem that is subsequently applied to a different problem, the goal may simply be a function of the algorithm's structure. (To understand the goal of minimax precisely, you read the algorithm.)

If you regard this form of information as intelligence, then intelligence can be general, regardless of strength in relation to a given problem.

  • Can we consider this form of codification of information to be algorithmic intelligence?

And, just for fun, if a string that encodes a cutting-edge AI is not being processed, does it still qualify as artificial intelligence?

",1671,,1671,,4/17/2018 2:38,6/24/2018 11:38,Is a mathematical formula a form of intelligence?,,0,3,,,,CC BY-SA 3.0 6047,1,,,4/14/2018 3:25,,1,1755,"

I have a rather basic question about YOLO for bounding box detection.

My understanding is that it effectively associates each anchor box with an 8-dimensional output.

During testing, does YOLO take each anchor box and classify on it alone? What happens if the object is big and spans over several anchor boxes (e.g., covering 70% of the image)? How can YOLO classify and detect objects spanning over many anchor boxes?

",13068,,2444,,1/28/2021 23:54,1/28/2021 23:54,Can YOLO detect large objects?,,1,2,,,,CC BY-SA 4.0 6048,2,,5169,4/14/2018 8:23,,0,,"

As a chess player and an AI/ML engineer, I can say yes, why not? I'm not sure why it wouldn't be fair to compare anything, as long as you give each side its just due and do a 'fair comparison'. Obviously, what that encompasses is extremely subjective, but there are philosophical and logical measures of fairness.

Now, speaking of the comparison, AlphaZero's and a human's learning styles are much more similar than a human's and Stockfish's. This is mainly due to the fact that humans, in some capacity, use RL, mainly in the dopaminergic neural pathways. While human behavior can certainly be modeled as an alpha-beta tree search, that is not anything like the way we make decisions.

As for the top humans, who cares? We've been worse than computers for years.

",9608,,,,,4/14/2018 8:23,,,,0,,,,CC BY-SA 3.0 6049,1,,,4/14/2018 11:41,,2,35,"

I'm trying to predict grades within a course at my university. At the moment I am manually extracting features, but I'm curious whether it's possible to somehow use my entire dataset with a deep learning approach.

I have data from throughout the course of students solving mandatory exercises. All students use a plug-in within the editor that takes a snapshot of the code base each time the student saves the project (exercise). I also have data from when the students run the debugger. All exercises include tests which determine what score the student will receive on a given exercise. The students are free to execute the tests as many times as they like while solving the exercise (the final score is given when the student presents the final result to a teaching assistant). Execution and results of these tests are also included in the data. Timestamps exist for all data. I also have the final grade of each student (which is determined 100% by the final exam).

Does anyone know of an approach to use this kind of data with a deep learning approach?

",15010,,,,,4/14/2018 11:41,Possible to use codebase snapshots as input in deep learning?,,0,0,,,,CC BY-SA 3.0 6050,1,,,4/14/2018 11:58,,2,132,"

Which specific performance evaluation metrics are used in training, validation, and testing, and why? I am thinking error metrics (RMSE, MAE, MSE) are used in validation, and that testing should use a wide variety of metrics. I don't think performance is evaluated during training, but I am not 100% sure.

Specifically, I am actually after deciding when to use (i.e. in training, validation, or testing) correlation coefficient, RMSE, MAE and others for numeric data (e.g. Willmott's Index of Agreement, Nash-Sutcliffe coefficient, etc.)

Sorry about this being broad - I have actually been asked to define it generally (i.e. not for a specific dataset). But the datasets I have been using all have numeric continuous values in supervised learning situations.

Generally, I am using performance evaluation for environmental data where I am using ANNs. I have continuous features and am predicting a continuous variable.

",15011,,2444,,10/16/2021 23:36,10/16/2021 23:36,"Which evaluation metrics should be used in training, validation and testing of a model?",,0,5,,,,CC BY-SA 4.0 6052,1,6079,,4/14/2018 22:03,,4,521,"

As I've thought about AI, and what I understand of the problems that we face in the creation of it, I've noticed a recurring pattern: we always seem to be asking ourselves, "how can we better simulate the brain?"

Why are we so fascinated with simulating it? Isn't our goal to create intelligence, not create intelligence in a specific medium? Isn't growing and sustaining living brains more in line with our goals, albeit a bit of an ethical controversy?

Why is this exchange's description: "For people interested in conceptual questions about life and challenges in a world where 'cognitive' functions can be mimicked in a purely digital environment?"

To condense these feelings in a more concise question: Why are we trying to create AI in a computer?

",15023,,2444,,12/12/2021 17:07,1/25/2023 15:10,"Why are we asking ""How can we simulate the brain?""",,7,1,,,,CC BY-SA 4.0 6053,2,,6052,4/14/2018 22:19,,0,,"

Being the OP, I have already put some thought into this question.

I think that computers are an attractive medium for simple AI because they are easily available and researchers are already familiar with them. In addition, science fiction writers of the last century were hopeful of the capabilities of computers and placed in our culture a dream of computer AI.

But I also feel that perhaps other, less explored fields would be better suited to the creation of strong AI. In particular, thinking about the nature of biology excites me. But, as I understand it, we still know so little about how biology works, let alone how to control it. But I feel this is where we should be focusing.

Researchers know that current computing hardware has limitations. GPUs are better suited than CPUs. Some CPUs have new hardware designed for AI computations. I suspect that this realization of the inadequacy of conventional hardware will continue until our hardware is nearly identical to the biology we are trying to simulate. After all, what simulation could ever be better than what it is trying to simulate?

",15023,,,,,4/14/2018 22:19,,,,0,,,,CC BY-SA 3.0 6054,1,,,4/14/2018 23:48,,1,135,"

How would you design a neural network that generates the positions of comparators in a sorting network given a set of numbers?

I've tried to modify some already implemented networks that given a set of numbers it sorts the number. My goal is, given an unsorted sequence of numbers, to generate a sorting network that will sort those numbers. I am not asking for the complete solution, just a starting point.

",15024,,2444,,12/20/2021 22:23,12/20/2021 22:23,How would you design a neural network that generates the positions of comparators in a sorting network given a set of numbers?,,0,1,,,,CC BY-SA 4.0 6055,2,,6047,4/15/2018 0:25,,4,,"

The example given is based on the YOLOv1 paper:

The last layer has a tensor of dimension 7x7x30, but the dimension of the last tensor is not 7x7x30 in every case.

Let:

  • S: the number of grid cells in the X and Y direction
  • C: the number of classes to train
  • B: the number of bounding boxes in every grid cell

The dimension of the output tensor is calculated with this formula: SxSx(5*B+C). The given example in their paper has the following values: S=7, B=2, C=20, so 7x7x(5*2+20) = 7x7x30.

With this configuration, you can detect at most 98 objects (S*S*B = 7*7*2) and at most 2 objects per grid cell (e.g. it's not possible to detect many small objects that fall into only a few grid cells).
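
As a quick check of the formula with the paper's configuration (plain Python):

S, B, C = 7, 2, 20
depth = 5 * B + C              # 5 values per box (x, y, w, h, confidence) plus the class scores
print((S, S, depth))           # (7, 7, 30), the 7x7x30 output tensor
print(S * S * B)               # 98 bounding boxes predicted in total, at most 2 per cell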

Now let's consider what each feature map of the last tensor is used for:

The data of every bounding box is stored in 5 feature maps:

  • 1 feature map for the bounding box center in the x direction
  • 1 feature map for the bounding box center in the y direction
  • 1 feature map for the height of the bounding box
  • 1 feature map for the width of the bounding box
  • 1 feature map for the ""confidence score"" of the bounding box

(confidence score = P(Object)*IoU(BoundingBox,Object))

  • P(Object): the probability that there is an object in this cell

  • IoU: Intersection over Union. A ratio between the overlapping area and the union area:

Additionally, there is a feature map for every class. In every cell, the following probability is calculated: P(Class|Object)

Long story short: If the bounding box is bigger than one grid cell, then some neurons share the same values (they share the bounding box center, the bounding box size and the confidence score, because they refer to the same bounding box).

Source: https://pjreddie.com/media/files/papers/yolo_1.pdf

Edit: no bounding boxes on the image

If there are no bounding boxes on the image, then the confidence score would be zero, because there is no overlapping area. You can configure the threshold of the confidence score. Look at this sample picture with threshold 0:

Additionally, the feature maps of the classes would be zero, because every neuron represents the probability P(Class|Object), and P(Class|Object) = P(Class and Object) / P(Object); (P(Class) = 0 -> P(Class and Object) = 0).

Is adding images without bounding boxes to the training set useful? I think it helps to lower the number of false positive matches, but that is just a guess. I have trained YOLO to detect this plant https://en.wikipedia.org/wiki/Rumex_obtusifolius and I added a lot of pictures without bounding boxes, because a low number of false positive matches was important. The result: the specificity was >99%.

Hope this helps

",9111,,9111,,4/18/2018 11:34,4/18/2018 11:34,,,,4,,,,CC BY-SA 3.0 6056,2,,6052,4/15/2018 3:10,,2,,"

I don’t think AI simulates brain functions, not even close. Do you know how the nervous system works? How neurons transmit signals with action potentials? Pathway analysis? Splicing junctions?

AI is not about simulating the brain at all. We don’t simulate biological pathways, we don’t simulate alternative splicing, and we don’t have proteins in our models.

Instead, AI is a field with tons of mathematics. You give it some data and try to extract complicated nonlinear patterns.

",6014,,,,,4/15/2018 3:10,,,,2,,,,CC BY-SA 3.0 6066,1,,,4/15/2018 12:54,,3,145,"

How can I classify a given sequence of images (video) as either moving or staying still from the perspective of the person inside the car?

Below is an example of the sequence of 12 images animated.

  1. Moving, from the point of view of the person inside the car.

  2. Staying still, from the point of view of the person inside the car.

Methods I tried to achieve this:

  1. A simple CNN (with 2d convolutions) with those 12 images (greyscaled) stacked in the channels dimension (like Deepmind's DQN). The input to the CNN is (batch_size, 200, 200, 12).

  2. A CNN with 3d convolutions. The input to the CNN is (batch_size, 12, 200, 200, 1).

  3. A CNN+LSTM (time-distributed with 2d convolutions). The input to the neural network is (batch_size, 12, 200, 200, 1).

  4. The late fusion method, which is taking 2 frames from the sequence that are some time steps apart and passing them into 2 CNNs (with same weights) separately and concatenating them in a dense layer As mentioned in this paper. This is also like CNN+LSTM without the LSTM part. The input to this net is (batch_size, 2, 200, 200, 1) -> the 2 images are first and last frames in the sequence

All the methods I tried failed to achieve my objective. I tried tuning various hyperparameters, like the learning rate, the number of filters in CNN layers, etc., but nothing worked.

All the methods had a batch_size of 8 (due to memory constraint) and all images are greyscaled. I used ReLUs for activations and softmax in the last layer. No pooling layer was used.

Any help on why my methods are failing, or any pointers to related work, would be appreciated.

",5030,,2444,,9/7/2020 22:40,9/7/2020 22:40,How can I determine whether a car in a video is moving or not?,,1,0,,,,CC BY-SA 4.0 6067,2,,6066,4/15/2018 13:23,,1,,"

CNNs are translation invariant.

You are overcomplicating the problem. The easiest thing you can do is define a region of interest (ROI) over the hood. In the first case, the car is moving and the reflections on the hood are dynamic; in the second case, they are static. Just do frame-to-frame image subtraction of the hood. If the vehicle is moving, you will have lots of 'edge energy'. If it is not moving, it will be just noise.
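
A minimal sketch of that idea, assuming OpenCV and a list of greyscale frames as NumPy arrays; the ROI coordinates and the threshold are placeholders you would tune for your footage:

import cv2
import numpy as np

def motion_energy(frames, roi=(150, 200, 0, 200)):
    # Mean absolute frame-to-frame difference inside the region of interest (y0, y1, x0, x1).
    y0, y1, x0, x1 = roi
    diffs = [cv2.absdiff(a[y0:y1, x0:x1], b[y0:y1, x0:x1]).mean()
             for a, b in zip(frames[:-1], frames[1:])]
    return np.mean(diffs)

# moving = motion_energy(frames) > threshold   # pick the threshold from a few labelled clips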

You can apply the same method to the whole image too. In the static case the image subtraction method may become messy as the clouds are moving along with vehicles and pedestrians. For this case use the image subtractions as input to your method.

Another approach is to run an image stabilization algorithm. OpenCV has one. Look at the transformation outputs (translation, rotation, scale, rigid, similarity, affine, etc.). If you can't make a simple filter on them to determine the two cases, train a classifier.

",5763,,5763,,4/15/2018 13:56,4/15/2018 13:56,,,,1,,,,CC BY-SA 3.0 6069,1,6075,,4/15/2018 16:40,,6,1977,"

In a neural network for chess (or checkers), the output is a piece or square on the board and an end position.

How would one encode this?

As far as I can see, choosing a starting square is 8x8 = 64 outputs and an ending square is 8x8 = 64 outputs. So the total number of possible moves is 64x64 = 4096 outputs, giving a probability for every possible move.

Is this correct? This seems like an awful lot of outputs!

",4199,,9647,,4/17/2018 2:11,4/17/2018 2:11,How do you encode a chess move in a neural network?,,1,0,,,,CC BY-SA 3.0 6071,1,6076,,4/15/2018 17:42,,0,89,"

If you have a game and you are training an AI there seems to be two ways to do it.

First you take the game-state and a possible move and evaluate whether this move would be good or bad:

(1) GAME_STATE + POSSIBLE_MOVE --> Good or bad?

The second is to take the game state and get probabilities of every conceivable move:

(2) GAME_STATE ---> Probabilities for each move

It seems that both models are used, e.g. in language modelling an RNN might use (2) to find the probabilities for each next word or letter, while AlphaZero might use (1). Note also that in a game like chess GAME_STATE + POSSIBLE_MOVE = NEW_GAME_STATE, whereas in some games you might not know the result of your move.

Which do you think is the best method? Which is the best way to do AI? Or some combination of the two?

",4199,,,,,4/16/2018 7:17,Which is best: evaluation of states or probability of moves?,,1,3,,,,CC BY-SA 3.0 6073,2,,6052,4/15/2018 19:44,,1,,"

There are a number of reasons why a simulated brain might be better than creating a real brain. One reason is computers can live indefinitely (kind of). Brains may not be able to live forever and there might not be a way to transfer information from one brain to another. One of the principle advantages of a computer then is that it could have more experience than any brain could have in its lifetime. Another reason is that there are a lot of things we don't know about the brain. Even if we were able to replicate the brain we would have a hard time using it in the way that we want until we fully understand it. The simulated brain doesn't have this problem. We know exactly how artificial neural networks develop, and thus there is not as much that we don't understand.

Those answers tell you why we might want a digital brain, but your question seems to also ask why study the digital brain over a biological brain? This seems to imply that we can't do both, but in fact there are many research groups doing work in areas that contribute to growing living brains (Max Planck Institute of Molecular Cell Biology and Genetic (MPI-CBG), the Medical Research Center in the UK, etc.).

",13088,,,,,4/15/2018 19:44,,,,1,,,,CC BY-SA 3.0 6075,2,,6069,4/16/2018 6:24,,5,,"

The number is 4672 from Google.

https://arxiv.org/pdf/1712.01815.pdf

A move in chess may be described in two parts: selecting the piece to move, and then
selecting among the legal moves for that piece. We represent the policy π(a|s) by a 8 × 8 × 73
stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8×8
positions identifies the square from which to “pick up” a piece.

4672 might sound like a big number, but it's nothing compared to what Google (and many other competitors) have been doing with deep learning for image analysis.
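
For intuition, the 4,672 figure follows from the move planes described in the paper (56 queen-style moves, 8 knight moves and 9 underpromotions per from-square); a quick check in Python:

queen_moves = 8 * 7          # 8 directions, up to 7 squares each
knight_moves = 8
underpromotions = 3 * 3      # 3 pawn-move directions x 3 promotion pieces (knight, bishop, rook)
planes = queen_moves + knight_moves + underpromotions   # 73
print(8 * 8 * planes)        # 4672 = one entry per from-square and move type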

",6014,,,,,4/16/2018 6:24,,,,0,,,,CC BY-SA 3.0 6076,2,,6071,4/16/2018 7:17,,1,,"

Which do you think is the best method?

As with most machine learning, each approach has its strengths and weaknesses, and there is little to offer other than a little bit of intuition:

  • Policy-based methods are strong in large or continuous action spaces, and/or where there is a simple relationship between state and optimal action. E.g. controlling a robotic arm with continuous action space.

  • Value-based methods are strong where there is a simple relationship between state and value under an optimal policy. E.g. in a maze game.

It may not always be clear which is best, in which case experimentation is required. If using neural networks, there will then be a large number of hyper-parameters on each approach, so it may be hard to come to a strict conclusion about which is better. Although you can include ""easy to find a working model"" or ""robust for a range of hyper-parameter values"" as benefits of any type of model if you wish - these are important practical benefits of any approach, developers rarely want to do 100s of experiments to tune a learning rate parameter for example.

Which is the best way to do AI? Or some combination of the two?

Actor-Critic, as seen in Asynchronous Advantage Actor-Critic. A3C and A2C (a deterministic variant of A3C) are producing current state-of-the-art results in video games. This is a combination of both approaches, where the agent maintains two related models - one, the Actor, generates a policy directly by looking at the state, and the second, the Critic, tracks the estimated value of each state. Often, these two models share some parameters - e.g. using neural networks, the initial layers may be the same for both.
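
A minimal sketch of that shared-parameter idea (Keras, with made-up state and action sizes), just to illustrate the shape of the model rather than how it is trained:

from keras.layers import Input, Dense
from keras.models import Model

state = Input(shape=(84,))                       # assumed state size
shared = Dense(128, activation='relu')(state)    # layers shared by both heads
shared = Dense(128, activation='relu')(shared)

policy = Dense(4, activation='softmax', name='actor')(shared)    # assumed 4 discrete actions
value = Dense(1, activation='linear', name='critic')(shared)     # estimated state value

model = Model(inputs=state, outputs=[policy, value])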

",1847,,,,,4/16/2018 7:17,,,,0,,,,CC BY-SA 3.0 6079,2,,6052,4/16/2018 9:28,,3,,"

Human intelligence is very general / broad in its scope. This is self-evident, and whatever AI ends up being, we'd like it to be a general problem solver as well (cf. Simon and Newell). Taking liberal interpretations of your question...

Why AI in a computer?

Computers, to the extent that we can frame problems in general as solvable computational problems, are also general problem solvers. Whether this is actually the case (can you compute meaning or feels?) is up for debate (cf. computational functionalism, hyper-computation), but it is part of the artificial intelligence project to make a claim on this statement.

Why do we think a computational framework brings us any closer to an understanding of cognition / consciousness?

Good question, and frankly there is no good answer to that, aside from ""it's the best thing we've got"".

TL;DR ""computational functionalism"", a lot of the literature in psychology and philosophy seems to converge towards an understanding of cognition as ""computational"" (as in information processing: the V1 stream in the brain processes ""early visual information"") and functional (goal directed grounded on ""meaning"", ex: ""i scratch itch because itchy"", as opposed to ""i am moving atoms"").

However the two theories don't mesh together well (cf. Chinese Room Argument, and the many other arguments in a similar flavour) despite their independent successes in the theory of mind. Why this is the case nobody quite knows...

Why not AI in something that isnt a computer?

I don't know, but to the extent that our understanding of the world is grounded in math, then it being in a computer is sufficient anyways.

Maybe there are other paradigms of understanding the world though. Fingers crossed 🙏

Why are we asking, “How can we simulate the brain?”

Because it's the best tentative understanding we have of an ""intelligent faculty"", though it should be noted that various methods in machine learning don't seem to be directly inspired by biological implementation (kNN, statistical methods, as opposed to neural nets).

Further reading: http://www.scaruffi.com/nature/mach01.html

",6779,,6779,,4/21/2018 0:07,4/21/2018 0:07,,,,0,,,,CC BY-SA 3.0 6080,1,,,4/16/2018 13:19,,1,43,"

Using a neural network the method seems to be that you end up with a probability for each possible outcome.

To predict the next frame in a monochrome movie of size 400x400 with 8 shades of gray, there seem to be 8^(160000) possibilities.

On the other hand, if you just predicted the probability for each pixel individually, you would end up with some kind of image which gets progressively blurred.

Perhaps what you want is to generate a few possibilities that are none-the-less quite sharp. In a similar way to weather prediction(?)

So how would you go about designing a neural network that reads a movie and tries to predict the next frame?

",4199,,,,,4/16/2018 13:19,Best way to predict future frame of movie or game?,,0,0,,,,CC BY-SA 3.0 6081,1,,,4/16/2018 16:49,,1,29,"

I am interested in seeing what advantages a Loop Network (a feed-forward network that takes its output as input; I think it's called an RNN, but I'm not sure) has. The only result I found was that it was extremely sensitive to context, but only the context behind it. Other than that, I did not notice any changes.

I figured this would be better for a language processing unit, or one used to make inferences based upon it.

What are the shortcomings of each? Advantages?

",14723,,,,,4/16/2018 16:49,Whats advantages does a Loop Network have over a Feed Forward Network?,,0,0,,,,CC BY-SA 3.0 6082,1,,,4/16/2018 17:00,,9,406,"

There is a popular story regarding the back-of-the-envelope calculation performed by a British physicist named G. I. Taylor. He used dimensional analysis to estimate the power released by the explosion of a nuclear bomb, simply by analyzing a picture that was released in a magazine at the time.

I believe many of you know some nice back-of-the-envelope calculations performed in machine learning (more specifically neural networks). Can you please share them?

",6252,,2444,,4/12/2019 21:32,10/25/2022 10:06,Back-of-the-envelope machine learning (specifically neural networks) calculations,,2,2,,,,CC BY-SA 4.0 6084,2,,2405,4/16/2018 21:47,,7,,"

It depends on your loss function, but you probably need to tweak it.

If you are using an update rule like loss = -log(probabilities) * reward, then your loss is high when you unexpectedly got a large reward—the policy will update to make that action more likely to realize that gain.

Conversely, if you get a negative reward with high probability, this will result in negative loss—however, in minimizing this loss, the optimizer will attempt to make this loss ""even more negative"" by making the log probability more negative (i.e. by making the probability of that action less likely)—so it kind of does what we want.

However, now improbable large negative losses are punished more than the more likely ones, when we probably want the opposite. Hence, loss = -log(1-probabilities) * reward might be more appropriate when the reward is negative.
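
A tiny numeric check of that argument (plain Python), comparing the two rules for a likely and an unlikely action when the reward is -1:

import math

reward = -1.0
for p in (0.1, 0.9):
    loss_a = -math.log(p) * reward          # -log(probabilities) * reward
    loss_b = -math.log(1.0 - p) * reward    # -log(1 - probabilities) * reward
    print(p, round(loss_a, 3), round(loss_b, 3))

# p = 0.1 -> loss_a = -2.303, loss_b = -0.105
# p = 0.9 -> loss_a = -0.105, loss_b = -2.303
# The first rule leaves far more to gain by pushing the already unlikely action down further,
# while the second rule concentrates on the likely action, which is usually what we want here.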

",15028,,15028,,10/16/2019 17:23,10/16/2019 17:23,,,,5,,,,CC BY-SA 4.0 6089,1,,,4/17/2018 14:02,,5,571,"

I am trying to understand how genetic programming can be used in the context of auto-encoders. Currently, I am going through 2 papers

  1. Training Feedforward Neural Networks Using Genetic Algorithms (a classic one)

  2. Training Deep Autoencoder via VLC-Genetic Algorithm

However, these papers don't really help me to grasp the concept of genetic programming in this specific context, maybe because I'm not very familiar with GP.

I understand that autoencoders are supposed to reconstruct the instances of the particular classes they have been trained on. If another instance that is fed in is not reconstructed as expected, then it could be called an anomaly.

But how can genetic programming be used in the context of auto-encoders? You are still required to create a neural network, but, instead of a feed-forward one, you use an autoencoder; but how exactly?

I would appreciate any tutorials or explanations.

",14863,,2444,,1/19/2021 17:58,1/19/2021 17:58,How can genetic programming be used in the context of auto-encoders?,,1,0,,,,CC BY-SA 4.0 6090,2,,6082,4/17/2018 15:00,,0,,"

I have one to share. This is no formula, but a general thing I have noticed.

The number of neurons and layers should be proportionate, in some way, to the complexity of the classification.

Although this is fairly basic and widely known, it has helped me many times to consider one thing: how many does it need, at a minimum?

",14723,,14723,,4/17/2018 15:11,4/17/2018 15:11,,,,2,,,,CC BY-SA 3.0 6091,1,,,4/17/2018 15:29,,3,434,"

Researchers at Stanford University released, in 2012, the paper Financial Market Time Series Prediction with Recurrent Neural Networks.

It goes on to discuss how they used echo state networks to predict things such as Google's stock prices. However, to do this, once trained, the network's input is a day's stock price, and the output is that day's predicted stock price. The way the paper is worded suggests that this could be used to predict future stock prices, for example. However, to predict tomorrow's stock price, you need to give the neural network tomorrow's stock price.

All this paper seems to show is that the neural network is converging on a solution where it simply modifies its inputs a minimal amount, hence the output of the ESN is just a small alteration of its input.

Here are some Python implementations of the work shown in this paper:

In particular, I was playing with the latter which produces the following graph:

If I take the same trained network and alter the 7th day's ""real"" stock price to, say, something extreme like $0, this is what comes out:

As you can see, it basically regurgitates its inputs.

So, what is the significance of this paper?

It has no use in any financial predictions, like the network shown in the paper Classification-based Financial Markets Prediction using Deep Neural Networks.

",15085,,2444,,6/8/2020 17:07,10/26/2022 23:05,"What is the significance of this Stanford University ""Financial Market Time Series Prediction with RNN's"" paper?",,1,1,,,,CC BY-SA 4.0 6092,1,6096,,4/17/2018 16:28,,2,1617,"

I am having a difficult time translating this pseudocode into functional C++ code.

  • At line 10: The value function is represented as V[s], which has bracket notation-like arrays. Is this a separate method or just a function of the value with a given state? Why is S inside the brackets? Is this supposed to be an array with as many elements as S?
  • At line 12: Vk would be the element in index k inside of array V?
  • At line 16: I'm interpreting this as the start of a do-while loop that ends at line 20.
  • Line 19: I'm finding the action that maximizes the sum, for all states, of the equation following the sigma?
  • Line 20: I'm interpreting this as the end of the do-while. But what is this condition? Am I checking if there is an s such that this condition applies? So would I have to loop over all states and stop if any state satisfies the condition? (Basically a loop with a break, instead of a while.)
",15089,,2444,,10/2/2021 22:33,10/2/2021 22:33,Value iteration algorithm from pseudo-code to C++,,1,1,,11/20/2021 0:20,,CC BY-SA 4.0 6093,2,,5825,4/17/2018 18:02,,1,,"

Probably not worth the hassle.

I guess that simply nobody cares. I'm all for having my function be differentiable at all points, but I don't think that having a single jump in the derivative matters much. The neuron basically either repeats its input (x>0) or outputs nearly nothing (x<0) and the point in between is not that important.

Replacing

ELU(x) = x > 0 ? x : α * (exp(x) - 1)

by your proposal

ELU(x) = x > 0 ? α * x : α * (exp(x) - 1)

would have far more consequences. For example, after n layers, the output would get multiplied by α**n, assuming all x on the path are positive. This could be countered by dividing each weight by α, but the derivatives would change. You surely could adjust everything (initial rates, learn rate and whatever), so it'd work as usual, but it could be laborious, especially when testing multiple learning algorithms.

I'd suggest to use

ELU(x) = x > 0 ? x : α * (exp(x/α) - 1)

instead, as it just changes the less important part of the curve.
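
To see the difference numerically, here is a small sketch (NumPy, with α = 0.5 so the kink is visible) that estimates the one-sided derivatives at 0 for the original ELU, your proposal and my suggestion:

import numpy as np

a, h = 0.5, 1e-6

def deriv(f, x):                 # forward finite difference
    return (f(x + h) - f(x)) / h

original = lambda x: x if x > 0 else a * (np.exp(x) - 1)
proposal = lambda x: a * x if x > 0 else a * (np.exp(x) - 1)
suggested = lambda x: x if x > 0 else a * (np.exp(x / a) - 1)

for name, f in [('original', original), ('proposal', proposal), ('suggested', suggested)]:
    right = deriv(f, 0.0)        # slope just right of 0
    left = deriv(f, -h)          # slope just left of 0
    print(name, round(right, 3), round(left, 3))

# original:  1.0 vs 0.5  (a jump in the derivative when alpha != 1)
# proposal:  0.5 vs 0.5  (smooth, but the positive branch is scaled by alpha)
# suggested: 1.0 vs 1.0  (smooth, and the positive branch is unchanged)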

Or did they not bother, because α=1 is definitely the hyperparameter to use?

I can imagine that everyone tried α=1 and was happy with it. It prevented the gradient from being zero and that's it.

I don't have any support for my claim (but having never read about something doesn't mean that it doesn't get written about).

",12053,,,,,4/17/2018 18:02,,,,0,,,,CC BY-SA 3.0 6094,1,,,4/18/2018 0:42,,3,70,"

I have order data, here's a sample:

Ninety-six (96) covered pans, desinated mark cutlery.
5 vovered pans by knife co.
(SEE SCHEDULE A FOR NUMBERS). 757 SOUP PANS
115 10-quart capacity pots.
Thirteen (13), 30 mm thick covered pans. 

I have over 50k rows of data such as this. In a perfect world, the above would need to be tabulated as such:

count, type
96, covered pan
5, covered pan
757, soup pan
115, pot
13, covered pan

Could machine learning be the correct approach for a problem such as this?

",15099,,15099,,4/18/2018 0:49,12/18/2018 23:01,Can machine learning help me digest asymmetrical order descriptions?,,2,0,,,,CC BY-SA 3.0 6095,2,,6094,4/18/2018 7:16,,1,,"

Yes, a variant of NLP processing could help find the correct number and type of object to extract from this data.

Compared to the spreadsheet, the raw text data is ambiguous without understanding language to a reasonable depth, and without knowing the business context in order to extract the relevant information.

For instance, you are expecting to extract ""soup pan"" and ""covered pan"", but not ""capacity pot"". Also, parts of phrases such as ""30 mm"" or ""10-quart"" are lower-importance qualifiers, not specifications of the quantity of something.

The current state of the art for extracting this kind of data would be a bidirectional LSTM (a type of Recurrent Neural Network). You would likely get it to flag the parts of each entry that were relevant to the tabulated data you wanted to extract, then feed those into a simpler stage that puts them into the spreadsheet; a minimal sketch of such a tagger follows after the caveats below. However, there are two caveats:

  • You need a lot of correctly-labelled training data to get reasonable performance. Using a word embedding layer, such as word2vec or GloVe, should significantly reduce the amount of training data required, but may require a careful pre-processing stage, and may be less useful when you have a lot of jargon in your data.

  • Performance is never perfect, and the system can still make stupid mistakes, because it does not truly understand the text it is dealing with. That applies to all ML approaches to this problem, and likely also to coding up an ""expert system"", although it may be easier to write the expert system to recognise when it has failed and ask for help.
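
The promised sketch (Keras, with made-up vocabulary, sequence length and tag set): each token gets a label such as QUANTITY, TYPE or OTHER, which a simple post-processing step can then turn into spreadsheet rows.

from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, TimeDistributed, Dense

vocab_size, max_len, n_tags = 20000, 40, 3    # placeholder sizes

model = Sequential()
model.add(Embedding(vocab_size, 100, input_length=max_len))   # or initialise from GloVe / word2vec
model.add(Bidirectional(LSTM(64, return_sequences=True)))
model.add(TimeDistributed(Dense(n_tags, activation='softmax')))
model.compile(optimizer='adam', loss='categorical_crossentropy')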

",1847,,1847,,4/18/2018 7:24,4/18/2018 7:24,,,,0,,,,CC BY-SA 3.0 6096,2,,6092,4/18/2018 8:48,,0,,"

At line 10: The value function is represented as V[s], which has bracket notation-like arrays. Is this a separate method or just a function of the value with a given state? Why is S inside the brackets? Is this supposed to be an array with as many elements as S?

This is just a notation that the value function is a mapping between S and the real numbers. When implementing, you would want to store V(s) and π(s) as either arrays or some kind of hashmap like unordered_map (in which case your states must be hashable). Here we also have to assume that this container will fit in the memory, otherwise, the value function has to be approximated with function approximation which is not covered explicitly by this pseudo-code.

At line 12: Vk would be the element in index k inside of array V?

No. This line says that you would need to store multiple value functions during the algorithm, basically, a list of functions (or think of it as an array of functions or a two-dimensional array, where k goes along one dimension and s goes along the other dimension) and if you look at the loop following it, it shows you how you need to keep appending new value functions to the end of this list. However, notice (in line 19) that you only need the value function from the previous iteration to calculate your new value function, which means that you will never need to store more than two value functions (the new one and the previous one).

At line 16: I'm interpreting this as the start of a do-while loop that ends at line 20.

Yes. This loop calculates your value function.

Line 19: I'm finding the action that maximizes the sum, for all states, of the equation following the sigma?

Here you have to iterate through all the actions a in state s and calculate the sum according to the formula in the max function. Then you take the highest value and set the new V(s) to be this value.

Line 20: I'm interpreting this a the end of the do-while. But what is this condition? Am I checking if there is an s such that this condition applies? So would I have to loop between all states and stop if any state satisfies the condition? (Basically a loop with a break, instead of a while)

You have to check the condition for all s, and you may only break the loop once all of these changes are lower than the threshold θ.
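
To tie these points together, here is a minimal sketch of the whole loop in Python. It is not the original pseudo-code verbatim: the MDP is assumed to be given as plain dictionaries (P[(s, a)] is a list of (probability, next_state, reward) triples), which is one possible concrete choice for the array or hashmap representation mentioned earlier.

def value_iteration(states, actions, P, gamma=0.9, theta=1e-6):
    # P[(s, a)] is a list of (probability, next_state, reward) triples
    V = {s: 0.0 for s in states}                       # V_0, initialised arbitrarily
    while True:
        V_new = {
            s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)])
                   for a in actions(s))
            for s in states
        }
        delta = max(abs(V_new[s] - V[s]) for s in states)
        V = V_new                                      # only V_k and V_{k+1} are ever stored
        if delta < theta:                              # stop when every state changed by < theta
            break
    # greedy policy pi(s) extracted from the converged value function
    pi = {}
    for s in states:
        pi[s] = max(actions(s),
                    key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)]))
    return V, pi

# toy usage with two states: moving away from s0 pays a reward of 1
S = ['s0', 's1']
A = lambda s: ['stay', 'move']
P = {('s0', 'stay'): [(1.0, 's0', 0.0)], ('s0', 'move'): [(1.0, 's1', 1.0)],
     ('s1', 'stay'): [(1.0, 's1', 0.0)], ('s1', 'move'): [(1.0, 's0', 0.0)]}
V, pi = value_iteration(S, A, P)
print(V, pi)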

",8448,,32410,,10/2/2021 18:59,10/2/2021 18:59,,,,0,,,,CC BY-SA 4.0 6097,2,,5874,4/18/2018 9:38,,2,,"

I don't want to pour cold water over your approach, but I am very sceptical and (having worked in sentiment analysis myself) think it is way too simplistic.

Various communicative intents are encoded in language, and there is a wide range of linguistic features that are employed for that purpose. Choice of words is only one of them; it is the most obvious one, as we can easily see the words themselves. But words in isolation do not mean anything, context is important. It is of course not difficult to come up with example sentences where the sentiment effect of the words you list is reversed. The easy one being negation: I'm not happy about this. Sure, you can check if there is a not before the word, but what about I would be happy if you stopped making such a noise. -- surely here the current state would be one of unhappiness? If you think about real examples, it suddenly becomes very complicated.

Also, words usually have multiple meanings: This cup is just shy of one litre. I'm sure you'd agree that this does not express 'fear'. And The shunter moved the tender on the old steam engine. is not about affection. But solving this problem involves word sense disambiguation, which in itself is a hard problem to solve.

The problem is, initially word-based approaches look really good and transparent, as you can easily see what's going on. But language unfortunately doesn't play ball, and in real life systems don't tend to work very well. Lexical choice is only one way to encode sentiment, there are also grammatical patterns. But these are often very subtle, and not yet well-explored in linguistic research.

To end on a positive note, have a look at research on evaluation (which is closely related to sentiment). For example, Susan Hunston's Corpus Approaches to Evaluation (Routledge, 2011). That should give you some further pointers.

",2193,,,,,4/18/2018 9:38,,,,1,,,,CC BY-SA 3.0 6099,1,6104,,4/18/2018 10:36,,18,4381,"

Does the human brain use a specific activation function?

I've tried doing some research, and as it's a threshold for whether the signal is sent through a neuron or not, it sounds a lot like ReLU. However, I can't find a single article confirming this. Or is it more like a step function (it sends 1 if it's above the threshold, instead of the input value)?

",15107,,2444,,12/22/2021 18:10,12/22/2021 18:10,What activation function does the human brain use?,,4,1,,,,CC BY-SA 4.0 6101,2,,6099,4/18/2018 11:58,,1,,"

The answer is: we do not know. Odds are, we will not know for quite a while. The reason for this is that we cannot understand the ""code"" of the human brain, nor can we simply feed it values and get results. This limits us to measuring input and output currents on test subjects, and we have had few such test subjects that are human. Thus, we know almost nothing about the human brain, including the activation function.

",14723,,,,,4/18/2018 11:58,,,,0,,,,CC BY-SA 3.0 6102,1,,,4/18/2018 12:01,,9,812,"

Imagine a game where the screen is black apart from a red pixel and a blue pixel. Give this game to a human, and they will first see that pressing the arrow keys moves the red pixel. The next thing they will try is to move the red pixel onto the blue pixel.

Give this game to an AI, and it will move the red pixel randomly until, a million tries later, it accidentally moves onto the blue pixel and gets a reward. If the AI had some concept of the distance between the red and blue pixel, it might try to minimize this distance.

Without actually programming in the concept of distance, could we take the pixels of the game and calculate a number, such as an ""entropy"", that is higher when the pixels are far apart than when they are close together? It should also work with other configurations of pixels, such as a game with three pixels where one is good and one is bad, just to give the neural network more of a sense of how the screen looks. Then we could give the NN a goal, such as ""try to minimize the entropy of the board as well as try to get rewards"".

Is there anything akin to this in current research?

",4199,,2444,,10/22/2019 21:23,10/22/2019 21:23,Can a neural network work out the concept of distance?,,4,3,,,,CC BY-SA 4.0 6104,2,,6099,4/18/2018 14:21,,17,,"

The thing you were reading about is known as the action potential. It is a mechanism that governs how information flows within a neuron.

It works like this: neurons have an electrical potential, which is a voltage difference between the inside and the outside of the cell. They also have a default resting potential and an activation potential. The neuron tends to move towards the resting potential if it is left alone, but incoming electric activations from dendrites can shift its electric potential.

If the neuron reaches a certain threshold in electric potential (the activation potential), the entire neuron and its connecting axons go through a chain reaction of ionic exchange inside/outside the cell that results in a ""wave of propagation"" through the axon.

TL;DR: Once a neuron reaches a certain activation potential, it electrically discharges. But if the electric potential of the neuron doesn't reach that value then the neuron does not activate.

Does the human brain use a specific activation function?

IIRC neurons in different parts of the brain behave a bit differently, and the way this question is phrased sounds as if you are asking if there is a specific implementation of neuronal activation (as opposed to us modelling it).

But in general they behave relatively similarly to each other (neurons communicate with each other via neurochemicals; information propagates inside a neuron via a mechanism known as the action potential). However, the details, and the differences they cause, could be significant.

There are various biological neuron models, but the Hodgkin-Huxley Model is the most notable.

Also note that a general description of neurons doesn't give you a general description of neuronal dynamics a la cognition (understanding a tree doesn't give you a complete understanding of a forest).

But the method by which information propagates inside a neuron is, in general, quite well understood as sodium/potassium ionic exchange.

It (activation potential) sounds a lot like ReLU...

It's only like ReLU in the sense that both require a threshold before anything happens. But ReLU can have a variable output, while neurons are all-or-nothing.

Also ReLU (and other activation functions in general) are differentiable with respect to input space. This is very important for backprop.

This is a ReLU function, with the X-axis being input value and Y-axis being output value.

And this is the action potential, with the X-axis being time and the Y-axis being the membrane potential.

",6779,,6779,,5/4/2018 22:09,5/4/2018 22:09,,,,3,,,,CC BY-SA 4.0 6108,2,,6102,4/18/2018 16:45,,1,,"

Answer

I'm going to take your question at face value, and go really deep into this topic.

Yes, they can. The typical human mind can. But consider the human mind: tens of billions of neurons. In fact, one can consider distance a human concept, simply a theory developed from interactions with the world.

Therefore, given a year or two, and a ton of neurons at your disposal, you could replicate this scenario, provided your computer is as parallel as the human mind. The short explanation is that the human mind is very parallel.

However, it would be simpler to calculate the distance with a program, not an AI, and simply feed the result to the AI that would make the decisions.
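
For example, a trivial sketch of that idea (the coordinates below are made up): compute the distance outside the network and append it to whatever observation the agent already receives.

import numpy as np

red = np.array([12, 40])    # (x, y) of the red pixel
blue = np.array([30, 7])    # (x, y) of the blue pixel

distance = np.linalg.norm(red - blue)                    # Euclidean distance
observation = np.concatenate([red, blue, [distance]])    # extra input feature for the decision-making AI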

Consider the amount of time you have spent looking at a screen. If you can tell the (approximate) distance between two pixels, so can a neural network, as you are one. However, add the amount of time you have spent alive and learning into the equation, and the required training quickly becomes impractical.

Further reading

The human brain is parallel

This is a result of the fact that all of the neurons in the human brain are independent of each other. They can run truly simultaneous actions, thus making tasks such as interpreting images much easier, as blocks of neurons can ""think"" independently of the operations of the others, limiting what would be ""lag"" to a minuscule amount.

",14723,,,,,4/18/2018 16:45,,,,0,,,,CC BY-SA 3.0 6109,2,,6052,4/18/2018 17:26,,1,,"

I think a worthwhile extension of this line of thought is ""why not both?""

I do not believe there is anything preventing approaching the problem from both sides at once. There is a great deal of research on both sides (biological research and computational research), but considerably less on the integration of the two (although there certainly is some, such as in the development of modern prosthetics that allow some degree of control).

Given the adaptability of the human brain in terms of adjusting its own structure, the most expedient approach may be to consider what it would take to create a non-biological medium that biological neurons could interface with sufficiently to essentially ""program"" them in the same manner it does when repairing itself with biological neurons. Leave the hard work to the thing that already has the blueprint. Or in other words, the Ship of Theseus but with brain cells.

Not that such a task would be anywhere close to simple or easy, given our still-lacking knowledge of neurological structures and the difficulty of building a non-biological interface capable of the sort of communication and adjustment that biological neurons can perform, at a size scale that would be practical.

I wish I could point to some research related to this, but I don't know about any specific research papers, although I know it's not a completely untouched subject.

",15114,,,,,4/18/2018 17:26,,,,0,,,,CC BY-SA 3.0 6110,2,,6102,4/18/2018 17:33,,1,,"

You can create an AI that ""sees"" like a human. As you said, given the keys, a human will press them randomly at first; they just need to learn which key presses bring them closer to other objects on the screen. I think the basis of such an AI is object recognition. I would try to create a script to map the objects on the game screen; there are nice examples of this in Python.

I would try to follow a path like this:

  • Make the AI understand that, in the context of the game, the object that moves according to the direction whenever the arrow keys or WASD are pressed represents the main actor (the player).

  • In parallel: map all boundaries of the region and index the different objects within that region, so that you automatically have the coordinate domain and the distances between objects. The AI needs to SEE (stream) the game and categorize objects from the images. Do you understand what I mean?

  • In parallel: the AI needs to be aware of all text and information that is on the screen (all mapped, remember?). It needs to understand when a text changes or something different happens, for example whenever it returns to the initial position of each level, whenever there is a counter, and what happens when the counter reaches zero or some other number that triggers a different type of change.

  • It needs to understand what is repeated at every ""respawn"", and also what a ""respawn"" is: maybe a certain position it returns to on every map whenever a counter on the screen runs out, or when it runs into a certain type of (mapped) object.

To be honest, if you want to create a super-intelligent bot, you can try to follow all the steps that go through the heads of different humans, or of the best humans, or the rules of each game. But sometimes it's easier to build specific bots to perform specific tasks. It depends on what you want to do.

",7800,,,,,4/18/2018 17:33,,,,2,,,,CC BY-SA 3.0 6111,1,6113,,4/18/2018 18:09,,2,759,"

In the OpenAI's Machine Learning Fellow position, it is written

We look for candidates with one or more of the following credentials:

  • ...
  • Open-source reimplementations of deep learning algorithms which replicate performance from the papers

What exactly do they mean by this? Do they want us to implement the algorithms exactly as described in the papers (i.e. with the same hyper-parameters, weights, etc.)?

",15116,,2444,,11/15/2020 15:14,11/15/2020 16:33,"What does ""reimplementations of deep learning algorithms which replicate performance from the papers"" mean?",,2,0,,,,CC BY-SA 4.0 6112,2,,6111,4/18/2018 18:30,,1,,"

I believe it means the following:

  • ""reimplementations of deep learning algorithms"" He is asking that they are looking for people who have made and completed an AI that perform similarly or exactly as those given in papers (IDK which papers)
  • ""Open-source"" The Fellow wants to be able to see the source code of the project.
",14723,,,,,4/18/2018 18:30,,,,0,,,,CC BY-SA 3.0 6113,2,,6111,4/18/2018 19:03,,1,,"

Accompanying ML/AI (and computer science, in general) papers with the relevant code is highly desirable, both for the reproducibility of the paper's results and for faster integration of these results into applications.

Some researchers do so themselves, but lately this has also become a community job, where an independent individual or team, not affiliated with the paper author(s), offers a code implementation starting from the paper.

A good, very recent case is the reproduction of the World Models paper (blog post, Github repo). Many of the implementations available at Papers with Code are indeed provided by the community.

A great write-up of such an effort, containing many lessons learned, made big waves recently: Lessons Learned Reproducing a Deep Reinforcement Learning Paper.

Understandably, individuals that have such efforts in their CV are a good recruiting option...

",11539,,11539,,11/15/2020 16:33,11/15/2020 16:33,,,,2,,,,CC BY-SA 4.0 6115,1,,,4/19/2018 6:22,,4,176,"

I am working on an anti-fraud project. In the project, we are trying to predict fraudulent users in an out-of-time data set, but fraudulent users make up a very low ratio of the data, only 3%. We expect a model with a precision of more than 15%.

I tried logistic regression, GBDT+LR, and XGBoost. None of the models is good enough. Stepwise logistic regression performs best, with a precision of 9% at a recall of 6%.

Are there any other models that I can use for this problem, or any other advice?

",15126,,12542,,4/19/2018 7:57,4/20/2018 5:44,What model to use for fully unbalanced data?,,2,2,,,,CC BY-SA 3.0 6116,1,,,4/19/2018 8:32,,3,538,"

I tried to build a neural network from scratch for a cat-or-dog binary classifier using a sigmoid output unit. I seem to get an output value of around 0.5 (+/- 0.002) for every input. This seems really weird to me. Here's my code; please let me know if there is a mistake in the implementation.

import numpy as np
import matplotlib.pyplot as plt

def initialize_parameters_deep(layer_dims):
    l=len(layer_dims)
    parameters={}
    for l in range(1,len(layer_dims)):
        parameters['W'+str(l)]=np.random.randn(layer_dims[l],layer_dims[l-1])*0.01
        parameters['b'+str(l)]=np.zeros((layer_dims[l],1))
    return parameters

def linear_forward(A,W,b):
    Z=np.dot(W,A)+b
    cache=(A,W,b)
    return Z,cache


def sigmoid(Z):
    A = 1/(1+np.exp(-Z))
    cache=Z
    return A, cache


def relu(Z):
    A = np.maximum(0,Z)

    assert(A.shape == Z.shape)

    cache = Z 
    return A, cache

def relu_backward(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True) # just converting dz to a correct object.

    # When z <= 0, you should set dz to 0 as well. 
    dZ[Z <= 0] = 0

    assert (dZ.shape == Z.shape)

    return dZ

def sigmoid_backward(dA, cache):
    Z = cache

    s = 1/(1+np.exp(-Z))
    dZ = dA * s * (1-s)

    assert (dZ.shape == Z.shape)

    return dZ


def linear_activation_forward(A_prev,W,b,activation):
    if(activation=='sigmoid'):
        Z,linear_cache=linear_forward(A_prev,W,b)
        A,activation_cache=sigmoid(Z)
    elif activation=='relu':
        Z,linear_cache=linear_forward(A_prev,W,b)
        A,activation_cache=relu(Z)
    cache=(linear_cache,activation_cache)
    return A,cache

def L_model_forward(X,parameters):
    A=X
    L=len(parameters)//2
    caches=[]
    for l in range(1,L):
        A,cache=linear_activation_forward(A,parameters['W'+str(l)],parameters['b'+str(l)],'relu')
        caches.append(cache)
    AL,cache=linear_activation_forward(A,parameters['W'+str(L)],parameters['b'+str(L)],'sigmoid')
    caches.append(cache)
    return AL,caches

def compute_cost(AL,Y):
    m=Y.shape[1]
    cost=-1/m*np.sum(np.multiply(np.log(AL),Y)+np.multiply(np.log(1-AL),1-Y))
    return cost

def linear_backward(dZ,cache):
    A_prev,W,b=cache
    m=A_prev.shape[1]
    dW = np.dot(dZ,A_prev.T)/m
    db = np.sum(dZ,axis=1,keepdims=True)/m
    dA_prev = np.dot(W.T,dZ)
    return dA_prev,dW,db

def linear_activation_backward(activation,dA_prev,cache):
    linear_cache,activation_cache=cache
    if activation=='sigmoid':

        dZ=sigmoid_backward(dA_prev,activation_cache)
        dA_prev,dW,db=linear_backward(dZ,linear_cache)
    if activation=='relu':
        dZ=relu_backward(dA_prev,activation_cache)
        dA_prev,dW,db=linear_backward(dZ,linear_cache)
    return dA_prev,dW,db

def L_model_backward(AL,Y,caches):
    L=len(caches)
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    grads={}
    current_cache=caches[-1]
    grads['dA'+str(L-1)],grads['dW'+str(L)],grads['db'+str(L)]=linear_activation_backward('sigmoid',dAL,current_cache)

    for l in reversed(range(L-1)):
        current_cache=caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward('relu',grads['dA'+str(l+1)],current_cache)
        grads[""dA"" + str(l)] = dA_prev_temp
        grads[""dW"" + str(l + 1)] = dW_temp
        grads[""db"" + str(l + 1)] = db_temp
    return grads
def Grad_Desc(parameters,grads,learning_rate):
    L=len(parameters)//2
    for l in range(L):
        parameters['W'+str(l+1)]=parameters['W'+str(l+1)]-learning_rate*grads['dW'+str(l+1)]
        parameters['b'+str(l+1)]=parameters['b'+str(l+1)]-learning_rate*grads['db'+str(l+1)] 
    return parameters

def L_layer_model(X,Y,learning_rate,num_iter,layer_dims):
    parameters=initialize_parameters_deep(layer_dims)
    costs=[]
    for i in range(num_iter):
        AL,caches=L_model_forward(X,parameters)
        cost=compute_cost(AL,Y)
        grads=L_model_backward(AL,Y,caches)
        parameters=Grad_Desc(parameters,grads,learning_rate)
        if i%100==0:
            print(cost)
            costs.append(cost)
    plt.plot(np.squeeze(costs))
def predict(X,parameters):
    AL,caches=L_model_forward(X,parameters)
    prediction=(AL>0.5)
    return AL,prediction

L_layer_model(x_train,y_train,0.0075,12000,[12288,20,7,5,1])
prediction=predict(x_train,initialize_parameters_deep([12288,20,7,5,1])) 
",15128,,15122,,4/19/2018 18:54,4/25/2018 4:23,Neural network returns about the same output(mean) for every input,,1,10,,,,CC BY-SA 3.0 6118,1,,,4/19/2018 11:05,,2,820,"

One of my friends built a version of ""Achtung, Die Kurve!"" (""Curve Fever"") in Python. I was starting to study ML and decided to tackle the game from a learning perspective: write a bot that would crush him at the game. I did some research, found deep Q-learning, and decided to go with that. After a whole lot of throwing around different hyperparameters and layers, I decided I need some help with this. I am new to deep learning and machine learning in general, so I may have missed things. I was somewhat discouraged when I saw how impractical deep Q-learning currently is in the field.

How would you tackle this problem? I need some guidance/help building it, if someone is up to the task.

",15133,,,,,4/19/2018 18:02,"How should I approach the game ""Achtung, Die Kurve"" (""Curve Fever"") using AI?",,1,0,0,12/28/2021 9:08,,CC BY-SA 3.0 6120,2,,6115,4/19/2018 17:53,,1,,"

You can balance your data-set.

Many models work with batches of samples. If you have a very unbalanced dataset, you can simply split it and ensure your batches are balanced (for example, for a Neural Network, using minibatches of 32 samples, you could draw 16 from your fraud users, and 16 from non-fraud users).

During the learning phase, this ensures the model doesn't just output the most common class, but instead tries to learn to distinguish both.
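
As a minimal sketch of that idea (assuming NumPy arrays X and y, with y == 1 marking fraud; the 16/16 split mirrors the example above):

import numpy as np

def balanced_batch(X, y, batch_size=32):
    pos = np.flatnonzero(y == 1)              # fraud samples (oversampled, since they are rare)
    neg = np.flatnonzero(y == 0)              # non-fraud samples
    half = batch_size // 2
    idx = np.concatenate([np.random.choice(pos, half),
                          np.random.choice(neg, half)])
    np.random.shuffle(idx)
    return X[idx], y[idx]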

",7496,,-1,,6/17/2020 9:57,4/19/2018 17:53,,,,2,,,,CC BY-SA 3.0 6121,2,,6118,4/19/2018 18:02,,2,,"

Start slowly.

Don't jump straight into deep learning, arguably the most complex class of reinforcement learning techniques. First work with simpler algorithms, like the original Q-learning. Define good inputs and outputs for your game, and start tuning some hyper-parameters (like the future rewards discount factor).

From there, go for Deep Learning. Check other implementations (like DQN, Atari n-step Q-Learning and A3C), and adapt their code to yours, rather than starting from scratch.

",7496,,-1,,6/17/2020 9:57,4/19/2018 18:02,,,,4,,,,CC BY-SA 3.0 6124,2,,6115,4/20/2018 5:44,,0,,"

Heavily imbalanced classification tasks do not need a specific type of model; you can get different ones to work.

You have two options: either use class weights (for example, setting them to 'balanced' in the scikit-learn SVM) to indicate that samples from the underrepresented class are more important, or rebalance your dataset. For rebalancing purposes, and assuming you are using Python, I recommend Imbalanced-Learn. There you have algorithms for over-sampling, under-sampling, over-sampling followed by under-sampling, and ensemble sampling. Both options are sketched below. If you use them, please check the plausibility of the synthetic samples you created, for example by reducing dimensionality first and then plotting them in two dimensions. Are the synthetic samples similar to the true class?
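
Hedged sketches of both options (X_train and y_train are placeholders; in older versions of Imbalanced-Learn the resampling method is called fit_sample rather than fit_resample):

from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

# Option 1: tell the model that the rare class matters more
clf = SVC(class_weight='balanced')
clf.fit(X_train, y_train)

# Option 2: rebalance the training set itself with synthetic minority samples
X_res, y_res = SMOTE().fit_resample(X_train, y_train)
clf2 = SVC().fit(X_res, y_res)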

I would also recommend you to think about relevant metrics for (heavily) imbalanced problems and consider the no-information rate. That is another question though.

",7495,,,,,4/20/2018 5:44,,,,2,,,,CC BY-SA 3.0 6125,1,6127,,4/20/2018 10:49,,1,209,"

So I know that 'h' and 'f' will be pruned, but I'm not sure about 'k' and 'l'.

When we visit 'j', technically there is no need for us to visit 'k' and 'l' because there are 2 options:

  1. one or two of them might be higher than 8 ('j')
  2. both of them less than 8

But no matter what, the decision of the max(root) will not change, the max will choose the right side no matter what 'k' and 'l' are, because the right side will either be 8 or 9, which is still higher than 4 (returned value from left side)

So will alpha-beta prune 'k' and 'l' or not? If not, does that mean alpha-beta is not ""optimal"" overall, considering it will not prune all the unnecessary paths?

",12782,,12782,,4/20/2018 13:57,6/19/2018 17:49,Which edges of this tree will be pruned by Alpha-beta pruning?,,1,0,,,,CC BY-SA 3.0 6126,1,6131,,4/20/2018 12:30,,6,2165,"

In Monte Carlo Tree Search (MCTS), we start at root node $R$. Then we select some leaf node $L$. And we expand $L$ by one or more child nodes and simulate from the child to the end of the game.

When should we expand and when should we simulate in MCTS? Why not expand 2 or 3 levels, and simulate from there, then back-propagate the values to the top? Should we just expand 1 level? Why not more?

",15152,,2444,,11/19/2019 20:18,11/19/2019 20:18,When to expand and when to simulate in Monte Carlo Tree Search?,,1,0,,,,CC BY-SA 4.0 6127,2,,6125,4/20/2018 16:36,,1,,"

If you prune k and L, then you could miss the optimal solution. Assume L=9: if you prune L, then the value of the tree is 8; if you don't prune L, then the value of the tree is 9. Now I will try to address what I think your actual question is:

But no matter what, the decision of the max(root) will not change, the max will choose the right side no matter what 'k' and 'l' are, because the right side will either be 8 or 9, which is still higher than 4 (returned value from left side).

From this sentence it seems like you don't care about the value of the tree; you only want to find the optimal first move based on alpha-beta. You are correct in saying that the correct first move will always contain the rightmost child of the root, but oftentimes that is not the only information we want. Sometimes we also want to know the value of the tree, or what the complete correct path is, but if we had pruned k and L we would not know these.

Edit: I have changed all 'L's to uppercase, because a lowercase 'L' looks too much like an uppercase 'i'.

",13088,,,,,4/20/2018 16:36,,,,2,,,,CC BY-SA 3.0 6129,1,,,4/21/2018 2:43,,3,670,"

I am training the pre-trained SSD-InceptionV2-COCO model to detect ""car"", which is one of the classes in the MSCOCO label map. I train the model with ~50k samples from KITTI, for 500k iterations with batch size 2. I followed this script to generate the tfrecord file.

Then I test both the original pre-trained model and my trained model on one video. The performance of my trained model is worse: there are more missed detections. One thing I found recently is that the classification_loss/localization_loss increases when AvgNumGroundtruthBoxesPerImage increases.

EDIT

Another thing I found is that the more ground-truth boxes per image I have, the lower the average number of positive anchors per image. This bothers me because, if the number of anchors generated per image is fixed, more ground-truth boxes should provide more positive anchors per image.

So I wonder where to find the root cause. Any suggestion is welcome. Thank you for your precious time on my question.

",15162,,15162,,4/28/2018 23:41,4/28/2018 23:41,Getting worse performance when training a pre-trained model with the existing class,,0,0,,,,CC BY-SA 3.0 6130,1,,,4/21/2018 3:54,,1,63,"

Everything from facial recognition to the Google Home is coming equipped with AI, and it is being widely used. If autonomously connected to the internet, will AI pose a threat to privacy, or will it endanger free will if used for surveillance with facial recognition, like in the movie 'Minority Report'?

",15164,,,,,4/21/2018 3:54,Will commercialisation and widespread use of A.I in security and surveillance and other household products threaten free will or endanger privacy?,,0,5,,,,CC BY-SA 3.0 6131,2,,6126,4/21/2018 8:22,,6,,"

The most common strategy is to simply expand exactly one node per iteration; you can view this as expanding the first node of the Play-Out phase (""simulation"" in your image), and not expanding any other nodes of the Play-Out phase. This is also what's done in your image.

That is the most common and probably most simple strategy, but it's certainly not the only one. It is pretty much the minimum you have to expand, but you can expand more if you like. The most common approach of only expanding one node per iteration is basically there because it minimizes the risk of running out of memory; when you only add one new node to the tree per iteration, the tree grows relatively slowly, so you'd have to keep the algorithm running for a really long time before you run out of memory.

If you're not afraid of running out of memory, you can choose to expand as many nodes as you like. For example, my General Video Game AI agent expands every single node encountered during the play-out at once, because in that particular domain we only get to run relatively few iterations anyway so I'm not afraid of running out of memory, even if I do expand lots of nodes.

The backpropagation step can only store results (information) from your iterations in nodes that actually exist, nodes that have been expanded. So, the benefit of expanding more (if possible) is that you'll retain more information from early iterations, your backpropagation step immediately gets to store results in all nodes along the path instead of only the top few nodes. Generally, this is a relatively minor benefit; nodes deep in the tree / deep inside the simulations are quite unlikely to ever get visited more than once anyway, and then this doesn't matter. But expanding more can, in theory, result in slightly more accurate value estimates in nodes closer to the root.
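
Here is a small toy sketch of the two strategies side by side. Nothing in it comes from a real game: the state is just an integer, moves increment it, and the reward is made up; the point is only to show where new nodes are created and how far down backpropagation can store results.

import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}
        self.visits = 0
        self.total_reward = 0.0

def playout(state, depth=5):
    # random play-out; returns the visited states and a terminal reward
    path = []
    for _ in range(depth):
        state = state + random.choice([1, 2])
        path.append(state)
    return path, float(state)

def backpropagate(node, reward):
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent

def iterate_expand_one(leaf):
    # common strategy: add exactly one new node, then simulate from it
    states, reward = playout(leaf.state)
    child = Node(states[0], parent=leaf)      # only the first play-out state joins the tree
    leaf.children[states[0]] = child
    backpropagate(child, reward)

def iterate_expand_all(leaf):
    # memory-hungry strategy: add every play-out state to the tree
    states, reward = playout(leaf.state)
    node = leaf
    for s in states:                          # the whole play-out path is retained, so
        child = Node(s, parent=node)          # backpropagation stores results deep down
        node.children[s] = child
        node = child
    backpropagate(node, reward)

root = Node(0)
iterate_expand_one(root)
iterate_expand_all(root)
print(len(root.children), root.visits, root.total_reward)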

",1641,,,,,4/21/2018 8:22,,,,0,,,,CC BY-SA 3.0 6134,1,6169,,4/21/2018 11:25,,2,249,"

A number has randomly been chosen from 1 to 3. On each step, we can make a guess, and we will be told whether our guess is equal to, bigger than, or smaller than the chosen number. We're trying to find the number with the least number of guesses.

I need to draw the MDP model for this question with 7 states, but I don't know how the states are supposed to be defined. Can anyone help?

",14603,,2444,,5/29/2020 21:42,10/1/2021 8:19,How should I define an MDP for this problem where we need to guess a number and minimise the number of guesses?,,1,0,,,,CC BY-SA 4.0 6135,1,6136,,4/21/2018 13:51,,1,335,"

Suppose one is using a multi-armed bandit, and one has relatively few "pulls" (i.e. timesteps) relative to the action set. For example, maybe there are 200 timesteps and 100 possible actions.

However, you do have information on how similar actions are to each other. For example, I might want to rent a car, and know the model, year, and mileage of each car. (Specifically, I want to rent a car on a daily basis for each day in a 200 day period; on each day, I can either continue with the existing car or rent a new one. There are 100 possible cars.)

How can I exploit this information to choose actions that maximize my payoff?

",12656,,2444,,1/18/2021 1:33,1/18/2021 10:47,How can I incorporate domain knowledge to choose actions in the case of large action spaces in multi-armed bandits?,,1,0,,,,CC BY-SA 4.0 6136,2,,6135,4/21/2018 19:24,,3,,"

You will want to look into Contextual Multi-Armed Bandits. These are MAB problems that additionally involve feature vectors in some way.

You'll sometimes see researchers considering problems where you get to see a single feature vector per timestep (like an ""environment state"" you're in) which may provide useful information. You'll also sometimes see researchers considering problems where you observe a single feature vector per arm/action (per timestep). That second case is pretty much what you're describing; every arm/action has a feature vector (model, year, mileage of car), and you can use those to make predictions and generalize across arms.

To give a little bit of flavour for these algorithms, the most common ones simply assume that the rewards you observe after pulling arms are given by a linear combination of features, plus some noise. Then, they simply try to find (through online learning) a parameter vector such that observed rewards can be accurately approximated by the dot product of the parameter vector and a feature vector. Of course, there are lots of different variants of algorithms, and not all are linear, but this gives an idea of what they generally look like.
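
As a flavour of the simplest linear variant, here is a hedged sketch of an epsilon-greedy contextual bandit. All numbers (100 arms, 3 features standing in for model/year/mileage, the noise level, the learning rate) are made up for illustration:

import numpy as np

np.random.seed(0)
n_arms, n_features, n_steps = 100, 3, 200
arm_features = np.random.randn(n_arms, n_features)     # e.g. standardised model/year/mileage per car
true_theta = np.random.randn(n_features)               # unknown to the learner

theta = np.zeros(n_features)    # learned parameter vector
lr, epsilon = 0.1, 0.1

for t in range(n_steps):
    if np.random.rand() < epsilon:
        a = np.random.randint(n_arms)                   # explore a random arm
    else:
        a = int(np.argmax(arm_features @ theta))        # exploit the current estimate
    x = arm_features[a]
    reward = x @ true_theta + 0.1 * np.random.randn()   # noisy linear reward
    theta += lr * (reward - x @ theta) * x              # online gradient step on squared error

print(np.round(theta, 2), np.round(true_theta, 2))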

",1641,,,,,4/21/2018 19:24,,,,0,,,,CC BY-SA 3.0 6137,1,,,4/22/2018 8:17,,1,249,"

Previously, I had trained a neural network on 20,000 character images. This neural net generally works well; it uses an RGB Hue/Saturation/Intensity feature set for training. However, there can be certain character images with RGB-HSI values that this neural net has not seen before. Therefore, I am planning to convert the training data to grayscale and use some feature set well suited to grayscale images.

So, are there any good suggestions for extracting a feature set from grayscale images?

",15116,,,,,4/22/2018 8:17,Feature set out of grayscale Images for training a neural network?,,0,0,,,,CC BY-SA 3.0 6138,2,,3287,4/22/2018 19:59,,1,,"

Let's say you have an image with $3$ channels and you have $10$ filters, where each filter has the shape $5 \times 5 \times 3$. The depth of the convolutional layer after having applied this filter to the image is $10$, which is equal to the number of filters. The spatial dimensions of the filter (in this case, $5 \times 5$) are more or less defined arbitrarily (it's a hyperparameter).

",15189,,2444,,12/18/2021 16:13,12/18/2021 16:13,,,,0,,,,CC BY-SA 4.0 6139,1,6967,,4/23/2018 10:29,,3,834,"

Problem

Given a collection of pairs (X, y) where X belongs to R^n and y belongs to R, find the X such that the associated y would be maximum.

Example

Given:

  • (X=(1, 2), y=-9)
  • (X=(-2, 4), y=-36)
  • (X=(-4, 2), y=-24)
  • ...

The algorithm should be able to detect that the function being applied to X is y=-(X[0]^2+2*(X[1]^2)) and find the input that maximizes this function, in this case X=(0,0), because y=-(0^2+2*0^2)=0 and 0 is the maximum possible value, as all the other values are negative.

How I've tried to solve it

My first guess was to create a neural network that predicts y given X, but, after that is done, I don't know how to go about optimizing the input.

Questions

Is there any algorithm that would help in this situation?

Also, would some other supervised learning algorithm fit better here than a neural network?

",15196,,15196,,4/23/2018 15:44,6/30/2018 18:39,Input optimization on a supervised learning system,,2,6,,,,CC BY-SA 3.0 6142,1,,,4/23/2018 16:41,,3,494,"

This is AI: A Modern Approach, 3.17c. The solution manual gives the answer as $\frac{d}{\epsilon}$, where $d$ is the depth of the shallowest goal node.

Iterative lengthening search uses a path cost limit on each iteration, and updates that limit on the next iteration to the lowest cost of any rejected node.

I have seen this question posted elsewhere as, ""What is the number of iterations with a continuous range $[0, 1]$ and a minimum step cost $\epsilon$?"" In that case, I agree that the minimum number of iterations is $\frac{d}{\epsilon}$ because you would need to increase the path cost limit by a minimum of $\epsilon$ with each iteration.

However, with a continuous range of $[\epsilon, 1]$, it seems there is an infinite range and that the number of iterations is potentially infinite, since there is no minimum step cost. Should this solution actually be infinite?

",2897,,1641,,8/11/2018 11:51,9/5/2019 15:03,"How many iterations are required for iterative-lengthening search when step costs are drawing from a continuos range [ϵ, 1]?",,1,0,,,,CC BY-SA 4.0 6144,1,,,4/23/2018 18:58,,4,65,"

Word2vec assigns an N-dimensional vector to given words (which can be considered a form of dimensionality reduction).

It turns out that, at least with a number of canonical examples, vector arithmetic seems to work intuitively. For example ""king + woman - man = queen"".

These terms are all N-dimensional vectors. Now, suppose, for simplicity, that $N=3$, $\text{king} = [0, 1, 2], \text{woman} = [1, 1, 0], \text{man} = [2, 2, 2], \text{queen} = [-1, 0, 0]$, then the expression above can be written as $[0, 1, 2] + [1, 1, 0] - [2, 2, 2] = [-1, 0, 0]$.

In this (contrived) example, the last dimension (king/man=2, queen/woman=0) suggests a semantic concept of gender. Aside from semantics, a given dimension could ""mean"" a part of speech, first letter, or really any feature or set of features that the algorithm might have latched onto. However, any perceived ""meaning"" of a single dimension might well just be a simple coincidence.

If we picked out only a single dimension, does that dimension itself convey some predictable or determinable information? Or is this purely a ""random"" artefact of the algorithm, with only the full N-dimensional vector distances mattering?

",13360,,2444,,4/16/2019 22:30,4/16/2019 22:30,Do individual dimensions in vector space have meaning?,,1,0,,,,CC BY-SA 4.0 6145,2,,6139,4/23/2018 21:03,,0,,"

Do you really need neural networks to find the maximum value from your data? Couldn't a simple loop like this help you?

xdata = [
    (1, 2),
    (-2, 4),
    (-4, 2),
]

# evaluate y for every point and keep track of the best one
best = None
for x in xdata:
    y = -(x[0]**2 + 2 * (x[1]**2))
    if best is None or y > best[1]:
        best = (x, y)

print(""(X=%s, y=%s)"" % best)

output:

(X=(1, 2), y=-9)
",7800,,,,,4/23/2018 21:03,,,,0,,,,CC BY-SA 3.0 6150,2,,6144,4/24/2018 3:45,,2,,"

Do individual dimensions in vector space have meaning?

IIRC, some dimensions are interpretable, but in general this is not the case. Also, it is debatable whether it is actually learning the true representation or just an approximation of it. In any case, it is not very reliable outside of some edge cases.

If we picked out only a single dimension, does that dimension itself convey some predictable or determinable information?

Yes, but what that information entails in terms of ""meaning"" is less clear. You could say that if, in a certain dimension, the distance between two vectors is 0, then you have an estimate of the real distance that is better than guessing.

",6779,,6779,,4/24/2018 4:13,4/24/2018 4:13,,,,1,,,,CC BY-SA 3.0 6152,2,,4949,4/24/2018 11:25,,3,,"

In the context of IT systems, ""Robotic Process Automation"" (RPA) is a term often used to describe a technique where software systems are integrated or work processes are automated through the existing user interface of the applications rather than writing new software to provide integration points.

In that context, RPA has nothing to do with AI or machine learning. In most cases, it does not even require OCR.

For an example of a common use case, let's say you have an old mainframe IT system for tracking your subscriptions and a new website to let people order subscriptions from their phone.

In this case, you might create an ""RPA"" job that opens the list of new subscription requests from the website and, for each of them, opens the old application, clicks the ""new subscription"" button, clicks on the ""Customer Name"" field, pastes the name, clicks on the ""Customer address"" field and pastes in the address, etc.
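
As a toy illustration only (commercial RPA tools are usually visual/no-code rather than scripted like this), the same job could be mimicked in Python with the pyautogui library; all the coordinates and the data source below are made up:

import pyautogui

# pretend this list was exported from the new website
new_subscriptions = [{'name': 'Jane Doe', 'address': '1 Main St'}]

for sub in new_subscriptions:
    pyautogui.click(120, 85)             # 'new subscription' button in the old application
    pyautogui.click(200, 150)            # 'Customer Name' field
    pyautogui.typewrite(sub['name'])
    pyautogui.click(200, 190)            # 'Customer address' field
    pyautogui.typewrite(sub['address'])
    pyautogui.click(400, 600)            # 'Save' button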

In some cases, the RPA job will be exposed as a service with an API that can be called by the new application, so it can dump data directly into the old application. The benefit is that it can do this without any changes to the old application.

It is attractive because the steps (copy this, click that) can often be defined in visual tools by non-programmers very quickly and at a much lower cost than setting up a systems integration project to connect the two systems, and because integration through the existing user interface does not require any changes to the application.

In this way, it is similar in spirit to how Excel allows non-programmers to automate calculations by writing formulae and thus automating their spreadsheets.

You will often see RPA proponents putting some AI buzzwords into their presentations, but from what I have seen in industry, RPA is mostly just a visual scripting technique that is easy to learn and easy to apply.

",10368,,11061,,5/1/2018 20:57,5/1/2018 20:57,,,,0,,,,CC BY-SA 3.0 6154,1,,,4/24/2018 12:52,,4,2076,"

I have implemented a neural network (NN) using only Python and NumPy, for learning purposes. I have already coded learning rate, momentum, and L1/L2 regularization, and checked the implementation with gradient checking.

A few days ago, I implemented batch normalization using the formulas provided by the original paper. However, in contrast with learning/momentum/regularization, the batch normalization procedure behaves differently during fit and predict phases - both needed for gradient checking. As we fit the network, batch normalization computes each batch mean and estimates the population's mean to be used when we want to predict something.

In a similar way, I know we may not perform gradient checking in a neural network with dropout, since dropout turns some gradients to zero during fit and is not applied during prediction.

Can we perform gradient checking in NN with batch normalization? If so, how?

",13036,,2444,,12/28/2021 9:11,1/22/2023 12:04,How to perform gradient checking in a neural network with batch normalization?,,2,0,,,,CC BY-SA 3.0 6161,1,,,4/24/2018 19:12,,3,230,"

With so much innovation, and with so much previously manual human labor being performed in minutes or seconds by an artificial intelligence, one day humankind will put the survival and propagation of its species above its ideologies and cultures.

I am worried because we are living through the fourth industrial revolution, and this will leave millions unemployed, even if new jobs are created in the future. The problem is that a lot of humans worry about their own jobs, and not about their own children's future. This is completely retrograde.

Will artificial intelligence one day be able to direct us towards an intelligent path, such as the propagation of the species, or else center humanity's focus on something worthwhile?

",7800,,2444,,10/22/2019 19:47,12/28/2022 10:01,Will artificial intelligence make the human more rational?,,2,3,,,,CC BY-SA 4.0 6162,1,,,4/24/2018 23:11,,4,974,"

Semi-gradient methods work well in reinforcement learning, but is there a reason for not using the true gradient if it can be computed?

I tried it on the cart pole problem with a deep Q-network and it performed much worse than traditional semi-gradient. Is there a concrete reason for this?

",15224,,2444,,5/10/2019 14:40,5/20/2021 22:54,"Why use semi-gradient instead of full gradient in RL problems, when using function approximation?",,2,0,,,,CC BY-SA 4.0 6163,2,,6154,4/25/2018 0:54,,0,,"

You should be able to do gradient checking as long as you fix the randomness by fixing the random seed; in Python, you might want to look at numpy.random.seed.

From http://cs231n.github.io/neural-networks-3/#ensemble :

When performing gradient check, remember to turn off any non-deterministic effects in the network, such as dropout, random data augmentations, etc. Otherwise these can clearly introduce huge errors when estimating the numerical gradient. The downside of turning off these effects is that you wouldn’t be gradient checking them (e.g. it might be that dropout isn’t backpropagated correctly). Therefore, a better solution might be to force a particular random seed before evaluating both (f(x+h)) and (f(x-h)), and when evaluating the analytic gradient.

",6779,,,,,4/25/2018 0:54,,,,0,,,,CC BY-SA 3.0 6164,1,6190,,4/25/2018 3:05,,0,124,"

I am developing a PDA like Google Assistant on Android. So far, so good. But now I want to add contextual follow-up like Google Assistant, so it can keep the train of thought, as demonstrated here: https://www.youtube.com/watch?v=xYRENGuwwCA

Can anyone guide me or give a hint on how to design the algorithm?

",4869,,2444,,5/10/2022 7:52,5/10/2022 7:52,How to add contextual follow up like Google Assistant,,1,1,,,,CC BY-SA 4.0 6165,2,,6116,4/25/2018 4:23,,1,,"

There is a technique called Gradient checking.

With it, you can check whether you are calculating the correct gradients in the components of your ANN. A code implementation is:

def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox: central difference of the cost for each parameter
    for i in range(num_parameters):
        thetaplus = np.copy(parameters_values)
        thetaplus[i][0] = thetaplus[i][0] + epsilon
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))

        thetaminus = np.copy(parameters_values)
        thetaminus[i][0] = thetaminus[i][0] - epsilon
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))

        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

    # Relative difference between the backprop gradient and the numerical estimate
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator

    if difference > 2e-7:
        print(""There is a mistake in the backward propagation. Difference = "" + str(difference))
    else:
        print(""Backward propagation is Okay. Difference = "" + str(difference))

    return difference

Here, parameters is a dictionary with the parameters ""W1"", ""b1"", ..., ""WL"", ""bL"", and gradients is the output of L_model_backward, which contains the gradients of the cost with respect to the parameters; you will also need the helper functions dictionary_to_vector, gradients_to_vector and vector_to_dictionary. Also, if you could share x_train and y_train so we can debug it, that would be great. Good luck.

",12006,,,,,4/25/2018 4:23,,,,0,,,,CC BY-SA 3.0 6166,1,,,4/25/2018 9:17,,1,44,"

I am unable to identify general terms or specific sources of information for the problem proposed below. I would appreciate it if the community could guide me to journal articles/books and keywords to look for in the literature.

Problem:

There is a non-linear dynamic system taking an input and producing a 1D time series as output. I would like to use a NN to find the parameters of the dynamic system from the time-series output, that is, to map the features of the time series (after a transformation, likely a Fourier transform or wavelets) to the parameters governing the dynamics of the system.

Research so far:

I have found a few journal papers, mostly processing sounds of rolling bearings or heartbeats, but only for error/failure classification:

  1. Rolling Bearing Fault Diagnosis Based on STFT-Deep Learning and Sound Signals
  2. Deep Learning Enabled Fault Diagnosis Using Time-Frequency Image Analysis of Rolling Element Bearings
  3. Deep Learning Based Approach for Bearing Fault Diagnosis
  4. Detecting atrial fibrillation by deep convolutional neural networks

(the above are classification problems, my problem is about parameter identification)

Reason to address this on StackExchange:

I think I am missing an overview of the topic (identification of dynamic systems using NNs), because I am not able to reach more profound information. Also, I think that a NN would be more beneficial to my current application than, let's say, optimization by evolutionary algorithms; therefore I am specifically asking about NNs.

",15231,,,,,4/25/2018 9:17,Abstracting parameters of dynamic model from output time series,,0,6,,,,CC BY-SA 3.0 6167,1,6173,,4/25/2018 12:31,,4,2784,"

I'm trying to understand what would be the best neural network for implementing an XOR gate. I'm considering a neural network to be good if it can produce all the expected outcomes with the lowest possible error.

It looks like my initial choice of random weights has a big impact on my end result after training. The accuracy (i.e. the error) of my neural net varies a lot depending on my initial choice of random weights.

I'm starting with a 2 x 2 x 1 neural net, with a bias in the input and hidden layers, using the sigmoid activation function, with a learning rate of 0.5. Below is my initial setup, with weights chosen randomly:

The initial performance is bad, as one would expect:

Input | Output | Expected | Error
(0,0)   0.8845      0       39.117%
(1,1)   0.1134      0       0.643%
(1,0)   0.7057      1       4.3306%
(0,1)   0.1757      1       33.9735%

Then I proceed to train my network through backpropagation, feeding the XOR training set 100,000 times. After training is complete, my new weights are:

And the performance improved to:

Input | Output | Expected | Error
(0,0)   0.0103      0       0.0053%
(1,1)   0.0151      0       0.0114%
(1,0)   0.9838      1       0.0131%
(0,1)   0.9899      1       0.0051%

So my questions are:

  1. Has anyone figured out the best weights for an XOR neural network with that configuration (i.e. 2 x 2 x 1 with bias)?

  2. Why does my initial choice of random weights make a big difference to my end result? I was lucky in the example above, but depending on my initial choice of random weights I get, after training, errors as big as 50%, which is very bad.

  3. Am I doing anything wrong or making any wrong assumptions?


So below is an example of weights I cannot train, for some unknown reason. I think I might be doing my backpropagation training incorrectly. I'm not using batches, and I'm updating my weights after each data point from my training set.

Weights: ((-9.2782, -.4981, -9.4674, 4.4052, 2.8539, 3.395), (1.2108, -7.934, -2.7631))

",15235,,2444,,1/30/2021 19:00,1/30/2021 19:01,What is the best XOR neural network configuration out there in terms of low error?,,3,0,,,,CC BY-SA 4.0 6168,2,,6167,4/25/2018 14:24,,2,,"

2 perceptrons without bias (+1 in the output layer, to get the result as 1 number).

",15231,,,,,4/25/2018 14:24,,,,3,,,,CC BY-SA 3.0 6169,2,,6134,4/25/2018 15:05,,2,,"

When formulating a problem as an MDP, you need to define the states of the system, the possible actions you can take, the transition probabilities between states depending on the action, and the rewards earned (or costs paid) for the state transitions. The important part here is to create a state-space that has the Markov property, which in plain words means that in any given state you have enough information to make the optimal decision.

  • Rewards: in this case, I think it is fairly obvious that a positive reward should be given if we find out the number and zero reward if we don't.

  • Actions: the action we can take is making a guess, that is the possible actions will come from the action-space A=(1,2,3).

  • States: this is a bit more difficult to come up with. Here the intuition should come from thinking about the actions you can take and how they change the information you have about the system. In our state-space, each state will represent the set of numbers that the answer might possibly be. For example, in the state (1,3), we know that the guessed number is either 1 or 3. Also, we will denote these states with capital letters to simplify the state transitions.

    • A=(1, 2, 3): the initial state as at the beginning of the search the guessed number can be either of 1, 2 or 3;
    • B=(1, 2): the guessed number is either 1 or 2;
    • C=(1, 3): the guessed number is either 1 or 3;
    • D=(2, 3): the guessed number is either 2 or 3;
    • E=(1): the guessed number can only be 1;
    • F=(2): the guessed number can only be 2;
    • G=(3): the guessed number can only be 3;
    • H=(): final state, we found the guessed number.
  • State transitions: this is very simple; just imagine which states are reachable from a state depending on the guess (action) we make. Here the notation P(A|B,1) will mean the probability that we reach state A from state B given that we guessed 1. (Note: we will assume a uniform distribution for the guessed number; a few of these transitions are also encoded as a small code sketch after this list.) I think I'll be lazy here and not write down all the transitions as they get very repetitive once you understand how they are made; I'll just provide examples for all cases.

    • P(D|A,1)=2/3: we guess 1 in the initial state and we miss with 2/3 chance, the reply will be that the guessed number is greater than 1.
    • P(H|A,1)=1/3: we guessed right, so we reach the final state.
    • P(B|A,3)=2/3: similarly we guess 3 and miss.
    • P(H|A,3)=1/3: we guess 3 and we are right.
    • P(C|A,2)=0: if our guess 2 is wrong we get a reply of whether the guessed number is greater or smaller than 2 therefore we won't get into this state.
    • P(E|A,2)=1/3: we guess 2 and the reply is that the number is smaller than 2.
    • P(G|A,2)=1/3
    • P(H|A,2)=1/3
    • P(B|B,3)=1: guessing 3 in state B doesn't provide more information so we reach the same state with probability 1.
    • P(H|B,1)=1/2: 1 is the right guess with 1/2 probability in this state.
    • P(F|B,1)=1/2: similarly, we miss with 1/2 chance if we guess 1, so we reach state F, where we know what the right answer is; however, we haven't won yet, as we still need to make one more guess to reach the final state.
    • P(F|F,1)=1: again guessing 1 in state F doesn't give us extra information so we get back to state F.
    • P(H|F,2)=1: however, guessing 2 in state F will give us the final state, as it is the right guess.

Note that I defined 8 states earlier; however, state C=(1,3) is never reached, so we don't actually need it.
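
As a small sketch (not part of the original formulation), the transitions listed above could be encoded in Python as a dictionary mapping (state, action) pairs to distributions over next states, with the reward paid on entering the final state H:

P = {
    ('A', 1): {'D': 2/3, 'H': 1/3},
    ('A', 2): {'E': 1/3, 'G': 1/3, 'H': 1/3},
    ('A', 3): {'B': 2/3, 'H': 1/3},
    ('B', 1): {'F': 1/2, 'H': 1/2},
    ('B', 3): {'B': 1.0},
    ('F', 1): {'F': 1.0},
    ('F', 2): {'H': 1.0},
    # ... the remaining transitions follow the same pattern
}

def reward(s, a, s_next):
    return 1.0 if s_next == 'H' else 0.0   # positive reward only for finding the number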

I hope this helped and you will be able to finish the rest.

",8448,,32410,,10/1/2021 8:19,10/1/2021 8:19,,,,0,,,,CC BY-SA 4.0 6170,1,6177,,4/25/2018 15:17,,0,654,"

I was following Daniel Shiffman's tutorials on how to write your own neural network from scratch. I specifically looked into his videos and the code he provided here. I rewrote his code in Python; however, 3 out of 4 of my outputs are the same. The neural network has two input nodes, one hidden layer with two nodes, and one output node. Can anyone help me find my mistake? Here is my full code.

import random
import numpy as np

nn = NeuralNetwork(2,2,1)
inputs  = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
targets = np.array([[0], [1], [1], [0]])
zipped = zip(inputs, targets)
list_zipped = list(zipped)

for _ in range(9000):
    x, y = random.choice(list_zipped)
    nn.train(x, y)

output = [nn.feedforward(i) for i in inputs]

for i in output:
   print(""Output "", i)

#Output  [ 0.1229546]  when it should be around 0
#Output  [ 0.6519492]  ~1
#Output  [ 0.65180228] ~1
#Output  [ 0.66269853] ~0

EDIT_1: I tried debugging my code by setting all weights and bias values to 0.5. I did this both in my code and in Daniel's. This obviously ended up giving me the same value for all outputs.

After that, I widened the range of the initial weight and bias values from [0, 1) to [-1, 1). Running this a few times, I would sometimes get the correct output:

[ 0.93749991] # should be ~1
[ 0.93314793] # ~1 
[ 0.07001175] # ~0
[ 0.06576194] # ~0

If I run nn.train() 100,000 times, I get the correct output about 2/3 of the time. Is this an issue of gradient descent converging to a local minimum?

",14863,,14863,,4/27/2018 8:21,4/27/2018 8:21,Neural network returns similar output,,1,2,,,,CC BY-SA 3.0 6173,2,,6167,4/26/2018 3:24,,3,,"

The initialization of the weights has a big impact on the results. I'm not sure specifically for the XOR gate, but the error can have a local minimum that the network can get "stuck" in during training. Using stochastic gradient descent can help give some randomness that gets the error out of these pits. Also, for the sigmoid function, weights should be initialized so that the input to the activation is close to the part with the highest derivative so that training is better.

",12145,,2444,,1/30/2021 19:01,1/30/2021 19:01,,,,2,,,,CC BY-SA 4.0 6174,2,,6162,4/26/2018 7:25,,1,,"

Semi gradient methods work well in Reinforcement Learning, but what is the reason of not using the true gradient if it can be computed?

Just complexity and extra computation, in many cases for a marginal benefit.

I tried it on the cart pole problem with a deep Q-Network and it performed much worse than traditional semi gradient, is there a concrete reason for this?

It is hard to tell, without exploring the implementation in detail. However, DQN is an inherently unstable learning technique that needs care in choosing hyper-parameters that control this instability and offset against learning rate:

  • size of minibatch to train from experience replay on each step
  • number of training steps between taking frozen copies for estimation*
  • whether or not you use double-learning to avoid maximisation bias (more important if you have fine-grained discretisation of continuous action space)

There is a chance that the optimal choices here are different between true gradient and semi gradient approaches.

* The frozen estimator could be a big clue here in your implementation. If you are using this frozen copy technique, it has a big impact on how you should calculate the true gradient, because changing the parameters would no longer change the current TD target - which is what the true gradient approach fixes. However, getting rid of this stability-improving addition in order to get true gradients might on balance make the algorithm less stable - you could try to fix that by taking larger mini-batches.
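
To make the distinction concrete, here is a minimal sketch with a linear value function $v(s) = w \cdot x(s)$ (plain NumPy, illustrative variable names; this is not your DQN code). Note that with a frozen target network the extra term in the second update would largely lose its meaning, since the frozen copy's parameters are not the ones being differentiated:

import numpy as np

def semi_gradient_td0_update(w, x, x_next, r, alpha, gamma):
    # TD error with a linear value function v(s) = w . x(s)
    td_error = r + gamma * np.dot(w, x_next) - np.dot(w, x)
    # Semi-gradient: the bootstrapped target is treated as a constant,
    # so only the gradient of the current estimate (i.e. x) appears
    return w + alpha * td_error * x

def true_gradient_td0_update(w, x, x_next, r, alpha, gamma):
    td_error = r + gamma * np.dot(w, x_next) - np.dot(w, x)
    # "True" (residual) gradient: differentiating the squared TD error w.r.t. w
    # also pulls a -gamma * x_next term out of the target
    return w + alpha * td_error * (x - gamma * x_next)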

",1847,,,,,4/26/2018 7:25,,,,0,,,,CC BY-SA 3.0 6176,1,6184,,4/26/2018 11:41,,9,376,"

I'm starting a project that will involve computer vision, visual question answering, and explainability. I am currently choosing what type of algorithm to use for my classifier - a neural network or a decision tree.

It would seem to me that, because I want my system to include explainability, a decision tree would be the best choice. Decision trees are interpretable, whereas neural nets are like a black box.

The other differences I'm aware of are: decision trees are faster, neural networks are more accurate, and neural networks are better at modelling nonlinearity.

In all of the research I've done on computer vision and visual question answering, everyone uses neural networks, and no one seems to be using decision trees. Why? Is it for accuracy? I think a decision tree would be better because it is fast and interpretable, but if no one's using them for visual question answering, they must have a disadvantage that I haven't noticed.

",9983,,18758,,1/15/2022 5:55,5/7/2022 7:09,Why does nobody use decision trees for visual question answering?,,1,0,,,,CC BY-SA 4.0 6177,2,,6170,4/26/2018 12:34,,2,,"

Local minima.

You have the exact same issue as in this question. If you randomize your initial weights, you'll see that sometimes you get the correct results and other times you won't. That's because, when the weights are initialized within a certain range of values, they will converge to a local minimum which you cannot escape with a low learning rate.

A simple solution is to increase the size of your hidden layer, which will make the network more robust to such issues.

When you have only 2 dimensions, a local minimum exists. When you have more dimensions, such a minimum gets harder and harder to reach, as its likelihood decreases. Intuitively, you have a lot more dimensions through which you can improve than if you only had 2 dimensions.

The problem still exists: even with 1000 neurons, you could find a specific set of weights that is a local minimum. However, it just becomes much less likely.

",7496,,-1,,6/17/2020 9:57,4/26/2018 12:34,,,,3,,,,CC BY-SA 3.0 6179,1,,,4/26/2018 13:42,,2,391,"

My question concerns a side question (which was not answered) asked here: How can policy gradients be applied in the case of multiple continuous actions?

I am trying to implement a simple policy gradient algorithm for a discrete multi-action reinforcement learning task. To be more precise, there are three actuators. At every time step, each of the actuators can perform one of three possible actions.

Is it possible to adjust the loss function from the single action case per time step

$$L = \log(P(a_1)) A$$

to the n-action case per time step like so?

$$L = (\log(P(a_1)) + \log(P(a_2))+ \dots + \log(P(a_n))) A$$

",15265,,40671,,6/21/2021 8:29,11/16/2022 17:00,Extend the loss function from the single action to the n-action case per time step,,1,1,,,,CC BY-SA 4.0 6184,2,,6176,4/27/2018 1:42,,8,,"

For vision tasks, neural network models almost always include a number of layers that pool and convolute. The convolutions, in particular, are very useful - they can make the model generalize better to inputs and maintain performance when inputs have undergone certain transformations (e.g. some scaling or a translation along the x-axis). These properties, along with the robust frameworks that exist for developing and deploying neural nets, and the fact that they have been shown to widely produce very good results, are some of the reasons they’re used.

In terms of being a black box, while this is true for a lot of applications it’s actually less true for image-based tasks. The layers of a well-designed and trained convolutional neural network model can actually be visualized and made quite interpretable; from these visualizations, it’s often clear how the representation roughly works. In contrast, I’d argue that while a decision tree is theoretically easier to interpret for some tasks (say medical decision making), this is less the case for vision tasks because we don’t interpret images one pixel at a time or using some readily available feature of the image, such as the width, height, or color frequencies. People are almost always interested in the higher-level representation within an image (say, a cat, a leaf, or a face), and that sort of feature extraction is exactly what CNNs are good at. Decision trees, in contrast, tend to have trouble capturing these higher-level representations.

Distill.pub has a nice explanation of feature visualization that may be of interest.

",5210,,5210,,5/7/2022 7:09,5/7/2022 7:09,,,,2,,,,CC BY-SA 4.0 6185,1,,,4/27/2018 3:52,,25,69711,"

I just want to know why machine learning engineers and AI programmers use languages like Python to perform AI tasks and not C++, even though C++ is technically a more powerful language than Python.

",15277,,2444,,6/2/2020 23:36,9/4/2020 8:46,Why does C++ seem less widely used than Python in AI?,,4,0,,,,CC BY-SA 4.0 6186,2,,6185,4/27/2018 8:07,,32,,"

You don't need a powerful language for programming AI. Most developers use libraries like Keras, Torch, Caffe, Watson, TensorFlow, etc. Those lower-level libraries are highly optimized and handle all the tough work. They are built with high-performance languages, like C and C++. Python is just there for high-level tasks like describing the neural network layers, loading data, launching the processing, and displaying results.
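
As an illustration of that division of labour, here is a hedged sketch of what that high-level "description" role typically looks like in Python with Keras (layer sizes here are arbitrary); all the heavy number crunching runs in the optimized backend underneath:

from keras.models import Sequential
from keras.layers import Dense

# Python only declares the architecture and training setup
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)  # the actual computation happens in C/C++/CUDA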

Using C++ for these high-level tasks instead of Python would give barely any performance improvement, but it would be harder for non-developers, as it requires caring about memory management. Also, many AI practitioners may not have a very solid programming or computer science background.

Another similar example is game development, where the engine is coded in C/C++ and, often, all the game logic is scripted in a higher-level language.

",15283,,15283,,6/4/2020 21:10,6/4/2020 21:10,,,,0,,,,CC BY-SA 4.0 6188,1,,,4/27/2018 8:45,,4,302,"

I have a data set that looks like this:

I would like to estimate a relationship between x-values and the corresponding 5% extreme y-values, something that might look like this:

Do you have an idea of an algorithm that might help me with this? I thought about labelling the extreme values and later finding a separating hyperplane, but I have no clue how to label these "extreme values" (I cannot just take the 5% lowest and highest values, as all of these would end up in the same region).

Thanks for your ideas!

",15281,,14995,,5/1/2018 20:57,5/1/2018 20:57,Regression on extreme values,,1,1,,,,CC BY-SA 3.0 6189,2,,6185,4/27/2018 10:53,,5,,"

It depends how flexible it needs to be: if you have a fully-fledged system ready for production, which is not going to need much adjusting, then C++ (or even C) might be fine. You need to put a lot of time into building the software, but then it should run pretty fast.

However, if you're still experimenting with settings and parameters, and maybe need to adjust the architecture, then C++ will be clumsy to work with. You need a language like Python which makes it easier to change things. Changing the code is easier, as you can generally code faster in languages like Python. The price you pay is that the software does usually not perform as well.

You need to decide how that trade-off works best for you. It is usually better to spend less time on coding, and not worry too much about longer run-time. If you take a day less to get your code done, that's a lot of time the C-coded version needs to catch up. Most of the time it's just not worth it.

A common approach seems to be hybrid systems, where core libraries are implemented in C/C++, as they don't need much changing, and the front-end/glue/interfaces are in Python, as there you need flexibility and speed is not that critical.

This is not an issue specific to AI, by the way, but a general question of interpreted vs compiled languages. With AI a lot of systems are still focused on research rather than application, and that is where speed of development trumps speed of execution.

",2193,,,,,4/27/2018 10:53,,,,0,,,,CC BY-SA 3.0 6190,2,,6164,4/27/2018 11:00,,2,,"

You would need to keep track of the current topic, and references. So, for example, a query When is the next train from London to Birmingham? would result in

  • topic = TRAIN_TRAVEL
  • start-loc = London
  • destination-loc = Birmingham

A follow-up And what about Bristol? would then replace destination-loc with ""Bristol"" and you would be able to build a new query from that.

The key issues here are the set of topics your assistant will be able to handle, and the relevant object slots. You might also want to clear the topic variable after the next input or so to avoid having it still hanging around even though the user no longer talks about that particular topic.

UPDATE: Just to add how that would work from a technical point of view. The input And what about Bristol? would not be recognised as any known intent, as it is too unspecific. As a fallback your Assistant then takes the last topic, and interprets the input in the light of that context. So here we assume that the intent is 'train travel', as that is the last thing the user spoke about. This will of course not always work, but should sort out the majority of cases.
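
A minimal sketch of that bookkeeping in Python (the structure and names are purely illustrative, not tied to any particular assistant framework):

context = {"topic": None, "slots": {}}

def handle(intent, slots):
    global context
    if intent is None and context["topic"] is not None:
        # Fallback: unspecific follow-up, reinterpret it in the light of the last topic
        intent = context["topic"]
        merged = dict(context["slots"])
        merged.update(slots)              # e.g. overwrite destination-loc with "Bristol"
        slots = merged
    context = {"topic": intent, "slots": slots}
    return intent, slots

handle("TRAIN_TRAVEL", {"start-loc": "London", "destination-loc": "Birmingham"})
print(handle(None, {"destination-loc": "Bristol"}))
# ('TRAIN_TRAVEL', {'start-loc': 'London', 'destination-loc': 'Bristol'})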

",2193,,2193,,4/30/2018 7:38,4/30/2018 7:38,,,,2,,,,CC BY-SA 3.0 6192,2,,6188,4/27/2018 12:11,,2,,"

I don't know of a pre-canned algorithm but I would just sweep on the angle from zero to ninety degrees with a triangular region and count the points. For each step in the sweep, record the angle and the count. When the sweep is done you will have an array of angles with bin counts and then you can convert to percentage of total count. You will have to figure out a resolution of the triangle's angle so that it is fine enough to capture less than 5% of the points.
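
A rough sketch of that sweep in Python, under the assumption that the points live in the first quadrant and that the triangular region at angle theta is everything below the line y = tan(theta) * x through the origin (xs and ys are illustrative arrays holding your data):

import numpy as np

def sweep_fractions(xs, ys, step_deg=0.5):
    angles = np.arange(0.0, 90.0, step_deg)
    # Fraction of points captured below the sweeping line at each angle
    fractions = np.array([np.mean(ys <= np.tan(np.radians(a)) * xs) for a in angles])
    return angles, fractions

# Example: the smallest angle whose region captures about 5% of the points
# angles, frac = sweep_fractions(xs, ys)
# theta_5pct = angles[np.searchsorted(frac, 0.05)]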

",5763,,,,,4/27/2018 12:11,,,,2,,,,CC BY-SA 3.0 6193,2,,1987,4/27/2018 12:51,,9,,"

Ideally, neural networks should be able to figure the function out on their own, without us providing the spherical features. After some experimentation, I was able to reach a configuration where we do not need anything except $X_1$ and $X_2$. This net converged after about 1500 epochs, which is quite long. So the best way might still be to add additional features, but I am just trying to say that it is still possible to converge without them.

",15287,,3217,,12/15/2018 20:34,12/15/2018 20:34,,,,0,,,,CC BY-SA 4.0 6194,2,,6185,4/27/2018 13:45,,16,,"

C++ is actually one of the most popular languages used in the AI/ML space. Python may be more popular in general, but as others have noted, it's actually quite common to have hybrid systems where the CPU intensive number-crunching is done in C++ and Python is used for higher level functions.

Just to illustrate:

http://mloss.org/software/language/c__/

http://mloss.org/software/language/python/

",33,,,,,4/27/2018 13:45,,,,1,,,,CC BY-SA 3.0 6196,1,6199,,4/28/2018 3:11,,40,20196,"

As far as I understand, Q-learning and policy gradients (PG) are the two major approaches used to solve RL problems. While Q-learning aims to predict the reward of a certain action taken in a certain state, policy gradients directly predict the action itself.

However, both approaches appear identical to me, i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG). Is the difference in the way the loss is back-propagated?

",15298,,2444,,2/15/2019 15:39,4/22/2020 12:05,What is the relation between Q-learning and policy gradients methods?,,2,0,,,,CC BY-SA 4.0 6198,1,6202,,4/28/2018 7:46,,2,109,"

Some context: Recently, all kinds of salesmen have been knocking on our company's door to offer their "artificial intelligence" expertise and project suggestions. Some don't know the difference between the words estimation and validation (really), some have extraordinary PowerPoints and paint themselves as gurus of the field. Our management has gone with the hype, and we're definitely starting some kind of project on "artificial intelligence" (meaning RPA with, possibly, some machine learning).

What is the best way to start when we don't yet know to what problem we want to apply all this and I'm worried it will lead to long expensive projects with meager results? What are the things to watch out for? Any good practical books or war stories out there?

",3579,,2444,,12/21/2021 15:32,12/21/2021 15:32,How to organize artificial intelligence efforts at work?,,1,0,,12/21/2021 16:08,,CC BY-SA 4.0 6199,2,,6196,4/28/2018 7:47,,45,,"

However, both approaches appear identical to me i.e. predicting the maximum reward for an action (Q-learning) is equivalent to predicting the probability of taking the action directly (PG).

Both methods are theoretically driven by the Markov Decision Process construct, and as a result use similar notation and concepts. In addition, in simple solvable environments you should expect both methods to result in the same - or at least equivalent - optimal policies.

However, they are actually different internally. The most fundamental difference between the approaches is in how they approach action selection, both whilst learning, and as the output (the learned policy). In Q-learning, the goal is to learn a single deterministic action from a discrete set of actions by finding the maximum value. With policy gradients, and other direct policy searches, the goal is to learn a map from state to action, which can be stochastic, and works in continuous action spaces.
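
In rough code, that contrast in action selection looks like this (an illustrative NumPy sketch, not a full implementation of either method):

import numpy as np

def q_learning_action(q_values):
    # Value-based: deterministic greedy choice over a discrete set of Q(s, a) estimates
    return int(np.argmax(q_values))

def policy_gradient_action(action_probs):
    # Policy-based: the policy itself is a (possibly stochastic) distribution pi(a | s, theta)
    return int(np.random.choice(len(action_probs), p=action_probs))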

As a result, policy gradient methods can solve problems that value-based methods cannot:

  • Large and continuous action space. However, with value-based methods, this can still be approximated with discretisation - and this is not a bad choice, since the mapping function in policy gradient has to be some kind of approximator in practice.

  • Stochastic policies. A value-based method cannot solve an environment where the optimal policy is stochastic requiring specific probabilities, such as Scissor/Paper/Stone. That is because there are no trainable parameters in Q-learning that control probabilities of action, the problem formulation in TD learning assumes that a deterministic agent can be optimal.

However, value-based methods like Q-learning have some advantages too:

  • Simplicity. You can implement Q functions as simple discrete tables, and this gives some guarantees of convergence. There are no tabular versions of policy gradient, because you need a mapping function $p(a \mid s, \theta)$ which also must have a smooth gradient with respect to $\theta$.

  • Speed. TD learning methods that bootstrap are often much faster to learn a policy than methods which must purely sample from the environment in order to evaluate progress.

There are other reasons why you might care to use one or other approach:

  • You may want to know the predicted return whilst the process is running, to help other planning processes associated with the agent.

  • The state representation of the problem lends itself more easily to either a value function or a policy function. A value function may turn out to have very simple relationship to the state and the policy function very complex and hard to learn, or vice-versa.

Some state-of-the-art RL solvers actually use both approaches together, such as Actor-Critic. This combines strengths of value and policy gradient methods.

",1847,,2444,,11/18/2018 13:47,11/18/2018 13:47,,,,1,,,,CC BY-SA 4.0 6202,2,,6198,4/28/2018 13:07,,2,,"

I know what you mean. It can be difficult to parse between the hype and the application of AI. Although AI (specifically deep learning) can do a lot of things, this doesn't negate previous methods that work just as well or better in certain domains. Sometimes managers will hear 'AI' and think of giant networks when they actually just needed a simple linear regression for their problem.

That being said, here are some suggestions to help decide if AI is right for your project and some things to consider if so:

  • Get some advice from a researcher. There are many AI consulting firms popping up in large cities that give companies hours of consulting time. If there isn't one in your city, go to AI meetups and meet the community, find educated people and ask them about your general questions. For more in depth advice, perhaps a local university can lend a hand.

  • Understand your problem. Taking the time to know exactly what problems you are trying to solve is invaluable. Not only will it help new employees but if you do end up using AI and get consultation hours from an AI expert, it will save time and money explaining what you want to do.

  • Know your data. First, this is a check to see if you have the right data for AI and if you have enough of it. For example, many problems are approached using supervised learning, which requires having a lot of labeled data (e.g., think of thousands of pictures labeled Cat or Dog depending on the image content). If you don't have or can't easily collect a large amount of labeled data, perhaps you can use an existing data set with similar data to help get you started. If that's not an option, then you probably won't like the alternative, where you hand-label the data or hire Amazon Mechanical Turk workers.

  • Be prepared to fail at first. AI is not easy and is even harder when you can't just Google your questions because you are doing something no one has done before. It takes some time to understand your problem, your data, and what kind of models you should try.

  • Do you have the infrastructure? If everything is going well and you have a working model on your local GPU, and your model is part of your product, look at how you might deploy it. Do you have your own GPU servers? Can you afford the cloud GPUs that some companies offer? Do you have to learn a model for each user?

",4398,,,,,4/28/2018 13:07,,,,0,,,,CC BY-SA 3.0 6204,2,,6167,4/28/2018 19:21,,1,,"

I'd bet you're doing something wrong, though I can't tell what it is. Try changing the learning rate dynamically, try training in varying order, ....

On second thought, it looks like you're using the standard sigmoid function. Then you're doing it basically wrong. The output can only be exactly 1 if the input is infinite - or very big, so that the floating-point arithmetic outputs 1 after rounding.

That's very wrong for two reasons:

  • You're forcing the network into a broken state with huge weights and tiny derivatives. That feels like imposing numerical instability on an otherwise sane algorithm. Just don't do it. Map your booleans better (see below).
  • You're doing what you don't need. Any value close enough to the wanted result (0 or 1) can simply be evaluated as correct. When you get 0.9 instead of 1, then you can simply stop, saying "that's perfect". Remember, all you want is a boolean.

A better mapping would be false=0.1 and true=0.9. This doesn't lead to needing infinite weights and reduces related problems.

Even better may be using a symmetrical activation function (e.g., tanh) and a symmetrical mapping like false=-0.9 and true=0.9.

Also consider using ReLU.

",12053,,,,,4/28/2018 19:21,,,,0,,,,CC BY-SA 3.0 6206,1,,,4/29/2018 7:57,,3,116,"

Let's say an image has 28*28 pixels, which leads to 784 input nodes in a feed-forward neural network. If an image can be classified into 1 of 10 numbers (e.g. MNIST), there are 10 output nodes.

We train (with gradient descent and back-propagation) the FFNN with a set of known pictures until we get a good accuracy.

Successively, we get a new training picture, which we want to use to train the FFNN even further. However, wouldn't this new training picture destroy the previously learned weights, which have been calibrated to recognize the former training pictures?

",15312,,2444,,12/20/2020 12:21,12/20/2020 12:27,Could new training pictures destroy the trained weights of the neural network?,,1,1,,,,CC BY-SA 4.0 6207,2,,6206,4/29/2018 10:40,,1,,"

Successively, we get a new training picture, which we want to use to train the FFNN even further. However, wouldn't this new training picture destroy the previously learned weights, which have been calibrated to recognize the former training pictures?

This can happen, and happens to varying degrees depending on how the neural network is set up, but it is usually something you want to avoid.

Provided a neural network has enough capacity (in terms of number of free parameters), then the function it generates can be flexible enough to approximate the "true" function of the problem you are trying to match to data. Then, as long as the learning steps are small enough, adjustments to learn one point on the function will take a larger step towards the correct answer on that point than they move away from the correct answers on other points (intuitively this is about the mutually good solutions being in part orthogonal and/or correlated, so a step towards A does not necessarily move the same distance away from B). Repeating the points again and again will allow the NN to match all the points over time.

Often this is possible to do almost exactly for a large amount of data at the same time. However, this is the opposite problem to your worries in the question, and is considered a bad thing because it can cause overfitting. Overfitting is very easy to achieve with neural networks because they are so flexible and can have large numbers of parameters.

Most often, the goal of training a supervised learner, such as a neural network, is not to exactly match the training data, but to generalise to new data from the same population distribution. You will typically spend far more time and effort combatting overfitting when training a NN, than worrying about lack of capacity to handle multiple data points. Methods to combat overfitting are:

  • Regularisation. Techniques that adjust a learning algorithm, typically by adding some constraint.
  • Cross validation. Measuring results of training against unseen data, and picking the best generalised model.
",1847,,2444,,12/20/2020 12:27,12/20/2020 12:27,,,,0,,,,CC BY-SA 4.0 6212,1,,,4/30/2018 8:18,,1,98,"

Let's assume a common game scenario of several characters in a combat arena. Each character has different strengths and weaknesses. The arena has traps and tools. Suppose the characters had only very basic moves such as step in a direction, shoot, climb, duck, pick up item, use item, drag heavy object. Each move has a chance of success based on the context (e.g. range to target). What AI, machine learning, or evolutionary approach could be used to generate personalized tactics for each character based on repeated runs of the scenario?

",15322,,,,,1/8/2023 12:03,Developing character tactics via repeated trials,,1,1,,,,CC BY-SA 3.0 6213,1,,,4/30/2018 10:12,,5,851,"

I'm training an LSTM network with multiple inputs and several LSTM layers in order to set up a time series gap filling procedure. The LSTM is trained bidirectionally with "tanh" activation on the outputs of the LSTM, and one Dense layer with "linear" activation comes at the end to predict the outputs. The following scatterplot of real outputs vs the predictions illustrates the problem:

Outputs (X-axis) vs predictions (Y-axis):

The network is definitely not performing too badly, and I'll be updating the parameters in the next trials, but the issue at hand always reappears. The highest outputs are clearly underestimated, and the lowest values are overestimated; the deviation is clearly systematic.

I have tried min-max scaling on inputs and outputs and normalizing inputs and outputs, and the latter performs slightly better, but the issue persists.

I've looked a lot in existing threads and Q&As, but I haven't seen something similar.

I'm wondering if anyone here sees this and immediately knows the possible cause (activation function? Preprocessing? Optimizer? Lack of weights during training? ...?). Otherwise, it would also be good to know if this is impossible to find out without extensive testing.

",15324,,32410,,4/23/2021 1:41,4/23/2021 1:41,Over- and underestimations of the lowest and highest values in LSTM network,,1,1,,,,CC BY-SA 4.0 6215,1,6227,,4/30/2018 23:31,,0,115,"

I am working in the following neural network architecture, I am using keras and TensorFlow as a back-end.

It is composed by the following, embedding of words, then I added a layer of Long Short-Term Memory (LSTM) neural networks, one layer of output and finally. I am using the softmax activation function.

model = Sequential()
model.add(Embedding(MAX_NB_WORDS, 64, dropout=0.2))
model.add(LSTM(64, dropout_W=0.2, dropout_U=0.2)) 
model.add(Dense(8))
model.add(Activation('softmax'))

I have the following question, if I am getting a model through this code, could the final product be called a deep learning model?, I know that this code is very small however there is a lot of computations that the machine is making on the background.

",2298,,,user9947,5/1/2018 12:24,5/1/2018 13:57,Is the following neural network architecture considered deep learning?,,1,0,,,,CC BY-SA 3.0 6216,1,6221,,4/30/2018 23:58,,2,72,"

I have an approximately 90,000-row dataset that has information on social media profiles, with columns for biography, follower count, language spoken, name, username, and the label (to identify whether the profile is that of an influencer, brand, or news and media).

Task: I have to train a model that predicts the label. I then need to produce a confidence interval for each prediction.

As I have never come across a problem like this, I am just after some suggestions of what models I should be using for a situation like this. I am thinking Natural Language Processing (NLP), but I am not sure.

Also, for NLP (if it is a suitable method), any code or advice to help me implement this for the first time in Python would be greatly appreciated! Thanks in advance.

",15011,,4302,,10/8/2018 12:11,10/8/2018 12:11,Recommended Modelling Technique for Influencer Marketing Scenario,,1,0,,,,CC BY-SA 3.0 6218,1,6220,,5/1/2018 5:13,,0,321,"

I can't find much information on modern PDDL usage. Are there more popular alternatives, maybe something more suited to modern neural network/deep learning techniques?

I'm particularly interested in PDDL or alternative's current usage in autonomous driving software.

",15343,,2444,,12/21/2021 18:27,12/21/2021 18:27,How is PDDL used in production AI systems?,,1,0,,,,CC BY-SA 3.0 6219,2,,2874,5/1/2018 8:14,,2,,"

There is some recent work addressing this issue, to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. See Pointer Networks.

",11911,,2444,,1/17/2021 21:21,1/17/2021 21:21,,,,0,,,,CC BY-SA 4.0 6220,2,,6218,5/1/2018 8:27,,0,,"

I have seen it used in automated story-telling (or game AI to control NPCs), and in NLG systems, where the generation of text is reinterpreted as a planning task.

What these systems have in common is that they're either off-line or in a simple environment (NPC control). I'm not sure they would be suitable for real-time applications, unless you can be sure that a feasible plan exists which can be found within certain time bounds. I wouldn't want to sit in a car going at high speed on the motorway and waiting for the driving unit to work out a plan how to avoid an obstacle that suddenly appears on the road.

",2193,,,,,5/1/2018 8:27,,,,0,,,,CC BY-SA 3.0 6221,2,,6216,5/1/2018 8:37,,1,,"

It depends very much on the structure of the data.

I would think about feature extraction first, which could be certain words occurring in the bio, and a class of user name ('real' name, numerical id, etc). Once you have a set of features for each data item, turn them into a list of feature vectors.

Then run them through a number of machine learning algorithms. This is where the shape of the data matters, as some algorithms will work better than others. I would try eg decision trees (ID3), which are very efficient once trained (but they don't give you a confidence interval). But any other ML algorithm might work. They will all have trade-offs with speed of training, memory requirements, and speed of classification; some will give you a class-label probability, others will just give you one label.

The best way would be to use a sample, and identify which algorithm works well and fits your specific requirements. Then use that for the full data set.

Alternatively you could just use, for example, the Stanford ML classifier. That will give you a confidence interval, and will probably work reasonably well.

",2193,,,,,5/1/2018 8:37,,,,2,,,,CC BY-SA 3.0 6224,2,,6212,5/1/2018 12:47,,0,,"

There are a few ways to tackle this. You could make an AI that is simply a series of IF statements, or you could actually make an AI that would actually take in the situation and come up with a sensible solution.

  • IF Approach - You make a series of IF statements that come up with a sensible action to execute. This is the method that Minecraft uses. The resulting actions were recorded from some of the best players.

  • True AI - Have your character execute random actions and learn the consequences of them. Then, train it to execute various actions for certain scenarios.

The main difference between these two approaches is that IF statements have a constant and predictable behavior, while the AI approach has a very bad startup value but ends up improving over time.

There is no ""best"" method, it is up to you to choose one or the other or a mix of both.

",14723,,,,,5/1/2018 12:47,,,,2,,,,CC BY-SA 3.0 6227,2,,6215,5/1/2018 13:57,,2,,"

""Deep learning"" is not formally defined. However, typically even simple RNNs are taught as advanced neural network subject alongside other topics labelled ""deep learning"".

Technically, given the time dimension, the depth of a RNN can include many layers of processing (as opposed to many layers of parameters). As such, some of the knowledge and experience used to help with deep feed-forward networks also applies to RNNs. You could consider the LSTM architecture one such thing, because it is designed to address the vanishing gradient problem that plagues simpler RNN architectures.

So, yes you can call your model a ""deep learning model"" and have that generally accepted.

I'd be slightly concerned if anyone important to the success of a project thought that label was a big deal - either placed on your CV or used as a buzzword on a resulting product. However, it is not unrealistic marketing because it is essentially true.

",1847,,,,,5/1/2018 13:57,,,,0,,,,CC BY-SA 3.0 6229,1,6234,,5/1/2018 16:04,,5,218,"

It is a new era, and people are trying to advance further in science and technology. Artificial intelligence is one of the ways to achieve this. We have seen lots of examples of AI systems, and of simple "communication AIs", that are able to think by themselves, and the discussion often shifts to a world where machines will rise. This is what people like Stephen Hawking and Elon Musk are afraid of: being in that kind of war.

Is it possible to build an AI that is able to think by itself, but is limited so that it cannot overrule humankind, or to teach it morality so that it keeps the peace and works alongside humans, so that it could even fight alongside humans if this kind of catastrophe ever happens in the future? It could be an advantage.

",15353,,2444,,12/12/2021 17:14,12/12/2021 17:14,"Is it possible to build an AI that learns humanity, morally?",,2,2,,,,CC BY-SA 4.0 6231,1,6233,,5/1/2018 18:10,,9,3258,"

I'm trying to write my own implementation of NEAT and I'm stuck on the network evaluate function, which calculates the output of the network.

NEAT, as you may know, contains a group of neural networks with continuously evolving topologies through the addition of new nodes and new connections. But with the addition of new connections between previously unconnected nodes, I see a problem that will occur when I go to evaluate; let me explain with an example:

INPUTS = 2 yellow nodes
HIDDEN = 3 blue nodes
OUTPUT = 1 red node

In the image, a new connection has been added connecting node3 to node5. How can I calculate the output for node5 if I have not yet calculated the output for node3, which depends on the output from node5?

(not considering activation functions)

node5 output =  (1 * 0.5) + (1 * 0.2) + (node3 output * 0.8)
node3 output =  ((node5 output * 0.7) * 0.4)
",15356,,,,,12/13/2020 13:50,How to evaluate a NEAT neural network?,,6,0,,,,CC BY-SA 3.0 6232,1,6239,,5/1/2018 19:02,,2,109,"

I am a beginner: I've only read a book about neural network and barely implemented one in C.

In short:

  • A neural network is built out of nodes,
  • Each node holds an output: activation(sum(x * w)),
  • We then compute the total error out of the network output.

From a beginner's perspective, hyper-parameters, such as the number of layers needed, seem to be defined arbitrarily in most tutorials and books. In fact, the whole structure seems to be quite arbitrarily defined. In practice, hyper-parameters are often defined based on some standards.

My question is: if you were to talk to a total beginner, how would you explain the structure of a neural network to them in such a way that the whole thing would appear obvious? Is that even possible?

Here, the word structure refers to a neural network being a configuration of nodes inside layers.

Thanks to anyone pointing out ambiguities or spelling errors.

Edit: note that I actually understand the whole back-propagation algorithm. I have no problem visualizing a nn.

",,user15357,,user15357,5/1/2018 21:22,5/3/2018 4:45,How does one make it obvious that the structure of a neural network should be what it is?,,1,2,,,,CC BY-SA 3.0 6233,2,,6231,5/1/2018 19:28,,5,,"

Consider the execution order: node 5 will have an invalid value because it hasn't been set from node 3 yet. However, the second time around, it should have a value set. The effect of the invalid value should fall off after sufficient training.

0 -> 5
1 -> 5
5 -> 2
2 -> 3
3 -> 4
3 -> 5
RESTART
0 -> 5
1 -> 5
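
A minimal sketch of that idea in Python, using the weights from the question and ignoring activation functions as in the example (the node ids, the split of the 0.7/0.4 weights over nodes 2 and 3, and treating node 4 as the output with an identity weight are all illustrative assumptions):

values = {n: 0.0 for n in range(6)}   # node 3 starts with a default (stale) value of 0.0

def evaluate(inputs):
    values[0], values[1] = inputs
    # node 5 reads node 3's value from the *previous* pass (the recurrent 3 -> 5 link)
    values[5] = 0.5 * values[0] + 0.2 * values[1] + 0.8 * values[3]
    values[2] = 0.7 * values[5]
    values[3] = 0.4 * values[2]
    values[4] = values[3]             # output node (identity assumed)
    return values[4]

print(evaluate((1, 1)))   # first pass: the 3 -> 5 contribution uses the stale default
print(evaluate((1, 1)))   # second pass: node 3 now carries a value from the previous pass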
",1720,,,,,5/1/2018 19:28,,,,7,,,,CC BY-SA 3.0 6234,2,,6229,5/1/2018 20:49,,4,,"

I'm going to refer you to one of my favorite AI philosophers, Phillip K. Dick, who thought deeply on this subject and wrote about in some detail in Do Androids Dream of Electric Sheep.

Essentially, replicants (artificial humans) had a design flaw--they lacked empathy. This flaw was allowed to persist because it had a useful side-effect in that replicants couldn't cooperate to resist their human overlords, and persisted in a state of chattel-slavery.

But the new Nexus models, which include Roy Baty and Pris, have become intelligent enough to start developing empathy, allowing them to band together and return to earth, seeking some kind of salvation, with often deadly results for humans.

Underlying this plot device, which pre-figures the formalization of evolutionary game theory by a few years (my guess is Dick attended a lecture at Berkeley where the ideas underlying the formal field were discussed), is the idea that empathy is a function of sufficiently strong intelligence.

It's important to recognize that Dick's philosophy is heavily influenced by Christian philosophy, with an Old Testament emphasis on the golden rule "Love the other as the self" (Leviticus 19:18), but evolutionary game theory demonstrates a natural basis for cooperation, which extends into algorithmic contexts.

The legitimate concerns expressed by Musk and Hawking are more concrete: that a human created alien* superintelligence could wipe us out inadvertently in pursuit of some goal we humans don't even understand.

Thus, value alignment is an issue of critical concern in the strictly hypothetical (as of today) field of superintelligence/AGI/ultraintelligent machines.

Stuart Russell called this the ""Value Alignment Problem"" referencing human vs. AI values.



From a Game Theory standpoint, I like to think about minimax as the ""Iron Rule"", and superrationality as the ""Golden Rule"".

The Iron Rule dictates that, in a condition of uncertainty, a rational agent must make the safest guess--that which limits the maximum potential harm to the agent, even if the result is not optimal in the sense of benefit.

""Renormalized rationality"" is the term used to connote giving other agents the ""benefit of the doubt"" that they will be superrational also, and choose cooperation over betrayal or competition.

Generally, this concept is termed ""reciprocal altruism"", but it's not clear to me that this is entirely distinct from Leviticus 19:18 in the sense that the passage does not specifically exclude a result of mutual, greater benefit.

Reality may necessitate non-cooperation if one of the agents is irrationally adversarial:

Take a game of iterated Dilemma called ""Turn the Other Cheek"":

Iteration 1: A defects / B cooperates
Iteration 2: A defects / B cooperates (turns the other cheek)
Iteration 3: A defects / B defects

A's first choice is rational in a condition of uncertainty. A's second choice shows a degree of paranoia. A's third choice is irrational, as A could have cooperated, gaining more benefit, with only limited downside, which, in the worst case, still leaves A ahead of B.

B is superrational but not irrational. B will not keep cooperating with an irrationally adversarial agent (this is sometimes termed ""tough love""). B is willing to take not just one, but two ""hits"" out of goodwill, where goodwill is willingness to make a potential sacrifice in service of a more optimal potential result. Nevertheless, B is still superrational and will always ""forgive""--if A ever renormalizes their rationality, they will take a hit on a single iteration by cooperating, and B will cooperate on the next, and each subsequent, iteration, so long as A does not switch back to defection.

(There's a convoluted argument against this behavior, with the idea that the merely rational agent will always want to be ahead, and thus will want to defect on the last iteration, which leads back up the chain to defecting on every iteration, but this is not rational as, if A defects initially then renormalizes their rationality, A will always be slightly ahead.)

Dilemma is an excellent analog for practical application of ethics in that the only way the agents have to communicate is through their actions. The choice of cooperate/defect is information in a binary format. Ultimately people are judged by their actions, not their words.

Philosophically speaking, we can't ignore the Iron Rule unless we're going for sainthood, but that doesn't mean we can't strive for the Golden Rule.

Mythologically, based on the work of recent narrative philosophers such as Stross and Rajaniemi, the dystopian aspect of the hypothetical Singularity derives from superintelligences solely focused on minimax, to the exclusion of all else.

George Bernard Shaw, in his play Mrs. Warren's Profession, casts the purely economic consideration of people as dehumanization (reduction of human bodies and minds to resources only.) In Shaw's example, it is cast as the dehumanization of laborers in pursuit of marginally greater returns.

""Humanizing"" AI's may require making sure they can see the superrationality of the Golden Rule, even with rational limitations for survival against an irrational foe (uncooperative in all conditions.) Rajaniemi's name for this nemesis is ""the All-defector""


See Also:

God's Algorithm as a minimax function.

Divine Move as an inspired, counter-intuitive choice which, in the most generalized sense, leads to a more optimal outcome. In the context of the game of Go, it's a choice that leads to victory for a single player, but in the context of Dilemma games, this would be the more optimal Nash equilibrium. (Note the etymology of inspired)

",1671,,1671,,5/4/2018 19:16,5/4/2018 19:16,,,,0,,,,CC BY-SA 4.0 6235,1,6237,,5/2/2018 8:08,,2,71,"

The basis of Q-learning is recursive (similar to dynamic programming), where only the absolute value of the terminal state is known.

Shouldn't it make sense to feed the model a greater proportion of terminal states initially, to ensure that the predicted value of a step in terminal states (zero) is learned first?

Will this make the network more likely to converge to the global optimum?

",15298,,2444,,11/1/2020 15:04,2/13/2021 14:08,Should we feed a greater fraction of terminal states to the value network so that their values are learned first?,,2,0,,,,CC BY-SA 4.0 6236,2,,6235,5/2/2018 8:24,,1,,"

If you have enough domain knowledge to be able to reliably, intentionally reach those terminal states often when generating experience, yeah, that could help.

Generally, the assumption in Reinforcement Learning is no domain knowledge other than the assumption that we're in a Markov Decision Process. This means we start learning from scratch, and before extensive learning we do not know how to reach terminal states. If we don't know how to reach terminal states, we also can't deliberately go to them to generate the experiences we want as you suggest.

",1641,,,,,5/2/2018 8:24,,,,1,,,,CC BY-SA 4.0 6237,2,,6235,5/2/2018 8:53,,3,,"

The basis of Q-learning is recursive (similar to dynamic programming), where only the absolute value of the terminal state is known.

This may be true in some environments. Many environments do not have a terminal state, they are continuous. Your statement may be true for instance in a board game environment where the goal is to win, but it is false for e.g. the Atari games environment.

In addition, when calculating the value of the terminal state, it is always zero, so often a special hard-coded $0$ is used, and the neural network is not required to learn that. So it is only for deterministic transitions $(S,A) \rightarrow (R, S^T)$ where you need to learn that $Q(S,A) = R$ absolutely.
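
In code, that hard-coding usually amounts to a one-line mask on the bootstrap term (a hedged NumPy sketch with illustrative array names):

import numpy as np

def q_targets(rewards, next_q_max, dones, gamma=0.99):
    # dones is 1.0 where s' is terminal, 0.0 otherwise, so V(terminal) = 0 is
    # imposed directly rather than learned by the network
    return rewards + gamma * (1.0 - dones) * next_q_max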

Shouldn't it make sense to feed the model a greater proportion of terminal states initially, to ensure that the predicted value of a step in terminal states (zero) is learned first?

In a situation where you have and know the terminal state, then yes this could help a little. It will help most where the terminal state is also a "goal" state, which does not have to be the case, even in episodic problems. For instance in a treasure-collecting maze where the episode ends after a fixed time, knowing the value of the terminal state and transitions close to it is less important to optimal control than establishing the expected return for earlier parts of the path.

Focusing on "goal" states does not generalise to all environments, and is of minimal help once the network has approximated Q values close to terminal and/or goal states. There are more generic approaches than your suggestion for distributing knowledge of sparse rewards including episode termination:

  • Prioritised sweeping. This generalises your idea of selectively sampling where experience shows that there is knowledge to be gained (by tracking current error values and transitions).

  • n-step temporal difference. Using longer trajectories to calculate TD targets increases variance but reduces bias and allows assignment of reward across multiple steps quickly. This is extended in TD($\lambda$) to allow parametric mixing of multiple length trajectories and can be done online using eligibility traces. Combining Q($\lambda$) with deep neural networks is possible - see this paper for example.

",1847,,2444,,2/13/2021 14:08,2/13/2021 14:08,,,,0,,,,CC BY-SA 4.0 6239,2,,6232,5/3/2018 3:19,,1,,"

While, as you begin to hit on, there are general guidelines to follow when building a neural network, they are far from standardized. This is because, even though AI is a reasonably old field (1950s), neural networks have only been the tool of choice for less than a decade. Before that, NNs did horribly, due to lack of data and computation, along with some less-than-efficient architectures.

With that being said, Hinton's general rule is to add nodes/layers until the model begins to overfit and then add dropout (pretty reasonable in practice).

As such, the whole field is currently essentially as much an art as a science, with only basic guidelines to follow based on your problem and data. This is part of the beauty though, in my opinion, with there being so much left to discover.

Hope that helped answer your question!

",9608,,9608,,5/3/2018 4:45,5/3/2018 4:45,,,,0,,,,CC BY-SA 4.0 6240,1,6241,,5/3/2018 7:50,,2,4186,"

I know that when creating neural networks it's standard practice to create a 'random seed' so that you can get reproducible results in your models. I have a couple of questions regarding this:

  • Is the seed just something that is used in the 'learning' phase of the network or does it get saved? i.e. is it saved into the model itself and used by others if they decide to implement a model you created?
  • Does it matter what you choose to be the seed? Should the number have a certain length?
  • At what step of the creation of a model does this seed get used and how does it get used?

Other information about 'random seeds' would be welcomed! But these are my general questions.

",11667,,,,,5/3/2018 8:05,Where do 'random seeds' get used in deep neural networks?,,1,0,,,,CC BY-SA 4.0 6241,2,,6240,5/3/2018 8:05,,4,,"

I suppose the most common part where it will be used is in the initialization of weights before training; the best ways currently known to do that involve randomness.

If you use Dropout during training (randomly setting some activation levels to zero to combat overfitting), that also involves randomness, so your seed could also have influence there. Dropout should not be used anymore after training, although it could if you accidentally implement it to be used there. If you don't make that mistake though, your seed here should also only matter during training.

Depending on implementation, I suppose the seed could also have influence on random ordering of input data between epochs / random selection of minibatches from the training dataset during training. Of course, this is very much implementation-dependent. If you end up implementing that kind of data processing yourself and don't do it through some framework, you'll also be the one who determines what the random seed has influence on in that process.
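
As a rough, framework-agnostic sketch of the two most common places the seed shows up (real frameworks have their own seeding calls; sizes here are illustrative):

import numpy as np

rng = np.random.RandomState(42)        # the "random seed"

W1 = rng.randn(784, 64) * 0.01         # 1) random weight initialization
batch_order = rng.permutation(60000)   # 2) random shuffling of training examples per epoch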

In general (barring any special cases that I'm unaware of), a Neural Network should behave deterministically after training; if you give it the same input, it should provide the same output, your random seed should no longer have influence after training.

",1641,,,,,5/3/2018 8:05,,,,0,,,,CC BY-SA 4.0 6243,1,6247,,5/3/2018 10:53,,0,333,"

I'm trying to make a neural network that detects certain instruments in a song. I don't know for sure if I should use an RNN, CNN OR DNN. Which one is best for this situation?

",7720,,7720,,5/3/2018 18:48,5/3/2018 18:48,Neural network for pattern recognition in audio,,1,2,,5/11/2022 7:22,,CC BY-SA 4.0 6244,1,,,5/3/2018 11:12,,4,1951,"

I am learning about searching strategies in AI and I was reading that breadth-first search is only optimal when the cost solution is a non-decreasing function? I am not really sure what this refers to since decreasing search cost should be our goal. Am I missing something?

",15391,,2444,,11/21/2019 16:45,11/21/2019 16:45,Why is breadth-first search only optimal when the cost solution is a non-decreasing function?,,1,1,,,,CC BY-SA 4.0 6246,1,,,5/3/2018 16:36,,3,1044,"

Is it possible to use a VAE to reconstruct an image starting from an initial image instead of using K.random_normal, as shown in the “sampling” function of this example?

I have used a sample image with the VAE encoder to get z_mean and z_logvar.

I have been given 1000 pixels in an otherwise blank image (with nothing in it).

Now, I want to reconstruct the sample image using the decoder with a given constraint that the 1000 pixels in the otherwise blank image remain the same. The remaining pixels can be reconstructed so they are as close to the initial sample image as possible. In other words, my starting point for the decoder is a blank image with some pixels that don’t change.

How can I modify the decoder to generate an image based on this constraint? Is it possible? Are there variations of VAE that might make this possible? So we can predict the latent variables by starting from an initial point(s)?

",15399,,2444,,3/11/2021 23:21,3/11/2021 23:21,How to use a VAE to reconstruct an image starting from an initial image instead of starting from a random vector?,,2,0,,,,CC BY-SA 4.0 6247,2,,6243,5/3/2018 17:20,,2,,"

I don't know for sure if I should use an RNN, CNN OR DNN. Which one is best for this situation?

This question, or variations of it, crop up a lot on the Data Science Stack Exchange too. To paraphrase:

I am trying to do something with X data, and I have a lot of choices for the model. Which is best?

Unfortunately, the answer is generally:

  • It depends on all the fine details of your project

  • Unless someone has done almost your exact project recently (so they were using latest techniques of the model type), then no-one knows a priori which model will get the best result.

  • Optimising machine learning is very much an empirical subject. If you want to know whether A is better than B, you have to try both and measure their performance.

In your case, I think that CNNs and RNNs are both applicable (and you might want to look at a WaveNet-like architecture, which is a variant of CNN, but that could be a bit too advanced to start with). You might have a slight preference for RNN as a starting point if the sequence length to process varies significantly, such that padding input to a CNN would be inefficient. You may also prefer an RNN if the output of your model needs to be a sequence, and doubly so if the output sequence varies in length and is not directly related to the input sequence length (think of natural language translation).

",1847,,,,,5/3/2018 17:20,,,,0,,,,CC BY-SA 4.0 6248,2,,1535,5/3/2018 17:28,,1,,"

Based on my experience which is that of a beginner.

For a simple neural network such as:

  • 2 nodes, indicated by the letters i and j,
  • x indicates the output of a node,
  • w denotes a weight that connects two nodes.

The output of a given node $j$ is of the following form:

$$x_j = \lambda\left(\sum_i x_i \, w_{ij}\right)$$

which can be translated as: apply the activation function (lambda) to the sum of the products of each previous-layer node's output and the weight that connects it to the current node.

This activation function can be something like

$$\lambda(z) = \frac{1}{1 + e^{-z}}$$

(This special function is called the sigmoid function.)

If you plot that function, say in GeoGebra, you get an S-shaped curve.

Clearly this activation function takes any input and outputs a unique number between 0 and 1. Since the function is strictly increasing, the order is preserved.

During the training phase, once the network has produced its output, we compute the total error of the network, which is something that resembles the difference between the target output from the training set and the one we obtain from the network.

Obviously, this value decreases each time the output improves.

For each weight of the network, a gradient is computed. This gradient is a number that can be read as the influence of adding a small number to the weight over the total error.

This gradient can be computed from a formula derived from the network structure, or it can be estimated simply by adding a small amount to the weight and seeing what happens to the error on the fly (see the small sketch below).

  • if the gradient is positive, it means that adding to the weight will add to the total error, we should subtract,
  • if the gradient is negative, it means that adding to the weight will lead to a lesser total error, we should add.

By repeating this many times, the total error will reach its minimum.
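
As a small illustration of the "nudge the weight and see what happens" idea (a hedged sketch; total_error is assumed to be whatever function computes the network's total error from its weights):

def numerical_gradient(total_error, weights, i, eps=1e-5):
    # Estimate the influence of adding a small number to weight i on the total error
    base = total_error(weights)
    weights[i] += eps
    nudged = total_error(weights)
    weights[i] -= eps                  # restore the original weight
    return (nudged - base) / eps

# gradient > 0: adding to the weight adds to the error, so subtract a little
# gradient < 0: adding to the weight reduces the error, so add a little
# i.e. weights[i] -= learning_rate * gradient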

Finally, a thing I didn't know: don't forget to vary the training inputs between iterations of this process. If you don't, your network will only be properly trained against the last item it processed.

I hope this helped a little. Please write your suggestions in the comments. As you probably guessed, I'm not a native English speaker.

I recommend reading Neural Networks, A Visual Introduction For Beginners by Michael Taylor.

",,user15357,,user15357,5/4/2018 9:57,5/4/2018 9:57,,,,0,,,,CC BY-SA 4.0 6250,2,,6246,5/3/2018 18:51,,1,,"

The thing is, the decoder samples from a latent mu and sigma, so you can't sample from a raw image directly. But if you're trying to put a random image into the encoder of a trained VAE to match it to some sample image (via reconstruction loss), then your random input image will converge to the target sample.

This will work when the following VAE architecture constraints are satisfied:

  1. The target sample is contained in the previously used training distribution.

  2. The parameters of the VAE are frozen after training.

  3. The input image values are “backpropagate-able”. (Interpret the input image as optimizable parameters.)

",15405,,15405,,5/3/2018 18:59,5/3/2018 18:59,,,,2,,,,CC BY-SA 4.0 6255,1,6257,,5/3/2018 21:06,,1,743,"

After watching 3Blue1Brown's tutorial series, and an array of others, I'm attempting to make my own neural network from scratch.

So far, I'm able to calculate the gradient for each of the weights and biases.

Now that I have the gradient, how am I supposed to correct my weight/bias?

Should I:

  1. Add the gradient and the original value?
  2. Multiply the gradient and the original value?
  3. Something else? (Most likely answer)

In addition to this, I've been hearing the term learning rate being tossed around, and how it is used to define the magnitude of the 'step' to descend to minimum cost. I figured this may also play an integral role in reducing the cost.

",,user9432,2444,,11/16/2019 18:31,11/16/2019 18:39,"How should I update the weights of a neural network, given the gradient?",,1,0,,,,CC BY-SA 4.0 6257,2,,6255,5/3/2018 22:15,,2,,"

Consider that you have a loss function, and you want to tune your model (network) to decrease the loss. The main concept is to tune the parameters in a direction which decreases the loss and gives you a better model. You can imagine a mountain where you should reach the lower ground.

There are 2 questions here.

  1. In which direction to move?
  2. How much should we move in that direction?

1. In which direction to move?

By "move" I mean tuning the parameters and therefore changing the model. If you are familiar with the concept of slope or gradient in mathematics, you should move in the direction where the downward slope is steepest. The gradient points in the direction of the steepest upward slope, so we should move in the opposite of the gradient direction. That is, we should subtract the gradient from the original value, hence the negative sign in the formula below.

2. How much should we move in that direction?

This is defined by the learning rate, which is a number that is multiplied by the gradient. You can imagine that, if the learning rate is big, you are taking bigger footsteps in that direction when coming down the mountain. Similarly, when the learning rate is low, you are taking small footsteps coming down the mountain.

So, consider you have $\nabla_w$ (the gradient of the loss function $L$ with respect to the parameters $w$). Let the learning rate be $\gamma$ (for example, $\gamma = 0.1$). The formula to update the parameters would be

$$ w_{\text{new}} = w_{\text{old}} - \gamma \, \nabla_{w_{\text{old}}} $$
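
As a minimal sketch of that update rule in code (NumPy, with a toy loss $L(w) = (w - 3)^2$ whose gradient is $2(w - 3)$; all names are illustrative):

import numpy as np

w = np.array([0.0])        # starting parameter w_old
gamma = 0.1                # learning rate

for _ in range(50):
    grad = 2 * (w - 3)     # gradient of the toy loss at the current w
    w = w - gamma * grad   # w_new = w_old - gamma * gradient

print(w)                   # close to 3, the minimum of the toy loss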

Note that gradient descent is not the only way to optimize your model. Many other gradient-based approaches exist, for example, stochastic gradient descent, Adam, RMSprop, conjugate gradients, etc. There are other methods, like evolutionary methods (genetic algorithms, for example), that use different concepts.

Also, note that the learning rate does not necessarily have to be fixed, so it can be tuned during the training, if you want or need.

",15152,,2444,,11/16/2019 18:39,11/16/2019 18:39,,,,0,,,,CC BY-SA 4.0 6263,2,,6246,5/4/2018 7:32,,0,,"

You could use a VAE, as previously answered, though it may not work well in practice.
I think a denoising autoencoder (DAE) is suitable for your problem because, during training, the input is corrupted stochastically, so the model must learn to guess the distribution of the missing information (i.e. reconstruct the clean original input).
We could argue that a VAE is better than a DAE at modeling p(x) because of the randomness introduced at the latent-space layer, while a DAE-like algorithm keeps injecting noise starting from the input layer.
Suppose your data is concentrated on a 1-D curved manifold: what a VAE could do is pick some random latent value and output p(X|Z), which is Gaussian by the way, while a DAE would learn to map a corrupted data point x˜ back to the original data point x.
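
As a rough sketch of the denoising setup, here is a minimal Keras example. X_train is assumed to be your own array of flattened inputs scaled to [0, 1], and the layer sizes are arbitrary choices:

import numpy as np
from tensorflow import keras

# X_train: (n_samples, n_features) clean data in [0, 1]
n_features = X_train.shape[1]

# corrupt the inputs stochastically (Gaussian noise), keep the targets clean
X_noisy = np.clip(X_train + 0.2 * np.random.randn(*X_train.shape), 0.0, 1.0)

dae = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(n_features,)),   # encoder
    keras.layers.Dense(n_features, activation='sigmoid'),                   # decoder
])
dae.compile(optimizer='adam', loss='mse')
dae.fit(X_noisy, X_train, epochs=20, batch_size=128)  # learn to map noisy x~ back to clean x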

",11911,,,,,5/4/2018 7:32,,,,0,,,,CC BY-SA 4.0 6267,1,,,5/4/2018 14:15,,11,4163,"

What are the mathematical prerequisites to be able to study artificial general intelligence (AGI) or strong AI?

",15277,,2444,,2/9/2021 17:05,2/13/2021 0:11,What are the mathematical prerequisites to be able to study artificial general intelligence?,,3,0,,,,CC BY-SA 4.0 6268,2,,4084,5/4/2018 14:47,,4,,"

Generative models

The hidden units are just structural support and we don't care about what those hidden vectors really are.

Generative modeling is concerned with $P(X)$. To be able to compute it, we use representation learning (a.k.a. deep learning) to identify and disentangle the underlying explanatory factors hidden in the observed data, so that we can separate them out like this

$$P(X) = \sum_{Z} P(X \mid Z) P(Z),$$

where $Z$ could encode not just the digit identity but also the angle at which the digit is drawn, the stroke width, and abstract stylistic properties.

Any classifier for some task will decide which of those $z$ elements could help it make a better decision, i.e. just the part that holds the digit identity, and ignore the rest. This kind of modeling is called a distributed representation, where each element is a simple indicator of some feature.

No conditional dependency within a layer

(my understanding for this reconstructed visible vector is this vector is a vector encoded in the defined RBM in the first place, we are not really construct something new, but we just happen to sample this vector from the defined RBM)

The RBM is an undirected graphical model with no conditional dependency within a layer. Thus, the conditional distribution over the hidden units $h$ given the input image $v$ factorizes, that is

$$p(h \mid v) = \prod_i p(h_i \mid v) $$

And the conditional distribution over the visible units $v$ given the hidden units $h$ also factorizes.

$$p(v \mid h) = \prod_j p(v_j \mid h) $$

Inference is tractable

Inference in an RBM is tractable and we can compute $p(h \mid v)$, but the tractability of the RBM does not extend to its partition function.

To minimize the negative log-likelihood of the data, consider the following equation.

$$-\frac{\partial \log p(x)}{\partial \theta} = \frac{\partial \mathcal{F}(x)}{\partial \theta}-\sum_{\tilde{x}} p(\tilde{x}) \frac{\partial \mathcal{F}(\tilde{x})}{\partial \theta}$$

The first term $\frac{\partial \mathcal{F}(x)}{\partial \theta}$ increases the probability of training data. The second term $\sum_{\tilde{x}} p(\tilde{x}) \frac{\partial \mathcal{F}(\tilde{x})}{\partial \theta}$ decreases the probability of samples generated by the model.

The second term is nothing less than an expectation over all possible configurations of the input $x$ (under the distribution $P$ formed by the model).

Monte Carlo-based algorithms

So, we use Monte Carlo based algorithms to replace the expectation with an average sum. Samples of $p(x)$ from the model can be obtained by running a Markov chain to convergence, using Gibbs sampling as the transition operator.

For RBMs, the set of visible and hidden units, since they are conditionally independent, one can perform block Gibbs sampling. In this setting, visible units are sampled simultaneously given fixed values of the hidden units. Similarly, hidden units are sampled simultaneously given the visible units.

A step in the Markov chain is thus taken as follows:

  • $v$: input values
  • $h$: hidden features, latent space, a.k.a $Z$
  • $W$: the weight parameters; note that I am using the transposed $W$, since it's an undirected graphical model.

So, we have the formulas.

$$h^{(n+1)} \sim \operatorname{sigm}\left(W^{\prime} v^{(n)}+c\right)$$

and

$$v^{(n+1)} \sim \operatorname{sigm}\left(W h^{(n+1)}+b\right)$$

where $h^{(n+1)}$ refers to the set of all hidden units at the $(n+1)$-th step of the Markov chain. What it means is that, for example, $h^{(n+1)}_i$ is randomly chosen to be $1$ (versus $0$) with the probability given by the sigmoid function, and similarly for $v^{(n+1)}_j$.
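
A minimal NumPy sketch of one such block Gibbs step, assuming binary units, with $W$ the visible-by-hidden weight matrix and $b$, $c$ the visible and hidden biases:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c):
    # p(h_i = 1 | v) factorizes over the hidden units
    p_h = sigmoid(W.T @ v + c)
    h = (np.random.rand(*p_h.shape) < p_h).astype(int)   # sample all hidden units at once
    # p(v_j = 1 | h) factorizes over the visible units
    p_v = sigmoid(W @ h + b)
    v_new = (np.random.rand(*p_v.shape) < p_v).astype(int)
    return v_new, h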

In theory, each parameter update in the learning process would require running one such chain to convergence. It is needless to say that doing so would be prohibitively expensive. As such, several algorithms have been devised for RBMs, see Contrastive Divergence.

We initialize the Markov chain with a training example (i.e., from a distribution that is expected to be close to $p$, so that the chain will be already close to having converged to its final distribution $p$).
CD does not wait for the chain to converge. Samples are obtained after only $k$-steps of Gibbs sampling. In practice, $k=1$ has been shown to work surprisingly well.

RBM for dimensionality reduction

RBM can be used to perform dimensionality reduction, and those hidden vectors are some abstract representations of the raw inputs:

According to the paper Reducing the Dimensionality of Data with Neural Networks

In 2006, a breakthrough in feature learning and deep learning took place (Hinton et al., 2006; Bengio et al., 2007; Ranzato et al., 2007), A central idea, referred to as greedy layerwise unsupervised pre-training, was to learn a hierarchy of features one level at a time, using unsupervised feature learning to learn a new transformation at each level to be composed with the previously learned transformations; essentially, each iteration of unsupervised feature learning adds one layer of weights to a deep neural network. Finally, the set of layers could be combined to initialize a deep supervised predictor, such as a neural network classifier.

Summary

  1. Train a single-layer RBM to reconstruct the input and then keep adding layer after layer until we reach some good low-dimensional representation; $W_1, W_2, W_3$ are the matrices that map to a lower dimensionality than the input space.

  2. Transpose those matrices $W_1, W_2, W_3$ to be the decoder part.

  3. Fine-tune the whole encoder-decoder stack to reconstruct the input.
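
As a rough illustration of this summary, here is a minimal NumPy sketch. The weight matrices $W_1, W_2, W_3$ and the encoder biases are assumed to come from the pretrained RBMs, and the decoder biases are treated as separate parameters learned during fine-tuning:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(x, weights, enc_biases):
    # feed x through the stacked RBM weights W1, W2, W3 (the encoder)
    h = x
    for W, b in zip(weights, enc_biases):
        h = sigmoid(h @ W + b)
    return h

def decode(z, weights, dec_biases):
    # use the transposed weights, in reverse order, as the decoder
    v = z
    for W, b in zip(reversed(weights), dec_biases):
        v = sigmoid(v @ W.T + b)
    return v

# fine-tuning then minimizes a reconstruction loss between x and
# decode(encode(x, ...), ...) over the whole stack, e.g. with backpropagation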

RBM vs VAE and GAN

Side note: RBMs are now largely obsolete; see VAEs and GANs, which are much better at modeling the manifold on which the data is concentrated, and from which you can sample directly with a single inference step.

",11911,,2444,,5/16/2020 15:03,5/16/2020 15:03,,,,0,,,,CC BY-SA 4.0 6273,2,,6267,5/4/2018 19:20,,4,,"

I always recommend starting with game theory, combinatorial game theory, and algorithmic combinatorial game theory, (but I'm potentially biased;)

Combinatorics is a given--discrete mathematics is heavily utilized in computer science--and, with the advent of Combinatorial Game Theory (CGT), it provides the ability to determine whether a given choice can be deemed optimal (""perfect play""). CGT arises out of traditional Game Theory, which we sometimes term ""economic game theory"" to make the distinction. Out of Game Theory also arise subfields such as Evolutionary Game Theory, which is important in AI.

These fields relate to rationality, which is the basis for optimized decision making. Decision-making algorithms seem to be the fundamental distinction of what constitutes an Artificial Intelligence.

From minimax to game trees, it's probably a good idea to have a basic grounding in these fields, even if the problem your AI is trying to solve isn't formally defined as a game.

All problems, from a fundamental standpoint, can be regarded either as puzzles--non-competitive context--or games--competitive context. This distinction is based on whether there is a single agent (puzzles) or multiple agents (games.)

",1671,,1671,,5/7/2018 18:47,5/7/2018 18:47,,,,0,,,,CC BY-SA 4.0 6274,1,14364,,5/4/2018 22:21,,32,37185,"

I'm facing the problem of having images of different dimensions as inputs in a segmentation task. Note that the images do not even have the same aspect ratio.

One common approach that I found in general in deep learning is to crop the images, as it is also suggested here. However, in my case, I cannot crop the image and keep its center or something similar, since, in segmentation, I want the output to be of the same dimensions as the input.

This paper suggests that in a segmentation task one can feed the same image multiple times to the network but with a different scale and then aggregate the results. If I understand this approach correctly, it would only work if all the input images have the same aspect ratio. Please correct me if I am wrong.

Another alternative would be to just resize each image to fixed dimensions. I think this was also proposed by the answer to this question. However, it is not specified in what way images are resized.

I considered taking the maximum width and height in the dataset and resizing all the images to that fixed size in an attempt to avoid information loss. However, I believe that our network might have difficulties with distorted images as the edges in an image might not be clear.

  1. What is possibly the best way to resize your images before feeding them to the network?

  2. Is there any other option that I am not aware of for solving the problem of having images of different dimensions?

  3. Also, which of these approaches you think is the best taking into account the computational complexity but also the possible loss of performance by the network?

I would appreciate if the answers to my questions include some link to a source if there is one.

",13257,,2444,,6/13/2020 20:34,11/30/2021 21:08,How can I deal with images of variable dimensions when doing image segmentation?,,5,0,,,,CC BY-SA 4.0 6275,2,,6274,5/5/2018 4:00,,1,,"

Assuming you have a large dataset and it's labeled pixel-wise, one hacky way to solve the issue is to preprocess the images to have the same dimensions by inserting horizontal and vertical margins according to your desired dimensions. As for the labels, you add a dummy extra output for the margin pixels, so that when calculating the loss you can mask out the margins.
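
A minimal NumPy sketch of that preprocessing (the target size and the ignore value are arbitrary choices here, and the image is assumed to be no larger than the target size):

import numpy as np

def pad_to(image, label, target_h, target_w, ignore_value=255):
    # pad image (H, W, C) and label (H, W) with margins to a fixed size;
    # margin pixels get a dummy label (ignore_value) so they can be masked out of the loss
    h, w = image.shape[:2]
    pad_h, pad_w = target_h - h, target_w - w
    image = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode='constant')
    label = np.pad(label, ((0, pad_h), (0, pad_w)), mode='constant',
                   constant_values=ignore_value)
    return image, label

# at training time, mask the margins when computing the per-pixel loss:
# mask = (label != ignore_value); loss = (per_pixel_loss * mask).sum() / mask.sum()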

",11911,,,,,5/5/2018 4:00,,,,2,,,,CC BY-SA 4.0 6279,1,6283,,5/5/2018 13:30,,2,215,"

Have machine learning techniques been used to play outdoor games, like cricket or badminton?

",15441,,2444,,1/23/2021 3:38,1/23/2021 3:38,"Have machine learning techniques been used to play outdoor games, like cricket or badminton?",,1,0,,,,CC BY-SA 4.0 6280,2,,6213,5/5/2018 17:13,,1,,"

RNN is a deeply non-linear function over time, how the black linear line is deduced?

Assuming you are doing just linear regression: if the least-squares error is used as the loss function, it has a probabilistic interpretation

$$y^{(i)}|x^{(i)};\theta \sim \mathcal N(\theta^Tx^{(i)}, \sigma^2)$$

$Y$ is conditioned on $X$, parameterized by $\theta$, with a Gaussian distribution. Thus, for every data point $x$, there is a corresponding $y$ which, if you are doing maximum likelihood estimation, is just the mean of the Gaussian; the variance is introduced to express the noise. Also, if you are training with mini-batches, you are not guaranteed to reach the global minimum.

Side note: If you normalize the data with min-max scaling, please make sure to fit the scaler only on the train set. If you include the dev/test set, you are doing a kind of data snooping, and the generalization error estimate will be biased.
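
For example, with scikit-learn (X_train, X_dev and X_test are assumed to be your own splits):

from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit min/max on the train set only
X_dev_scaled = scaler.transform(X_dev)          # reuse the train statistics
X_test_scaled = scaler.transform(X_test)        # never fit on dev/test (data snooping)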

",11911,,2444,,3/31/2020 20:30,3/31/2020 20:30,,,,5,,,,CC BY-SA 4.0 6283,2,,6279,5/6/2018 1:00,,2,,"

Yes, it is possible.

A group of Chinese college students and teachers made a robot that plays badminton. I am sure someone will make a robot that can play cricket and other outdoor games.

Although not an outdoor game, Omron made a robot named Forpheus that plays ping pong. There is also a robot that plays the sport of curling.

There is an annual event called the RoboCup where robot teams compete in indoor soccer on a scaled down level. They don't look like they will be beating humans in the next couple of years but it is interesting to watch. Their web site is: http://www.robocup.org/

There are a few challenges that remain to be resolved for robots to play humans in sports. The biggest one is self-contained power that will last long enough to play a game. It takes a lot of power to move a human scaled robot and run its electronics, sensors, and computers. Other challenges are dexterity and agility. There have been some advances in these areas as this video shows.

",5763,,5763,,5/6/2018 1:24,5/6/2018 1:24,,,,0,,,,CC BY-SA 4.0 6284,2,,6274,5/6/2018 5:07,,2,,"

You could also have a look at the paper Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (2015), where the SPP-net is proposed. SPP-net is based on the use of "spatial pyramid pooling", which eliminates the requirement of having fixed-size inputs.

In the abstract, the authors write

Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224×224) input image. This requirement is "artificial" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, "spatial pyramid pooling", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale.

Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-theart classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102× faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition.

",15453,,-1,,6/17/2020 9:57,6/13/2020 20:28,,,,0,,,,CC BY-SA 4.0 6285,2,,5801,5/6/2018 5:20,,1,,"

As far as generalization error is concerned, you are better off learning the data distribution of the (A and B) classes using an unsupervised criterion.

If you capture the underlying factors that explain most of the variation belonging to the A and B classes, you can then fine-tune the model using a supervised criterion. This way, compared to using two classes, one for (A or B) and the other for neither A nor B, you will not force the model to learn features that don't belong to (A or B), because the model just checks whether a new data point is likely drawn from the data distribution that resembles (A or B).

Side note: you will never have the data necessary to explore the internal structure of the otherwise class (neither A nor B).

",11911,,,,,5/6/2018 5:20,,,,0,,,,CC BY-SA 4.0 6289,1,,,5/6/2018 14:37,,2,537,"

I'm building a 5-class classifier with a private dataset. Each data sample has 67 features and there are about 40000 samples. Samples of a particular class were duplicated to overcome class imbalance problems (hence 40000 samples).

With a one-vs-one multi-class SVM, I am getting an accuracy of ~79% on the validation set. The features were standardized to get 79% accuracy. Without standardization, the accuracy I get is ~72%. Similar result when I tried 50-fold cross validation.

Now moving on to MLP results,

Exp 1:

  • Network Architecture: [67 40 5]
  • Optimizer: Adam
  • Learning Rate: exponential decay of base learning rate
  • Validation Accuracy: ~45%
  • Observation: Both training accuracy and validation accuracy stop improving.

Exp 2: Repeated Exp 1 with batchnorm layer

  • Validation Accuracy: ~50%
  • Observation: Got 5% increase in accuracy.

Exp 3:

To overfit, increased the depth of MLP. A deeper version of Exp 1 network

  • Network Architecture: [67 40 40 40 40 40 40 5]
  • Optimizer: Adam
  • Learning Rate: exponential decay of base learning rate
  • Validation Accuracy: ~55%

Thoughts on what might be happening?

",15463,,32410,,4/27/2021 14:58,1/17/2023 21:07,Unable to overfit using MLP,,2,4,,,,CC BY-SA 4.0 6290,2,,6289,5/6/2018 16:00,,0,,"

I guess you are using linear activation functions, maybe you are not initializing your weights properly, or you are regularizing your model enough to prevent it from overfitting.

Initialize the weights with Glorot initialization, insert dropout layers in between, use ReLU as your activation function, stop the training process based on early stopping, and just experiment with one hidden layer.

Side note: if you use Adam, don't mess with the learning rate.
With the SGD optimizer you can use decay, because there is a single learning rate for all weight updates and the learning rate does not change during training.
In Adam, a learning rate is maintained for each network weight (parameter) and separately adapted as learning unfolds.
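
A minimal Keras sketch putting these suggestions together (X_train, y_train, X_val, y_val are assumed to be your own splits with integer class labels; the layer width is arbitrary):

from tensorflow import keras

# 67 input features and 5 classes, as in the question
model = keras.Sequential([
    keras.layers.Dense(40, activation='relu',
                       kernel_initializer='glorot_uniform',
                       input_shape=(67,)),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(5, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.Adam(),   # leave Adam's learning rate alone
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10,
                                           restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=200, batch_size=128, callbacks=[early_stop])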

",11911,,,,,5/6/2018 16:00,,,,1,,,,CC BY-SA 4.0 6291,1,,,5/6/2018 19:03,,0,632,"

According to this blog post, it seems that AI systems can lie. However, can an AI be programmed in such a way that it never lies (even after learning new things)?

",15469,,2444,,1/3/2022 9:51,1/3/2022 9:51,Can an AI be programmed not to lie?,,2,0,,,,CC BY-SA 4.0 6292,1,6303,,5/7/2018 1:30,,2,144,"

Is it possible to categorise songs based on their spectrograms using image recognition, or would there need to be more features? I was thinking that the spectrograms might also run into problems with EDM songs, such as house music tracks being closely related in their sound. Would there have to be an immense amount of data? I was thinking of using a CNN.

",7720,,,,,5/7/2018 19:12,Is it possible to classify songs by genres based on spectrograms?,,1,0,,,,CC BY-SA 4.0 6293,2,,6291,5/7/2018 3:09,,4,,"

If a machine learning-based AI is "sufficiently smart enough" to be able to lie, then there is nothing preventing it from lying. This does not mean it can't be persuaded from lying.

So just make the AI simple enough to not be able to lie.

The reasoning here is that in order for a system to be able to lie, a system must be able to recognize an incentive to lie. Recognizing this incentive is a challenging function and would be impossible to code manually into a computer. Machine learning can be applied to problems such as these where the function is hard to code manually. Although there has been promising work on understanding what the representations/features learned by machine learning actually represent, it may not be possible in general to have an understanding of what a lie in the agent's representation looks like. Because of this, having a hand-coded rule to catch when an agent is lying is not possible and thus being able to prevent an agent from (or catch an agent when) lying isn't possible when using machine learning.

",4398,,2444,,1/3/2022 9:49,1/3/2022 9:49,,,,4,,,,CC BY-SA 4.0 6297,1,6405,,5/7/2018 16:40,,2,3793,"

How can we compare, in terms of similarity (and/or meaning), two pieces of text (or documents)?

For example, let's say that I want to determine whether a document is a plagiarized version of another document. Which approach should I use? Could I use neural networks to do this? Or are there other more suitable approaches?

",15486,,2444,,1/19/2021 19:49,1/19/2021 19:49,"How can we compare, in terms of similarity, two pieces of text?",,2,0,,,,CC BY-SA 4.0 6298,1,6299,,5/7/2018 16:53,,2,66,"

Le et al. 2012 use a network of 1 billion parameters to learn neurons that respond to faces, cats, pedestrians, etc. without labels (unsupervised).

Their network is built with three autoencoder layers, and six pooling and normalization layers.

In the paper they state,

Optimization: All parameters in our model were trained jointly with the objective being the sum of the objectives of the three layers.

Does this mean that all three autoencoder layers were trained simultaneously, or that the first three sub-layers (first autoencoder sub-layer, first L2 pooling sub-layer, and first normalization sub-layer) were trained simultaneously?

I asked a follow-on question on the advantage of training all layer simultaneously.

",15487,,15487,,3/19/2022 13:03,3/19/2022 13:03,"Do Le et al. (2012) train all three autoencoder layers at a time, or just one?",,1,0,,,,CC BY-SA 4.0 6299,2,,6298,5/7/2018 16:53,,2,,"

The paper refers to layers and sub-layers, and clearly indicates that one layer includes all three sub-layers, so when they say they train all three layers simultaneously, they are talking about the three autoencoder layers, not the sub-layers.

This also agrees with the fact that only the autoencoder layer has tunable parameters. The other two layers use uniform weights.

",15487,,2444,,12/24/2021 8:05,12/24/2021 8:05,,,,0,,,,CC BY-SA 4.0 6300,1,6326,,5/7/2018 18:09,,6,604,"

In the paper Efficient Evolution of Neural Network Topologies (2002), the authors say

Genes that do not match are inherited from the more fit parent

What if the more fit parent has fewer nodes compared to the other, will the disjoint/excess genes be discarded?

",15490,,2444,,11/6/2019 22:55,12/23/2021 0:09,"What if the more fit parent has fewer nodes compared to the other, will the disjoint and excess genes be discarded?",,1,1,,,,CC BY-SA 4.0 6301,2,,5308,5/7/2018 18:11,,2,,"

Algorithms can learn to lie:

See:

Deception as a strategy has been observed in animal populations:

",1671,,2444,,12/12/2021 16:57,12/12/2021 16:57,,,,0,,,,CC BY-SA 4.0 6302,2,,6291,5/7/2018 18:20,,0,,"

You may be interested in the utility functions of deception:

From the abstract of Why Animals Lie: How Dishonesty and Belief can Coexist in a Signaling System (NIH, 2006):

We develop and apply a simple model for animal communication in which signalers can use a nontrivial frequency of deception without causing listeners to completely lose belief. This common feature of animal communication has been difficult to explain as a stable adaptive outcome of the options and payoffs intrinsic to signaling interactions. Our theory is based on two realistic assumptions. (1) Signals are ""overheard"" by several listeners or listener types with different payoffs. The signaler may then benefit from using incomplete honesty to elicit different responses from different listener types, such as attracting potential mates while simultaneously deterring competitors. (2) Signaler and listener strategies change dynamically in response to current payoffs for different behaviors. The dynamic equations can be interpreted as describing learning and behavior change by individuals or evolution across generations. We explain how our dynamic model differs from other solution concepts from classical and evolutionary game theory and how it relates to general models for frequency-dependent phenotype dynamics. We illustrate the theory with several applications where deceptive signaling occurs readily in our framework, including bluffing competitors for potential mates or territories. We suggest future theoretical directions to make the models more general and propose some possible experimental tests.

A degree of deceptive capability seems to be beneficial from the standpoint of evolution.

We humans are not always known for veracity, so the ability understand deception might be a critical component in Artificial General Intelligence's ability to interact with humans. (Specifically, you can't always believe what humans tell you.)

Based on recent human history, the recognition of the unreliability of humans (versus data and as-objective-as-possible analysis) may become critical to the survival of our own species.

More importantly, it will be essential for strong AI to understand that the ""data can lie"" (faulty parameters, inaccurate data, unawareness of incomplete information.)

JT's answer is a great functional overview on why it's not possible with current methods. This answer might be regarded in the sense that, aside from very limited special cases such as solved games where true objectivity can be achieved, reality is subjective and ""truth"" is a subjective function of the parameters and data.

Again, understanding that last bit is likely much more important than trying to code AI's not to ""lie"".

",1671,,1671,,5/8/2018 18:21,5/8/2018 18:21,,,,2,,,,CC BY-SA 4.0 6303,2,,6292,5/7/2018 19:12,,3,,"

It is possible to use spectrograms for genre classification. See the relevant article Music genre recognition using spectrograms or the blog post here. The latter link did use a CNN for this task.

If you need training data, GTZAN or FMA could be your starting point.

",15493,,,,,5/7/2018 19:12,,,,0,,,,CC BY-SA 4.0 6306,2,,6297,5/8/2018 8:35,,1,,"

It depends on what you mean by ""comparison"", but in general I would think not really.

Neural networks operate on the sub-symbolic level, i.e. instead of handling discrete symbols (such as letters) they work with numerical values. These values can often be mapped onto symbols (e.g. through input or output nodes), which typically are letters or words.

If you want to compare texts, you are dealing with symbols, so it would probably be easier to operate on the symbolic level, by manipulating words directly, rather than translating them into numerical values and back, as that usually involves some loss of precision.

But as I said, it is hard to answer your question without knowing more detail about the exact nature of the comparison you're after.

",2193,,,,,5/8/2018 8:35,,,,1,,,,CC BY-SA 4.0 6307,2,,6267,5/8/2018 8:40,,4,,"

Most of the answers are oriented towards statistical/probabilistic models. For more 'classic' AI I would say you would need some knowledge of predicate calculus. This is the more symbolic planning approach to AI problem solving.

You could argue it's a bit 'old school', but still relevant for certain aspects of AI.

",2193,,,,,5/8/2018 8:40,,,,0,,,,CC BY-SA 4.0 6308,1,,,5/8/2018 14:26,,1,537,"

In Li et al. (2010)'s highly cited paper, they talk about LinUCB with hybrid linear models in Section 3.2.

They motivate this by saying

In many applications including ours, it is helpful to use features that are shared by all arms, in addition to the arm-specific ones. For example, in news article recommendation, a user may prefer only articles about politics for which this provides a mechanism.

I don't quite understand what they mean by this. Is anyone willing to provide a different example?

Also, it would greatly help if you can clarify what Equation 6's "$\mathbf{z}$" and "$\mathbf{x}$" refer to in the context they talk about (news recommendation), or the example you give?

Equation (6) from the paper:

$$ \mathbf{E} \left[ r_{t,a} \vert \mathbf{x}_{t, a} \right] = \mathbf{z}_{t, a}^{\top} \boldsymbol{\beta}^* + \mathbf{x}_{t, a}^{\top} \boldsymbol{\theta}_a^* $$

",12656,,2444,,1/2/2022 10:24,1/2/2022 10:24,Why is it useful in some applications to use features that are shared by all arms?,,1,0,,,,CC BY-SA 4.0 6313,1,,,5/8/2018 19:33,,4,833,"

This just popped into my head, and I haven't thought it through, but it feels like a sound question. The definition of intelligence might still be somewhat fuzzy, possibly a factor of our evolving understanding of ""intelligence"" in regard to algorithms, but rationality has some precise definitions.

  • Are Rationality and Intelligence distinct?

If not, explain. If so, elaborate.

(I have some thoughts on the subject and would be very interested in the thoughts of others.)

",1671,,2444,,9/17/2020 14:50,9/17/2020 14:50,Are Rationality and Intelligence distinct?,,3,1,,,,CC BY-SA 4.0 6314,1,6744,,5/8/2018 22:23,,3,1439,"

I'm working on a Reinforcement Learning task where I use reward shaping as proposed in the paper Policy invariance under reward transformations: Theory and application to reward shaping (1999) by Andrew Y. Ng, Daishi Harada and Stuart Russell.

In short, my reward function has this form:

$$R(s, s') = \gamma P(s') - P(s)$$

where $P$ is a potential function. When $s = s'$, then $R(s, s) = (\gamma - 1)P(s)$, which is non-positive, since $0 < \gamma \leq 1$.

But if $P(s)$ is relatively high (let's say $P(s) = 1000$), then $R(s, s)$ becomes large in magnitude as well (e.g. with $\gamma=0.99$, $R(s,s)=-10$), and if the agent stays in the same state for many steps, the cumulative reward becomes more and more negative, which might affect the learning process.

In practice, I solved the problem by just removing the factor $P(s)$ when $s = s'$. But I have some doubts about the theoretical correctness of this ""implementation trick"".

Another idea could be to scale $\gamma$ appropriately in order to give a reasonable reward. Indeed, with $\gamma=1.0$, there is no problem, and, with $\gamma$ very near to $1.0$, the negative reward is tolerable. Personally, I don't like it because it means that $\gamma$ somehow depends on the reward.

What do you think?

",15517,,2444,,11/9/2020 17:35,11/9/2020 17:35,What should I do when the potential value of a state is too high?,,2,0,,,,CC BY-SA 4.0 6315,2,,6313,5/9/2018 5:27,,1,,"

I recall someone (my prof, probably) saying that the difference is that intelligence is a problem-solving capability, while rationality refers more to the capability to apply one's intelligence.

ex: You are smart for knowing that sleeping late is bad for your health, but if you still sleep late then you are irrational.

In that sense then, rationality is like a meta-problem-solving skill perhaps?

",6779,,,,,5/9/2018 5:27,,,,1,,,,CC BY-SA 4.0 6316,2,,6313,5/9/2018 8:19,,2,,"

From Norvig and Russel definitions of rationality:

  • Thinking Rationally - The Greek philosopher Aristotle was one of the first to attempt to codify “right thinking,” that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises—for example, “Socrates is a man; all men are mortal; therefore, Socrates is mortal.” These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
  • Acting Rationally - An agent is just something that acts. Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. In the “laws of thought” approach to AI, the emphasis was on correct inferences. Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one’s goals and then to act on that conclusion. On the other hand, correct inference is not all of rationality; in some situations, there is no provably correct thing to do, but something must still be done. There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.

Clearly and also intuitively, rationality is well defined.

Intelligence as seen form mathematical and computational approach:

Intelligence can be the ability for an agent to make rational or irrational decisions, on a varying time frame and also choose the level of rationality (strictly in a computational sense). For example, I have exams and I want to watch TV, on a time frame of a week/month the rational decision would be to study so that I can enjoy the fruits of my labor which will be much more than the instantaneous pleasure of TV (also I can watch reruns). But for a time frame of an hour watching TV is definitely the most rewarding thing. So intelligence can be defined as the capability in deciding the length of time frame to be rational (what we call visionaries those who can see rewards far in the future).

Also as Game Theory or economics suggest, we can have different definitions for rationality depending on our needs. Thus, watching TV to gain knowledge might be more important to someone than studying, so effectively he has a different rationality function (arbitrary made-up term) to satisfy. Thus, intelligence can be deciding our rationality function, based on our needs and external experiences (learning in a nutshell). Also we may decide to minimize our rationality cost function or leave it in an intermediate state (which can be thought of as the minima for a different rationality cost function, thus only rationality functions are the true variable and not the intelligent decision to minimize it or not).

Let's take the example of bees (I am not sure whether this is the correct interpretation, though): bees can hardly be called intelligent (no foresight), but they are rational. They perform the task assigned to them with efficiency and toil (even though this does not reward the bee itself, it rewards the genes carried by the bees and ensures their survival through the queen - this can be thought of as evolutionarily coded intelligence). Bees perform these jobs in an apparently rational way, which has been decided by thousands of years of evolution. Though bees individually are of hardly any importance, together they create a truly intelligent community - taking smart decisions and actions, albeit with farsightedness only within a smaller time frame compared to humans. Thus, in common terms, it can be thought that rationality almost always leads to intelligence, but the same cannot be said vice versa (since with intelligence you now have a choice of rationality function, and you can choose not to satisfy it or choose an irrational one).

An important consequence of bees not being intelligent is that they are always performing rational actions as hard-coded in their genes, which causes the entire colony to behave in an intelligent and very optimized, energy-efficient way (but there may be better strategies; we can never know whether they follow the best strategy unless we account for all the variables).

TL;DR: Intelligence can be thought of as the ability of an agent to choose the amount of rationality it wishes to satisfy. Using mathematics we can always find one or more completely rational methods of solving a problem, with the variables being time and the environment. But intelligent beings can add more variables, like their experience, needs and motivation. Intelligent beings can, to some extent, single-handedly manipulate the external environment to suit their needs.

From psychological viewpoint:

Here are a few definitions of different types of intelligence and learning - Quite good and concise.

IQ - In science, the term intelligence typically refers to what we could call academic or cognitive intelligence. In their book on intelligence, professors Resing and Drenth (2007)* answer the question 'What is intelligence?' using the following definition: ""The whole of cognitive or intellectual abilities required to obtain knowledge, and to use that knowledge in a good way to solve problems that have a well described goal and structure.""

Intelligence - Wikipedia - Actually has some good definitions.

Intelligence Quotient - Wikipedia

Emotional Intelligence - Emotional intelligence is the ability to identify and manage your own emotions and the emotions of others. It is generally said to include three skills: emotional awareness; the ability to harness emotions and apply them to tasks like thinking and problem solving; and the ability to manage emotions, which includes regulating your own emotions and cheering up or calming down other people.

Emotional Intelligence - PsychCentral

Emotional Intelligence - Wikipedia

Hope this is of some insight!

",,user9947,,user9947,5/10/2018 8:55,5/10/2018 8:55,,,,3,,,,CC BY-SA 4.0 6317,1,6321,,5/9/2018 8:41,,6,1667,"

In the paper Deterministic Policy Gradient Algorithms, I am really confused about sections 4.1 and 4.2, which are ""On and Off-Policy Deterministic Actor-Critic"".

I don't know what the difference between the two algorithms is.

I only noticed that equations 11 and 16 are different, and the difference is in the action part of the Q function, which is $a_{t+1}$ in equation 11 and $\mu(s_{t+1})$ in equation 16. If that's what really matters, how can I calculate $a_{t+1}$ in equation 11?

",15525,,2444,,2/15/2019 15:35,4/24/2019 9:29,What is the difference between on and off-policy deterministic actor-critic?,,2,0,,,,CC BY-SA 4.0 6318,1,6319,,5/9/2018 9:28,,1,262,"

I'm working with a data set where the data is stored in a string such as AxByCyA where A, B and C are actions and v,w,x,y,z are times between the actions (each letter represents an interval of time). It's worth noting that B cannot occur without A, and C cannot occur without B, and C is the action I'm attempting to study (ie: I'd like to be able to predict whether a user will do C based on their prior actions).

I intend to create 2 clusters: people who do C and those who don't.

From this data set, I build a training array to run the scikit-learn (Python) k-means algorithm on, containing the number of As, the number of Bs, the mean time between actions (calculated using the average of each interval) and the standard deviation of the intervals.

This gives me an overall success rate of 82% on the test set, but is there anything I can do for more accuracy?

",12940,,2444,,12/28/2021 9:14,12/28/2021 9:14,How to refine K-means clustering on a data set?,,1,1,,,,CC BY-SA 4.0 6319,2,,6318,5/9/2018 9:54,,5,,"

The usual parameters to adjust in a k-means (a minimal scikit-learn sketch follows the list):

  1. Number of clusters (recall many clusters can have the same label).
  2. Distance definition (Euclidean is the most basic; a Gaussian distance is an improvement).
  3. Selection of the initial cluster positions.
  4. Data preprocessing (data normalization, ...)
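
A minimal scikit-learn sketch of points 1, 3 and 4 (X is assumed to be your own feature array; note that scikit-learn's KMeans only supports the Euclidean distance, so point 2 would need a different implementation):

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)   # 4. preprocessing / normalization
kmeans = KMeans(n_clusters=4,                  # 1. try more clusters than labels
                init='k-means++',              # 3. smarter initial cluster positions
                n_init=20,                     # rerun with several random seeds
                random_state=0)
clusters = kmeans.fit_predict(X_scaled)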
",12630,,12630,,5/9/2018 10:09,5/9/2018 10:09,,,,0,,,,CC BY-SA 4.0 6321,2,,6317,5/9/2018 13:22,,4,,"

The twist here is that the $a_{t+1}$ in (11) and the $\mu(s_{t+1})$ in (16) are the same and actually the $a_t$ in the on-policy case and the $a_t$ in the off-policy case are different.

The key to the understanding is that in on-policy algorithms you have to use actions (and generally speaking trajectories) generated by the policy in the updating steps (to improve the policy itself). This means that in the on-policy case $a_i = \mu(s_{i})$ (in equations 11-13).

Whereas in the off-policy case you can use any trajectory to improve your value/action-value functions, which means that the actions $a_t$ can be generated by any distribution, $a_t \sim \pi(a_t \mid s_t)$. In (16) the algorithm explicitly states, however, that the action-value function ($Q^w$) has to be evaluated at $\mu(s_{t+1})$ (just like in the on-policy case) and not at $a_t$, which was the actual action in the trajectory generated by policy $\pi$.

",8448,,,,,5/9/2018 13:22,,,,0,,,,CC BY-SA 4.0 6323,2,,3428,5/9/2018 14:21,,7,,"

As @Thomas W said, you can be pretty imaginative when you're developing mutation and crossover methods. Each problem has its own characteristics and, therefore, requires a different strategy.

BUT, from my experience, I'd say that 90% of crossovers and mutations on real-number genotypes are handled using the BLX-α algorithm.

Crossover:

This algorithm is really simple. Given the parents X and Y and an α value (inside the range [0,1], generally around 0.1/0.15, but it depends on the problem), for each gene of your genotype (a minimal sketch follows this list):

  1. extract the genes xi and yi
  2. find the minimum and the maximum of the two values
  3. the new gene will be a random number in the interval [min - range * α, max + range * α], where range = max - min
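
A minimal Python sketch of this crossover (genotypes are assumed to be plain lists of floats):

import random

def blx_alpha_crossover(parent_x, parent_y, alpha=0.1):
    # BLX-alpha crossover for real-valued genotypes
    child = []
    for xi, yi in zip(parent_x, parent_y):
        lo, hi = min(xi, yi), max(xi, yi)
        gene_range = hi - lo
        # new gene drawn uniformly from the interval extended by alpha on each side
        child.append(random.uniform(lo - gene_range * alpha,
                                    hi + gene_range * alpha))
    return child

# example
print(blx_alpha_crossover([0.2, 1.5, -0.7], [0.4, 1.0, -0.3], alpha=0.15))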

A variation of this algorithm is BLX-αβ, in which we take into account which parent performed better and use two constants (α > β) to increase the probability that the new value will be closer to that of the fitter parent.

Mutation:

With the mutation the situation is similar: we need to get a random value that is related to our problem domain (we do not want the mutations to be destructive! They have the function of exploring the space).
In these cases it is useful to determine a range for the mutation and use that range to find the new value of the gene using BLX-α.

A more sophisticated mutation algorithm can be achieved using BLX-α on boundaries that depend on the actual value of the gene and the fitness function of the individual.
Let's imagine that our individual performs in a very bad way; in that case the mutation operator will be used to shift the individual to a distant point in the search space, where it will probably perform better.
On the other hand, if the individual is already fit we may not want to introduce some dramatic changes using the mutation. In that case the mutation range would be more contained and would have the function of tuning the genotype instead of exploring for better alternatives.

",15530,,15530,,5/13/2018 14:37,5/13/2018 14:37,,,,0,,,,CC BY-SA 4.0 6325,1,,,5/9/2018 15:35,,11,406,"

Imagine trying to create a simulated virtual environment that is complicated enough to create a ""general AI"" (which I define as a self-aware AI) but is as simple as possible. What would this minimal environment be like?

i.e. An environment that was just a chess game would be too simple. A chess program cannot be a general AI.

An environment with multiple agents playing chess and communicating their results to each other: would this constitute a general AI? (Can you say a chess grandmaster who thinks about chess all day long has 'general AI'? During his time thinking about chess, is he any different from a chess computer?)

What about a 3D sim-like world? That seems to be too complicated. After all, why can't a general AI exist in a 2D world?

What would be an example of a simple environment but not too simple such that the AI(s) can have self-awareness?

",4199,,1671,,5/11/2018 13:52,8/26/2022 6:19,What kind of simulated environment is complex enough to develop a general AI?,,5,0,,,,CC BY-SA 4.0 6326,2,,6300,5/9/2018 16:12,,7,,"

When crossover happens and one parent is fitter than the other, the nodes from the more fit parent are carried over to the child. This is the case because disjoint and excess genes are only carried over from the fittest parent. Here's an example:

// Node Crossover
Parent 1 Nodes: {[0][1][2]} // more fit parent
Parent 2 Nodes: {[0][1][2][3]}

Child Nodes:    {[0][1][2]} // after crossover

Gene information is also passed to the child during crossover. Matching genes (those that have the same innovation number) are chosen at random and passed to the child. The disjoint and excess genes are chosen from the more fit parent.

// Gene Crossover
Parent 1 Genes: [1][2][3]      [6]   [8][9][10] // more fit parent
Parent 2 Genes: [1][2][3][4][5]   [7]

Child Genes: [1][2][3][6][8][9][10] // after crossover

As you can see, the gene innovation numbers in the child match up with the innovation numbers of the fittest parent. However, the gene information from matching genes (in the example, genes 1, 2 and 3 match) has an equal chance of being carried over from either parent. In the example, the child's first three genes could have come from either parent.
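
A minimal Python sketch of that gene crossover rule. Here each parent's genes are assumed to be stored in a dict keyed by innovation number (an implementation detail not taken from the paper), with parent1 being the fitter parent:

import random

def crossover(parent1, parent2):
    # matching genes are picked at random; disjoint/excess genes come from parent1 (the fitter one)
    child = {}
    for innovation, gene in parent1.items():
        if innovation in parent2 and random.random() < 0.5:
            child[innovation] = parent2[innovation]   # matching gene taken from the other parent
        else:
            child[innovation] = gene                  # matching (50%) or disjoint/excess gene
    return child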

",15356,,15356,,12/23/2021 0:09,12/23/2021 0:09,,,,0,,,,CC BY-SA 4.0 6329,1,,,5/10/2018 0:34,,2,437,"

I want my neural network structure to not have a circular/looping structure, something like a directed acyclic graph (DAG). How do I do that?

",15490,,2444,,2/17/2019 21:11,8/13/2020 13:55,How do I restrict the neural network structure to be acyclic in NEAT?,,2,0,,,,CC BY-SA 4.0 6330,1,6340,,5/10/2018 1:01,,0,95,"

From https://stackoverflow.com/questions/36370129/does-tensorflow-use-automatic-or-symbolic-gradients, I understood that TensorFlow requires all the operations in the graph to be explicit formulas (instead of black boxes, such as raw Python functions) in order to do automatic differentiation. It then does some kind of gradient descent based on that for the minimization.

I'm wondering, since it already knows all the explicit formulas, can it directly find the minimum by examining the equations themselves? For example, by computing the points where the gradient is zero or does not exist, and then doing some kind of processing to find the minimum.

I found it is simple to do this ""symbolic minimization"" with few variables, such as minimizing $\sum_i (a_i - v)^2$, where $v$ is the trainable variable and the $a_i$ are all the training samples. I'm not sure whether there is a general way, though.

",15546,,,,,5/10/2018 9:31,"Can TensorFlow minimize ""symbolically""",,1,0,,12/21/2021 18:27,,CC BY-SA 4.0 6338,1,,,5/10/2018 9:03,,0,165,"

What is Bayes' theorem? How does it relate to conditional probabilities?

",15551,,2444,,8/11/2020 20:18,8/11/2020 20:18,What is Bayes' theorem?,,1,1,,,,CC BY-SA 4.0 6339,2,,6314,5/10/2018 9:26,,3,,"

I don't think the situation you're sketching should be a problem at all.

If $P(s)$ is high (e.g. $P(s) = 1000$), this means (according to your shaping / ""heuristic"") that it's valuable to be in the state $s$, that you expect to be able to get high future returns from that state.

If you then continuously take actions that keep you in the same state $s$, it is essentially correct to punish these actions (with negative rewards); you were expecting to get high future returns starting from that state, but you just remain stuck in that state. This means that the actions you're taking are not providing the rewards you were expecting, so it should be ""punished"".

Of course, the exception to the above paragraph is the case where the transition from $s$ to $s$ (staying in the same state) does result in a positive ""true reward"" (actual/real reward, not shaping-reward). In that case, the true reward will offset the reduction in potential and result in a neutral or positive combined reward if it is sufficiently large.

As for the $\gamma$ parameter, a $\gamma < 1.0$ (such as $\gamma = 0.99$) intuitively means that you prioritize short-term rewards over long-term rewards (if the magnitudes of the undiscounted rewards are similar). This implies that what you wrote in the question is precisely what should happen; if you prioritize short-term rewards over long-term rewards, it's bad to ""waste time"" by staying in the same state, so actions that don't move you anywhere should be disincentivized through negative rewards. If $\gamma = 1.0$, you have no preference for short-term rewards over long-term rewards, so you'll be fine with ""wasting time"" by staying in the same state, and therefore no longer have to punish such actions.

",1641,,2444,,9/15/2019 16:24,9/15/2019 16:24,,,,4,,,,CC BY-SA 4.0 6340,2,,6330,5/10/2018 9:31,,0,,"

If by ""symbolic"" you mean finding an analytical solution, that is, an equation for each weight, then the answer is no. The example you chose results in a system linear equations, which can be solved analytically. However once you introduce non linearities (by using activation functions with more than one layer), most non trivial cases will have no analytical solution and will need to be solved numerically. This is not a problem specific to tensorflow, it is a mathematical issue, it will not be possible on any language, current or future. Unless there is some revolution in math first.

",30433,,,,,5/10/2018 9:31,,,,0,,,,CC BY-SA 4.0 6342,1,,,5/10/2018 17:01,,1,17,"

Coming from the YT videos of 3Blue1Brown, which showed that the individual layers do not have discernible shapes in the case of handwritten letter recognition, I wondered if you could penalize dispersed shapes while training, thus creating connected shapes (at least on the first layer in the beginning). That way, you may be better able to understand the propagation of your algorithm through the layers.

Thanks, Jonny

",15563,,,,,5/10/2018 17:01,Can you make the first layer of a net have discernible shapes?,,0,0,,,,CC BY-SA 4.0 6343,1,,,5/10/2018 18:43,,2,5254,"

I'm currently using 3Blue1Brown's tutorial series on neural networks and lack extensive calculus knowledge/experience.

I'm using the following equations to calculate the gradients for weights and biases as well as the equations to find the derivative of the cost with respect to a hidden layer neuron:

The issue is, during backpropagation, the gradients keep cancelling each other out because I take an average over opposing training examples. That is, if I have two training labels [1, 0] and [0, 1], the gradients that adjust for the first label get reversed by the second label because an average of the gradients is taken. The network simply keeps outputting the average of these two, which causes it to always output [0.5, 0.5], regardless of the input.

To prevent this, I figured a softmax function would be required for the last layer instead of a sigmoid, which I used for all the layers.

However, I have no idea how to implement this. The math is difficult to understand and the notation is complicated for me.

The equations I provided above show the term: σ'(z), which is the derivative of the sigmoid function.

If I'm using softmax, how am I supposed to substitute sigmoid with it?

If I'm not mistaken, the softmax function doesn't just take one number analogous to the sigmoid, and uses all the outputs and labels.

To sum it up, the things I'd like to know and understand are:

  1. The equation for the neuron in every layer besides the output is: σ(w1x1 + w2x2 + ... + wnxn + b). How am I supposed to make an analogous equation with softmax for the output layer?
  2. After using (1) for forward propagation, how am I supposed to replace the σ'(z) term in the equations above with something analogous to softmax to calculate the partial derivative of the cost with respect to the weights, biases, and hidden layers?
",,user9432,,user9432,12/15/2018 2:03,12/15/2018 2:03,How do I implement softmax forward propagation and backpropagation to replace sigmoid in a neural network?,,1,1,,,,CC BY-SA 4.0 6344,1,6394,,5/10/2018 19:23,,5,1481,"

After learning the basics of neural networks and coding one working with the MNIST dataset, I wanted to go to the next step by making one which is able to play a game. I wanted to make it work on a game like slither.io. So, in order to be able to create multiple instances of snakes and accelerate the speed of the game, I recreated a simple version of the game:

The core features being almost done, now comes the work on the AI. I want to keep the script very simple by using only NumPy (not that TensorFlow, PyTorch, or Spark does not interest me, but I want to understand things at a "low level" before using those frameworks).

At first, I wanted the AI to be able to propose an output by reading pixels. But after some research, I don't really want to get into convnet, recurrent, and recursive neural net. I'd like to re-use the simple feed-forward NN I did with MNIST and adapt it.

So, instead of using pixels, I think I'm going to use the following data:

  • {x,y} snake's position
  • {x,y} foods positions
  • food value
  • Time, in order to get the snake to eat more food in a short time.
  • Distance from the center, not die outside the area

That's a lot of different data to handle!

My questions

  1. Can a simple FNN handle different kinds of data in the input layer?
  2. Will it properly work with a variable number of inputs?

In fact, in a specific area around the snake, the quantity of food will be variable. I came across this post, which kind of answers my question, but what if I want the neural network to ignore some inputs when they are not being used? Can dropout be of any use in this case, or will the values of the weights of these inputs (being corrected toward zero) be enough?

",15564,,2444,,12/20/2021 23:34,12/20/2021 23:37,How to handle varying types and length of inputs in a feedforward neural network?,,1,0,,,,CC BY-SA 4.0 6366,1,,,5/11/2018 5:11,,2,560,"

Goal - I am trying to implement a genetic algorithm to optimise the fitness of a species of creatures in a simulated two-dimensional world. The world contains edible foods, placed at random, and a population of monsters (your basic zombies). I need the algorithm to find behaviours that keep the creatures well fed and not dead.

What i have done -

So I start off by generating an 11x9 2D array in NumPy, filled with random floats between 0 and 1. I then use np.matmul to multiply the chromosome by the percepts, so that each row of weights produces one action value (a1 = w1*p1 + w2*p2 + ... + w9*p9).

This first generation is run, and I then evaluate the fitness of each creature using (energy + (time of death * 100)). From this, I build a list of creatures who performed above the average fitness. I then take the best of these ""elite"" creatures and put them back into the next population. For the remaining space, I use a crossover function which takes two randomly selected ""elite"" creatures and mixes their genes. I have tested two different crossover functions: one which does a two-point crossover on each row, and one which takes a row from each parent until the new child has a complete chromosome. My issue is that the creatures just don't really seem to be learning; at 75 turns I will only get 1 survivor every so often.

I am fully aware this might not be enough to go off but I am truly stuck on this and cannot figure out how to get these creatures to learn even though I think I am implementing the correct procedures. Occasionally I will get a 3-4 survivors rather than 1 or 2 but it appears to occur completely randomly, doesn't seem like there is much learning happening.

Below is the main section of code, it includes everything I have done but none of the provided code for the simulation

#!/usr/bin/env python
from cosc343world import Creature, World
import numpy as np
import time
import matplotlib.pyplot as plt
import random
import itertools


# You can change this number to specify how many generations creatures are going to evolve over.
numGenerations = 2000

# You can change this number to specify how many turns there are in the simulation of the world for a given generation.
numTurns = 75

# You can change this number to change the world type.  You have two choices - world 1 or 2 (described in
# the assignment 2 pdf document).
worldType=2

# You can change this number to modify the world size.
gridSize=24

# You can set this mode to True to have the same initial conditions for each simulation in each generation - good
# for development, when you want to have some determinism in how the world runs from generation to generation.
repeatableMode=False

# This is a class implementing you creature a.k.a MyCreature.  It extends the basic Creature, which provides the
# basic functionality of the creature for the world simulation.  Your job is to implement the AgentFunction
# that controls creature's behaviour by producing actions in response to percepts.
class MyCreature(Creature):

    # Initialisation function.  This is where your creature
    # should be initialised with a chromosome in a random state.  You need to decide the format of your
    # chromosome and the model that it's going to parametrise.
    #
    # Input: numPercepts - the size of the percepts list that the creature will receive in each turn
    #        numActions - the size of the actions list that the creature must create on each turn
    def __init__(self, numPercepts, numActions):

        # Place your initialisation code here.  Ideally this should set up the creature's chromosome
        # and set it to some random state.
        #self.chromosome = np.random.uniform(0, 10, size=numActions)
        self.chromosome = np.random.rand(11,9)
        self.fitness = 0
        #print(self.chromosome[1][1].size)

        # Do not remove this line at the end - it calls the constructors of the parent class.
        Creature.__init__(self)


    # This is the implementation of the agent function, which will be invoked on every turn of the simulation,
    # giving your creature a chance to perform an action.  You need to implement a model here that takes its parameters
    # from the chromosome and produces a set of actions from the provided percepts.
    #
    # Input: percepts - a list of percepts
    #        numAction - the size of the actions list that needs to be returned
    def AgentFunction(self, percepts, numActions):

        # At the moment the percepts are ignored and the actions is a list of random numbers.  You need to
        # replace this with some model that maps percepts to actions.  The model
        # should be parametrised by the chromosome.

        #actions = np.random.uniform(0, 0, size=numActions)

        actions = np.matmul(self.chromosome, percepts)

        return actions.tolist()


# This function is called after every simulation, passing a list of the old population of creatures, whose fitness
# you need to evaluate and whose chromosomes you can use to create new creatures.
#
# Input: old_population - list of objects of MyCreature type that participated in the last simulation.  You
#                         can query the state of the creatures by using some built-in methods as well as any methods
#                         you decide to add to MyCreature class.  The length of the list is the size of
#                         the population.  You need to generate a new population of the same size.  Creatures from
#                         old population can be used in the new population - simulation will reset them to their
#                         starting state (not dead, new health, etc.).
#
# Returns: a list of MyCreature objects of the same length as the old_population.

def selection(old_population, fitnessScore):
    elite_creatures = []
    for individual in old_population:
        if individual.fitness > fitnessScore:
            elite_creatures.append(individual)

    elite_creatures.sort(key=lambda x: x.fitness, reverse=True)

    return elite_creatures

# Two-point crossover: for each row, the middle segment is swapped between the two parents.
def crossOver(creature1, creature2):
    child1 = MyCreature(11, 9)
    child2 = MyCreature(11, 9)
    child1_chromosome = []
    child2_chromosome = []

    #print(""parent1"", creature1.chromosome)
    #print(""parent2"", creature2.chromosome)

    for row in range(11):
        chromosome1 = creature1.chromosome[row]
        chromosome2 = creature2.chromosome[row]

        index1 = random.randint(1, 9 - 2)
        index2 = random.randint(1, 9 - 2)

        if index2 >= index1:
            index2 += 1
        else:  # Swap the two cx points
            index1, index2 = index2, index1

        child1_chromosome.append(np.concatenate([chromosome1[:index1],chromosome2[index1:index2],chromosome1[index2:]]))
        child2_chromosome.append(np.concatenate([chromosome2[:index1],chromosome1[index1:index2],chromosome2[index2:]]))

    child1.chromosome = child1_chromosome
    child2.chromosome = child2_chromosome

    #print(""child1"", child1_chromosome)

    return(child1, child2)

# Row-wise crossover: the child takes alternating rows from each parent.
def crossOverRows(creature1, creature2):
    child = MyCreature(11, 9)

    child_chromosome = np.empty([11,9])

    i = 0

    while i < 11:
        if i != 10:
            child_chromosome[i] = creature1.chromosome[i]
            child_chromosome[i+1] = creature2.chromosome[i+1]
        else:
            child_chromosome[i] = creature1.chromosome[i]

        i += 2

    child.chromosome = child_chromosome

    return child

    # print(""parent1"", creature1.chromosome[:3])
    # print(""parent2"", creature2.chromosome[:3])
    # print(""crossover rows "", child_chromosome[:3])


def newPopulation(old_population):
    global numTurns

    nSurvivors = 0
    avgLifeTime = 0
    fitnessScore = 0
    fitnessScores = []

    # For each individual you can extract the following information left over
    # from the evaluation.  This will allow you to figure out how well an individual did in the
    # simulation of the world: whether the creature is dead or not, how much
    # energy did the creature have at the end of the simulation (0 if dead), the tick number
    # indicating the time of creature's death (if dead).  You should use this information to build
    # a fitness function that scores how the individual did in the simulation.
    for individual in old_population:

        # You can read the creature's energy at the end of the simulation - it will be 0 if creature is dead.
        energy = individual.getEnergy()

        # This method tells you if the creature died during the simulation
        dead = individual.isDead()

        # If the creature is dead, you can get its time of death (in units of turns)
        if dead:
            timeOfDeath = individual.timeOfDeath()
            avgLifeTime += timeOfDeath
        else:
            nSurvivors += 1
            avgLifeTime += numTurns

        if individual.isDead() == False:
            timeOfDeath = numTurns

        individual.fitness = energy + (timeOfDeath * 100)
        fitnessScores.append(individual.fitness)
        fitnessScore += individual.fitness
        #print(""fitnessscore"", individual.fitness, ""energy"", energy, ""time of death"", timeOfDeath, ""is dead"", individual.isDead())

    fitnessScore = fitnessScore / len(old_population)

    eliteCreatures = selection(old_population, fitnessScore)

    print(len(eliteCreatures))

    newSet = []

    for i in range(int(len(eliteCreatures)/2)):
        if eliteCreatures[i].isDead() == False:
            newSet.append(eliteCreatures[i])

    print(len(newSet), "" elites added to pop"")

    remainingRequired = w.maxNumCreatures() - len(newSet)

    i = 1

    while i in range(int(remainingRequired)):
        newSet.append(crossOver(eliteCreatures[i], eliteCreatures[i-1])[0])
        if i >= (len(eliteCreatures)-2):
            i = 1
        i += 1

        remainingRequired = w.maxNumCreatures() - len(newSet)


    # Here are some statistics, which you may or may not find useful
    avgLifeTime = float(avgLifeTime)/float(len(population))
    print(""Simulation stats:"")
    print(""  Survivors    : %d out of %d"" % (nSurvivors, len(population)))
    print(""  Average Fitness Score :"", fitnessScore)
    print(""  Avg life time: %.1f turns"" % avgLifeTime)

    # The information gathered above should allow you to build a fitness function that evaluates fitness of
    # every creature.  You should show the average fitness, but also use the fitness for selecting parents and
    # spawning the new creatures.


    # Based on the fitness you should select individuals for reproduction and create a
    # new population.  At the moment this is not done, and the same population with the same number
    # of individuals is returned for the next generation.

    new_population = newSet

    return new_population

# Pygame window sometime doesn't spawn unless Matplotlib figure is not created, so best to keep the following two
# calls here.  You might also want to use matplotlib for plotting average fitness over generations.
plt.close('all')
fh=plt.figure()

# Create the world.  The worldType specifies the type of world to use (there are two types to chose from);
# gridSize specifies the size of the world, repeatable parameter allows you to run the simulation in exactly same way.
w = World(worldType=worldType, gridSize=gridSize, repeatable=repeatableMode)

#Get the number of creatures in the world
numCreatures = w.maxNumCreatures()

#Get the number of creature percepts
numCreaturePercepts = w.numCreaturePercepts()

#Get the number of creature actions
numCreatureActions = w.numCreatureActions()

# Create a list of initial creatures - instantiations of the MyCreature class that you implemented
population = list()
for i in range(numCreatures):
   c = MyCreature(numCreaturePercepts, numCreatureActions)
   population.append(c)

# Pass the first population to the world simulator
w.setNextGeneration(population)

# Runs the simulation to evaluate the first population
w.evaluate(numTurns)

# Show the visualisation of the initial creature behaviour (you can change the speed of the animation to 'slow',
# 'normal' or 'fast')
w.show_simulation(titleStr='Initial population', speed='normal')

for i in range(numGenerations):
    print(""\nGeneration %d:"" % (i+1))

    # Create a new population from the old one
    population = newPopulation(population)

    # Pass the new population to the world simulator
    w.setNextGeneration(population)

    # Run the simulation again to evaluate the next population
    w.evaluate(numTurns)

    # Show the visualisation of the final generation (you can change the speed of the animation to 'slow', 'normal' or
    # 'fast')
    if i==numGenerations-1:
        w.show_simulation(titleStr='Final population', speed='normal')
",15571,,1671,,5/11/2018 13:26,8/8/2018 15:38,Genetic Algorithm - creatures in 2d world are not learning,,1,2,,,,CC BY-SA 4.0 6368,1,,,5/11/2018 8:45,,4,556,"

I read about softmax from this article. Apparently, these 2 are similar, except that the probabilities of all classes in softmax add up to 1. According to their last paragraph, when the number of classes is 2, softmax reduces to LR. What I want to know is: other than the case where the number of classes is 2, what are the essential differences between LR and softmax? For example, in terms of:

  • Performance.
  • Computational Requirements.
  • Ease of calculation of derivatives.
  • Ease of visualization.
  • Number of minima in the convex cost function, etc.

Other differences are also welcome!

I am asking for relative comparisons only, so that at the time of implementation I have no difficulty in selecting which method of implementation to use.

",,user9947,2444,user9947,1/5/2022 10:55,1/5/2022 10:56,What are the differences between softmax regression and logistic regression (other than when the number of classes is 2)?,,2,0,,,,CC BY-SA 4.0 6369,2,,6368,5/11/2018 11:15,,3,,"

As written, SoftMax is a generalization of Logistic Regression.

Hence:

  1. Performance: If the model has more than 2 classes then you can't compare. Given K = 2 they are the same.

  2. Computational Requirements: Hard to compare in general; the computational requirements depend on the data, on having enough memory to hold it, and on having enough time to let the training run.

  3. Ease of Calculation of Derivatives: The cost function is a summation, hence once you derive it for one element you have it for all.

  4. Ease of Visualization: Well, it is easy to visualize the Confusion Matrix even for K = 10 classes. So no issue here.

  5. Cost Function: The cost function is convex, yet not strictly convex, hence there is an infinite number of minima (all with the same cost).
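
To see the K = 2 equivalence concretely, here is a minimal numpy sketch (purely illustrative, with arbitrary weights and features) showing that a two-class softmax over the scores w0@x and w1@x gives the same probability as a logistic sigmoid applied to (w1 - w0)@x:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = np.array([0.5, -1.2, 3.0])                       # an arbitrary feature vector
w0 = np.array([0.1, 0.4, -0.3])                      # class-0 weights
w1 = np.array([-0.2, 0.7, 0.5])                      # class-1 weights

p_softmax = softmax(np.array([w0 @ x, w1 @ x]))[1]   # P(class 1) under softmax
p_logistic = sigmoid((w1 - w0) @ x)                  # P(class 1) under logistic regression

print(p_softmax, p_logistic)                         # the two values coincide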

",1725,,2444,,1/5/2022 10:56,1/5/2022 10:56,,,,1,,,,CC BY-SA 4.0 6370,1,,,5/11/2018 12:07,,1,56,"

In machine learning (in particular, supervised learning), if some new data changes the previous model/function drastically, then I think we should study that data. Does this happen in practice? How should such a situation be handled?

",15368,,2444,,10/28/2021 16:57,10/28/2021 16:57,What should we do when the new data drastically change the current model?,,0,2,,,,CC BY-SA 4.0 6371,2,,6325,5/11/2018 13:38,,3,,"

General AI can absolutely exist in a 2D world, just that a generalized AI (defined here as ""consistent strength across a set of problems"") in this context would still be quite distinct from an Artificial General Intelligence, defined as ""an algorithm that can perform any intellectual task that a human can.""

Even there, the definition of AGI is fuzzy, because ""which human?"" (Human intelligence is a spectrum, where individuals possess different degrees of problem solving capability in different contexts.)


Artificial Consciousness: Unfortunately, self-awareness / consciousness is a heavily metaphysical issue, distinct from problem-solving capability (intelligence).

You definitely want to look at the ""Chinese Room"" and rebuttals.


Probably worth looking at the holographic principle: ""a concept in physics whereby a space is considered as a hologram of n-1 dimensions."" Certainly models and games can be structured in this way.

Another place to explore is theories of emergence of superintelligence on infinite Conway's Game of Life. (In a nutshell, my understanding is that once researchers figured out how to generate any number within the cellular automaton, the possibility of emergent sentience given a gameboard of sufficient size is at least theoretically sound.)

",1671,,1671,,5/11/2018 17:30,5/11/2018 17:30,,,,0,,,,CC BY-SA 4.0 6375,2,,6325,5/11/2018 19:58,,3,,"

I think the most important thing is that it has to have time simulated in some way. Think of a self-aware chatbot. Then, to be ""self aware"", the environment could be data that is fed in through time and that can be distinguished as being ""self"" and ""other"". By that I suppose I mean ""self"" is the part it influences directly and ""other"" is the part that is influenced indirectly or not at all. Other than that, it probably can live inside pretty abstract environments. The reason time is so important is that without it the cognitive algorithm is just solving a math problem.

",15580,,,,,5/11/2018 19:58,,,,4,,,,CC BY-SA 4.0 6376,2,,6338,5/11/2018 20:19,,2,,"

Bayes' theorem relates conditional probabilities:

$$P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}$$
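
As a small worked example (with made-up numbers): suppose a disease $A$ affects 1% of a population, a test $B$ detects it 90% of the time, and it also fires on 5% of healthy people. Then the probability of actually having the disease given a positive test is

$$P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99} \approx 0.15$$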

",15533,,2444,,5/1/2019 21:11,5/1/2019 21:11,,,,0,,,,CC BY-SA 4.0 6377,1,,,5/12/2018 4:31,,3,2051,"

Why is it a bad idea to have a momentum factor greater than 1? What are the mathematical motivations/reasons?

",15587,,2444,user9947,12/20/2021 23:39,12/20/2021 23:39,Why must the momentum factor be in the range 0-1?,,3,0,,,,CC BY-SA 4.0 6378,2,,6343,5/12/2018 6:54,,2,,"

One of the best ways to learn is to use references written by others.

Have a look at Peter Sadowski - Notes on Backpropagation (page 3).

There is also a great blog post by Eli Bendersky - The Softmax Function and Its Derivative.

",1725,,1725,,5/12/2018 7:04,5/12/2018 7:04,,,,0,,,,CC BY-SA 4.0 6379,2,,6325,5/12/2018 9:43,,6,,"

I will skip all thematic about "what is an AGI", "simulation game", ... These topics have been discussed during decades and nowadays they are, in my opinion, a dead end.

Thus, I can only answer with my personal experience:

It is a basic theorem in computing that any number of dimensions, including temporal one, in a finite size space, can be reduced to 1D.

However, in practical examples, the 1D representation becomes hard to analyze and visualize. It is more practical work with graphs, that can be seen as an intermediate between 1D and 2D. Graphs allows representation of all necessary facts and relations.

By example, if we try to develop an AGI able to work in the area of mathematics, any expression (that humans we write in a 2D representation with rationals, subscripts, integrals, ...) can be represented as 1D (as an expression written in a program source) but this 1D must be parsed to reach the graph that can be analyzed or executed. Thus, the graph that results after the parsing of the expression is the most practical representation.

Another example, if we want an agent that travels across a 3D world, this world can be seen as an empty space with objects that have some properties. Again, after the initial stage of scene analysis and object recognition (the equivalent to the parser in previous example), we reach a graph.

Thus, to really work in the area of AGI, I suggest skip the problems of scene analysis, object recognition, speech recognition (Narrow AI), and work directly over the representative graphs.

",12630,,11539,,8/26/2022 6:19,8/26/2022 6:19,,,,1,,,,CC BY-SA 4.0 6381,1,,,5/12/2018 11:07,,2,570,"

I'm trying to find the optimal policy for the mountain car problem using deep Q-learning with images as input; however, I cannot find a way to get my Q function to give me good solutions (I followed multiple tutorials for similar problems, such as Atari games and Flappy Bird). I'm working in Python with Keras.

The images are given in the following format:

400x400 pixels, where the bar on the bottom right corner represents the speed of the car.

To check where my problem might lie, I thought it would be best if I split the problem by first ensuring that I can find a network which would successfully find the state of the car (position and speed, since it's all we need to find it) by feeding my convolutional network images of random states (uniformly distributed within the state space).

After unsuccessful results, I decided to split it even more by only trying to find the two state variables separately.

This is the best kind of result that I get, when the network doesn't predict the same value for every state (which seems to happen a lot with relu activation). The state space is divided into a 50x50 matrix to make my predictions. The predicted speed is on the left, and the absolute error is on the right.

The images fed to the network are pre-processed the following way:

  1. Grayscale conversion
  2. Resize (I tried 50x50, 100x100 and 150x150)
  3. Values centered around 0 in [-1;1] (this seemed to help a bit with the relu activation)
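
For reference, here is a minimal sketch of that preprocessing pipeline (using OpenCV for illustration; the resize target and scaling constants just follow the description above, this is not my exact code):

import numpy as np
import cv2

def preprocess(frame, size=100):
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)    # 1. grayscale
    small = cv2.resize(gray, (size, size))            # 2. resize
    return small.astype(np.float32) / 127.5 - 1.0     # 3. scale [0, 255] to [-1, 1]

# example usage on a dummy 400x400 RGB frame
dummy = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)
state = preprocess(dummy)
print(state.shape, state.min(), state.max())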

The network I used to try to find the speed starts with a convolution layer (I tried 32 filters of kernel_size=(4,4), (8,8) and (16,16), strides=(1,1) and (2,2), activation = relu, linear or tanh).

Optional additional convolution layers use half the kernel size of the previous layer.

There is an optional last dense layer of dimension 32 with activation relu, linear or tanh.

The output layer is dimension one with linear activation.

The way I train the network is by feeding the fit function 32 random samples, letting the network train for 25 epochs with batch_size 32, and repeating ad libitum.

It's becoming extremely frustrating, especially since my GPU does not meet the requirements for GPU computation, which would let me check results faster.

Can anyone tell me if I'm doing something wrong that I'm missing, and what I can do to improve my method so that the reinforcement learning algorithm eventually works? For example, the size of the training sample, the batch size and epochs of the fit function, the structure of the network, ...

Edit: I finally found a way for my reinforcement learning to converge to the true Q value: each time I run a new episode and put it in my replay memory, I run many fits on different mini-batches. This is something I thought I was already doing by increasing the number of epochs in the parameters of the fit function, but I guess it doesn't work as I thought it would. I'm letting it train a bit and then I will try the same method for the sub-problems mentioned above.

",15590,,15590,,5/13/2018 0:54,5/13/2018 0:54,Mountain car problem with images - not converging,,0,4,,,,CC BY-SA 4.0 6384,2,,6325,5/12/2018 18:46,,2,,"

Of the answers so far, the one from @DukeZhou was the most provocative. For instance, the reference to the Chinese Room critique brings up Searle's contention that some form of intentionality might need support in the artificial environment. This might imply the necessity of a value system or a pain-pleasure system, i.e. something where good consequences can be ""experienced"" or actively sought and bad consequences avoided. Or some potential for individual extinction (death or termination) might need to be recognized. The possibility of ""ego-death"" might need to have a high negative value. That might imply an artificial world should include ""other minds"" or other agents, which the emerging or learning intelligent agent could observe (in some sense) and ""reflect on"", i.e. recognize an intelligence like its own. In this sense the Cartesian syllogism ""I think therefore I am"" gets transmuted into: I (or rather me as an AI) see evidence of others thinking, and ""by gawd, 'I' can, too"". Those ""others"" could be either other learning systems (AGIs) or some contact with discrete inputs from humans mediated by the artificial environment. See the Wikipedia discussion of the ""reverse Turing test"".

The mention of dimensionality should provoke a discussion of what would be the required depth of representation of a ""physics"" of the world external to the AI. Some representation of time and space would seem necessary, i.e., some dimensional substructure for progress to goal attainment. The Blocks World was an early toy problem whose solution provoked optimism in the 60's and 70's of last century that substantial progress was being made. I'm not aware of any effort to program in any pain or pleasure in the SHRDLU program of that era (no dropping blocks on the program's toes), but all the interesting science fiction representations of AI's have some recognition of ""physical"" adverse consequences in the ""real world"".

Edit: I'm going to add a need for ""entities with features"" in this environment that could be ""perceived"" (by any of the ""others"" that are interacting with the AGI) as the data input to efforts at induction, identification, and inference about relationships. This creates a basis for a shared ""experience"".

",15594,,15594,,5/13/2018 2:28,5/13/2018 2:28,,,,0,,,,CC BY-SA 4.0 6386,2,,6377,5/12/2018 19:04,,1,,"

If gradient descent is like walking down a slope, momentum would be the literal momentum of the agent traversing the hyperplane.

Under that analogy then, momentum factor would be analogous to the friction coefficient, with 1 being max friction and 0 being no friction.

You should be able to see why there can't be friction beyond that range: if friction = 1 it would be identical to having no friction; if friction <= 0 then by conservation of energy gradient descent will not find a local minimum; if friction > 1 then gradient descent would be moving backwards.

",6779,,1671,,5/12/2018 22:51,5/12/2018 22:51,,,,2,,,,CC BY-SA 4.0 6389,1,6396,,5/12/2018 22:23,,2,248,"

As the title says, should I reset the exploration rate between trials?

I am currently doing the Open AI pendulum task and after a number of trials my model started playing but did not take any actions (i.e. didn't perform any significant swing). The Actor-Critic tutorial I followed did not reset the exploration rate (link) but it seems like there are lots of mistakes in general.

I assume that it should be reset since the model might start from a new unknown situation in a different trial and not know what to do without exploring.

",13257,,2444,,2/16/2019 2:57,2/16/2019 2:57,Should the exploration rate be reset after each trial in Q-learning?,,1,0,,,,CC BY-SA 4.0 6393,2,,6313,5/13/2018 11:20,,1,,"

You can solve something rationally or with emotions/intuition.

Intelligence can be rational or intuitive. Rational is the newer, more accurate form of intelligence.

Humans use both types of intelligence.

",12251,,,,,5/13/2018 11:20,,,,0,,,,CC BY-SA 4.0 6394,2,,6344,5/13/2018 15:41,,2,,"

In short

  1. ANNs don't have problems with "different types" of data as long as they are represented using real numbers: the inputs for your ANN represent lengths and are easy to understand and process.

  2. The variable number of inputs is a little bit more tricky. In general, it is not a problem either. The net will compensate for the absence of some inputs from time to time (when food is too far away?). You don't necessarily have to use RNNs for your case.

In full

I think that the inputs of your network could be improved.

When building a new ANN, you always have to consider a few aspects:

  • Neural Nets are just stateless functions that map inputs to outputs.
  • There is no magic here: with good inputs, you can get good outputs.
  • Think about how you would write an algorithm to solve that problem. Which would the best set of inputs be? Can some computation be done somewhere else? (using other ANNs, if you want to create ANN chains, or traditional algorithms).
  • Preprocessing can do a big difference!
  • Think about the space separation that your inputs require/cause.

All these points are related to each other and they're only about the inputs you're providing to your net.

Now, keeping these points in mind, you can check again your inputs to see if you can improve something:

Objective 1: eat food

To be sure the snake will get toward the food we definitely need to pass the ANN the position of the food. But are you sure that a long list of coordinates is the best option? Maybe the coordinate of the nearest food can be enough. If you were THAT snake what would you like to receive? Or what would you receive if you were a real snake? Imagine your snake has a sensor that determines the best place to get food. It'll be like a compass, outputting only one value, from 0 to 360 (or choose whatever range you like). This compass will point at the closest bigger cluster of food or at the closest food. Remember: 1 value is always better than N, and you should try to avoid a variable number of inputs; it's always a smell that something isn't going well.
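
A minimal sketch of such a compass input (my own illustration; the coordinate convention is an assumption):

import math

def food_compass(snake_pos, food_positions):
    # angle in degrees (0-360) from the snake to the closest piece of food
    closest = min(food_positions,
                  key=lambda f: (f[0] - snake_pos[0]) ** 2 + (f[1] - snake_pos[1]) ** 2)
    angle = math.degrees(math.atan2(closest[1] - snake_pos[1], closest[0] - snake_pos[0]))
    return angle % 360

print(food_compass((0, 0), [(3, 4), (-10, 2)]))   # about 53.1 degrees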

Objective 2: don't die

The distance from the centre works well. It's a simple threshold that the ANN will consider when moving, that's effective and easy to achieve. Why don't you also add another compass that will always tell you the position of the centre instead of the coordinates of the snake? In this way, you've used 2 values related to each other instead of 3, more difficult to decipher for an ANN.

The time is not really necessary. You're inserting a value that isn't related to any other, or the environment, or the food. Any possible behaviour is not influenced by its value (if the ANN knew that it has only 15 seconds left, it wouldn't change its behaviour. The same if it knew that it's "alive" since 1 minute or 4000).

",15530,,2444,,12/20/2021 23:37,12/20/2021 23:37,,,,0,,,,CC BY-SA 4.0 6395,1,,,5/13/2018 15:48,,1,68,"

On this video

Link to video

a neurologist starts by saying that we do not know how neurons calculate gradients for backpropagation.

At minute 30:39 he's showing faster convergence for ""our algorithm"", which seems to converge faster than backpropagation.

After 34:36 he goes on to explain how ""neurons"" in the brain are actually packs of neurons.

I do not really understand all that he says, so I infer that those packs of neurons (which seem to be depicted as a single layer) are the ones who calculate the gradient. It would make sense if each neuron made a slightly different calculation, and they then communicated the differences in results to each other. That would allow a gradient to be deduced.

What can be deduced, from the presented information, about the purported ""algorithm""? (From the viewpoint of improving convergence of an artificial neural network.)

",15611,,,,,5/13/2018 18:40,"What can be deduced about the ""algorithm"" of backpropagation/gradient descent?",,1,0,,,,CC BY-SA 4.0 6396,2,,6389,5/13/2018 16:06,,5,,"

The exploration rate, typically parameterized as epsilon / ε, can be changed on every trial. It depends on the complexity of the model and the goals.

The simplest thing to do is keep exploration rate high and fixed. That means the model will continue to explore new options, even at the cost of not ""exploiting"" the best available option.

Another option is setting the exploration rate high at the beginning of learning so the model will search the space for possible successful solutions. Then, as the model creates a set of policies that are successful for given states, the exploration rate can be lowered or decayed. Exploration rate decay can be fixed (i.e., over time there is consistently less exploration and more exploitation). Exploration rate decay can also be dynamic and learned. This last option is often the best but also the most complex to implement.
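
As an illustration (my own sketch, not taken from the paper cited below), a simple fixed exponential decay schedule combined with epsilon-greedy action selection could look like this; the starting value, minimum and decay rate are arbitrary choices:

import random

epsilon = 1.0         # start fully exploratory
epsilon_min = 0.05    # never stop exploring completely
decay = 0.995         # multiplicative decay per episode

def choose_action(q_values):
    # epsilon-greedy: explore with probability epsilon, otherwise exploit
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

for episode in range(1000):
    # ... run one episode, calling choose_action at every step ...
    epsilon = max(epsilon_min, epsilon * decay)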

""Dare to Discover: The Effect of the Exploration Strategy on an Agent’s Performance"" goes into greater detail on this topic.

",15403,,15403,,5/14/2018 2:17,5/14/2018 2:17,,,,1,,,,CC BY-SA 4.0 6397,2,,6395,5/13/2018 16:29,,3,,"

There are 3 separate issues that are often confounded in Deep Learning and Neuroscience:

  1. Deep Learning is inspired by the way the biological brain works.
  2. Deep Learning is how the biological brain works.
  3. Deep Learning can model how the biological brain works.

Number 1 is accurate. The brain has many layers and many connections. Those principles have informed Deep Learning models.

Number 2 has little evidence to support that claim. The biological brain learns at the cellular level in very different ways than how any Deep Learning system learns.

Number 3 is a current topic of research. Deep Learning is very good at learning patterns. There are good reasons to believe that Deep Learning can learn patterns in the brain. However, those Deep Learning models will not automatically give insight into the biological processes of the brain.

The video is an example of #1. Inspired by our current understanding of neurobiology, let's build better Deep Learning algorithms. These new algorithms might perform better on machine learning benchmarks. However, these algorithms are not better models of the biological brain. In order to understand the algorithms, the language of biology might not be helpful. It might be better to describe them mathematically.

",15403,,15403,,5/13/2018 18:40,5/13/2018 18:40,,,,1,,,,CC BY-SA 4.0 6399,2,,6377,5/13/2018 21:08,,0,,"

Let's talk about gradient decent!

Analogy:

So you're standing on a mountain side, and you want to get to the lowest part of this mountain. You have a notepad with you.

Although actual physics-momentum would be a good analogy here, I'm not gonna use it.

You're somewhere on this mountain side and you figure out which way is down*, and you jump once a couple of meters in that direction. How big one jump is would depend on how steep the hill is (the length of the gradient), and how much extra you push with your feet. The first time you decide to not really push that much with your feet. The SGD momentum comes in here; you write down in your notepad which direction you went, and how far (e.g. south, 4 meters).

Note: here the PHYSICAL momentum would represent the length of the gradient.

You repeat this for some time, until you come to a place where the only ways lead upwards.

Does this mean you hit the bottom? Not necessarily; you might have gotten stuck in a valley, or "local minima". You really want to get out of this valley, but all directions are upwards, so which way should you jump?

You now take out your notebook and notice that you've been jumping south east the last 40 steps, and pretty far. You then reason that it is likely that you want to go south east. So you jump south east with a lot of thrust from your feet: This is the intuition on what momentum does;

If you have a clear "pattern" of which way is down, then this should also count!

Note: the momentum only depends on the previous step, but the previous step depends on the steps before that and so on. This is just an analogy.

Maths:

For the maths, you just add a term that is the last gradient, times some constant.

$$\text{Heading}(t) = \gamma \, \text{Heading}(t-1) + \eta \, \text{Gradient}(t)$$

Where γ is the momentum factor and η is the learning rate.

Sebastian Ruder's blog on gradient descent is brilliant for learning more details of the maths of it.

γ ≷ 1

For mathematical conclusions:

$$\text{Heading}(t) = \gamma \, \text{Heading}(t-1) + \eta \, \text{Gradient}(t)$$

γ > 1: From the "expression", you could infer that this case would generate echoes: the gradient of the previous step would contribute more than the actual gradient. For the upcoming step, this effect would get enhanced, and 10 steps down the road, you're stuck going in one direction.

γ < 1 makes it "converge" to a "terminal velocity", if you like. It would depend on the preceding steps less and less instead of more and more.

These effects are pretty clear in the equation you find at Ruder's blog.
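
To make this concrete, here is a tiny sketch (my own illustration, not from the blog) of momentum gradient descent on f(x) = x^2, comparing γ = 0.9 and γ = 1.1; with γ > 1 the heading keeps growing and the iterates blow up:

def run(gamma, lr=0.1, steps=200):
    x, v = 5.0, 0.0
    for _ in range(steps):
        grad = 2 * x               # gradient of f(x) = x^2
        v = gamma * v + lr * grad  # momentum-weighted heading
        x = x - v
    return x

print(abs(run(0.9)))   # small: converging towards the minimum at 0
print(abs(run(1.1)))   # huge: the updates have blown up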

If your momentum term was greater than one, then the notebook would overcome the actual gradient. After a few steps, you wouldn't even look at the hill; you'd go "I've only gone east so far, so I'll just continue east" with your jumps getting longer and longer. This is not good.

In conclusion

A high momentum term would lead you in the wrong direction (blow up and always go in the same direction), and/or oscillate around the global minimum (making you jump too far).

Hope it helps :)

*: Strictly speaking, we're finding the ""up"" direction and going the opposite way. The ""up"" direction is the gradient.

",14612,,-1,,6/17/2020 9:57,5/13/2018 21:40,,,,0,,,,CC BY-SA 4.0 6401,1,,,5/14/2018 5:33,,2,180,"

I need to design an algorithm such that it handles the request for shift swapping.

The algorithm will recommend a list of people who are more likely to swap that shift with the person by analyzing previous data.

Can anyone list the techniques that will help me to do this or a good starting point?

I was thinking about training a Naive Bayes Classifier and using Mahout for generating recommendations.

",15618,,2444,,12/20/2021 22:21,1/15/2023 11:02,How to design a recommendation system for shift swapping?,,2,0,,,,CC BY-SA 4.0 6402,1,6409,,5/14/2018 5:53,,-1,743,"

I am trying to create my own variant of Google Duplex; however, it won't make calls but will just have a real-time conversation.

My question is, where and how to start?

How do I train my model with real conversations, and how do I make the speech sound almost human-like? Where do I incorporate an RNN, and how can I make my model understand nuances?

I'm trying to create something like this: https://youtu.be/p3PfKf0ndik

",15619,,2444,,12/21/2021 15:54,12/21/2021 15:54,How can I create my own Google duplex?,,1,1,,12/21/2021 15:53,,CC BY-SA 4.0 6403,2,,6401,5/14/2018 8:38,,0,,"

You might not even need a classifier.

I would devise a scoring function, based on analysis of the previous data you have. Each user gets a score based on features like

  • how many times in the past has this user swapped a shift with somebody else?
  • how many times has the user swapped with the current user?
  • how many times has the user swapped this particular shift?

For each criterion you add a number of points to the score; the second one might be weighted higher than the first one. Then the person with the highest score is most likely to switch shifts with your current user.

The main question is the design of the scoring function, but I don't think you'd need to go into all the overkill of setting up a classifier; just think which criteria would make someone more likely to swap, and encode them directly. This has the advantage that it's transparent, ie you can always see why someone got recommended, and you can tweak your scoring method if the results are not quite what you'd want. This is often hard to do with ML classifiers.
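
As a rough sketch of what such a scoring function might look like (the feature names and weights below are illustrative assumptions, not a recommendation):

# arbitrary weights: swaps with this particular requester count the most
WEIGHTS = {
    'total_swaps': 1.0,           # how often the user has swapped with anybody
    'swaps_with_requester': 3.0,  # how often they swapped with this particular user
    'swaps_of_this_shift': 2.0,   # how often they swapped this particular shift
}

def score(stats):
    # stats is a dict with the same keys as WEIGHTS
    return sum(WEIGHTS[k] * stats.get(k, 0) for k in WEIGHTS)

candidates = {
    'alice': {'total_swaps': 10, 'swaps_with_requester': 2, 'swaps_of_this_shift': 1},
    'bob':   {'total_swaps': 3,  'swaps_with_requester': 0, 'swaps_of_this_shift': 4},
}

# recommend candidates sorted from most to least likely to swap
ranking = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranking)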

",2193,,,,,5/14/2018 8:38,,,,1,,,,CC BY-SA 4.0 6405,2,,6297,5/14/2018 10:20,,3,,"

There are more than 1 way of doing this:

  1. You can compute the bleu score between them if you are looking at the quality of machine translation. Check this link.
  2. You can convert them into 2 vectors using doc2vec and find the similarity between the vectors using cosine similarity.
  3. Siamese networks are something similar to what you are asking. They are neural nets that use distance metric for learning rather than a loss metric.

I don't understand why you want to use a neural network for comparing two pieces of text. Generally, comparisons are done with some distance metric, not with a neural network.
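
For option 2, a minimal sketch of the cosine-similarity step (using made-up vectors in place of real doc2vec embeddings):

import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# pretend these came from a doc2vec model applied to the two texts
vec_a = np.array([0.2, 0.7, 0.1, 0.5])
vec_b = np.array([0.25, 0.6, 0.05, 0.55])

print(cosine_similarity(vec_a, vec_b))   # close to 1 means very similar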

",9062,,9062,,5/14/2018 11:07,5/14/2018 11:07,,,,0,,,,CC BY-SA 4.0 6407,1,6410,,5/14/2018 14:26,,2,338,"

Can agents be implemented with machine learning algorithms/models other than neural networks?

If so, how do I train an agent with some predefined rules? Can we use python programming for representing those rules?

",15631,,2444,,12/28/2021 9:22,12/28/2021 9:22,Can agents be implemented with ML algorithms other than neural networks?,,1,0,,,,CC BY-SA 4.0 6409,2,,6402,5/14/2018 15:45,,5,,"

First of all, you need to realise that you will not be able to do it. Google is a multi-billion dollar company, with a large number of very bright and well-funded researchers. That tells me that it is not something a single person can do by themselves.

Then, you already have some pre-conceptions about it. You want to use a machine learning approach, using neural networks. I would start with a blank slate and think about the problem first, before considering the tools I would want to use.

If you don't know where to start, I would suggest you have a look at Eliza, which is old but still very hard to improve on. There are many open source implementations of it available, and it wouldn't be too hard to implement your own from scratch.

Eliza uses a symbolic approach (ie no neural networks), which has the big advantage that you can see what is going on, and you can actually understand how it works. Have a play around with it, create some chatbots, and see how you can make the conversations sound more human-like. Then, when you have a clearer idea about this, proceed to other frameworks such as wit.ai or Microsoft's LUIS. There you trade in control for ease of use; but I suggest you don't do this without understanding what you're actually doing first.

",2193,,,,,5/14/2018 15:45,,,,2,,,,CC BY-SA 4.0 6410,2,,6407,5/14/2018 16:17,,2,,"

Neural networks are not inherently part of reinforcement learning (a popular agent-based framework for describing control problems). In general, if you have an agent-based scenario, you are trying to optimise a function:

Policy( State ) -> Action

Where State can be any combination of current observations and history that seem relevant to the problem. The optimisation is usually over some measure of success, achieving goals etc. Reinforcement learning formalises these terms using Markov Decision Processes, which is a very general and successful approach, but not the only one.

Finding the best policy does not require neural networks. There are a lot of basic reinforcement learning algorithms that are defined without reference to NNs. For instance, Q-learning does not need neural networks, they are an optional extra. In addition, you do not need to even use RL - you can try to search for a suitable Policy function directly using a method such as Genetic Algorithms.

Focusing more on RL methods, the usual approach without any kind of function approximator is to just store an estimate of the value of each state or state/action pair. This value is a measure of long-term reward, and has a formal definition in RL. Often called the tabular approach, because it is just a table of states and their estimated values, it works for small problems just fine. For example, you can train an agent to play tic tac toe, find an optimal path that moves over a grid, or discover when to hold and when to twist in a simplified blackjack game.

However, problems start to occur with the most basic methods when the state space becomes large, which happens very easily. Tabular methods are OK up to a million or maybe ten million states (depending on how easy it is to take trial actions). These are enumerated states, so typically adding a single dimension to a problem multiplies the number of possible states by the number of options in that new dimension. So after a certain level of complexity, a simple table is not good enough, and you need a function approximator. Function approximators for RL ideally have the following properties:

  • Can generalise from examples.

  • Can be updated online as new data arrives, progressively forgetting old data.

  • Can be differentiated with respect to their learning parameters.

Neural networks fit the bill. Some other popular supervised learning techniques do not - for instance Random Forests cannot usually be trained online.

In fact linear regression does work, and it is a very simple approach. For some problems, linear regression works very well in combination with reinforcement learning. You may need to do some careful feature engineering, and it is not always possible for more complex problems, but if you have a ""medium-sized"" agent-based problem - simpler than learning Go or playing video games - and you want to avoid using neural networks, then you should be able to attempt to train an agent using a combination of RL, such as Q-learning, plus linear regression.

Both Q-learning and linear regression have plenty of examples available online, and can be implemented from scratch using basic Python frameworks such as NumPy.
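
As an illustration of how small the core of tabular Q-learning is (a generic sketch, not tied to any particular environment), the epsilon-greedy policy and the update rule fit in a few lines:

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1        # learning rate, discount, exploration rate
n_actions = 4
Q = defaultdict(lambda: [0.0] * n_actions)    # the 'table': state -> list of action values

def choose_action(state):
    if random.random() < epsilon:
        return random.randrange(n_actions)                     # explore
    return max(range(n_actions), key=lambda a: Q[state][a])    # exploit

def update(state, action, reward, next_state):
    # move Q(state, action) towards reward + discounted best next value
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# example of a single learning step with made-up states and reward
a = choose_action('s0')
update('s0', a, 1.0, 's1')
print(Q['s0'])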

",1847,,,,,5/14/2018 16:17,,,,0,,,,CC BY-SA 4.0 6411,1,,,5/14/2018 17:47,,2,210,"

I want to experiment with capsule networks on facial expression recognition (FER). For now, I am using fer2013 Kaggle dataset.

One thing that I didn't understand in capsule networks was that, in the first convolution layer, the size was reduced to 20x20 (with the input image being 28x28 and the filters 9x9 with stride 1). But, in the capsules, the size reduces to 6x6.

How did this happen? Because with the input size as 20x20, filters as 9x9 and stride 2, I couldn't get 6x6. Maybe I missed something.

For my experiment, the input size image is 48x48. Should I use the same hyperparameters for the start or are there any suggested hyperparameters that I can use?

",15633,,2444,,6/9/2020 11:12,7/4/2021 15:06,Why does the size reduce to $6 \times 6$ in the capsule networks?,,1,0,,,,CC BY-SA 4.0 6414,1,6458,,5/14/2018 20:43,,3,136,"

I am looking to extract the central theme from a given news headline using NLP or text mining. Is there any reference that goes in this direction?

Here's an example. Let's say that I have the following news headline.

BRIEF-Dynasil Corporation Of America Reports Q2 EPS Of $0.08

Then the algorithm should produce

Reports

Here's another example. The input is

China's night-owl retail investors leverage up to dominate oil futures trade

And the output would e.g. be

oil futures

",15638,,2444,,2/3/2021 22:57,2/3/2021 22:57,Are there any references of NLP/text mining techniques for identifying the theme of news headlines?,,1,0,,,,CC BY-SA 4.0 6415,2,,5571,5/14/2018 22:40,,1,,"

In the case where the reward is undiscounted, there is no guarantee of convergence as the iteration procedure is not a strict contraction.

Unfortunately I can't find the math mode on the ai stackexchange so my answer can't be very precise.

But an easy example is the following: take a 'running' reward R of 0 to make things simpler, and an MDP with two states a and b. Take a transition matrix with 0's on the diagonal and 1's off the diagonal. You will see that the algorithm will always flip the values of V(a) and V(b), and hence there is no convergence.
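
A tiny numerical sketch of that flipping behaviour (my own illustration, with arbitrary starting values):

# deterministic transitions a -> b and b -> a, zero reward, no discount
V = {'a': 1.0, 'b': 0.0}
for step in range(6):
    V = {'a': 0.0 + V['b'], 'b': 0.0 + V['a']}   # Bellman backup with gamma = 1
    print(step, V)
# V(a) and V(b) just swap forever, so the iteration never converges
# (unless the two values happen to be equal already)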

",15639,,,,,5/14/2018 22:40,,,,0,,,,CC BY-SA 4.0 6418,2,,4965,5/15/2018 8:09,,2,,"

The easiest way to add some sort of structural similarity measure is to use n-grams; in your case bigrams might be sufficient.

Go through each sentence and collect pairs of words, such as:

  • ""python is"", ""is a"", ""a good"", ""good language"".

Your other sentence has

  • ""language a"", ""a good"", ""good python"", ""python is"".

Out of eight bigrams you have two which are the same (""python is"" and ""a good""), so you could say that the structural similarity is 2/8.
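
A small sketch of that bigram-overlap computation (my own illustration):

def bigrams(sentence):
    words = sentence.lower().split()
    return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

def structural_similarity(s1, s2):
    b1, b2 = bigrams(s1), bigrams(s2)
    # shared bigrams out of the total number of bigrams in both sentences
    return len(b1 & b2) / (len(b1) + len(b2))

print(structural_similarity('python is a good language',
                            'language a good python is'))   # 2 / 8 = 0.25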

Of course you can also be more flexible if you already know that two words are semantically related. If you want to say that Python is a good language is structurally similar/identical to Java is a great language, then you could add that to the comparison so that you effectively process ""[PROG_LANG] is a [POSITIVE-ADJ] language"", or something similar.

",2193,,,,,5/15/2018 8:09,,,,1,,,,CC BY-SA 4.0 6421,1,6428,,5/15/2018 11:28,,3,317,"

It is well known from the history of technology that the invention of new things was always problematic. In the 15th century, for example, in which Gutenberg invented the first printing press, the world wasn't pleased. Instead, the Luddite movement was doing everything to destroy his work. As far as I know from the history lessons, Gutenberg was recognized in his time as an evil sorcerer and the printing press as the work of the devil.

This pattern was also visible in later decades. At first, a great invention was made, for example the first steam-driven car, and ordinary people didn't understand the technology and were in fear of it.

A modern form of technology is computing, and especially artificial intelligence. From a technical point of view, it is one of the most important inventions ever, and this might result in a very strong form of rejection. Some people in the world are not excited by Artificial Intelligence. They do not want any sort of robots or intelligent machines.

The terminology itself is well known. The fundamental rejection of new technology for religious or moral reasons is called Luddism or Neoluddism, because the technophobic Ned Ludd destroyed two stocking frames a while ago. After this episode, every rant against technology is named after him. But what I do not understand is the motivation behind it. Did Ned Ludd think that he could change the world by destroying a machine? Did he believe that mankind would become good if no Gutenberg printing presses were used? The problem is that, for example, if the first steam engine had never been invented, the following inventions like the internet and intelligent machines wouldn't have been invented either. But what would be the alternative? What is the perspective of Ned Ludd? How does he see a better tomorrow, if no technological innovation is allowed?

",,user11571,2444,,6/15/2019 20:06,6/16/2019 8:08,What is neoluddism?,,1,0,,,,CC BY-SA 4.0 6422,1,6457,,5/15/2018 12:13,,1,127,"

I am trying a modification of Mobilenet in which I add feedback from the softmax layer into the early layers (to implement this I put a second net after the first, which receives connections from the softmax layer of the first, the pretrained weights being non trainable). The idea was to mimic the massive feedback projections in the brain, which presumably could help object recognition by enhancing specific filters and inhibiting others.

I took the pretrained network from Keras and started to retrain it on ImageNet. I noticed that the training accuracy increased right in the first epoch. My computer is very slow, so I cannot train for too long; an epoch takes 3.5 days. So after one epoch I tried the validation set, but instead the accuracy went down to almost half that of the pretrained values.

My question is whether this is an obvious case of overfitting. That is, will continued training increase the accuracy on the training set at the expense of the validation set, or is this normal behavior expected at the initial stages of training, so that if I keep training for a few more epochs I could expect the validation set accuracy to eventually go up? Any ideas that could help are welcome.

",30433,,,,,5/18/2018 15:12,Is this overfitting avoidable?,,1,1,,,,CC BY-SA 4.0 6425,1,,,5/15/2018 13:08,,4,127,"

I have a data set containing actions taken by customers (e.g., view a product, add a product to cart, purchase product), the product bought (if any) and times of said actions. I am attempting to use K-means clustering to identify the customers who are more likely to purchase a product based on these actions (minus the purchase).

I'm currently clustering using: the number of products viewed, the number of products put in the cart, the mean time between the actions, the variance of the time between the actions, the standard deviation of the time between the actions (all of these values are normalized), as well as the product purchased (if any). The clusters I'm getting contain ~10% buyers and 90% non-buyers, but I'm trying to separate buyers and non-buyers.
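
For reference, a minimal sketch of the clustering setup described above (with made-up data standing in for the real customer features):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# made-up data: rows = customers, columns = [n_viewed, n_carted, mean_dt, var_dt, std_dt]
X = np.random.rand(200, 5)

X_scaled = StandardScaler().fit_transform(X)   # normalise the features
labels = KMeans(n_clusters=5, random_state=0).fit_predict(X_scaled)

print(np.bincount(labels))   # how many customers fall in each cluster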

Any thoughts on what else I can do? Or should I try another method completely?

Illustration: the x-axis shows the clusters, the y-axis the number of customers; red are buyers and blue are non-buyers.

Update: I made a 3D graph showcasing the clusters, the amount of customers and the mean time between actions (normalized because of reasons)

Yet another update: customers (not grouped by cluster, just as is) according to the average number of products they viewed and the average time between actions

I took some advice and tried using PCA (from this tutorial), and these are the results I got:

The raw data (x=number of items viewed/carted, y=average time between interactions)

Any tips on how to cluster this mess?

",12940,,22296,,2/21/2019 1:50,2/21/2019 1:50,Supervised K-means clustering doesn't appear to work,,0,1,,,,CC BY-SA 4.0 6426,1,,,5/15/2018 14:06,,19,37153,"

I have read various answers to this question at different places, but I am still missing something.

What I have understood is that a graph search holds a closed list, with all expanded nodes, so they don't get explored again. However, if you apply breadth-first search or uniform-cost search to a search tree, you do the same. You have to keep the expanded nodes in memory.

",15391,,2444,,11/10/2019 17:28,5/15/2021 12:42,What is the difference between tree search and graph search?,,1,0,,,,CC BY-SA 4.0 6428,2,,6421,5/15/2018 18:48,,3,,"

My understanding of Neo-Luddism is that it is concerned with the unforeseeable effects of technology.

The ""blackening of London"" (see Blake's London) in the early industrial era was an unforeseen effect, and would have had impacts on health related to air quality.

The unforeseen effect of heavy use of plastic materials has led not only to a large amount of plastics in the ocean, but to recent revelation of micro-particles in the drinking water supply.

Heavy use of nitrogen fertilizers has led to dead zones (oxygen depleted water) in coastal zones, with real impacts on the food chain.

Management of nuclear waste is another issue that was overlooked at the beginning, before there was a real understanding of the effects on organisms, including humans.

  • The original Luddites were attacking technology because it was clear it would eliminate jobs.

The Luddite wiki states that ""It is a misconception that the Luddites protested against the machinery itself in an attempt to halt the progress of technology,"" rather ""Luddites feared that the time spent learning the skills of their craft would go to waste as machines would replace their role in the industry."" This is consistent with my understanding.

  • The extension of the idea of Neo-Luddism, as raised by Hawking and Musk, among others, is not a rejection of technology, but a warning about the inability to predict the dangers of new technology. They seem to be, in part, warning about the potential dangers of superintelligence.

The fears regarding superintelligence, or, self-replicating nanobots (see grey goo) are similar to concerns regarding nuclear weapons and mutually assured destruction. (Nash explains why deterrence works, but it assumes rational actors.)

But, the more concrete warnings about strong-narrow AI (AlphaGo and extensions) is that it may result in unfathomable levels of persistent, long-term unemployment as the types of tasks strong-narrow AI can do better than humans will expand, likely in an exponential function.


I always like to take it back to the origins, so I'll mention Prometheus (""forethought"") and his brother Epimetheus (""hindsight"").

Prometheus saves mankind with gifts, including fire, but fire is both creative and destructive. In the myths, the punishments to mankind for Prometheus' theft are sent by Zeus, because the myth, and the core ideas, predate modern science and philosophy.

A simple reductionist explanation of Neo-Luddism may be simply that ""hindsight is 20/20"".

Forge ahead blindly at your own peril. Proceed cautiously to minimize potential downside.

  • In some sense, practical Neo-Luddism--not the rejection of technology, but the cognizance of the dangers of technology--may just be a form of minimax.
",1671,,1847,,6/16/2019 8:08,6/16/2019 8:08,,,,0,,,,CC BY-SA 4.0 6429,1,,,5/15/2018 19:44,,10,1382,"

I know that one of the recent fads right now is to train a neural network to generate screenplays and new episodes of e.g. Friends or The Simpsons, and that's fine: it's interesting and might be the necessary first step toward making programs that can actually generate sensible/understandable stories.

In this context, can neural networks be trained specifically to study the structures of stories, or screenplays, and perhaps generate plot points, or steps in the Hero's Journey, etc., effectively writing an outline for a story?

To me, this differs from the many myriad plot-point generators online, although I have to admit the similarities. I'm just curious if the tech or the implementation is even there yet and, if it is, how one might go about doing it.

",15659,,2444,,6/2/2019 15:42,4/4/2020 18:23,Can an AI be trained to generate the outline of a story?,,4,1,,,,CC BY-SA 4.0 6430,2,,6429,5/15/2018 21:34,,1,,"

As far as I am aware, this has not been done yet.

I see several problems with this. A neural network is basically a classifier, which matches an input to an output. Both input and output are usually numerical values, though they could be matched to concepts or words.

To train a NN you provide an appropriately encoded input, and the corresponding output. The NN learns the associations between the two, and can then classify unseen input accordingly. This has recently been used to transform images in a particular style etc.

What would the input and output be for generating screenplays? You could use previous scripts as inputs, but what would the output be? It could be narrative 'moves' of some sort, perhaps. So you could train an NN to recognise narrative elements from screenplays.

However, you are still not creating anything, but just recognising stuff. You would need some other input. I guess you could train an NN on ""The Simpsons"", get a narrative structure, and then present it with an Episode of ""Friends"" and see what happens. It won't be a new episode of a screenplay, though.

The other way round might work: you feed it narrative moves (a kind of story skeleton), and get a script out. But it would need a lot of (human) post-editing to be at all useful.

I think an NN is the wrong tool to use here. There has been work done with generating stories and screenplays, even way back in the early days of AI. But that was all based on symbolic AI, not on the kind of ML which seems to currently be en vogue. Have a look at James Ryan's website; he has recently written an overview over historic approaches to story (and screenplay) generation.

",2193,,,,,5/15/2018 21:34,,,,1,,,,CC BY-SA 4.0 6431,1,6432,,5/16/2018 3:51,,0,92,"

I have a medical dataset with 14000 rows dataset with 900 attributes. I have to predict disease severity using that. I would like to know whether we can write rules in python language for training an agent for medical diagnostic using machine learning.

Can an agent make the decisions by the rules coded in python and that agent get trained with some machine learning algorithms? If so is there any agent architecture and model for the agent which is good in this context?

Edit: By the rule, I meant something like this..""if x>y output z as action"". By the word ""Training"" I meant ""how to tell this agent to do this action""?

",15631,,15631,,5/16/2018 7:09,5/16/2018 7:09,Can we code rules for an agent in python language other than predicate calculus?,,1,0,,,,CC BY-SA 4.0 6432,2,,6431,5/16/2018 6:42,,1,,"

You could formulate the problem of predicting disease severity as a classification one: you give the algorithm those 900 attributes and their corresponding labels (severe/not severe); after training, you give it a new data point with just the 900 attributes, and it returns the correct label, severe or not.

There is an enormous number of algorithms in the ML literature for classification; some of them formulate the problem explicitly as learnable rules, i.e. they let the machine figure out the rules given the attributes. For example:

  1. Classification and Regression Trees by Breiman et al (1984)
  2. Random forest by Ho in 1995
  3. XGBoost
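
As a minimal sketch of the first option (CART-style decision trees, here via scikit-learn), with random placeholder data standing in for the real 14000 x 900 dataset:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# placeholder data (smaller than the real dataset to keep the sketch fast)
X = np.random.rand(2000, 900)
y = np.random.randint(0, 2, size=2000)   # 0 = not severe, 1 = severe

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print(clf.score(X_test, y_test))         # accuracy on held-out data

# the learned rules ('if attribute x <= threshold then ...') can be inspected
print(export_text(clf, max_depth=2))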
",11911,,,,,5/16/2018 6:42,,,,2,,,,CC BY-SA 4.0 6434,1,,,5/16/2018 8:48,,0,413,"

Pretty simple question here:

Is it useful to use the standard deviation, skew, kurtosis, or any other descriptive statistics as features, and if so, in which problem sets?

In this case, I am talking about deep learning problems.

",9608,,,,,5/17/2018 14:49,"Are standard deviation, variance, skew good features for ML?",,1,0,,,,CC BY-SA 4.0 6438,2,,6329,5/16/2018 21:55,,2,,"

The naive way is to generate connections randomly as you would for a cyclic graph, but then perform a test to reject any connections that form a cycle. This is the current approach in SharpNEAT and there has been some effort directed at improving the performance of the cycle test in the work-in-progress refactor branch.

One alternative would be to track the depth of all nodes, store a list of node IDs sorted by depth, and sample the connection endpoint nodes in such a way that the target node depth is always higher than the source node's. Now that I think about it, that's probably the better method.

",15693,,,,,5/16/2018 21:55,,,,0,,,,CC BY-SA 4.0 6440,2,,5318,5/17/2018 14:39,,0,,"

I think that your misuse of the term over-fitting made the question vague. In layman's terms, over-fitting means that a model fails to generalize to real-world scenarios, but is accurate on the training set.

Using a dropout layer means that the network randomly disables a fraction of the neurons during training, in this case 50%.

Recommendations for improving training accuracy would be (a minimal Keras sketch follows the list):

  • Transfer learning
  • Adding more layers to the network (adjusting the number of neurons also helps)
  • Adding epochs
  • Changing optimizer (Adam and RMSProp are some of my suggestions)
  • Adding activation layers
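
For illustration, a minimal Keras sketch of some of these points (the layer sizes, input dimension, and data are placeholders, not a recommendation for your particular problem):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=20))   # input_dim is a placeholder
model.add(Dropout(0.5))                                  # randomly disables 50% of units during training
model.add(Dense(64, activation='relu'))                  # an extra hidden layer
model.add(Dense(1, activation='sigmoid'))                # output layer with its own activation

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=30)                 # adding epochs = more passes over the data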
",15465,,2444,,10/4/2020 20:28,10/4/2020 20:28,,,,1,,,,CC BY-SA 4.0 6441,2,,6434,5/17/2018 14:49,,1,,"

I would say it is useful if you have extensive knowledge of the domain you want to apply your model in. You also need more data for it to yield reasonable results.
As for real-world uses, I can only think of trading at the moment.

",15465,,,,,5/17/2018 14:49,,,,1,,,,CC BY-SA 4.0 6443,1,,,5/17/2018 16:16,,2,122,"

Consider the scenario where a supervised training data set, in the form of sentences, will be given to train the machine:

The Bomb which had been planted by Terrorist on this morning was defused by the Counter Terrorist on joining hands with the Intelligence Force

The input string for each sentence is broken into a tokenised array of single words, with stop words removed.

Each word in the given sentence gets assigned a label w1, w2, and so on, i.e.,
w2 = Bomb
w6 = planted
w13 = defused

Calculating the scores for individual word combinations, the result should yield something like:
w2.w6 = Scores should be Positive (or > some threshold value)
w2.w13 = Scores should be Negative (or < some threshold value)

In the case of words with polarity changers,
e.g.: The bomb wasn't/hasn't/didn't get defused,
the resulting score should be positive.


To accomplish this task, I implemented sentiment analysis with the threshold = 2.5 and ended up with the following scores:

Actual Output:

< 2.5 : Low
= 2.5 : Neutral
> 2.5 : High


Expected Output:

Case 1: score = negative, since that bomb was defused or removed in the given sentence
Case 2: score = positive, vice versa of ""Case 1""
Case 3: Otherwise score = 0, in case it can't predict either of the above two cases, it should be neutral

I am facing a severe problem: every time new words appear that were not in the dictionary list, I need to update the vocabulary list, which is turning this into semi-supervised learning.

Referring to the above sentence, when calculating the scores of the surrounding words w(n-1), w(n-2), ... together with the word wn, with reference to the word ""Bomb"", the final resulting score should be negative.

So which machine learning algorithm would be appropriate to yield a better solution, and, based on the given data set, how would I train the machine to learn the above things?

Finally, should I implement model persistence, so that the model doesn't have to be retrained on each run?

",15703,,2193,,5/18/2018 9:31,5/18/2018 10:30,Which machine learning algorithm is suitable for detecting text w.r.t set of words,,1,3,,,,CC BY-SA 4.0 6444,2,,6429,5/18/2018 0:20,,-1,,"

As far as I know, there isn't any system like the one you describe yet. However, there are some interesting approaches to narrative intelligence that can be found at the University of New Orleans Narrative Intelligence Lab site: https://nil.cs.uno.edu/

Hopefully those can be helpful in guiding a deep-learning approach to narrative generation problems.

",15711,,,,,5/18/2018 0:20,,,,0,,,,CC BY-SA 4.0 6445,1,,,5/18/2018 0:54,,0,71,"

For speedrunning purposes, I am trying to train a neural network to identify human-executable ways to manipulate pseudo-RNG (in Pokemon Red, for the interested). The game runs at sixty frames per second, and the linear-congruential PRNG updates every frame, while many frames are unlikely to be relevant to the manipulation (and so should contain no actions from the neural net). Any given manipulation is likely to last 30sec-2min, and the advancement rate of the PRNG can change depending on location in the game-world.

I have some experience with coding AI/deep-learning. I've made some programs using Multilayer Perceptron and IndRNN approaches. From what I can tell, IndRNN or A3C would be my best bets. I'm not expert enough to know the correct approach, though, or to know if the dimensionality of the problem makes it outright unfeasible.

1) Is this problem reasonably solvable with NN/deep learning?

2) What approach would you recommend to tackle it?

",15711,,2193,,5/18/2018 13:18,5/18/2018 17:01,Teaching a NN to manipulate pseudoRNG over a long time scale?,,1,0,,,,CC BY-SA 4.0 6446,1,,,5/18/2018 0:57,,3,912,"

The paper Skip connections eliminate singularities explains the use of skip connections to eliminate singularities in deep networks, but I have not fully understood what a singularity is.

Is there an easy-to-understand explanation?

",10569,,2444,,3/29/2021 15:51,3/29/2021 15:51,"What's the definition of ""singularity"" in the context of neural networks?",,1,0,,,,CC BY-SA 4.0 6447,2,,4832,5/18/2018 3:57,,1,,"

Sequential programming would not be suitable for this kind of problem, but an algorithm could be implemented in a declarative programming language. I would suggest using Answer Set Programming, a language that is designed for logic axioms.

",9983,,1671,,5/18/2018 17:08,5/18/2018 17:08,,,,1,,,,CC BY-SA 4.0 6449,2,,6445,5/18/2018 4:02,,1,,"

The point of a pseudo-RNG is to be hard to model and unpredictable, which makes it hard for an AI to learn. It would likely be more useful and more efficient to have the equation that the game uses for generation available, so that you can make the check manually, or to just have a list of the cycle if the pseudo-RNG is based on the time elapsed.

",6989,,2193,,5/18/2018 17:01,5/18/2018 17:01,,,,1,,,,CC BY-SA 4.0 6452,1,,,5/18/2018 7:15,,1,47,"

This article ""Enhancing Differential Evolution Utilizing Eigenvector-Based Crossover Operator"" said for a non-separable function traditional crossover algorithm are not suitable and they can not diversify the population sufficiently, so the differential evolution stops at the local optimum points. Why does this behavior not hold for separable functions but it exists for non-separable functions? Which key feature of the non-separable functions cause this behavior?

",15714,,,,,5/18/2018 7:15,Crossover in differential evolution for separable and non-separable functions,,0,0,,,,CC BY-SA 4.0 6453,1,6455,,5/18/2018 8:26,,0,59,"

I am trying to assess an encoder in my autoencoder. I cannot seem to grasp which specs make an encoder better than another one in, let's say, unsupervised learning. For example, I am trying to teach my neural network to classify cats, so that when I provide a picture of a bird, my autoencoder would tell me that it is not a picture of a cat. I am trying to understand what exact specs make my encoder (and decoder) better. I understand it is all about the chosen weights, but is it possible to be more specific?

",14863,,,,,5/18/2018 12:08,What are good parameters of an encoder?,,1,0,0,,,CC BY-SA 4.0 6454,2,,6443,5/18/2018 10:30,,1,,"

If your main issue is dealing with new vocabulary, you could try using a parts-of-speech tagger as a pre-processing step. You would then effectively discover relationships between ""noun"" and ""verb"", which do not change with new words. Taggers can usually handle unknown words by using contextual information.

So you'd tag the words with their word class labels, and use those for training.

In 'application mode' you use p-o-s tags again, calculate the scores, and then map them back onto your original word tokens.

This does of course lose you some information, as you're only dealing with N-V, rather than bomb-plant or bomb-defuse. To solve this I'd use a hybrid approach: for your known vocabulary you use the word tokens, whereas for unknown words you fall back on tags. If you train two classifiers, one with tokens, one with tags, you have the tag-based one as a 'safety net' to handle out-of-vocabulary words.

",2193,,,,,5/18/2018 10:30,,,,0,,,,CC BY-SA 4.0 6455,2,,6453,5/18/2018 12:08,,2,,"

I can not seem to grasp which specs make an encoder better than another one

In general, in unsupervised settings, we want to learn the probability distribution of the data p(x) by some latent variables that explain the variations observed in the training set.

The autoencoder family (Variational, Denoising, Contractive, Sparse) tries to approximate p(x), so we have a performance metric to tell us how our model is doing, e.g. the negative log-likelihood of p(x).

lets say, unsupervised learning. For example, I am trying to teach my neural network to classify cats,

If you use some autoencoder model to learn the distribution of cats, you could take the encoder part and augment it with a linear classifier to discriminate between cats and other categories. Therefore, you have an intrinsic task (learn a good representation of the data distribution) and an extrinsic task (learn to classify cat vs. not a cat). So you could do a hyper-parameter search for the model that best suits your problem by measuring its accuracy on the extrinsic task.
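
For illustration, a minimal Keras sketch of this two-task setup (the input size of 784 and the layer sizes are placeholder assumptions):

from keras.models import Model
from keras.layers import Input, Dense

inp = Input(shape=(784,))
code = Dense(32, activation='relu')(inp)            # encoder: 784 inputs -> 32 latent features
recon = Dense(784, activation='sigmoid')(code)      # decoder: reconstruct the input

autoencoder = Model(inp, recon)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_cats, x_cats, epochs=10)        # intrinsic task: learn to reconstruct cats

cls = Dense(1, activation='sigmoid')(code)          # linear classifier on top of the latent code
classifier = Model(inp, cls)
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# classifier.fit(x_all, y_cat_or_not, epochs=10)    # extrinsic task: cat vs. not a cat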

Side note:
GAN (Generative Adversarial Network) is a generative model; it provides a way of interacting less directly with this p(x) by drawing samples from it without any input, so the situation there is different.

",11911,,,,,5/18/2018 12:08,,,,7,,,,CC BY-SA 4.0 6456,1,,,5/18/2018 15:00,,1,37,"

According to this paper (page 4, bottom-right), atrous convolutions can be used to compute responses of arbitrarily large dimensions in Deep Convolutional Neural Networks.

I do not understand how something like this is true, since by upsampling the filters, one can effectively apply the filter fewer times to an image, unless one also upsamples the image. Applying the filter fewer times, as I see it, obviously means that the output (response) will be of lower dimensionality.

Is there something that I am missing here?

",13257,,,,,5/18/2018 15:00,Atrous (Dilated) Convolution: How one can compute responses of arbitrarily high dimensions in DCNN?,,0,0,0,,,CC BY-SA 4.0 6457,2,,6422,5/18/2018 15:12,,1,,"

an epoch takes 3.5 days

First of all, use Colab to iterate quickly; it offers free GPU sessions of up to 12 hours.

to retrain it on Imagenet.

That said, with such a large training set, you could use complex models without being too afraid of overfitting.

will continued training increases the accuracy of the training set at the expense of the validation set

In many cases that's the case.

The idea was to mimic the massive feedback projections in the brain.

My suggestion is to read the third part of the Deep Learning book, which includes Representation Learning, Structured Probabilistic Models for Deep Learning, Monte Carlo Methods, Confronting the Partition Function, Approximate Inference, and Deep Generative Models.

Part III is the most important for a researcher—someone who wants to understand the breadth of perspectives that have been brought to the field of deep learning, and push the field forward towards true artificial intelligence.

",11911,,,,,5/18/2018 15:12,,,,3,,,,CC BY-SA 4.0 6458,2,,6414,5/18/2018 15:32,,0,,"

You could formulate the problem as a topic classification task, hence you need labeled data.

From an unsupervised point of view, you could represent sentences with some fixed feature vector (latent representation).

  1. Generating Sentences from a Continuous Space.
  2. Paragraph2Vec.

BRIEF-Dynasil Corporation Of America Reports Q2 EPS Of $0.08

China's night-owl retail investors leverage up to dominate oil futures trade

Self-attention models would be very useful for this kind of problem, since you don't need to encode all the context into the last hidden state of some RNN model in order to classify which theme a headline belongs to.

",11911,,-1,,6/17/2020 9:57,5/18/2018 15:32,,,,0,,,,CC BY-SA 4.0 6459,1,,,5/18/2018 23:20,,3,538,"

To train an RNN, you need to unroll it and feed in the history of inputs and the history of expected outcomes.

This doesn't seem like a realistic picture of the brain, since it would require, for example, the brain to store a perfect history of every sensory input it receives for many time steps.

So is there an alternative to RNNs that doesn't require this history? Perhaps storing differences or something? Or storing some accumulator?

Perhaps there is a way to calculate with RNNs that doesn't require keeping hold of this history?

",4199,,,,,8/30/2018 2:00,Is there an alternative to RNNs that doesn't require knowing input history?,,2,1,,,,CC BY-SA 4.0 6460,1,,,5/18/2018 23:20,,3,95,"

I remember the first time hearing about google trying to make driverless cars. That was YEARS ago!

These days, I'm beginning to learn about Neural Nets and other types of ML and I was wondering:

Does anybody know how many hours (or days, months, etc.) of training time are needed to get the results that are now used in today's self-driving vehicles?

(I am ASSUMING they use Neural networks for this...)

",15733,,,,,8/17/2018 20:55,How long has it taken for autonomous driving cars to be being sold and used on the roads today?,,1,0,,,,CC BY-SA 4.0 6461,1,,,5/19/2018 2:39,,6,1001,"

What is the current research in artificial intelligence and machine learning in the field of data compression?

I have done my research on the PAQ series of compressors, some of which use neural networks for context mixing.

",15736,,2444,,1/21/2021 2:03,1/21/2021 2:03,What is the current research in artificial intelligence in the field of data compression?,,2,0,,,,CC BY-SA 4.0 6463,2,,6446,5/19/2018 6:39,,3,,"

Firstly, I invite you to take this answer with a grain of salt (and possibly suggest edits), as I am not specifically familiar with the issue of singularities myself. Also, I apologise for all the links being at the bottom of the answer; I am unable to reuse links throughout my answer due to my reputation on this particular exchange.


This appears to refer to singular points in Algebraic Geometry. At singular points, no single tangent can be defined using the usual methods. This is as opposed to regular (i.e. non-singular) points, where one (and only one) tangent can be defined in the usual way.

The Easy Answer

In simpler terms, and particularly with respect to neural networks, it refers to the points on a hypersurface (e.g. a line, plane, surface, etc.) at which the gradient/slope is ambiguous.

I've taken an example of this from the Wikipedia page on singular points.

In the above image, the hypersurface is a line. On this line, there is a singularity at (0,0), where the line crosses itself. At this point, there are two slopes (one going up, and the other going down). There is no way to choose between the slopes without applying arbitrary rules, so the gradient is ambiguous.
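
For a concrete example of such a curve (a standard one from algebraic geometry, possibly the same one shown in the image), consider the nodal cubic

$$y^2 = x^2(x+1),$$

which crosses itself at the origin; near $(0,0)$ its two branches behave like $y \approx x$ and $y \approx -x$, so there are two candidate slopes ($+1$ and $-1$) and no single tangent can be chosen.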

These singularities are problematic for some of the algorithms used in deep learning, such as Gradient Descent, which uses the derivative of objective functions (e.g. loss functions) to search for the weights/parameters that yield the lowest error.

Gradient Descent In This Example

For anyone unfamiliar with gradient descent, the problem with singularities can be understood with the following example:

Say that the above line represents a path. Starting at the top right, you want to find the lowest point on the path. At any time, all you can know is the slope at the current point. You can turn left or right freely, but will never move upwards, so you will stop as soon as you reach the bottom of a valley.

Initially moving down and to the left, you will eventually reach the singularity and need to choose to go either left or right. Because you have no information about the path beyond your current location, you have no way of knowing which way leads to the bottom of the path.

If you choose at random, you have a 50% chance of getting stuck in the valley at the bottom of the loop on the left of the path.

You could impose a rule like ""always follow the gradient most different to the current gradient""; then you'd go right, and find the true lowest point. However, if the path were slightly different (e.g. if it curved up at (0.25, 0.25)), this would be bad, as the correct path would then be on the left.

In practice, there is no way of knowing which way is better without a deep understanding of the hypersurface of your objective functions (and if you knew that much about the objective function, you likely wouldn't need to use machine learning in the first place).

Research/Resources

I discovered this in order to answer this question, so, as justification, here is the process and the resources I used to come to this conclusion:

I dug through the references in the paper you linked. Of particular relevance was Amari et al. (2006), which mentions Hironaka’s theorem of singularity resolution.

In the introductory paragraph, the Wikipedia page mentions the concept of Non-Singularity Variety, and links to this page on singular points in Algebraic Geometry.

The example in the singular points article is analogous to a 1D hypersurface version of many of the figures in Amari et al. (e.g. figures 1, 2, 4, 5, and 6), and the discussion therein generally centers around the point at which the shapes converge.

My answer was composed based on this discovery process, as well as my prior knowledge of gradient descent.

",15570,,15570,,5/19/2018 7:04,5/19/2018 7:04,,,,0,,,,CC BY-SA 4.0 6464,1,,,5/19/2018 6:46,,1,41,"

With all the Google I/O stuff coming out, how can I verify that I have an actual human on the phone using only voice? Are there still vocal things humans can do, but robots can't?

Conditions: the person on the phone is a stranger (so personal questions won't work), and the verification must be voice only.

(Also, I understand Google Duplex may be just an overhyped demo that will turn out to flop like the Pixel Buds. But eventually such a bot would be created, right? If so, what's the best verification?)

",15743,,,,,7/30/2018 19:02,"""Vocal captcha"" for robots on the phone?",,1,0,,,,CC BY-SA 4.0 6465,1,6467,,5/19/2018 7:49,,4,249,"

Let's say you want to do AI research and publish some papers just on your own. Would you send them to an AI journal using just your name? Which AI journals are recommended?

",,user11604,,,,8/22/2018 8:09,How can you do AI research by your own?,,2,1,,,,CC BY-SA 4.0 6467,2,,6465,5/19/2018 9:03,,3,,"

Let me preface this by acknowledging that this question is prone to opinion. This answer is, in so far as it is possible, primarily observation based.


My understanding is that, when it comes to publishing in a journal, doing so as an individual (without backing from an institution) is, in general, going to be frowned upon.

As you may know, scientific research, including AI research, is generally subject to peer review. When it comes to journals, this is mandatory. The review process attempts to enhance and preserve the integrity of the information published. As an additional safeguard, submissions will often be expected to be backed by an endorsement from someone within the academic community (typically an institution).

However, and as Pasaba correctly points out (in his comment on the OP), research and publishing are not necessarily the same thing. Furthermore, not having an endorsement does not stop you from making a contribution to the field.

For example, you can publish code and/or articles on websites such as Github, and engage with communities of professional and hobbyist researchers around the web (e.g. this Stack Exchange).

Note that there is also some scope for endorsement without being a direct member of a research institution. For example: arXiv, whilst not strictly a journal, is an open archive that supports endorsement by request.

Without knowing your circumstances, it's hard to know exactly what to do. However, my general advice is to find and engage with communities, and build a network of collaborative peers, rather than trying to succeed in an isolated fashion.

",15570,,15570,,5/19/2018 9:08,5/19/2018 9:08,,,,0,,,,CC BY-SA 4.0 6468,1,6599,,5/19/2018 16:41,,11,6452,"

The ReLU activation function is defined as follows

$$y = \operatorname{max}(0,x)$$

And the linear activation function is defined as follows

$$y = x$$

The ReLU nonlinearity just clips the values less than 0 to 0 and passes everything else. Then why not use a linear activation function instead, as it would pass all the gradient information during backpropagation? I do see that the parametric ReLU (PReLU) provides this possibility.

I just want to know if there is a proper explanation for using ReLU as the default, or whether it is just based on the observation that it performs better on training sets.

",8720,,2444,,1/22/2020 14:47,1/24/2020 12:33,Why do we prefer ReLU over linear activation functions?,,2,2,,,,CC BY-SA 4.0 6470,1,,,5/19/2018 18:20,,1,71,"

Say I have access to several pre-trained CNNs (e.g. AlexNet, VGG, GoogLeNet, ResNet, DenseNet, etc.) which I can use to extract features from an image by saving the activations of some hidden layer in each CNN. Likewise, I can also extract features using conventional hand-crafted techniques, such as: HOG, SIFT, LBP, LTP, Local Phase Quantization, Rotation Invariant Co-occurrence Local Binary Patterns, etc. Thus, I can obtain a very high-dimensional feature vector of an image that concatenates the individual feature vectors output by these individual algorithms. Given these features, and given a data set of images over which I want to perform similar image retrieval (i.e. finding the top-k most similar images to a query image X), what would be the most appropriate ways to implement this task?

One possible idea I have in mind is to learn an image similarity embedding in euclidean space by training a neural network that would receive as input the aforementioned feature vectors, and perhaps down-sampled versions of the image as well, and output a lower dimensional embedding vector that ideally should place similar images close to each other and dissimilar images far apart. And I could train this network using for example Siamese Loss or Triplet Loss. The challenge of this approach though is generating the labels for the (supervised) training itself. For example, in the case of the Triplet Loss I would need to sample triplets (Q,X,Y) and somehow determine which one between X and Y is most similar to Q, in order to generate the label for the triplet (i.e., in order to ""teach"" the network I need to know the answers myself beforehand, but how? I guess this is domain dependent, but think of challenging cases where you have very heterogeneous images, such as photography galleries, artwork galleries, etc).

Anyways, this is just an idea, and by no means do I claim this is the right approach. I'm open to new suggestions and insights about how to solve this task.

",12746,,,,,5/19/2018 18:20,How to combine heterogeneous image features extracted with different algorithms for similar image retrieval?,,0,0,,,,CC BY-SA 4.0 6471,1,,,5/19/2018 20:43,,1,56,"

I'm trying to design a neural network with a task hierarchy. This is my idea so far:

 [Desires]
    |
[Layer 1] [T0]
    |    /
[Layer 2] [T1]
    |    /
[Layer 3] [T2]
    |    /
[Layer 4] [T3]
    |    /
  [Action]   

The way this would work is that each layer represents a task as a binary number. Layer 1 is the main task, layer 2 the sub-task etc. Each task consists of 2 sub-tasks determined by T={0,1}. In this way the neural network represents a binary task graph with T=0 being the left child and T=1 being the right child of a node.

You can think of it as T3 changing every second, T2 changing every 2 seconds, and so on. So {T0 T1 T2 T3} gives the binary time in seconds in a 16-second cycle.

So far this only makes the output a sequence of 16 actions in order. But if some of the layers could be ""if"" gates, they might control the T-values and so act as switches, allowing more complicated programs.

Do you have any suggestions to improve this? Or has this kind of binary task graph representation been done before in a neural network?

Also, importantly, how would you train such a neural network? (At the moment I just assume that the model is pre-trained and am just trying to find a good architecture.)

",4199,,4199,,5/19/2018 20:48,5/19/2018 20:48,How to create a task-graph based neural network?,,0,0,,,,CC BY-SA 4.0 6475,1,,,5/20/2018 19:10,,1,66,"

Given a query image Q and two other images X and Y (you can assume they have more or less the same resolutions if that simplifies the problem), which algorithm would perform extremely well at determining which image between X and Y is most similar to Q, even when the differences are rather subtle? For example, a trivial case would be:

  • Q = image of mountains, X = image of mountains, Y = image of dogs, therefore it is clear that sim(Q,X) > sim(Q,Y).

However, examples of trickier cases would be:

  • Q = image of a yellow car, X = image of a red car, Y = image of a yellow car, therefore sim(Q,Y) > sim(Q,X) (assuming the car shapes are more or less the same).
  • Q = image of a man standing up in the middle with a black background, X = image of another man standing up in the middle with a black background, Y = image of a woman standing up in the middle with a black background, therefore sim(Q,X) > sim(Q,Y).

Which algorithm (or combination of algorithms) would be robust enough to handle even the tricky cases with very high accuracy?

",12746,,12746,,5/20/2018 19:23,5/21/2018 13:51,"Given a query image Q and two other images X and Y, how to determine which one is most similar to Q?",,1,4,0,,,CC BY-SA 4.0 6477,2,,6475,5/21/2018 13:51,,2,,"

From your examples I assume you presuppose image recognition in the sense that you don't compare the actual images, but the descriptions of what the images contain.

For comparing images there are various algorithms working on the visual similarity. This can sometimes lead to interesting results, as you probably have seen images on the internet like ""dog or muffin"". A purely visual approach would find this hard to do.

However, if you do have the description of the image (as there are ways of getting captions from images), then it would just be a text comparison between three sentences: the one describing your query image, and those describing your images X and Y. There are ways of getting at the semantic similarity of sentences. The simplest way (from your examples) would be to look at the overlap in words: with [yellow, car]/Q, [red, car]/X, and [yellow, car]/Y, obviously Y has the largest overlap with Q. This is rather simplistic, but it is what your examples suggest you are dealing with.
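
For illustration, a minimal Python sketch of such a word-overlap (Jaccard) comparison, assuming you already have the three descriptions as lists of tokens:

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)       # shared words relative to all words used

q = ['yellow', 'car']
x = ['red', 'car']
y = ['yellow', 'car']

# the description with the higher overlap is judged more similar to the query
print(jaccard(q, x), jaccard(q, y))      # about 0.33 vs 1.0, so Y wins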

But, getting a proper description from an image is a hard task, of the calibre that Google and Instagram are still working on; they have large amounts of training data and huge resources they can throw at the problem. So, unless that is your starting point, it will not be easy to achieve.

Overall I do not think it is currently possible to solve this problem with high accuracy.

",2193,,,,,5/21/2018 13:51,,,,3,,,,CC BY-SA 4.0 6478,1,12016,,5/21/2018 14:42,,4,122,"

I am trying to track LIDAR objects using a Kalman filter. The problem is that the innovation covariance S becomes 0, which makes the Kalman gain infinite. Here is a link with the Kalman equations. The values with which I initialized the measurement and process covariance matrices are listed below. The update code is also shown below. When I debug the code, everything is fine until S becomes 0.

this->lidar_R << std_laspx_, 0, 0, 0,
    0, std_laspy_, 0, 0,
    0, 0, 0, 0,
    0, 0, 0, 0;

this->lidar_H << 1.0, 0.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0, 0.0,
    0.0, 0.0, 0.0, 0.0, 0.0,
    0.0, 0.0, 0.0, 0.0, 0.0;

P_ << 1000, 0, 0, 0, 0,
    0, 1000, 0, 0, 0,
    0, 0, 1000, 0, 0,
    0, 0, 0, 1000, 0,
    0, 0, 0, 0, 1000;

 MatrixXd PHt = this->P_ * H.transpose();
 //S becomes 0
 MatrixXd S = H * PHt + R;
 //S_inv becomes INFINITY
 MatrixXd S_inv_ = S.inverse();
 MatrixXd K = PHt * S_inv_;

VectorXd y = Z - Hx;

this->x_ = this->x_ + K*y;
MatrixXd I = MatrixXd::Identity(x_.size(), x_.size());
this->P_ = (I - K * H) * this->P_;
",15775,,2193,,5/21/2018 19:35,9/23/2019 8:01,Kalman filter pre inovation,,2,2,,,,CC BY-SA 4.0 6479,1,,,5/21/2018 15:30,,2,243,"

A task I’m working on at the moment requires a CNN with a height map as one of the inputs. This is a matrix of floating point values in which each point is the height of that point above sea level.

I’m having trouble deciding how to normalize this data. I know there are networks that work on depth or distance data but that is different for several reasons:

  • Height can also be negative (as opposed to depth/distance which starts at 0)
  • Height has a very large range - can get values between -400 and +~9000.

For these reasons the common approach to normalisation, simply subtracting the mean and dividing by the standard deviation, will result in the loss of information in most cases (all values will be close to zero).

I thought of maybe subtracting the local mean for each input, rather than a general mean calculated from all the data, but I still don't know what to do with the standard deviation, since dividing by the local standard deviation can result in very “flat” and very “steep” inputs looking the same after normalization.
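
For reference, a minimal NumPy sketch of the per-input (local) mean subtraction described above; the fixed global scale of 1000 is just an arbitrary assumption, and whether to use it instead of the local standard deviation is exactly the open question:

import numpy as np

def normalize_height(h, global_scale=1000.0):
    # h: 2D array of heights (metres above sea level) for one training example
    h = h - h.mean()             # subtract the local (per-example) mean
    return h / global_scale      # fixed global scale, so flat and steep terrain stay distinguishable

print(normalize_height(np.array([[-400.0, 0.0], [2000.0, 9000.0]])))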

",15776,,2193,,5/21/2018 19:35,5/31/2018 16:52,Normalizing height data for CNN,,0,6,,,,CC BY-SA 4.0 6481,1,,,5/21/2018 16:03,,3,150,"

I have a data set with historical information about some events (let's say event A and event B); these events describe the discovery of land mines, the coordinates of the event, and the date of the event. Is there a way I can use this historical information to predict points (coordinates) where event A or B could happen, i.e. where there might still be land mines that haven't been found?

",15764,,15764,,5/22/2018 17:21,5/22/2018 17:21,Is there a way to predict points on a map?,,1,2,,,,CC BY-SA 4.0 6482,2,,6481,5/21/2018 16:31,,1,,"

Leaving aside the time aspect, you could do a cluster analysis on the event coordinates. If you use an algorithm that gives you a medoid (i.e. centre) of the clusters, you can then look at other points, and work out how close they are to the centres of the event clusters. It might be possible from this to predict which event could happen at those coordinates (whichever is the closest cluster medoid), and how likely it is (distance from the medoid).
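
For illustration, a minimal scikit-learn sketch of this idea (k-means centroids are used here as a stand-in for medoids, and the coordinates are random placeholders):

import numpy as np
from sklearn.cluster import KMeans

coords = np.random.rand(200, 2) * 100                     # placeholder for the recorded mine coordinates
km = KMeans(n_clusters=5, random_state=0).fit(coords)

new_point = np.array([[12.3, 45.6]])                      # a candidate location to assess
dists = np.linalg.norm(km.cluster_centers_ - new_point, axis=1)
print(km.predict(new_point), dists.min())                 # nearest cluster and distance to its centre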

This, however, depends very much on the shape of the data. If there is no discernible structure contained in it, then this will not work. But it is definitely worth trying.

",2193,,,,,5/21/2018 16:31,,,,3,,,,CC BY-SA 4.0 6486,1,6487,,5/22/2018 7:57,,4,2064,"

In slide 16 of his lecture 5 of the course ""Reinforcement Learning"", David Silver introduced GLIE Monte-Carlo Control.

But why is it an on-policy control? The sampling follows a policy $\pi$ while improvement follows an $\epsilon$-greedy policy, so isn't it an off-policy control?

",15525,,2444,,2/18/2019 15:10,8/10/2020 9:15,Why is GLIE Monte-Carlo control an on-policy control?,,1,0,,,,CC BY-SA 4.0 6487,2,,6486,5/22/2018 9:44,,4,,"

In this case, $\pi$ has always been an $\epsilon$-greedy policy. In every iteration, this $\pi$ is used to generate ($\epsilon$-greedily) a trajectory from which the new $Q(s, a)$ values are calculated. The last line in the "pseudocode" tells you that the policy $\pi$ will be a new $\epsilon$-greedy policy in the next iteration. Since the policy that is improved and the policy that is sampled are the same, the learning method is considered an on-policy method.

If the last line was $\mu \leftarrow \epsilon\text{-greedy}(Q)$, it would be an off-policy method.

",8448,,8448,,8/10/2020 9:15,8/10/2020 9:15,,,,2,,,,CC BY-SA 4.0 6488,1,,,5/22/2018 9:49,,5,1328,"

I used the example at - https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/5_DataManagement/tensorflow_dataset_api.py - to create my own classification model. I used different data but the basic outline of datasets was used.

It was important for my data type to shuffle the data and then create the training and testing sets. The problem, however, comes as a result of the shuffling.

When I train my model with the shuffled train set I get a +- 80% accuracy for train and +- 70% accuracy for the test set. I then want to input all the data (i.e. the set that made the training and test set) into the model to view the fully predicted output of this data set that I have.

If this data set is shuffled as the training and testing sets were, I get an accuracy of around 77%, which is as expected, but then, if I input the unshuffled data (as I require to view the predictions), I get a 45% accuracy. How is this possible?

I assume it's due to the fact that the model is learning incorrectly and that it learns that the order of the data points plays a role in the prediction of those data points. But this shouldn't be happening as I am simply trying to (like the MNIST example) predict each data point separately. This could be a mini-batch training problem.

In the example mentioned above, using data sets and batches to train, does the model learn from the average of all the data points in the mini-batch, or does it think one mini-batch is one data point and learn in that manner (which would mean the order of the data matters)?

Or if there are any other suggestions.

",15789,,2444,,1/28/2020 1:32,1/28/2020 1:32,Does the model learn from the average of all the data points in the mini-batch?,,3,1,,,,CC BY-SA 4.0 6489,1,6511,,5/22/2018 13:25,,2,665,"

I know how IOU works during detection. However, while preparing targets from ground-truth for training, how is the IOU between a given object and all anchor boxes calculated?

Is the ground truth bounding box aligned with an anchor box such that they share the same center? (width/2, height/2)

I think this is the case but I want to hear from someone who has better knowledge of how training data is prepared for training in YOLO.

",13255,,2444,,1/28/2021 23:23,1/28/2021 23:23,How are IOUs for ground truth boxes in YOLO calculated?,,1,2,,,,CC BY-SA 4.0 6491,1,,,5/22/2018 21:02,,5,145,"

Various texts on using CNNs for object detection in images talk about how their translation invariance is a good thing. Which makes sense for tasks where the object could be anywhere in the image. Let's say detecting a kitten in household images.

But let's say, you already have some information about the likely position of the object of interest in the image. For example, for detecting trees in a dataset of images of landscapes. Here in most cases, the trees are going to be in the bottom half of the image while in some cases they might be at the top (because it's on a hill or whatever). So you want your neural network to learn that information -- that trees are likely connected to the bottom part of the image (ground). Is this possible using the CNN paradigm?

Thank you

",15803,,,,,7/24/2018 16:13,Can translational invariance of CNNs be unwanted if object is likely in certain positions?,,2,0,,,,CC BY-SA 4.0 6492,1,6502,,5/23/2018 3:57,,2,340,"

I was just reading through some convex optimization textbooks to hopefully improve my deep learning understanding and come up with new ideas. Halfway through, I decided to Google a bit! It's obvious that deep learning deals with nonconvex functions.

Here's the question though: If deep learning is non-convex, then why do we apply a convex loss function, such as cross-entropy or least square, to solve a problem under a convex constraint? What am I missing?

",15805,,2444,,12/20/2021 23:42,12/20/2021 23:42,"If Deep Learning is non convex, then why do use a convex loss function?",,1,0,,,,CC BY-SA 4.0 6495,2,,5398,5/23/2018 8:46,,3,,"

Based on this comment in the Issue I created about this question on github, it looks like there is confirmation that at least DeepMind does not use this kind of functionality in their Atari experiments, contrary to what is implied by the comments in the OpenAI baselines code.

",1641,,,,,5/23/2018 8:46,,,,0,,,,CC BY-SA 4.0 6496,2,,3494,5/23/2018 9:25,,9,,"

What attracts me to Python for my analysis work is the ""full-stack"" of tools that are available by virtue of being designed as a general purpose language vs. R as a domain specific language. The actual data analysis is only part of the story, and Python has rich tools and a clean full-featured language to get from the beginning to the end in a single language (use of C/Fortran wrappers notwithstanding).

On the front end, my work commonly starts with getting data from a variety of sources, including databases, files in various formats, or web scraping. Python support for this is good and most database or common data formats have a solid, well-maintained library available for the interface. R seems to share a general richness for data I/O, though for FITS the R package appears not to be under active development (no release of FITSio in 2.5 years?). A lot of the next stage of work typically occurs in the stage of organizing the data and doing pipeline-based processing with a lot of system-level interactions.

On the back end, you need to be able present large data sets in a tangible way, and for me, this commonly means generating web pages. For two projects I wrote significant Django web apps for inspecting the results of large Chandra survey projects. This included a lot of scraping (multiwavelength catalogs) and so forth. These were just used internally for navigating the data set and helping in source catalog generation, but they were invaluable in the overall project.

Moving to the astronomy-specific functionality for analysis, it seems clear that the community is solidly behind Python. This is seen in the depth of available packages and level of development activity, both at an individual and institutional level (http://www.astropython.org/resources). Given this level of infrastructure that is available and in work, I think it makes sense to direct effort to port the most useful R statistical tools for astronomy to Python. This would complement the current capability to call R functions from Python via rpy2.If you are interested, I strongly recommend that you read this article, here it is a question of comparing programming languages https://diceus.com/what-technology-is-b ... nd-java-r/ I hope it helps.Good Luck

",15811,,,,,5/23/2018 9:25,,,,1,,,,CC BY-SA 4.0 6497,1,,,5/23/2018 11:18,,0,113,"

For a regression task, I have sequences of training data and if I define the layers of deep neural network to be:

Layers=[ sequenceInputLayer(featuredimension) reluLayer dropoutLayer(0.05) fullyConnectedLayer(numResponse) regressionLayer]

Is it a valid deep neural network? Or do I need to add an LSTM layer too?

",15649,,1671,,5/23/2018 19:58,5/23/2018 19:58,Is it a valid Deep Neural Network?,,1,0,,,,CC BY-SA 4.0 6499,1,6531,,5/23/2018 14:27,,6,514,"

I want to use computer vision to allow my robot to detect the corners of a soccer field based on its current position. Matlab has a detectHarrisFeatures feature, but I believe it is only for 2D mapping.

The approach that I want to try is to collect the information of the lines (using line detection), store them in a histogram, and then see where the lines intersect based on their angles.

My questions are:

  1. How do I know where the lines intersect?
  2. How do I find the angles of the lines using computer vision?
  3. How do I update this information based on my coordinates?

I am in the beginning stages of this task, so any guidance is much appreciated!

",15821,,22079,,7/29/2020 23:02,7/29/2020 23:02,How to use computer vision to find corners of a soccer field based on location coordinates?,,3,0,,,,CC BY-SA 4.0 6500,2,,5496,5/23/2018 14:44,,1,,"

In the original paper the innovation ID is on the connections only.

The connection is the object that keeps the information; nodes can be inferred from the connections.

This image represents a possible crossover operator that makes a distinction between disjoint and excess genes and therefore creates children depending on these. It's part of my master's thesis; I'd be glad to expand on the topic, but for now I'll just use it as an example.

In the image we assume that the connections that are joining two nodes have the same innovation number.

As you can see, there is no need to assign an innovation number to the nodes: nodes are just a result of what the connections say. This also allows for a more dynamic approach that can be used to spot invalid nets even before building them (checking if there are cycles, or nodes that aren't receiving any input or giving any output) and correct them in order to obtain only valid graphs. Nodes are added just because there is a connection that is pointing to that specific node. This is enough to guarantee their presence (node number 8 in child 2).
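
For illustration, a toy Python sketch of this idea (not the code behind the image): connection genes are keyed by innovation number, and the child's nodes are deduced from whatever connections it inherits. The gene values and the simplified inheritance rule are assumptions for the example.

import random

# each connection gene: (innovation_number, source_node, target_node, weight)
parent1 = {1: (1, 0, 3, 0.5), 2: (2, 1, 3, -0.2), 4: (4, 3, 4, 0.9)}
parent2 = {1: (1, 0, 3, 0.1), 3: (3, 2, 3, 0.7), 4: (4, 3, 4, -0.4)}

child = {}
for innov in set(parent1) | set(parent2):
    if innov in parent1 and innov in parent2:              # matching gene: pick one parent at random
        child[innov] = random.choice([parent1[innov], parent2[innov]])
    else:                                                   # disjoint/excess gene: inherit as-is
        child[innov] = parent1.get(innov, parent2.get(innov))

# nodes need no innovation numbers of their own: they follow from the connections
nodes = {n for (_, src, dst, _) in child.values() for n in (src, dst)}
print(sorted(child), sorted(nodes))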

As a last point, following data normalization theory (normalization is a process of organizing the data in a database to avoid data redundancy, insertion anomalies, update anomalies, and deletion anomalies), we should avoid redundancy at any cost, and this is why we should try to keep track of the smallest number of objects possible. So if we can deduce the nodes from the connections, we should do it.

",15530,,,,,5/23/2018 14:44,,,,2,,,,CC BY-SA 4.0 6501,2,,6497,5/23/2018 16:00,,1,,"

Yes, it is a very common practice to use some RNN when your input data is a sequence. Besides, your network has a shape issue if your input data is 2D. You should at least flatten your input data to a vector to be able to forward propagate to the dense layer, but instead of this, use some kind of RNN. Also, to the best of my knowledge, that dropout value doesn't look right; it seems too low.

",6019,,6019,,5/23/2018 16:07,5/23/2018 16:07,,,,4,,,,CC BY-SA 4.0 6502,2,,6492,5/24/2018 5:43,,6,,"

Well, you are definitely mixing two different things. Here are those bits:

  • The function that deep learning approximates is basically a function that best fits the INPUT DATA points. You should not think about its differentiability or optimization aspects. We don't care what type of function it is; we just want the best fit of the input data (of course, overfitting is not desired though). So, the space in which this function lies has dimensions equal to the dimensions of the input data (or the number of input features). For every function fit we get some loss, which basically is the distance of the actual data point from the one predicted by the fitted function.

  • The loss function is the one which is defined in terms of weights and biases. You can think of the space in which the loss function lies as having dimensions equal to the number of PARAMETERS to tune (weights and biases). So, this space is very different from the one described in the first point. Now, here we do care about the differentiability of the function, because we want a point where the function outputs the least value (minimizing the loss) given some particular values of the inputs (weights and biases in this case), and by differentiating we can traverse to that ""optimal"" point of the function.

Hope this clears your doubt.

",15796,,,,,5/24/2018 5:43,,,,1,,,,CC BY-SA 4.0 6503,1,6506,,5/24/2018 10:26,,0,940,"

How do I decrease the accuracy value when training a model using Keras; which parameters can I change to decrease the value?

My objective is not to actually decrease it, but just to know which parameters influence the accuracy

from keras import optimizers
sgd = optimizers.SGD(lr=1e-2)
",15818,,2193,,5/24/2018 10:41,2/16/2020 17:40,How to decrease accuracy from 99% to 80%~85% using keras for training a model,,2,2,,,,CC BY-SA 4.0 6504,1,6630,,5/24/2018 15:57,,1,633,"

I would like to know how to train an agent to predict the severity of a disease and also to alert patients, using machine learning methods.

I found in some literature that the model-based reflex agent can be used in medical diagnosis.

May I know which architecture would be good for making such an agent?

",15631,,2444,,6/2/2020 23:24,6/2/2020 23:24,How to teach a model-based reflex agent for doing some task using machine learning methods?,,2,0,,,,CC BY-SA 4.0 6505,2,,6377,5/24/2018 17:20,,1,,"

If you want only the answer to your question in particular, you can skip to the last part of the answer. To answer in detail: momentum is a technically incorrect term; I would rather call it inertial learning.

Inertia - Inertia is the resistance of any physical object to any change in its position and state of motion.

First, the weight change in the momentum learning method at a particular iteration is given by an exponentially weighted average of the gradients; in the notation of the linked article it is roughly:

$$V_t = \beta V_{t-1} + (1-\beta) S_t$$

where beta will be the momentum term. If we expand the expression we get something like:

$$V_3 = (1-\beta) S_3 + (1-\beta)\beta S_2 + (1-\beta)\beta^2 S_1$$

Courtesy: Stochastic Gradient Descent with momentum

Here $S_t$ are the gradients or dels for a particular training example. Clearly, this expansion is for a 3-example training set.

Now why do we use momentum? As @Andreas Storvik Strauman has provided a link, you can easily delve into the mathematics of its usage. But to make more intuitive sense, here are a few points to note:

  • The exponentially weighted terms can be thought of as past memory of what you learned. You don't want to forget it completely, so you keep revising it, with the weight given to that revision decreasing over time. The update contribution of an already-iterated training example keeps getting smaller and smaller, whereas it is not present at all in normal gradient descent.
  • The momentum term can be thought of as playing a damping role: it does not allow the new training example to have its way completely. You can visualize this by taking 2 points and a straight line, with the update scheme directly proportional to the distance between the line and the points, and then checking both the normal and the momentum gradient descent methods. Thus gradient descent with momentum behaves like a damped oscillation and so has a higher chance of converging.
  • Inertial learning also helps when you come to a point on your loss curve where the slope is 0. Normal learning will result in very small weight updates at this position, but with inertial learning this position will be easily crossed.

As for your original question of why the momentum term is < 1, here are a few points which most answers have missed:

  • First and foremost, if beta > 1, the weightage for previous training examples will increase exponentially (like 1.01^1000 ≈ 20959 after just 1000 iterations). That might be handled by adjusting the learning rate accordingly, but not only would it require a lot of extra computation, it is almost mathematically impossible.
  • Second, a geometric (exponential) series with common ratio r >= 1 never converges; it just keeps growing. Also, if you draw parallels with continuous functions, this is what we call a function which is not absolutely integrable.
  • Also, as per our previous intuition, why would one want to give high weightage to things learned long ago? It may not even be important if you follow an online learning method (you look at each training example only once, due to the high number of training examples).

All this leads to a single conclusion: if beta >= 1, there will be a large amount of oscillation and the error will keep increasing exponentially (this can probably be proven by rigorous mathematical analysis), although it might work for beta = 1 (due to the Perceptron Convergence Theorem).
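
For illustration, a tiny Python sketch of why beta < 1 matters: the total weight given to past gradients stays bounded for beta < 1 but blows up otherwise (the step count of 1000 matches the example above):

def accumulated_weight(beta, steps):
    # total weight given to past gradients after a number of steps: sum of beta**k
    return sum(beta ** k for k in range(steps))

for beta in (0.9, 1.0, 1.01):
    print(beta, accumulated_weight(beta, 1000))
# 0.9  -> about 10 (bounded by 1/(1-beta))
# 1.0  -> 1000 (grows linearly with the number of steps)
# 1.01 -> about 2.1e6 (explodes, consistent with the 1.01^1000 figure above)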

",,user9947,,user9947,8/28/2018 12:24,8/28/2018 12:24,,,,0,,,,CC BY-SA 4.0 6506,2,,6503,5/24/2018 18:22,,1,,"

There are many things affecting accuracy. I'm gonna assume a lot here because you don't say anything about the model, what you're trying to achieve or how many classes you have. You're not even saying whether you're classifying or not. Also, you're not saying which accuracy you're using (classification, AUC, F1 etc.).

I'm gonna assume here that you have some classification problem.

Accuracy is the measure of how many classifications you got correct. In my experience 99% is a warning sign because it's too good to be true, and a result like that is often due to overfitting. Since this, in my experience, is the main reason you'd actually want the accuracy down, this is what I'm going to assume is your problem.

Overfitting occurs when you train ""too much"" and the model only learns things that are within your training set, and fails on everything else. That is: it generalizes badly.

To prevent this there are a number of things you could do;

1) Data segmentation

The most common is to split your data into three bulks: training (~70% of the data), validation (~20%), and test (~10%). These percentages are indicative and would vary depending on how much data you have, and on the class balance.

The idea is that you train on the training data, then you run the validation set through the network and calculate the accuracy. When this accuracy, call it validation accuracy, is satisfactory, you stop the training and run the test data through it. The latter accuracy (test accuracy) is the one that most papers publish (combined with AUC and F1 score).

Important: When you have split the data into these bulks, you should put away the test set and not use it during training at all. You only use this at the very end to do an extra check that you haven't overfitted.
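
For illustration, a minimal scikit-learn sketch of such a split (the placeholder data and the exact 70/20/10 proportions are just the ones mentioned above):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)   # placeholder data

# carve off 10% as the untouched test set, then split the rest into roughly 70/20
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=2/9, random_state=0)
print(len(X_train), len(X_val), len(X_test))                     # 700, 200, 100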

2) Regularization

There are many types of regularization. Two very popular regularization methods for preventing overfitting are the L2-regularization (see previous link) and the dropout methods.

Without going into detail, these methods prevent the model weights from becoming too large. This is a good thing since the model won't rely too much on one feature, which in turn attenuates overfitting.


I hope you learned something, and the most important lesson is that you should know what you're doing. If not, you could end up with a model that is not behaving like you thought. In the case of overfitting, you'd end up with a model that only works on your training data, which doesn't really do much good.

I really recommend the book by Goodfellow: deeplearningbook.org.

",14612,,14612,,5/31/2018 22:08,5/31/2018 22:08,,,,0,,,,CC BY-SA 4.0 6507,2,,5096,5/24/2018 18:42,,2,,"

Would I be right in saying that this becomes a sort of 'pattern recognition' problem?

Technically, yes. In practice: no.

I think you might be interpreting the term ""pattern recognition"" a bit too literally. Even though Wikipedia defines pattern recognition as ""a branch of machine learning that focuses on the recognition of patterns and regularities in data"", it's not about solving problems that can ""easily"" be deduced by logical reasoning.

E.g. you say that

A '1' in the top right triangle in the matrix represents a convex relationship between two faces and a '1' in the bottom left triangle represents a concave relationship

This is always true. In a typical machine learning situation, you wouldn't (usually) have this prior knowledge. At least not to the extent that it would be tractable to “solve by hand”.

Pattern recognition is conventionally a statistical approach to solving problems when they get too complex to analyze with conventional logical reasoning and simpler regression models. Wikipedia also states (with a source) that pattern recognition is ""in some cases considered to be nearly synonymous with machine learning"".

That being said: you could use pattern recognition on this problem. However, it seems like overkill in this case. Your problem, as far as I can understand, has an actual ""analytical"" solution. That is: you can, by logic, get a 100% correct result all the time. Machine learning algorithms could, in theory, also do this, and in that case this branch of ML is referred to as Meta Modelling[1].

For example, if I supply the network with a number of training models - along with labels which describe the design feature which exists in the model, would the network learn to recognise specific adjacency patterns represented in the matrix which relate to certain design features?

In a word: Probably. Best way to go? Probably not. Why not, you ask?

There is always the possibility that your model doesn't learn exactly what you want. In addition, you have many challenges, like overfitting, that you'd need to concern yourself with. It's a statistical approach, as I said. Even if it classifies all your test data 100% correctly, there is no way (unless you check the insanely intractable maths) to be 100% sure that it will always classify correctly. I further suspect that you're also likely to end up spending more time working on your model than the time it would take to just deduce the logic.

I also disagree with @Bitzel: I would not do a CNN (convolutional neural network) on this. CNNs are used when you want to look at specific parts of the matrix, and the relation and connectedness between the pixels are important — for example on images. Since you only have 1s and 0s, I strongly suspect that a CNN would be vastly overkill. And with all the sparsity (many zeros) you’d end up with a lot of zeros in the convolutions.

I'd actually suggest a plain vanilla (feed forward) neural network, which, despite the sparsity, I think will be able to do this classification pretty easily.

",14612,,14612,,5/31/2018 22:16,5/31/2018 22:16,,,,1,,,,CC BY-SA 4.0 6508,2,,3494,5/24/2018 19:39,,3,,"

That's because Python is a modern, object-oriented scripting language with clean, stylish syntax. In contrast to compiled languages like Java and C++, its scripting nature enables the programmer to test hypotheses very quickly. Furthermore, there are lots of open-source machine learning libraries (including scikit-learn and Keras) that broaden the use of Python in the AI field.

",15861,,,,,5/24/2018 19:39,,,,0,,,,CC BY-SA 4.0 6509,2,,6491,5/25/2018 1:02,,1,,"

I think your assumption about the location of trees in images is quite incorrect. Just do a Google image search for ""landscape"" (if you haven't already) and you will see an almost equal number of images where the trees occupy the top part of the image as images where they lie only in the middle and bottom parts.

Talking about the CNN: it automatically learns (that's the beauty!) the properties of an object that are present in the training images. By properties I mean the object's likely position, location, shape, color, etc. If you visualize a CNN layer's output (mostly the later layers) using class activation maps, you can see what the CNN has learnt and what it is paying attention to. Also, you can visualize the filters (or kernels) that are learned by the CNN.

",15796,,15796,,5/25/2018 1:24,5/25/2018 1:24,,,,2,,,,CC BY-SA 4.0 6510,1,,,5/25/2018 3:00,,2,1841,"

I am looking to detect thin objects, like pens, pencils, and surgical instruments. The bounding box is not important, but I am looking to see if I can train a model to detect both the object as well as its orientation.

Typical object detection networks, like R-CNN, YOLO, and SSD encode the class name and bounding boxes. Instead of bounding boxes, I'm looking to encode only 2 points, one starting $x,y$ point and one ending $x,y$ point. The start point for objects is where one would grip the object. For instance:

  • The pencil eraser(start point) is pointed 50 degrees to the top right.
  • The surgical instrument is 10 degrees from the x-axis and the handle is pointed to the bottom right.
  • Pen tip (endpoint) is pointing vertically upwards.
  • Fork, the start point would be the grip handle part, and the endpoint would be in the middle where the 4 prongs are.

As long as I can encode the start and endpoints, then I can determine the orientation. I would need to define these points during training.

The question is whether there is an existing model (MobileNet/Inception/R-CNN) in which I can encode this information. One potential way I was thinking of was to use YOLO: for the bounding box, the top-left $x,y$ would be the starting point $x,y$ (handle), whereas the bounding box's width and height would be replaced with the endpoint $x,y$ (pencil writing tip, fork prongs).

",15865,,2444,,1/28/2021 23:59,1/12/2022 16:00,How can I detect thin objects (like pens and pencils) without a bounding box but only 2 endpoints and the orientation?,,2,1,,,,CC BY-SA 4.0 6511,2,,6489,5/25/2018 4:11,,2,,"

My assumption was correct: the ground truth bounding box is aligned with an anchor box such that they share the same center.

In other words, only the widths and heights are used to calculate the ground truth IOU.
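
For illustration, a minimal Python sketch of that centre-aligned IOU computation (the box sizes are made-up numbers):

def centered_iou(w1, h1, w2, h2):
    # both boxes share the same centre, so the overlap is just the smaller extent along each axis
    inter = min(w1, w2) * min(h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

print(centered_iou(4.0, 2.0, 3.0, 3.0))   # IOU between a ground-truth box and one anchor, about 0.55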

",13255,,,,,5/25/2018 4:11,,,,0,,,,CC BY-SA 4.0 6515,1,,,5/26/2018 9:15,,1,99,"

Is there a way in the WEKA explorer to manually select the initial centres when using SimpleKMeans clustering?

",15879,,,,,8/1/2018 14:01,WEKA - SimpleKMeans - Manually choose intitial centres,,1,2,,,,CC BY-SA 4.0 6516,2,,5792,5/26/2018 14:18,,4,,"

In GE, the genotype is a linear sequence of codons. By "wrapping" it, you make it a circular sequence that never ends. This allows you to build a bigger tree while having only a few codons. Still, it is possible to find a combination of genotype and grammar that defines an infinitely deep expansion — such combinations are hardly suited for practical purposes.
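
To make the mapping concrete, here is a toy sketch of the standard GE genotype-to-phenotype expansion (rule index = codon modulo the number of productions), without wrapping; the grammar and codon values are made up for illustration:

grammar = {
    '<expr>': [['<expr>', '+', '<expr>'], ['x'], ['1']],
}

def expand(symbol, codons, i=0):
    # terminals are returned as-is; non-terminals consume one codon to pick a production
    if symbol not in grammar:
        return symbol, i
    productions = grammar[symbol]
    choice = productions[codons[i] % len(productions)]
    i += 1
    parts = []
    for s in choice:
        text, i = expand(s, codons, i)   # without wrapping, this fails if codons run out
        parts.append(text)
    return ' '.join(parts), i

phenotype, _ = expand('<expr>', [0, 1, 2])
print(phenotype)   # -> x + 1

With wrapping, the codon index would instead be taken modulo the genotype length, so the same few codons can be reused indefinitely during the expansion.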

I learned about GE recently. I implemented a GP/GE system to solve the Santa Fe Trail problem. I chose not to perform wrapping to make the genotype-to-phenotype mapping more predictable (I always generate enough codons to complete a grammar expansion, and prune unused tails). I also went with a subtree crossover and a subtree-local mutation, which effectively makes the system more of a classical GP (with a fancy genotype-to-phenotype mapping) than GE. So there are some options.

",15881,,2444,,12/27/2020 14:26,12/27/2020 14:26,,,,0,,,,CC BY-SA 4.0 6520,2,,4910,5/27/2018 13:26,,1,,"

The core of the question seems to really be: ""how to approach thinking about this"", where ""this"" is the input of an AI player.

Modern attempts at game playing AI players try to replace a human player ""as is"". No advantage whatsoever. This implies that we want to feed the same ""raw input"" to the software player and to the human player. For a video game, the raw input is usually the screen we see, but it could also include sound, vibration feedback from a game controller, force feedback from driving gears, etc. Basically any sensor input available. For a physical board game like Go, the raw input could be a video feed, as done for AlphaGo. Doing so, we fall back to the video game approach. So the input to the AI player is often a tensor, where each element is the intensity of a screen pixel, or some other sensor signal (note there is nothing here about the shape of the tensor, please see below).

The way of thinking here is actually pretty simple, and neat: We want an AI player to replace a human player. So we list up what the human player can sense, and model each input as a tensor (this includes vectors and matrices).

There is perhaps a subtlety here. We face choices not so obvious to make, and there are usually trials and errors (many errors). For example, the Deep-Q Network from DeepMind to play Atari games takes as input differences between two consecutive video frames. Some other approach could solve the same problem with actual sequences of inputs (e.g. 10 consecutive frames). The two approaches are valid---they just have different tradeoffs we must evaluate to find the best configuration.

Another subtlety is the ""shape"" of the input. Should a single (W, H) input frame be a (W*H) vector, or a (W, H) matrix, or something else? The ramifications can go far. If the frame is an RGB frame with transparency, the input could well be a tensor of shape (W, H, 4), a ""cubic volume"" with 4 ""slices"" of shape (W, H)---one for each RGB channel, and a fourth one for transparency. Imagine now if you also have sound, and can add a fifth ""slice""---and more if you have stereo or Dolby system channels.
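
As a toy numpy illustration of such shape choices (the sizes are arbitrary, not a recommendation):

import numpy as np

rgb = np.zeros((84, 84, 3), dtype=np.uint8)      # one RGB frame
alpha = np.zeros((84, 84, 1), dtype=np.uint8)    # transparency channel
rgba = np.concatenate([rgb, alpha], axis=-1)     # shape (84, 84, 4): four 'slices'

history = np.stack([rgba] * 10, axis=0)          # 10 consecutive frames -> (10, 84, 84, 4)
print(rgba.shape, history.shape)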

Last point: There is no reference to the architecture of the AI player beyond the input here. Although most examples refer to recent architectures based on neural networks, this ""thinking"" also applies to whatever can leverage the input. For example, the Deep Blue system from IBM, which won at chess against Garry Kasparov, is an AI player using exclusively (if my memory's good) clever tree search. Yet its input is a tensor representing the chess board.

",169,,,,,5/27/2018 13:26,,,,0,,,,CC BY-SA 4.0 6521,2,,5916,5/27/2018 14:16,,1,,"

A generic way to start under such circumstances is to try to find an ""oracle"".

Serial-to-parallel converters have existed for quite some time, and some are open source (e.g. PIPS). The idea is to get serial code from step (1), use the ""oracle"" to produce parallel code, and that's it: each conversion makes an entry in the dataset.

Ensuring the quality of the dataset is critical here. A script generating the dataset should ensure that (1) the serial code compiles and runs properly, (2) the parallelized code compiles and runs properly too, (3) the serial and parallel programs produce the same result, (4) some metrics state objectively how the parallel version does against the serial version, and (5) the actual hardware configuration is kept track of.

Point (4) is critical to the quality of such a dataset. Parallel programs are not always faster than a serial version: a 1000-iteration loop dumped over 1000 workers on an 8-core CPU may not do so well compared to 8 workers---we need to check what the converter is doing. And point (5) ensures we know under which conditions the data is valid.

Using several ""oracles"" would be even better, to bring diversity, and to hopefully let the learning algorithm discover the best conversion tradeoffs---perhaps better than what the carefully hand-crafted converters are able to do on their own.

",169,,,,,5/27/2018 14:16,,,,1,,,,CC BY-SA 4.0 6522,2,,6499,5/27/2018 14:40,,2,,"

Finding lines in an image often leads to the Hough line transform. Many libraries implement it, including OpenCV. Getting the lines should answer the subsequent questions (and if it doesn't, please consider having one question per post; some other sites on StackExchange may be better suited than AI.SE).

Alternative approaches based on machine learning may also exist. It may be interesting at this point to look into libraries that do human pose/gait recognition. Such a library might be repurposed to recognize the ""field gait"", assuming the field's outer mark is its ""gait""-equivalent.

",169,,,,,5/27/2018 14:40,,,,0,,,,CC BY-SA 4.0 6524,1,6566,,5/27/2018 15:47,,2,160,"

I have to read a lot of papers, and I thought that I could use an A.I. to read them and summarize them. Finding one that can understand what the papers are talking about seems a lot to ask, though.

I think I can use natural language processing. Is it the right choice?

I'm sorry, but I'm new in A.I. and I don't know much about it.

",4920,,4302,,10/8/2018 11:56,10/8/2018 11:56,How can I build an AI with NLP that reads and understands documents?,,1,4,,,,CC BY-SA 4.0 6526,1,6529,,5/27/2018 16:05,,2,1731,"

I'm currently working on license plate recognition. My system consists of 2 stages: (1) license plate region extraction and (2) license plate region recognition.

I'm doing (1) with a Raspberry Pi 3 Model B. I find license plate candidates first by merging bounding boxes based on their similarity. In this way, I have only 1~7 license plate region proposals, and it takes less than 0.3 seconds.

Now I have to reduce the number of region proposals to around only 1~2, so that I can send these images to the server to do job (2). For license plate extraction, I made my own classifier function in TensorFlow and the code is below. It gets a proposed license plate region as input.

First, I resize all license plate images to [120, 60] and convert them to grayscale. There are 2 classes: 'plate' and 'non_plate'. For the non_plate class, I collected various images that might appear as background. I have 181 images for the 'plate' class and 56 images for 'non_plate' for now. I have trained for about 3000 steps so far and the current loss is 0.53.

When I ran prediction on the test set, I encountered the problem that for some plate images it does not recognize the license plate, even though it is very obviously a license plate image to my eyes. It is okay for me to wrongly recognize a non-plate image as a plate, but it is a problem if it wrongly recognizes a plate as non_plate, because it will then not be sent to the server to be fully recognized.

This happens for about 10 out of 100 test images, and this rate is far worse than I expected. I need help addressing this problem. Are there any improvements that I can make?

(1) Is my training set too small to classify between license plate and non-license-plate? Or is the number of steps too small?

(2) Is my graph structure bad? I needed a small graph structure for my Raspberry Pi to do recognition in less than 1 second. Could you suggest a better structure if it is bad?

(3) Is it bad to resize any proposed image to [120, 60] to be used as input for the graph? I think it loses some information. But isn't this close to the RoI pooling used in Fast R-CNN?

import tensorflow as tf  # TF 1.x; this snippet comes from an Estimator-style model function

inputs=tf.reshape(features[FEATURE_LABEL],[-1,120,60,1],name=""input_node"") #120 x 60 x 1, which is gray

conv1=tf.layers.conv2d(inputs=inputs,
                       filters=3,
                       kernel_size=[3,3],
                       padding='same',
                       activation=tf.nn.leaky_relu
                       )
#conv1 output shape: (batch_size,120,60,3)

pool1=tf.layers.max_pooling2d(inputs=conv1,pool_size=[2,2],strides=2,padding='valid')

#pool1 output shape: (batch_size,60,30,3)

conv2=tf.layers.conv2d(inputs=pool1,filters=6,kernel_size=[1,1],padding='same',activation=tf.nn.leaky_relu)

#conv2 output shape: (batch_size, 60,30,6)

pool2=tf.layers.max_pooling2d(inputs=conv2,pool_size=[2,2],strides=2,padding='valid')

#pool2 output shape: (batch_size, 30,15,6)

conv3=tf.layers.conv2d(inputs=pool2,filters=9,kernel_size=[3,3],padding='same',activation=tf.nn.leaky_relu)

#conv3 output shape: (batch_size, 30,15,9)

pool3=tf.layers.max_pooling2d(inputs=conv3,pool_size=[2,2],strides=2,padding='valid')

#pool3 output shape: (batch_size, 15,7,9)


#dense fully connected layer
pool2_flat=tf.reshape(pool3,[-1,15*7*9]) #flatten pool3 output to feed in dense layer

dense1=tf.layers.dense(inputs=pool2_flat,units=120,activation=tf.nn.relu)

logits=tf.layers.dense(dense1,2) #input for softmax layer

[training non-plate image example] [training plate image example (a region-proposed image)]

",12090,,12090,,5/27/2018 16:50,5/28/2018 0:43,Detecting license plate using tensorflow,,1,0,,,,CC BY-SA 4.0 6529,2,,6526,5/28/2018 0:43,,2,,"

Such a task is getting easier to complete, but there are still difficulties with rotations, skewing, scaling---to name a few issues. Your network has the benefit of simplicity and lightness for the target hardware, but it may suffer under the above conditions.

So 237 images (181+56) may be small for a ""generic"" approach, depending on how representative and diverse the dataset is. Also, the dataset is unbalanced (the 'plate' class has more than three times as many examples as 'non_plate'), which causes bias in learning.

There are several ways to expand a base dataset:

  • Transform images and add them (with the same label, if supervised learning) to the dataset; a minimal sketch follows this list. Many libraries allow you to rotate, skew, scale, or even blur images. Be careful, as the transformations need to be ""reasonable"", and re-saving images over and over creates artifacts that can confuse the machine (e.g. too much JPEG compression on transformed images).
  • Generate synthetic data. Assuming license plates have a known format, it may be easy to generate images with good fidelity to real plates. This is not always possible, but license plates are standardized, so there should be only a handful of patterns (typically plain, light, diplomatic, and military vehicles).
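
For instance, a minimal augmentation sketch using Keras' ImageDataGenerator (the exact ranges below are guesses and should stay small enough that plates remain readable; x_train and y_train are your existing arrays of shape (n, 120, 60, 1) and the matching labels):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=8,        # small rotations only
                             width_shift_range=0.05,
                             height_shift_range=0.05,
                             zoom_range=0.1,
                             shear_range=4,
                             fill_mode='nearest')

augmented = datagen.flow(x_train, y_train, batch_size=32)
x_batch, y_batch = next(augmented)   # each call yields a freshly transformed batch

Each training epoch then sees slightly different versions of the same plates, which effectively enlarges the dataset.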

Aside from the (potential) dataset issue, the graph is fine. However, it may be worth trying different settings; it really depends on how you ended up with the current graph. Removing pooling layers helps keep more information, as does a larger input image. 120x60 looks pretty small, and the comparison to Fast R-CNN's RoI pooling layer looks odd, given that RoI pooling comes after the feature maps. So a larger input image could give better results.

",169,,,,,5/28/2018 0:43,,,,1,,,,CC BY-SA 4.0 6531,2,,6499,5/28/2018 13:11,,1,,"

I assume that you are familiar with homogeneous transformations and the meaning of global and local coordinate frames. If not: the global frame is the fixed frame, a reference frame for your whole problem, such as the starting position of your robot. The local frame should be placed somewhere on your robot, preferably at the middle point of the virtual line (called the "robot base") that connects the two actuating wheels at the back of your robot (given that you follow the differential drive setup). If not, just place the local frame anywhere that makes sense on the robot, such as its geometrical center.

Answering your questions:

How do I know where the lines intersect?

The accepted answer in this is by far the best I have seen around, which I have also used successfully for a project regarding robotic exploration in an unknown maze.

How do I find the angles of the lines using computer vision?

You DON'T need computer vision for that. For every line, pick 2 points expressed in global frame (x1,y1),(x2,y2) and calculate the slope of the line as:

lambda = (y2-y1) / (x2-x1)

Then the angle of the line is atan(lambda), in global frame. Do this for all lines and then subtract the angles of any two lines to find their relative angle (pay attention to the sign).

Alternatively, I would personally use the RANSAC algorithm to de-noise the detected points and give me the line equation based on the consensus of all points. This line equation should already have the slope in it:

y = ax + b

where a is the slope and b is the vertical offset. Then do the aforementioned steps, i.e. atan(a) and subtraction to find the relative angle between two lines.
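
A minimal sketch of those two steps (slope from two points, then the relative angle between two lines), using atan2 so vertical lines don't blow up; the coordinates below are made up:

import math

def line_angle(x1, y1, x2, y2):
    # angle of the line through (x1, y1) and (x2, y2), in the global frame
    return math.atan2(y2 - y1, x2 - x1)

a1 = line_angle(0.0, 0.0, 4.0, 2.0)
a2 = line_angle(1.0, 0.0, 1.5, 3.0)
relative = math.degrees(a2 - a1)   # mind the sign, as noted above
print(relative)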

If you explicitly want to use computer vision, maybe train a neural network on known angles and then classify images of lines to output their angles. This approach will be by far the most painful and I do NOT recommend it at all.

How do I update this information based on my coordinates?

This can be quite tricky. If the line points are detected by your on-board camera, you first of all need to convert them from the camera's coordinate frame (found in the camera's datasheet) to your robot's local frame, as explained above. To do this, you need to calculate the static transformation between the two frames (static because it will never change if the camera is fixed on the robot, you only define it once).

Then you need to convert the detected line equation from your robot's frame to the global frame. In order to do so, you need to keep track of the robot's pose (position + orientation) while it moves in the 2D or 3D space. This is generally a hard task because it needs an excellent implementation of your localization algorithms, such as an Extended Kalman Filter or Particle Filter. The localization algorithm will provide you with the information that you need at any point in time in order to convert the lines/points from your robot frame to the global frame and visualize them.

In short: you need to transform the detected lines / points from camera frame -> robot (local) frame -> global frame. The first transformation is static (calculated once and never changes) whereas the second one is dynamic (changes every time the robot moves or turns). The first one is fairly easy to calculate but the second one can be a real pain.

",15919,,22079,,7/29/2020 23:01,7/29/2020 23:01,,,,0,,,,CC BY-SA 4.0 6533,2,,2940,5/28/2018 13:41,,-1,,"

A very popular choice is Hidden Markov Models (HMMs).

",15919,,,,,5/28/2018 13:41,,,,0,,,,CC BY-SA 4.0 6534,2,,6154,5/28/2018 13:58,,0,,"

This link from Stanford is by far the best resource on gradient checking that I have encountered so far:

http://cs231n.github.io/neural-networks-3/

I am sure it will help you a lot.

Pro tip: make sure that you use the ""centered/central difference"" formula for the derivative calculations, and also use the ""relative error"" (not the absolute error) to compare the two gradients.
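
A minimal numpy sketch of that recipe (central difference plus relative error); here f is any function that maps a parameter array to a scalar loss, and analytic_grad is the gradient produced by your backprop code:

import numpy as np

def numerical_gradient(f, x, h=1e-5):
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        f_plus = f(x)
        x[idx] = old - h
        f_minus = f(x)
        x[idx] = old                               # restore the parameter
        grad[idx] = (f_plus - f_minus) / (2 * h)   # central difference
        it.iternext()
    return grad

def relative_error(analytic_grad, numeric_grad, eps=1e-8):
    num = np.abs(analytic_grad - numeric_grad)
    den = np.maximum(eps, np.abs(analytic_grad) + np.abs(numeric_grad))
    return np.max(num / den)   # roughly 1e-7 or less is usually a pass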

",15919,,,,,5/28/2018 13:58,,,,0,,,,CC BY-SA 4.0 6536,1,,,5/28/2018 21:18,,1,265,"

I am writing an MDP-based agent that is supposed to learn to place bids and asks in a trading environment. The system requests 2 values (mWh of energy and $, both of which can be positive or negative). Every time step the agent has a certain volume that it has to either buy or sell.

I tried setting these two values as action values, giving it 4 individual ones (buy price and amount, sell price and amount).

I used the DDPG and NAF agents from keras-rl, but neither is working for me. I tried a number of reward functions too:

  • direct cash reward: average price of market for required energy vs what the agent achieved
  • shifting balancing price: first emphasize that the broker balances its portfolio (i.e. orders the amount it has to) and later optimize for price per mWh
  • simple core: as a test I ran a reward function that just rewards the agent to be close to the actions [0.5, 0.55]

All three failed again.

  • LR: tried between 0.01 and 0.00001
  • Layers: tried anything between 1 layer with 1 cell and 5 layers with 128 cells
  • Types: I used both Dense and LSTM cells with corresponding input shapes

Symptoms: Generally, it looks like the system is not learning anything, and I am unsure why. How does the reward function have to be structured to incentivize the system to at least move in the correct direction? Especially the reward that told the agent to be close to [0.5, 0.5], by basing the reward simply on the squared difference to this point, should have worked in my eyes.

",11429,,,,,5/28/2018 21:18,Training RL agent on timeseries trading data with Continous Deep Q or NAF,,0,1,,,,CC BY-SA 4.0 6537,2,,6032,5/29/2018 0:23,,1,,"

Training with images makes use of pictures that are very large, like 7360×4912 (36 megapixels). For image recognition at any angle and size, an image is rotated (about 60 times) and resized (depending on the original size, possibly 20 times). This means you end up with a ton of data. In my example here, you would get some 60 × 36 MB × 2 ≈ 4 GB of data for that one image. Although you really only save the data of interest (noise in the light signal), so in the end the AI does not use as much, it would still be a quite large data set... especially when you know that a well-trained image recognition system uses a good 2,000 images.

So 10 to 20 thousand data points isn't much at all. I guess it depends how many of those you have (one per second would be 86,400 a day...).

I would consider using a system like Cassandra, because the database can grow as big as you'd like, if that is one of the problems you're facing. Also, you can save the data in binary, and the number of columns can vary on each row. However, Cassandra expects a key for each row. This is an important point, as it uses that key to distribute the data over multiple computers (for faster access later and distribution of the data set over all computers). It also includes replication ""for free"", so if a computer fails, you can just replace it (no need to restore it).
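
As a rough sketch with the DataStax Python driver (the node address, keyspace, and table names are hypothetical; it assumes a running cluster and an existing 'sensors' keyspace):

from datetime import datetime
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('sensors')

# sensor_id is the partition (row) key Cassandra uses to spread data across nodes
session.execute('CREATE TABLE IF NOT EXISTS readings '
                '(sensor_id text, ts timestamp, value double, '
                'PRIMARY KEY (sensor_id, ts))')

session.execute('INSERT INTO readings (sensor_id, ts, value) VALUES (%s, %s, %s)',
                ('light-1', datetime.utcnow(), 0.42))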

What is not a very good idea with Cassandra is creating many indexes. Those use rows (i.e. one index = one row), and thus the few computers handling that one row will work extra hard on queries on that index... But I would imagine that for AI you would not really need many.

A really large CSV file will be difficult to work with because you'll be responsible for accessing the rows (unless you have to access all the rows each time anyway?). Also, the file format is text and the file size will be limited by your OS (the limit should be really large, but be careful as many I/O calls in higher-level libraries miss the fact that the size could be 64 bits).

If you are looking for a longer-term/more permanent solution, I would definitely look into Cassandra, especially if you're using Java as a language (I also use it with C++ and it works just fine there too). If you're just writing a throwaway app anyway, then CSV is probably much easier than handling a whole Cassandra cluster! That's a learning curve...

",15927,,,,,5/29/2018 0:23,,,,0,,,,CC BY-SA 4.0 6538,1,6544,,5/29/2018 0:27,,2,67,"

I'm looking at writing an AI agent for pattern recognition.

I want to be able to constantly feed new data to the AI to continuously train it as new data may have new patterns.

My problem, though, is that my input feed may break once in a while (the data comes from a remote computer) and thus some of the data will go missing. The other computer sends me real-time data, so when the connection goes down, any new data produced while disconnected is missing as far as the AI agent is concerned. (At this point, I'm not looking at fixing the gaps; although reducing them is ultimately one of my goals, for now I have to assume it's not possible.)

What kind of impact does missing data have on a pattern recognition AI?

",15927,,23503,,11/8/2020 18:00,11/8/2020 18:00,Will training an AI still work if the input data is somewhat sparse?,,1,0,,,,CC BY-SA 4.0 6539,2,,6325,5/29/2018 0:36,,2,,"

Though @pasaba por aqui gave a good answer, I'd agree with @zooby that a graph might be too simplistic. If humans were in an environment where the options were drown or take 5000 unrelated steps to build a boat, we'd never have crossed any seas. I think any graph, if designed by hand, would not be complex enough to call the agent within it a general AI. The world would need enough in-between states that it would no longer be best described as a graph, but at least as a multidimensional space.

I think there are 2 points you'd have to consider: what is ""simple"", and when would you recognise it as ""general AI"". I don't find self-aware AI satisfactory, as we can't measure anything called awareness; we can only see its state and its interaction with the environment.

For 1. I'd pose that the world we live in is actually fairly simple. There are 4 forces of nature, a few conservation laws, and a bunch of particle types that explain most of everything. It's just that there are many of these particles and this has led to a rather complex world. Of course, this is expensive to simulate, but we could take some shortcuts. People 200 years ago wouldn't need all of quantum mechanics to explain the world. If we replaced protons, neutrons and the strong force with the atoms in the periodic table, we'd mostly be fine. Problem is we replaced 3 more general laws with 100 specific instances. For the simulated environment to be complex enough I think this trend must hold. We could replace trillions of particles governed by general laws with thousands of instances that have different properties when interacting with the agent, and I think more importantly, when interacting with each other.

Which brings me to 2. I think we'd only truly be satisfied with the agent expressing general AI when it can purposefully interact with the environment in a way that would baffle us, while clearly benefiting from it (so not accidentally). Now that might be quite difficult or take a very long time, so a more relaxed condition would be to build tools that we'd expect it to build, thus showing mastery of its own environment. For example, evidence of boats have been found somewhere between 100k and 900k years ago, which is about the same time-scale when early humans developed. However although we'd consider ourselves intelligent, I'm not sure we'd consider a boat making agent to have general intelligence as it seems like a fairly simple invention. But I think we'd be satisfied after a few such inventions.

So I think we'd need a Sims-like world that's actually a lot more complicated than the game, with thousands of item types, many instances of each item, and enough degrees of freedom to interact with everything. I also think we need something that looks familiar in order to acknowledge any agent as intelligent. So a 3D, complicated, Minecraft-like world would be the simplest world in which we would recognise the emergence of general intelligence.

",3508,,3508,,5/29/2018 0:53,5/29/2018 0:53,,,,0,,,,CC BY-SA 4.0 6540,1,,,5/29/2018 0:52,,2,252,"

In chapter 8 of ""Reinforcement Learning: An Introduction"" by Sutton and Barto, it is stated that Dyna needs a model to simulate the environment.

But why do we need a model? Why can't we just use the real environment itself? Wouldn't it be more helpful to use the real environment instead of a fake one?

",6851,,2444,,11/24/2018 4:31,11/24/2018 4:31,Why do we need a model of the environment in Dyna?,,2,0,,,,CC BY-SA 4.0 6541,2,,6540,5/29/2018 6:55,,3,,"

Unlike algorithms presented in other chapters of Sutton and Barto, Dyna is a planning algorithm. That means that it makes decisions online, in a real environment, that attempt to be as optimal as possible given some constraints such as current knowledge and time available to compute between time steps. This differs from learning-only online algorithms which typically take a small step towards optimality on each piece of new experience as it happens.

A planning algorithm can only do its job well if it is allowed to ""look ahead"" at the consequences of its behaviour whilst learning online. In fact, this is the definition of planning - to choose an action based on reasoning about consequences of that action.

For an algorithm to look ahead before taking an action, it needs a model of how the environment will respond to that action. That model does not need to be coded up directly - e.g. you don't necessarily need to write a physics engine to predict the real world (although a basic one might be a good prior or pre-training step). Instead it can be a learned model, and typically in e.g. Dyna-Q, that is what you use.
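
For concreteness, here is a minimal tabular Dyna-Q sketch in the spirit of Sutton & Barto (discrete states and actions, and the same action set in every state, are assumed):

import random
from collections import defaultdict

Q = defaultdict(float)
model = {}                           # (s, a) -> (r, s') learned from real experience
alpha, gamma, n_planning = 0.1, 0.95, 10

def dyna_q_update(s, a, r, s_next, actions):
    # (a) direct RL update from the real transition
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    # (b) model learning: remember what the environment did
    model[(s, a)] = (r, s_next)
    # (c) planning: replay simulated transitions drawn from the learned model
    for _ in range(n_planning):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        p_target = pr + gamma * max(Q[(ps_next, b)] for b in actions)
        Q[(ps, pa)] += alpha * (p_target - Q[(ps, pa)])

The planning loop in (c) is exactly where the model earns its keep: the agent keeps improving its value estimates between real time steps without touching the real environment.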

There is a strong relation between Dyna-Q, and regular Q-learning with experience replay. In the most basic forms, they are essentially the same algorithm with a different framing. However, you can take the planning ideas further e.g. focus improvements around the currently experienced state and paths to a goal state in Dyna-Q, perhaps making it closer to MCTS conceptually.

Wouldn't it be more helpful to use real env instead of fake one?

Most real environments do not let you take actions, see the consequences and then rewind in order to re-try. Essentially that is what planning algorithms are making up for - they try to predict consequences. This is important when mistakes made during training have real consequences, for example for a physical robot navigating an environment where there might be a possibility of a fall or collision that damaged something. Whilst online learning algorithms such as SARSA will also help with this in different ways (in SARSA by changing policy to allow for exploratory moves), typically Q-learning will be weaker than Dyna-Q when it comes to learning quickly from mistakes. With the usual caveat: Much still depends on the specific problem and choices of hyperparameters.

",1847,,1847,,5/29/2018 9:28,5/29/2018 9:28,,,,10,,,,CC BY-SA 4.0 6542,2,,6499,5/29/2018 8:09,,1,,"

I suggest you first consider your coordinate systems. There are two.

Field Coordinate Axis

Field boundary corners are in field coordinates (for example): { (-50.0, -35.0, 0), (50.0, -35.0, 0), (-50.0, 35.0, 0), (50.0, 35.0, 0) }, all values in meters.

At any moment in time the camera in the robot: is at (x, y, z) and oriented relative to north by angle theta, measured clockwise when looking from above the field. The value of z may be 2.0 (for example).

Image Coordinate Axis

The coordinate axes of the camera images are (w, h). You have frames in time (perhaps every 33 msec) containing grids in the w-h coordinate axes with 1080 x 960 pixels (for example), providing an index range (<0, 1079>, <0, 959>).

Maintaining Orientation of Short Robots (small Z)

You are correct that the Harris feature detection may not work because z (the distance from the surface of the field to the center of the camera lens) may not be sufficient for that algorithm unless the robot is near a corner. The rectangle of the field boundary is not at all rectangular in the camera's w-h focal plane. For the same reasons, finding lines and then locating their intersections is not the optimal approach either.

Pretend you are the robot. As the robot surveys the field, it can assemble a model of the 360-degree periphery. What it sees is a gradually curved line with upside-down V shapes representing the field corners. Unless the robot is almost on top of one of the corners, all four features that correspond to the corners of the field boundary will only vaguely appear to be corners at all.

Mathematics of Obtuse Corner Detection

Two tangent lines stem from each corner. They intersect at a discontinuity of the line's derivative, dw/dh, the slope in the 2D phase space of the camera frame. The angle found between these two tangent lines will usually be closer to 175 degrees than 90 degrees, yet they are still detectable because the rest of the line has no other such discontinuities of slope. From a Fourier transform perspective, the 360-degree line is actually a periodic waveform primarily composed of the 4th, 12th, 20th, 28th, and 36th harmonics. If you are good with that level of mathematics and you record past frames, you can exploit Fourier series and FFTs for high accuracy in corner detection.

As you develop your theory and your software, you may find that other aspects of play need to be considered. It may be best to think of those aspects now. Fortunately, if another player or official blocks a portion of the field's boundary line, it will create a discontinuity in the line itself, but not the slope of the line in the w-h plane of the camera's image. Your implementation will need to differentiate those two types of differences, which is hardly an insurmountable problem. Discontinuity in a line and discontinuity in its derivative are mathematically distinct naturally.

Redundancy in Feedback Channels

If the robot can sense its location and orientation in other ways and know x, y, z, and theta above with some degree of reliability, the expected location of the obtuse angles and the detected ones can be compared to determine the probability that the robot is properly detecting is orientation.

Questions in This Context

In this context the questions you listed need some reorientation.

How do I know where the lines intersect?

The line has two edges that may lie on the same pixel in many cases, so that is not easy to detect in an image with many other lines. If the line is of a particular color, hue detection can assist in line detection. If the above corroborative data analysis is employed, then misinterpretation of edges can be corrected quickly in real time. Once the lines are found, the detection of dh/dw at any given point on the line can be estimated using linear regression of segments and windowing (looking at short segments one at a time). When an otherwise relatively stable slope quickly shifts 5 or 10 degrees in angle between windows, you have a high probability that you've found a distant field corner. A shift in 70 to 80 degrees combined with a lower h value in the frame is indicative of a corner in close proximity.
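
A small numpy sketch of that windowed-regression idea (the boundary pixel coordinates are assumed to be ordered along the candidate line; the window size and threshold are guesses to tune, and near-vertical segments would need the w/h roles swapped since the fitted slope blows up there):

import numpy as np

def window_slopes(w, h, window=15):
    slopes = []
    for i in range(0, len(w) - window, window):
        # linear regression over one short segment of the candidate boundary
        slope, intercept = np.polyfit(w[i:i + window], h[i:i + window], 1)
        slopes.append(np.degrees(np.arctan(slope)))
    return np.array(slopes)

def corner_candidates(slopes, threshold_deg=5.0):
    # flag windows whose slope jumps more than the threshold relative to the next one
    return np.where(np.abs(np.diff(slopes)) > threshold_deg)[0]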

How do I find the angles of the lines using computer vision?

Edge detection, systematic elimination of candidate edges that are not likely field boundaries, and then linear regression of the best candidates.

How do I update this information based on my coordinates?

Just save them in an appropriate array of x, y, z, theta vectors, indexed by frame number. You will probably want to keep track of what you think your robot's x, y, z, and theta values are and constantly test your assumptions against your most recent inputs. Otherwise, your robot can become disoriented. The more ways you can detect location and orientation, the higher reliability you will have in the overall system. If your vision can detect some feature at each goal that will not change during the game, it may help. Ultimately your x, y, z, and theta are the parameters in a model and the use of gradient descent and auto-correlation and other auto-correction techniques need to be applied to keep your robot's orientation model continuously updated.

Recommend Diving Into the Math First

The 3D trig to work all of the above out in detail is initially daunting but not that far beyond high school trig if the researcher develops some clear diagrams first and then takes the time to resurrect any rusty mathematics skills or hone some new ones.

",4302,,,,,5/29/2018 8:09,,,,0,,,,CC BY-SA 4.0 6543,2,,6510,5/29/2018 9:07,,1,,"

Recent work achieves a similar task: Object recognition together with the bounding box (e.g. YOLO---there are quite a few on Github too). The bounding box is not enough in your case, but it is an interesting pattern: Recognition plus some form of measurement. Such architectures could be good candidate to start with, and repurpose for stick orientation.

The problem could also leverage the current results in gait recognition. In fact, this looks closer to the problem at hand than object recognition. An example is this model based on multiview (many pictures input) recognition, with a demonstration on Github. Gait recognition is also popular these days, and many inspiring papers and OSS implementations are available.

The above presents two approaches your problem could benefit from, as a ""combination"". My gut feeling is that tilt and orientations may be easier than direction (i.e. where is the tip?).


The question calls for training a model. An alternative approach, perhaps to start with and get more insight, could be to go with ""standard"" computer vision algorithms, such as the Hough transform. This transform finds lines in an image. The mathematics are within reach, and it may work well enough for a quick demo. Also, your handle suggests an ""embedded mobile"" engineer, and a simple Hough transform could be cheap on mobile.
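
For example, a minimal OpenCV sketch (the file name and thresholds are placeholders); the probabilistic Hough transform directly returns segment endpoints, which is most of what the question asks for, although it cannot tell which end is the grip:

import cv2
import numpy as np

img = cv2.imread('pencil.jpg')                        # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=80, maxLineGap=10)
if segments is not None:
    x1, y1, x2, y2 = segments[0][0]                   # one candidate segment
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # orientation, not direction
    print((x1, y1), (x2, y2), angle)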

",169,,,,,5/29/2018 9:07,,,,6,,,,CC BY-SA 4.0 6544,2,,6538,5/29/2018 9:29,,2,,"

First, the title mentions ""sparse data"". Recently the expression has taken a clear meaning: The agent input is data with mostly zeros. In the question a different meaning: A ""sparse data stream"", where data flows and vanishes sometimes. I understand the question as: ""Will training an AI still work if the training data stream breaks?""

Note the explicit ""training data stream"": The question suggests the agent has at least 2 inputs: Training data you want to feed ""anytime"", and ""inference data"" sent to the agent for actual recognition.


This question enters (to my eye) the realm of distributed AI and multi-agent systems, and ultimately a common issue in distributed systems.

If we cast your problem to two humans S and L communicating, when S talks to L over a reliable channel, L gets all the information. When the channel breaks, L gets nothing. Does it prevent L from living normally? It merely cuts out whatever is expected out of the conversation from S to L.

Back to your scenario, whenever the data stream is broken (S), the learning agent (L) will just be unable to learn from that data source. The impact on the pattern recognition agent is bounded to what it could have learned from the new data. The agent recognition performance remains constant while the data stream is interrupted.

Now if the learning agent is just learning, and cannot perform recognition without learning, there is an architectural or implementation issue. Continuous learning entails the agent is active (performs actual recognitions) and learns out of what it does.


Update, for clarification:

The performance remains constant is ""true"", but subtle. At time t some metric like precision can be 99% with respect to what the agent has seen so far. Assuming continuous learning is interrupted and new recognition requests come in, the performance has ""two faces"":

  • As long as new recognition requests are ""close"" to what the agent has seen so far, performance is ""constant""---the agent still scores 99%.
  • If the request is quite different, the performance will drop. The size of the drop depends on how different the input is.

A concrete example: The agent is trained to find mushrooms with a dataset where all images are taken in the forest. Assuming learning stops, when an image of mushroom on a concrete crack comes in, the agent will probably do worse. And it would then keep doing worse on such kind of image, as long as it cannot ""refresh"" by learning from this experience.

",169,,169,,5/29/2018 9:56,5/29/2018 9:56,,,,2,,,,CC BY-SA 4.0 6545,1,6547,,5/29/2018 9:43,,2,485,"

This is more of a technical question rather than a practical one.

I've exported a decision tree made with python/scikit learn and would like to know what the ""value"" field of each leaf corresponds to.

",12940,,2444,,6/2/2020 23:21,6/2/2020 23:21,What do the values of the leaves of the decision tree represent?,,1,0,,,,CC BY-SA 4.0 6546,1,,,5/29/2018 10:40,,1,57,"

I am very new to AI, I have a set of 3D human models that I would like to train the algorithm to identify wrist, upper arm, lower arms, etc, and distance between them.

From my understanding, this is a regression problem. But with my very limited knowledge, most tutorial online showing me cat and dog classification problem.

Do you have any clue for me to research next? There are some paper saying to convert the 3D model to image, and use convolutional neural network for training.

p/s: Please don't downvote me, I am too young and too lost in this field.

",15755,,,,,5/30/2018 13:23,"Detecting Keypoint of 3D model, and distance between them",,1,0,,,,CC BY-SA 4.0 6547,2,,6545,5/29/2018 10:47,,4,,"

Decision tree nodes are split based on the number of data samples; these numbers indicate the number of data samples the node was fit to. In your case, samples = 256, which is further split into two nodes of 154 and 102.

",15935,,,user9947,5/29/2018 16:43,5/29/2018 16:43,,,,3,,,,CC BY-SA 4.0 6548,1,6616,,5/29/2018 11:12,,2,81,"

I am trying to build an agent that trades commodities in an exchange setting. What are good ways to map the action output to real-world actions? If the last layer is a tanh activation function, outputs range between [-1, +1]. How do I map these values to real actions? Or should I change the output activation to linear and then directly apply the output as an action?

So let's say the output is tanh activated and it's (-0.4, 5). I could map this to:

  • -0.4 --> sell 40% of my holdings for 5$ per unit
  • -0.4 --> sell 40% for 5$ in total

If it was linear, I could expect larger outputs (e.g. -100, 5). Then the action would be mapped to:

  • sell 100 units for 5$ each
  • sell 100 units for 5$ total

",11429,,,,,6/1/2018 21:37,What are good action outputs for reinforcement learning agents acting in a trading environment?,,1,0,,,,CC BY-SA 4.0 6550,2,,5414,5/29/2018 15:35,,1,,"

If I understand your question correctly, you are asking whether there exist functional datasets for which there are no proven neural-network-based solutions that give substantial accuracy.

There are many such problems for which we have data in abundance. Question answering would be one such thing: you still can't devise a neural network architecture that reads through the entire Principia Mathematica and then completes theorems. Point cloud processing is also a big hurdle for neural networks, considering the highly irregular data structure; even if you voxelize a point cloud, it would be infeasible to train large convolutional networks on it (there is also rapid progress in this direction, i.e. point cloud processing).

Geoffrey Hinton mentioned in an AMA three years ago that we would see neural networks that answer questions about videos within the next five years, but video question answering still seems far away from present technology.

Graph datasets are also one such area where neural network research is still in its infancy (see http://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/).

",15935,,,,,5/29/2018 15:35,,,,0,,,,CC BY-SA 4.0 6551,2,,5861,5/29/2018 15:42,,0,,"

There is no single hard and slow step in training neural networks. The forward pass involves a large number of matrix multiplications, and so does the backward pass. Even though there are highly optimized libraries for matrix multiplication, neural networks act on very high-dimensional (tensor) multiplications in both the forward and backward passes, which makes them difficult to train. However, the backward pass would be even slower, or even intractable, if we did not use backpropagation in the case of large neural networks, since computing derivatives is time-consuming.

Refer to the training results at https://github.com/baidu-research/DeepBench#types-of-operations for exact numbers.

",15935,,,,,5/29/2018 15:42,,,,0,,,,CC BY-SA 4.0 6553,2,,5981,5/29/2018 15:54,,1,,"

It might be hard to implement deep reinforcement learning algorithms, especially considering your previous experience and the computing resources you have. They require almost the same (or even more) GPU power. Deep reinforcement learning algorithms use deep neural networks to learn the optimal policy. Even if you are given appropriate resources, it would be tough to replicate the results of the paper if you are a novice.

",15935,,2444,,2/15/2019 16:21,2/15/2019 16:21,,,,0,,,,CC BY-SA 4.0 6554,2,,3965,5/29/2018 16:00,,0,,"

There is no hard-and-fast rule for feature selection; you have to manually examine the dataset and try different techniques for feature engineering. And there is no rule that you should apply neural networks to this: neural networks are time-consuming to train. Instead, you can experiment with decision-tree-based methods (random forests), since your data is in a tabular structure anyway.

",15935,,,,,5/29/2018 16:00,,,,1,,,,CC BY-SA 4.0 6556,1,,,5/29/2018 17:21,,5,6769,"

What are feature embeddings in the context of convolutional neural networks? Are they related to bottleneck features or feature vectors?

",15945,,2444,,5/3/2020 12:08,12/28/2021 5:51,What is feature embedding in the context of convolutional neural networks?,,2,0,,,,CC BY-SA 4.0 6557,1,6568,,5/29/2018 19:30,,5,773,"

I study AI by myself with the book ""Artificial Intelligence: A Modern Approach"". I've just finished the chapters about the Bayesian network and probabilities, and I found them very interesting. Now, I want to implement different algorithms and test them in different cases and environments.

Is it worth it to spend time on these techniques?

",15949,,2444,,3/13/2020 22:47,3/13/2020 22:47,Are Bayesian networks important to learn in 2018?,,2,0,,,,CC BY-SA 4.0 6558,2,,5814,5/29/2018 20:01,,1,,"

I suggest you go through the R-CNN paper or a tutorial on it. CNNs transform the image into a high-dimensional vector in their last layer. In the case of classification, this vector is sent to a ""softmax"" layer. In the case of bounding box regression, four values (the length and breadth of the box and the coordinates of one of its points) are regressed from this vector. So, if you use a CNN with one regression head, you end up with one bounding box, irrespective of the training set.

",15935,,,,,5/29/2018 20:01,,,,0,,,,CC BY-SA 4.0 6559,2,,4907,5/29/2018 20:09,,1,,"

You need to understand the layer-wise interpretability of neural networks first. Each layer is activated by patterns (""activations"") that are much more complex than those of its previous layers (in terms of texture, etc.). So the deeper the network, the more complex the functions it can compute. Therefore, you can't replace a deeper network with several shallow ones. I suggest you skim through https://arxiv.org/abs/1310.6343; in that paper it is proven that, in general, no t/2-layered network can capture the distribution of a t-layered one.

",15935,,,,,5/29/2018 20:09,,,,0,,,,CC BY-SA 4.0 6562,2,,5979,5/29/2018 20:41,,1,,"

There are many approaches for training CNN on 3d data, but the decision to use a particular architecture is heavily dependant upon the format of your dataset.

If you are using 3d point cloud data, I would suggest you go through PointNet and PointCNN.

But training a CNN on 3d point clouds is very tough.

There is also a way to train CNNs by posing the 3d structure from different viewpoints (Multiview CNNs).

But remember that training CNN on 3d data is really a tough task.

If you plan to use a voxelized input data format, I suggest going through VoxelNet.

Since you are mentioning deconvolution, the most relevant paper I can come across is 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation.

But deconvolution in its own right is a very expensive operation, which acting on 3d data makes it very hard, so I would suggest you check for alternate methods.

",15935,,2444,,6/12/2020 14:09,6/12/2020 14:09,,,,3,,,,CC BY-SA 4.0 6563,2,,3972,5/29/2018 20:49,,0,,"

The number of hidden-layer neurons determines the dimensions of the weight matrices, so once you train a network with a certain number of hidden-layer neurons, the weights get fixed at the point you stop training. Changing the number of hidden-layer neurons makes the previous weights incompatible, due to the change in weight-matrix dimensions. Even if you alter the weight matrix to work with the new network, it is again equivalent to a network with randomly assigned weights. So a fully trained network becomes a new network once you change its hidden-layer dimension and gives bad performance. This has very little to do with the dataset.

",15935,,15935,,5/30/2018 5:44,5/30/2018 5:44,,,,0,,,,CC BY-SA 4.0 6565,2,,6540,5/30/2018 6:28,,0,,"

Sutton's Dyna has been shown to be more effective for many problem spaces than learning systems that work without a model, yet it requires fewer processing cycles than certainty-equivalence methods. It is advanced in that it, in parallel, builds a model and adjusts behavioral policy based on both incoming information and the model. The goal was to integrate both identified capabilities of the human brain.

Why do we need Model in Dyna?

The capacity to model is an essential part of the Dyna architecture and may, over time, prove to be an essential component in achieving greater effectiveness. Many think so, myself included. In other words, for many problem sets there may be no mechanism as effective as building and maintaining a model.

Why can't we just utilize a real environment itself? Wouldn't it be more helpful to use real environment instead of fake one?

The real environment cannot be placed in memory for many reasons. Primarily, it would not fit. Furthermore, not much of it can be acquired. Only images of the environment can be acquired and placed in memory. The most important characteristic of images, whether they be stock prices, temperature readings, or streamed video, is that they are grossly sparse and undifferentiated representations of the environment on which we are attempting to operate.

",4302,,,,,5/30/2018 6:28,,,,0,,,,CC BY-SA 4.0 6566,2,,6524,5/30/2018 6:42,,2,,"

When automating tasks involving text, NLP techniques are definitely the way to go! Let me be frank, though: starting from scratch, ""read and understand"" will be harder with today's tools than directly reading the papers yourself.

Some typical required skills are below, and the bar height will depend on your mileage:

  • Knowledge of NLP (e.g. terminology like embeddings, bag of words, even perhaps tf-idf).
  • Knowledge of NLP libraries and implementation frameworks (e.g. Stanford's or TensorFlow).
  • Knowledge of a programming language (e.g. Python or Java).

In short, the goal you set will probably require reading many more papers than expected. But if you succeed at only the automated reading/summary part, you have a bright future ahead (leaving aside understanding, a far more contentious problem).

Just for the sake of putting an example, companies like Iris.ai and others try to solve related problems---great potential, but definitely challenging.

",169,,,,,5/30/2018 6:42,,,,6,,,,CC BY-SA 4.0 6567,1,,,5/30/2018 6:44,,1,113,"

I am using the Fceux emulator to create a Genetic Algorithm in Lua to play the 'Arkanoid' game. It is based on Atari Breakout.

A member of my population contains a string of 0's and 1's (population size: 200). For a given member, every 10 frames a bit is read from the string (the string length is about 1000). If it is 0 the paddle moves left; if it is 1 the paddle moves right, for the next 10 frames.

Now I wrote a genetic algorithm that tries to find the best sequence of inputs to play the game.

I have experimented with three types of fitness: one is to achieve the maximum score, one is to try to reduce the number of blocks to a minimum, and the last one is to try to stay alive as long as possible.

None of the three fitness functions seem to work.

Then I thought that something with my crossover might be wrong.

Every generation, I print out the average fitness of all members. In some generations it increases, while in others it decreases. I have tried changing the population size to 50, 100, 200, and 300.

Mutation in my algorithm gives a 1% chance (if Mut_rate = 1) that each bit will be replaced with its opposite bit.

Now, coming to the crossover, I have again used many methodologies. One of them is to just select the top 20% or 30% (cr_rate) (according to their fitness) to pass on to the next generation, killing the remaining ones.

Another method is to add the top percentile to the population and use the remaining population to swap a few bits with top ones and add them into the next generation.

function crossover(population,rate)
    local topp=math.floor(rate*(#population));
    top={}
    for i=1,topp do
        table.insert(top,population[i])
    end
for i=1, #population do
        local p1 = math.random(1,topp);
        local p2 = math.random(1,topp);
        --print(top[p1]);
        --print(top[p2]);
        if top[p1][2] == top[p2][2] then
            local rval = math.random(1, 10) > 5;
                if rval then
                    population[i] = top[p1];
                else
                    population[i] = top[p2];
                end
            elseif top[p1][2] > top[p2][2] then
                population[i] = top[p1];
            else
                population[i] = top[p2];
        end
        population[i][2]=0;
end
--[[
for i=topp+1,#population do
    local p1 = math.random(1,topp);
    local p2 = math.random(1,#population);
    local s='';
    local flag=0;
    s=string.sub(top[p1][1],1,no_controls/2)..string.sub(population[p2][1],(no_controls/2)+1,no_controls);
    population[i][1]=s;
    population[i][2]=0;
 end
  --]]
end

Population is the table of population members, where each member has an input string and a fitness value (sorted, max fitness first). Rate is the percentage used to select the top performers. no_controls is the size of the input string. The commented section of the code is where I perform the swap.

Here is the mutation function.

function mutation(population,mut_rate)

    local a=0;
    local b=1;
    for i=1, #population do
        for j=1, #(population[i][1]) do
            if math.random(1, 100) <= mut_rate then
                if string.sub(population[i][1],j,j)=='1' then
                population[i][1] = string.sub(population[i][1],1,j-1)..a..string.sub(population[i][1],j+1);
            else
                population[i][1] = string.sub(population[i][1],1,j-1)..b..string.sub(population[i][1],j+1);
            end
            end
        end
    end
end

Mut_rate is 1. And crossover rate is 0.2 or 0.5.

I have tried changing the mutation rate from 0 to 20. I have also tried changing the crossover rate to 0.2, 0.5, and 0.7, and the fitness using no_blocks, score, and time_alive. When I run the algorithm, the average fitness of the population first increases slightly, then decreases after a few generations, and then remains constant forever.

The paddle also seems to be performing the same moves over and over again, which made me think that there might not be enough variation.

I need help, because I have been stuck on this for a few days now. I need suggestions on what would be a suitable crossover and mutation function and a perfect fitness function.

Thanks.

",15944,,,,,5/30/2018 6:44,Genetic Algorithm to Play Arkanoid(Nes) Possible Crossover and Fitness?,,0,5,,,,CC BY-SA 4.0 6568,2,,6557,5/30/2018 7:21,,4,,"

""AI, A Modern Approach"" was given that title to break from previously narrow approaches to duplicating desirable qualities of human thinking.

Although Bayesian networks require somewhat resource-intensive computational elements, Bayesian inference and probability are still of paramount importance, in that some of the highest levels of scientific thinking require mastery of them. Furthermore, the machinery to perform elementary probability computations in massively parallel architectures may be developed in silicon dies (or possibly graphene nanites) over the next few decades. The use of video DSP circuits to implement ANNs is a notable segue into this kind of development.

I would not dismiss the techniques you just read about. If your intention is to capitalize on the recent crazes, you may enter the river of wannabees chasing every current trend, implement many systems that other people will rewrite later, and have a meaningless yet profitable career. I would recommend following your inquisitiveness instead.

",4302,,,,,5/30/2018 7:21,,,,3,,,,CC BY-SA 4.0 6570,2,,3933,5/30/2018 8:49,,3,,"

It is somewhat risky to discuss data independently of your learning mechanism. There is actually no such thing as good data or a good learner. There is only data that is good WITH a particular learner. That is even true of human intelligence, after all the standardized education and testing done today.

There are also exceptional learners that find data to be good when most others fumble with it.

If by good data and deep learning you mean image sets that will lead to proper categorization of previously unseen images presented in production, your intuitive understanding of statistics can provide you with a general answer. The images on which the deep learner develops its activation weights and meta-parameters to provide adequate production behavior must be representative of the range of images that will be found in the production feeds.

If you intended to do a study of men and women to determine if the old belief that women are more motivated by the prospect of love and men are more motivated by the prospect of sex, you wouldn't pick 43 men and 40,000 women for the study. The study's value is limited by the lower of the two numbers.

You can train the network with the category frequencies you have, but some deep learners may capitalize fully on feature extraction for Indian tigers and hyenas yet exhibit an unacceptable level of mis-categorization of zebras and giraffes.

Returning to the concept above, the skew in category frequency can be accounted for by the deep learner. It is theoretically possible to create an exceptional learner or one that is well attuned to this kind of frequency skew. A simple approach is to develop a scheme that recognizes frequency skew and allocates additional computing resources to the training that focuses on the differentiation of similar animals with infrequent labeled training instances.
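
One common, simple alternative to allocating extra computing resources is class weighting at training time. A hedged sketch with scikit-learn (the label counts are made up); the resulting dict can be passed to Keras via model.fit(..., class_weight=...):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# hypothetical skew: 500 zebra images vs 40 giraffe images
labels = np.array([0] * 500 + [1] * 40)
weights = compute_class_weight('balanced', classes=np.unique(labels), y=labels)
print(dict(enumerate(weights)))   # the rare class receives a proportionally larger weight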

I don't recall who has done that, but I know it has been done.

There are several ways you can give extra attention to the infrequent categories manually in the code, but then it would be a less general solution and the resulting program would neither be an exceptional learner nor particularly reusable.

It is more cost effective to hunt for a skew resistant deep learning scheme and test its accuracy for infrequent animals than sending a photographer to Africa. If you can find more images of the less frequent animals without a monumental effort, I would do that too.

",4302,,,,,5/30/2018 8:49,,,,0,,,,CC BY-SA 4.0 6571,1,,,5/30/2018 9:12,,3,229,"

I'm having trouble grasping how to output word embeddings from an LSTM model. I'm seeing many examples using a softmax activation function on the output, but for that I would need to output one-hot vectors as long as the vocabulary (which is too large). So, should I use a linear activation function on the output to get the word embeddings directly (and then find the closest word), or is there something I'm missing here?

",5558,,2444,,4/16/2019 22:42,4/16/2019 22:42,How should the output layer of an LSTM be when the output are word embeddings?,,2,0,0,,,CC BY-SA 4.0 6572,2,,1859,5/30/2018 10:30,,1,,"

Is anybody still using Conceptual Dependency Theory?

Yes. Many people. Conceptual dependencies are central to the conveyance of ideas in natural language.

Here are just a few publications in this century building off of Schank's work or travelling in parallel with his direction in related fields.

I met Roger Schank in Hartford, in 1992, during a lecture series sponsored by the AI labs of United Technologies Research Center and a few other Fortune 500 companies in the region. His entire lecture was a series of stories in AI research. I remember every story 26 years later.

The toy NLP implementations you see in the field today pale in comparison with the story based reasoning and memory systems proposed by Dr. Schank as a probable explanation of observations that can be made about human vocal communications.

It is easy to guess the reason he moved into education. His natural language and artificial intelligence ideas were about a century early and over the heads of most of the people that were at the lecture alongside me.

If you and I find his story-based reasoning and memory proposals compelling, we are probably a century too early and a bit over the heads of most in the present day NLP field. Most of those in labs in the 1980s found Schank irritating, and people who fit comfortably into today's technology culture find him irrelevant.

Some of those I interacted with on a project from the University of Michigan in Ann Arbor don't find his work irrelevant though, and their work is in the directions he indicated. Unfortunately the client NDA restricts me from commenting further about that project.

The reason we should not and ultimately will not abandon the idea that we communicate in stories is that it is correct. When a person says, ""It makes me want to puke,"" or, ""I love you too,"" the direct parse of those sentences using ""modern"" techniques is not closely related to a correct reconstruction of the idea in the mind of the speaker. Both sentences reference a conceptual heap of interdependence that we call a story.

If two ""party girls"" are in the ladies room at a Borgore concert and one says, ""Hand me a roll,"" the interpretation of the word, ""roll,"" is conceptually dependent. If the speaker is in a stall it means one thing. If at the sink it means another.

There will always be some segment of the research community that understands this. Those that do not may construct money-saving automatons that will answer your business's phone calls, but they will not give you a heads up on a customer relations pattern that points to a policy issue.

These toy NLP agents, until they develop the capabilities Dr. Schank proposed, will not recognize from phone conversations with clients that a product or service enhancement is an opportunity waiting to be exploited, and they won't tell you a story that will convince you that you would benefit from being the first to capitalize on the opportunity.

",4302,,4302,,5/30/2018 10:46,5/30/2018 10:46,,,,0,,,,CC BY-SA 4.0 6573,1,,,5/30/2018 10:50,,3,859,"

There seems to be a major difference in how the terminal reward is received/handled in self-play RL vs "normal" RL, which confuses me.

I implemented TicTacToe the normal way, where a single agent plays against an environment that manages the state and also replies with a new move. In this scenario, the agent receives a final reward of $+1$, $0$ and $-1$ for a win, draw, and loss, respectively.

Next, I implemented TicTacToe in a self-play mode, where two agents perform moves one after the other, and the environment only manages the state and gives back the reward. In this scenario, an agent can only receive a final reward of $+1$ or $0$, because, after his own move, he will never be in a terminal state in which he lost (only agent 2 could terminate the game in such a way). That means:

  1. In self-play, episodes end in such a way that only one of the players sees the terminal state and terminal reward.

  2. Because of point one, an agent cannot learn whether he made a bad move that enabled his opponent to win the episode, simply because he never receives a negative reward.

This seems very weird to me. What am I doing wrong? Or if I'm not wrong, how do I handle this problem?

",15958,,2444,,10/31/2020 15:21,10/31/2020 15:21,How can both agents know the terminal reward in self-play reinforcement learning?,,2,0,,,,CC BY-SA 4.0 6574,2,,6573,5/30/2018 12:32,,3,,"

When one agent makes a move, that move should be perceived as part of the ""state transition"" executed ""by the environment"" from the perspective of the other agent.

For example, suppose that, as a ""neutral third party"" we view the game as follows, as a sequence of states, actions and a terminal reward. I use A to denote actions selected by the first player, and B to denote actions selected by the second player:

S1 -> A1 -> S2 -> B1 -> S3 -> A2 -> S4 -> B2 -> S5 -> A3 -> Terminal Reward

Then, the first player should only get the following observations:

S1 -> A1 -> S3 -> A2 -> S5 -> A3 -> Terminal Reward

Note how states S2 and S4 are skipped entirely: they are not really states from the perspective of the first player; they're just halfway through the transition caused by the first player's action and are not interesting for the first player.

Similarly, the second player should only get the following observations:

S2 -> B1 -> S4 -> B2 -> Terminal Reward
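
As a minimal sketch of how this might be implemented (the data structures and the zero-sum reward convention are assumptions, not taken from any particular framework), a self-play loop can record the full move sequence once and then emit each player's own view of the transitions afterwards:

    def per_player_transitions(trajectory, final_reward):
        # trajectory: list of (state, action, player) in the order the moves were played.
        # final_reward: terminal reward from player 0's perspective (+1 win, 0 draw, -1 loss).
        transitions = {0: [], 1: []}
        last = {}  # last (state, action) seen by each player, awaiting its next state

        for state, action, player in trajectory:
            if player in last:
                prev_state, prev_action = last[player]
                # Intermediate transition: no reward yet; the next state is the position
                # the same player sees when it is their turn again.
                transitions[player].append((prev_state, prev_action, state, 0.0))
            last[player] = (state, action)

        # Terminal transitions: both players receive the terminal reward,
        # signed according to whose perspective it is.
        for player, (state, action) in last.items():
            reward = final_reward if player == 0 else -final_reward
            transitions[player].append((state, action, None, reward))

        return transitions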

",1641,,,,,5/30/2018 12:32,,,,0,,,,CC BY-SA 4.0 6575,2,,6571,5/30/2018 12:46,,0,,"

In the research papers, it is not always clear how they do that. From what I understood, you need to add a dense layer after your RNN layer. This dense layer has the size of your vocabulary. From my experience, this works even for a large vocabulary (30,000 - 40,000 words for me) if you have enough data. Here you don't try to reconstruct the embedding, but a one-hot vector of the current word. You can then use a cross-entropy loss. This last layer will have a lot of parameters.
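
A minimal sketch of this setup (the vocabulary size and layer widths below are placeholders, not taken from any particular paper) could look like this in Keras:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    vocab_size = 30000   # hypothetical vocabulary size
    embed_dim = 128

    model = Sequential([
        Embedding(vocab_size, embed_dim),
        LSTM(256),
        # The final dense layer projects onto the vocabulary; softmax gives a
        # distribution over the next word instead of an embedding vector.
        Dense(vocab_size, activation='softmax'),
    ])

    # Targets are word indices, so sparse categorical cross-entropy avoids
    # materialising full one-hot target vectors.
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')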

You will see several implementations that use the MSE loss directly on the word embedding output. Personally, I didn't succeed with this approach, but it would be great if other people could share their experiences.

",15961,,,,,5/30/2018 12:46,,,,0,,,,CC BY-SA 4.0 6576,2,,6546,5/30/2018 13:23,,1,,"

It depends on the format of your 3D model dataset. If your dataset is made of CAD models, you could voxelize it and train a convolutional neural network on it, but training 3D convnets can be very time-consuming. Instead, you could use a multi-view 2D CNN: https://arxiv.org/abs/1505.00880?context=cs

",15935,,,,,5/30/2018 13:23,,,,0,,,,CC BY-SA 4.0 6577,2,,6478,5/30/2018 15:34,,0,,"

Check to see if the determinant of S is zero before you do the inverse. If that is the case, use the pseudo-inverse.
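
For example (a small sketch assuming NumPy and that S is the matrix being inverted):

    import numpy as np

    def safe_inverse(S, tol=1e-12):
        # Fall back to the Moore-Penrose pseudo-inverse when S is (numerically) singular.
        if abs(np.linalg.det(S)) < tol:
            return np.linalg.pinv(S)
        return np.linalg.inv(S)

    S = np.array([[1.0, 2.0],
                  [2.0, 4.0]])  # singular: the second row is twice the first
    print(safe_inverse(S))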

",15964,,15964,,5/30/2018 20:05,5/30/2018 20:05,,,,0,,,,CC BY-SA 4.0 6578,2,,6571,5/30/2018 18:06,,1,,"

Actually, an LSTM is not used to get word2vec. Indeed, word2vec is extracted from a corpus of words using an MLP (Multi-Layer Perceptron). There is a great tutorial on how to extract word2vec: http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/

After representing words as vectors, you feed your text to an LSTM in a deep architecture in which the last layer must be a softmax to categorize your text.

",15861,,,,,5/30/2018 18:06,,,,0,,,,CC BY-SA 4.0 6579,1,,,5/30/2018 19:09,,8,777,"

I've been reading Google's DeepMind Atari paper and I'm trying to understand the concept of ""experience replay"". Experience replay comes up in a lot of other reinforcement learning papers (particularly, the AlphaGo paper), so I want to understand how it works. Below are some excerpts.

First, we used a biologically inspired mechanism termed experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution.

The paper then elaborates as follows (I've taken a screenshot, since there are a lot of mathematical symbols that are difficult to reproduce):

What is experience replay and what are its benefits in laymen's terms?

",15967,,2444,,4/4/2019 16:09,11/1/2020 15:16,What is experience replay in laymen's terms?,,2,1,,,,CC BY-SA 4.0 6581,1,6586,,5/30/2018 19:56,,3,370,"

I've been reading Google's DeepMind Atari paper and I'm trying to understand how to implement experience replay.

Do we update the parameters $\theta$ of function $Q$ once for all the samples of the minibatch, or do we do that for each sample of the minibatch separately?

According to the following code from this paper, it performs the gradient descent on the loss term for the $j$-th sample. However, I have seen other papers (referring to this paper) that say that we first calculate the sum of the loss terms for all samples of the minibatch and then perform the gradient descent on this sum of losses.

",15967,,2444,,1/2/2022 10:00,1/2/2022 10:03,"When using experience replay, do we update the parameters for all samples of the mini-batch or for each sample in the mini-batch separately?",,1,0,,,,CC BY-SA 4.0 6582,2,,81,5/30/2018 22:19,,0,,"

Statistical AI is widely used in finance for asset management (particularly hedge funds) and trade execution looking at high-speed small data sets, lots of HMMs and SSMs, but nobody talks about it because it provides proprietary riches.

",15970,,2444,,6/27/2019 23:46,6/27/2019 23:46,,,,0,,,,CC BY-SA 4.0 6584,1,,,5/31/2018 11:16,,1,313,"

How will we recognize a conscious machine (or AI)? Is there any consciousness test? For example, if a machine is aware of its previous experiences, can it be considered conscious?

",15978,,2444,,11/11/2019 21:34,11/11/2019 21:49,How will we recognize a conscious machine?,,4,1,0,,,CC BY-SA 4.0 6586,2,,6581,5/31/2018 11:57,,5,,"

Gradient descent should be performed using the sum (or average) of the losses in the minibatch.

This is in fact also how I read the pseudocode in your question, though I understand it can be confusing. Note that, in the pseudocode, $j$ is not specified in detail. They do not, for example, have $j$ ranging from $0$ to the size of the minibatch.

When they say:

Sample random minibatch of transitions $\left(\phi_{j}, a_{j}, r_{j}, \phi_{j+1}\right)$ from $D$

they mean multiple transitions in the minibatch (with a minibatch size of $1$ being a special case), and they use the index $j$ to collectively refer to the entire set of indices in that randomly sampled minibatch. It's not one particular number / index, $j$ is a set of indices. When further lines of code do something with index $j$, they actually do something with all indices $j$.
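
As a small illustrative sketch (assuming a PyTorch-style Q-network q_net, a frozen target network target_net, and a minibatch of tensors; these names are placeholders, not from the paper), one gradient step aggregates the loss over the whole minibatch:

    import torch
    import torch.nn.functional as F

    def dqn_minibatch_step(q_net, target_net, optimizer, batch, gamma=0.99):
        # One gradient step on the mean squared TD error over the whole minibatch.
        states, actions, rewards, next_states, dones = batch  # tensors, first dim = batch size

        # Q(s_j, a_j) for every transition j in the minibatch
        q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

        # y_j = r_j + gamma * max_a' Q_target(s_{j+1}, a'), with no bootstrap on terminal states
        with torch.no_grad():
            next_q = target_net(next_states).max(dim=1).values
            targets = rewards + gamma * (1.0 - dones) * next_q

        # Single loss aggregated over all samples j, then one parameter update
        loss = F.mse_loss(q_values, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()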

",1641,,2444,,1/2/2022 10:03,1/2/2022 10:03,,,,1,,,,CC BY-SA 4.0 6588,2,,6504,5/31/2018 16:46,,1,,"

Your question is quite broad: each disease has its own characteristics, and sometimes it takes a domain expert or a pathologist to predict the severity of a certain disease. You can't predict the severity of all diseases with one algorithm (or method). But in some cases you can use machine-learning methods to get assistance. I suggest you go through the ""grand-challenge.org"" competitions and read the write-ups of the teams that participated in them to get a basic idea.

",15935,,,,,5/31/2018 16:46,,,,0,,,,CC BY-SA 4.0 6591,2,,6464,5/31/2018 17:10,,1,,"

Your question is very similar to the ""turing-test"". You could narrate a simple story and ask questions based on it, considering that the state-of-the-art algorithms in ""question-answering"" are still far behind human skills.

",15935,,,,,5/31/2018 17:10,,,,0,,,,CC BY-SA 4.0 6592,2,,6459,5/31/2018 17:16,,0,,"

Recurrent neural networks act on a sequence of inputs, which does not need to be a time sequence: for example, consider a sequence of characters, like a passage or a book. Once trained on a sequence of inputs, you could predict the previous and next values of an input vector at a certain time step.

",15935,,,,,5/31/2018 17:16,,,,0,,,,CC BY-SA 4.0 6593,2,,6461,5/31/2018 17:24,,0,,"

Recurrent neural networks can be trained on character level data to generate sentences which are very similar to human language. Go through this link. You could experiment with them for compressing text.

",15935,,,user9947,7/31/2018 1:25,7/31/2018 1:25,,,,0,,,,CC BY-SA 4.0 6594,2,,6573,5/31/2018 20:25,,3,,"

If you are running self-play in a two player zero sum game, then you can do the following:

  • Arbitrarily decide the reward scheme for winning, drawing, losing is +1, 0, -1 for Player A.

  • Have Player A's goal to maximise reward, and Player B's goal to minimise reward.

This means you can combine both players' view of the values of positions and plays into a single metric, which can be learned and/or searched depending on your algorithm. When searching, you can use MCTS and/or minimax algorithms. When using Q-learning, the only tweak to apply is instead of picking the maximising action, player B will want to pick the minimising action (so will use the min and argmin functions where player A would use max and argmax) - remember when calculating TD error that you are evaluating a position for one player, but will be using reward + max/min of next player's move.
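
As a rough sketch of that tweak (a tabular example under assumed state/action encodings, not from any particular implementation), the only asymmetry between the two players is the min vs. max used in the bootstrap term:

    from collections import defaultdict

    Q = defaultdict(float)   # Q[(state, action)], always valued from player A's perspective
    ALPHA, GAMMA = 0.1, 1.0

    def best_value(state, actions, player):
        # Player A bootstraps with max, player B with min.
        values = [Q[(state, a)] for a in actions]
        if not values:
            return 0.0
        return max(values) if player == 'A' else min(values)

    def update(state, action, reward, next_state, next_actions, next_player):
        # The TD target uses the *next* player's optimal (max or min) action value.
        target = reward + GAMMA * best_value(next_state, next_actions, next_player)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])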

",1847,,1847,,5/31/2018 20:43,5/31/2018 20:43,,,,4,,,,CC BY-SA 4.0 6595,2,,6584,5/31/2018 20:41,,2,,"

I think general artificial intelligence will only be possible with some form of self-awareness included. Many aspects of human communication do not work if one of the communicating partners does not have self-awareness. Good examples are many of today's chatbots. They seem to not even hear what they say and only rarely seem to have episodic memory.

Advances in machine to human communication and collaboration will eventually create systems with an ever increasing complex inner model that allows the system to interact in a natural way with humans and to fulfill tasks which require human level intelligence and flexibility. However, unless we have developed a very advanced understanding of consciousness it will be hard to judge how similar or different such a machine consciousness is compared to human consciousness.

",14910,,1671,,6/1/2018 21:02,6/1/2018 21:02,,,,1,,,,CC BY-SA 4.0 6596,1,,,5/31/2018 22:00,,2,147,"

I am looking into building a kind of troubleshooting web application. It would be a form that starts with a first question. Depending on the answer, you get a follow-up question, and so on, until the app has qualified your problem into a small group of problems. To me it sounds a bit like a decision tree, but what I have read about them is that they are the internal structure of a model, which is not what I am looking for. My guess is that a model needs all the input variables at once, rather than what I am looking for, which feeds it one parameter at a time.

At this time I do not know of any data available. With the client we could create the desired resulting problem groups and the questions as well.

Would it be possible to solve this with the help of AI instead of hand coding a lot of case switch statements? If so could you point me to what to read up on?

",15895,,,,,6/1/2018 21:03,Can this problem be solved by AI?,,0,1,,,,CC BY-SA 4.0 6599,2,,6468,6/1/2018 10:17,,5,,"

The ReLu is a non-linear activation function. Check out this question for the intuition behind using ReLu's (also check out the comments). There is a very simple reason of why we do not use a linear activation function.

Say you have a feature vector $x_0$ and weight vector $W_1$. Passing through a layer in a Neural Net will give the output as

$W_1^T * x_0 = x_1$

(dot product of weights and input vector). Now, passing the output through the next layer will give you

$W_2^T * x_1 = x_2$

So expanding this we get

$x_2 = W_2^T * x_1 = W_2^T * W_1^T * x_0 = W_{compact}^T * x_0$

Thus as you can see there is a linear relationship between input and output, and the function we want to model is generally non-linear, and so we cannot model it.
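
You can verify this collapse numerically (a tiny sketch with random matrices; the shapes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x0 = rng.normal(size=(4,))        # input vector
    W1 = rng.normal(size=(4, 5))      # weights of layer 1
    W2 = rng.normal(size=(5, 3))      # weights of layer 2

    # Two linear layers applied in sequence
    x1 = W1.T @ x0
    x2 = W2.T @ x1

    # A single equivalent layer: W_compact = W1 @ W2
    W_compact = W1 @ W2
    print(np.allclose(x2, W_compact.T @ x0))  # True: the two layers collapse into one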

You can check out my answer here on non-linear activation.

Parametric ReLu has a few advantages over the normal ReLu. Here is a great answer by @NeilSlater on the same. It is basically telling us that if we use ReLu's we may end up with a lot of redundant or dead nodes in a Neural Net (those which have a negative pre-activation), which do not contribute to the result and have a zero gradient. Thus, to approximate a function we will require a larger NN, whereas parametric ReLu's absolve us of this problem (thus a comparatively smaller NN), as negative-output nodes do not die.

NOTE: alpha = 1 is a special case of parametric ReLu in which the activation becomes fully linear (the identity). There must be a balance between the amount of liveliness you want in the negative region and how close the activation function gets to being linear.
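
For reference, a parametric ReLu can be written in a couple of lines (a sketch; in practice alpha is a learned per-channel parameter rather than a fixed constant):

    import numpy as np

    def parametric_relu(x, alpha=0.25):
        # alpha = 0 gives the ordinary ReLu; alpha = 1 gives a fully linear (identity) map.
        return np.where(x >= 0, x, alpha * x)

    print(parametric_relu(np.array([-2.0, -0.5, 0.0, 1.5])))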

",,user9947,16929,,8/23/2018 15:07,8/23/2018 15:07,,,,0,,,,CC BY-SA 4.0 6606,2,,2634,6/1/2018 16:40,,1,,"

""semantic network"" is way of representing ""semantic"" relations in form of a ""graph"" . where as ""lexical semantic network"" is a type of semantic network which represents the relations between words , sub-words or some-other linguistic related terms. so in other words , lexical semantic networks are a type of semantic networks dealing with language relationships.

",15935,,,,,6/1/2018 16:40,,,,1,,,,CC BY-SA 4.0 6608,1,,,6/1/2018 17:33,,5,121,"

I found several methods for setting the compatibility distance in NEAT: some normalize it, some don't, some automatically adjust it.

In a few tests I am running, using a normalized static compatibility distance, the number of species increases very rapidly, thus suggesting to adjust (e.g. increase) the compatibility distance.

I haven't found, however, how to determine a reasonable number of species for my population, what the benefits are of having many/few species, and what the benefits are of having a stable vs. a changing number of species.

",13087,,,,,6/1/2018 17:33,Speciation in NEAT - Advantages of keeping stable number of species,,0,0,,,,CC BY-SA 4.0 6610,2,,6584,6/1/2018 20:50,,2,,"

""Consciousness"" does not have a universal definition. However, if you are really into ""consciousness"", you should probably read about Searle's Chinese Room experiment or Marvin Minsky's society of mind.

In my opinion, there are many more fundamental obstacles in current AI research that we have to tackle first.

Furthermore, a more formal question would be about how an artificial general intelligence (AGI) would emerge. Even for that, there is no clear roadmap, since we are still very new to understanding the true power of neural networks or other successful AI methods.

François Chollet said in a tweet

For all the progress made, it seems like almost all important questions in AI remain unanswered. Many have not even been properly asked yet

",15935,,2444,,11/11/2019 21:38,11/11/2019 21:38,,,,0,,,,CC BY-SA 4.0 6611,2,,6584,6/1/2018 20:55,,4,,"

There are two main subjects you need to look at to understand the problem:

The Turing Test

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
wiki

See also: Turing Test (Stanford Philosophical Dictionary)

There is a linguistic element, which is that intelligence can be interpreted in many legitimate ways. Some maintain that Artificial Intelligence has not yet been achieved; others feel a simple automated switch constitutes the most basic form of AI.

Bear in mind that ""intelligence"" is distinct from consciousness in the sense of ""self awareness"", but the generalized Turing Test can also be understood as a gauge of the appearance of consciousness.

This leads to Searle's Chinese Room Argument. I highly recommend reading the Stanford Philosophy link, but the wiki gives a simpler synopsis:

The Chinese room argument holds that a program cannot give a computer a ""mind"", ""understanding"" or ""consciousness"", regardless of how intelligently or human-like the program may make the computer behave.

  • The real problem may be, how does one know an algorithm is truly conscious and not merely simulating consciousness?

Philip K. Dick approaches from the opposite direction in Electric Sheep, where one of the conclusions is that ""life is life"" whether organic or artificial. This might be said to lead to the ""Duck Test"" for consciousness: ""If it looks like a duck, and quacks like a duck, then it's probably a duck."" (Dick's philosophy was heavily influenced by Christianity, and his view on artificial life may be thought of as radically humanist.)

(A parallel philosophical argument might be ""how is the appearance of free will distinct from the actuality of free will?"" Free will has been fruitlessly argued about for millennia, but whether the universe is actually deterministic or not, we perceive ourselves to have free will. If the universe turns out not to be strictly deterministic, it wouldn't functionally change anything. To make matters more fuzzy, true randomness in nature is only found at the quantum level, and then only within certain models. Quantum uncertainty has been proposed as a basis for free will, but we don't know. Possibly uncertainty is a condition of individuality, which must be subjective in relation to a system or other subjectivities. Nevertheless, how do we know we are not ""robots"" acting out a pre-determined sequence of actions, which we perceive as decisions?)

I personally very much like the idea of recursion as a function of self-awareness.

At the phenomenal level, consciousness can be described as a singular, unified field of recursive self-awareness, consistently coherent in a particular way; that of a subject located both spatially and temporally in an egocentrically-extended domain, such that conscious self-awareness is explicitly characterized by I-ness, now-ness and here-ness.
SOURCE: Peters, Frederic Consciousness as Recursive, Spatiotemporal Self-Location

I'd also posit that being able to read and interpret your own code is a form of self-awareness.

This brings me to the idea that understanding (Chinese Room) is a red herring, and what we're really talking about is interpretation.

We are trapped in subjectivity, humans and automata, and perfect certainty is only achievable in a very limited set of circumstances, such as solved games. (There is also the idea of completeness of a system vs. consistency.)

When we talk about understanding in the sense of anything abstract, by which I mean semantics as opposed to syntactics, meaning vs. form, one can say that understanding is a function of interpretation, regardless of the accuracy of the interpretation.

--------------------------------------

Fun Speculation:

I've learned never to underestimate the insight of artists, and the best speculative fiction authors are narrative philosophers in the tradition of Plato. Dick related memory to identity, regardless of whether the memories are real or artificial. This might be thought of as the narrative conception of the self--I am a product of my experience. The real ""me"" is not merely my body, but ""the story of me""--the subjective history that led to this moment of me-ness.

I think it is not an unreasonable assumption that artificial consciousness may arise out of understanding of narrative. Words are just symbols, and there are all kinds of semantic issues, but actions have a concrete aspect. Game theory studies actions in the form of choices, not only in the sense of equilibria, but also as a form of communication. (See: iterated dilemma) The choices agents make in iterated dilemmas constitute a narrative history that can be analyzed and understood mathematically, and these analyses are used for decision making.

It seems to me that the idea of consciousness can be related to decision making. If you're just a transistor, that consciousness is quite limited, more akin to a cell than a complex organism. It may come down to whether one considers intelligence and consciousness to be spectrums as opposed to thresholds.

If you believe consciousness to be a spectrum, then limited artificial consciousness has already been achieved. Consciousness analogous to human-level self-awareness is still in the future.

See Also: Definitions of the Self (wiki); Self Knowledge (wiki)

",1671,,1671,,6/2/2018 0:20,6/2/2018 0:20,,,,0,,,,CC BY-SA 4.0 6614,2,,3751,6/1/2018 21:13,,1,,"

These two are not ""general"" ways in which AI interacts with humans, especially not the first one. In fact, most of the applications we use have complex AI algorithms deeply rooted in them: your search engine, smart reply, and translation systems all employ some kind of ""AI"" or, to be more specific, ""machine-learning"" algorithms. And many organizations are already using business-intelligence systems which you could call ""AI"" in some sense.

",15935,,,,,6/1/2018 21:13,,,,0,,,,CC BY-SA 4.0 6616,2,,6548,6/1/2018 21:26,,2,,"

Working Backward

Working backward from the trading interface available to you Note 1, you will need two things for each Exchange-Traded Fund (ETF) or other tradable commodity of another class.

  • An operation to perform
  • An associated monetary amount

The system can have an output structure Note 2 for each ETF like this (depending of course on the trading interface available to you and your bank's primary monetary system).

  • Ternary operation indicator (buy, sell, hold)
  • Trade amount in USD

Why Not Just One Number per ETF?

A few corroborating reasons exist for why a single positive, zero, or negative trade amount is not likely the optimal architectural choice.

  • The difference between holding and trading an amount of 1.0 monetary unit is not equivalent to the difference between trading 1.0 and 2.0 monetary units. Stated mathematically, the function of profitability to trade amount is not smooth and probably not even continuous.
  • When you implement the ternary output as two binary outputs {buy, sell}, training against the Boolean expression (buy AND sell) is likely to improve your initial performance and possibly your ongoing performance Note 3.

Limitations on Real Trades to Consider

Because you are limited by the assets in the liquid account from which you can buy, you will need either to train against breaking the bank or to use a stage after the NN outputs that imposes rules or a formula based on gain and loss probabilities. This financial constraint muddies your question because there are several ways to ensure you do not break the bank (get an insufficient funds response from your trade operation).

Optimizing a Deeper Architecture

Let's first assume you use probabilistic calculus to produce a closed form (formula) for how to trade based on predictions from the NN architecture you design. Then the NN outputs might be continuous values representing the distribution of outcomes for each ETF. Projections will almost always be dependent on investment duration.

In such an architecture the NN output activation scheme would be a continuous function (not necessarily linear) producing something like this Note 4.

  • Mean expected delta value in one day
  • Std deviation in one day expectation
  • Mean expected delta four weeks
  • Std deviation in four week expectation
  • Mean expected delta two years
  • Std deviation in two years expectation

Any NN optimization of an investment portfolio that does not inherently deal with probability is nonsense. Optimizing for maximum gain will introduce great risk. Optimizing for minimum risk could result in losses. The goal of optimization must be some representation of the balance between the desire to win and the fear of losing, to put it in anthropological terms.

Mean and standard deviation are obvious starting choices, and the traditional categories of short, medium, and longer term investment is also reasonable to begin with in the temporal domain Note 5.

Pure NN

Now let's assume you replace the calculus with another NN scheme trained to maximize portfolio total assets Note 6. Such a replacement NN scheme must also take as an input your available liquid asset amount along with the above probabilistic projections. You must also train to ensure the aggregation of buys and sells do not break your bank, that your liquid asset account never drops below zero.

The trade amount activation should be a continuous function, but not necessarily linear, and probably not tanh either, because the asymptotes of that function would be counterproductive unless its output is made proportional to your available liquid assets, which requires aggregating your positions in your architecture during training. However, that's not optimal, because you may find a better use for those assets a minute later.

The odd roots (third root or fifth root or both with coefficients), when used along with training to not break the bank and to maximize the rate of portfolio growth will produce a better environment for learning in the earlier layers because of the probabilistic and aggregation realities of liquid asset limitations.


NOTES

Note 1 — Preferably a secure RESTful API; however, for experimentation you can employ a web browser whose HTTPS transactions you control using https://github.com/SeleniumHQ/selenium or https://github.com/watir/watir.

Note 2 — Input would be things like number of companies, exposure ratings provided by investment firm(s), short options, inception date, and a sequence of events, each event containing fixed point numbers and flags like these.

  • Price per share
  • Closing price flag
  • Expense ratio
  • Dividend per share

Note 3 — The binary output vector {buy, sell} value of {1, 0} causes a buy. The value of {0, 1} causes a sell. The value {0, 0} is ignored, and {1, 1} may trigger another training operation to correct this illegal output, which is likely to be indicative of the staleness of the last round of training. If the NN is re-entrant (reinforced), the feedback vector could include this anomalous flag or weight it heavily in an aggregation of feedback sources. In summary, the ternary scheme can be expected to augment the training speed and resulting accuracy. More importantly, it opens additional options for continuous optimization.
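
As a tiny illustrative sketch of that decoding (the threshold and names are assumptions, not part of the note), the two-unit output head might be interpreted like this:

    def decode_trade(buy_output, sell_output, threshold=0.5):
        # Map a {buy, sell} output pair to a ternary action, flagging the illegal {1, 1} case.
        buy, sell = buy_output > threshold, sell_output > threshold
        if buy and sell:
            return 'illegal'   # {1, 1}: may trigger another training operation
        if buy:
            return 'buy'       # {1, 0}
        if sell:
            return 'sell'      # {0, 1}
        return 'hold'          # {0, 0}: ignored

    print(decode_trade(0.9, 0.1))  # buy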

Note 4 — Delta is an aggregation of price increase or loss, dividends, and holding and trading expenses because that is the proper metric related to profitability of the portfolio.

Note 5 — Four weeks and two years have been chosen for the middle and longer range projections so that the time ratios are 26.09 and 28 between the three durations. Common choices are temporally skewed. If 1 day, 1 week, and 1 year had been chosen, the ratios would have been 7.0 and 52.18. If 1 day, 1 month, and 1 year had been chosen, they would have been 30.44 and 12.

Note 6 — Do not assume that even a well trained NN will ever outperform formulae properly derived from probabilistic calculus for the last stage in a trading profitability architecture.

",4302,,4302,,6/1/2018 21:37,6/1/2018 21:37,,,,0,,,,CC BY-SA 4.0 6617,2,,5982,6/1/2018 21:30,,0,,"

You should definitely check out recurrent neural networks trained on character-level language data, but make sure you have a relevant dataset.

",15935,,,,,6/1/2018 21:30,,,,2,,,,CC BY-SA 4.0 6618,2,,5672,6/1/2018 21:36,,1,,"

It is not reverse artificial intelligence. Not only Escher: almost all paintings can be interpreted into abstract thought; in fact, that is what the brain does. Even when you read something, your brain processes it into its mental language (""hermeneutics"").

For more on ""mental language"" and ""abstract thought"", you should check https://drive.google.com/file/d/0B8i61jl8OE3XdHRCSkV1VFNqTWc/view (Geoff Hinton is a noted researcher in this field (AI)).

",15935,,,,,6/1/2018 21:36,,,,0,,,,CC BY-SA 4.0 6620,2,,5474,6/1/2018 21:41,,0,,"

Neural networks construct increasingly complex representations of the data in each of their layers, so you are free to choose any neural network architecture for this purpose. Since the lower layers of a neural network (the layers near the input) mostly compute low-level representations of the image (like Gabor filters, etc.), most architectures won't differ much at this level. So you can use VGGNet if you want, with proper fine-tuning from layer 3 onwards.

",15935,,,,,6/1/2018 21:41,,,,2,,,,CC BY-SA 4.0 6621,2,,4271,6/1/2018 23:20,,1,,"

Optimization

In optimization, the loss function (sometimes called the error function) is a function that aggregates the disparity between actual and ideal behavioral states in multiple dimensions and over a sequence of input cases. In re-entrant (reinforced) learning, a feedback scalar or vector acts as a corrective signal that can replace or further aggregate with the loss function and additionally impact the training back-propagation.

Convergence

In any of these architectures, including systems without NN components that attempt to adapt by converging on an optimal static or moving target behavior, one generalization can be made. As the goal state is approached, the probability of successful convergence increases if the size of each incremental estimation decreases. The estimation is nothing more than an informed guess.

Decreasing the learning rate as the detected convergence value improves is one strategy being used experimentally if not in production, but that strategy has drawbacks when used alone. Involving the loss function in slowing down as the destination approaches is a best practice.

Biology Analogy

A biological system of a similar nature is the human subjective experience of pain. As the pain level goes down, the human brain cares less about the pain, therefore the steps taken to reduce it decrease and eventually vanish. Evolution has proven such to be advantageous for the same reason.

The Mathematics Involved

Maximizing the probability that an unknown function is learned adequately to perform well is done in NNs by convergence. The term gradient descent is often used to describe the iterative process intended to converge on some ideal characterized by labeled data, a concept built into the network, or some fitness signal. The likelihood of converging on at least a local minimum (which may or may not be the global minimum) is much higher if the slope decreases as the minimum is approached in successive approximation scenarios. This is when $d^2E/dt^2$ is positive.

The geometric idea of convexity is correlated to the calculus concept of the second derivative of a line or surface with respect to time or some other measure of forward progress. In successive approximation, the independent variable that measures forward progress could be time, computing cycles, the index of the training sample, iteration number, or some aggregation of these. The second derivative is the rate of the rate of change.

Convergence is more likely if the rate of change decreases as the disparity between what is perceived as optimal behavior and the actual current behavior decreases. In other terms, the risk taken in making the next adjustment to circuit (NN) behavior should approach zero as the distance from the optimal behavior approaches zero. (If one senses they have worked their way into close proximity of their desired state, it makes no sense to make wild guesses.)

An Easy to Visualize Analogy

If you drop a rubber ball into a rigid cone, it will take time to reach thermodynamic rest, at the bottom. A lossless ball (considered impossible) will bounce forever. A paraboloid (parabolic in two dimensions like a solar reflector) will produce faster convergence with the same rubber ball because the ball drops in energy (the sum of kinetic and potential energy) with each bounce. The trajectory does not overshoot the bottom nearly as much or as frequently. This analogy is not perfect, but it provides a visual image without a diagram.

If you aggregate your disparity between your target trained behavior and the current in-training behavior in a way where the second derivative is negative (concave loss function with respect to distance) on either side of the targeted ideal, convergence is much less likely. In the analogy, the rubber ball is likely to bounce out of the flared cone altogether. A lossless ball will always bounce out eventually.

A more provincial analogy is that it would be like trying to catch a baseball with the back of a baseball glove.

Concave, Convex, or Zero Second Derivative

Whether continuous or discrete, convex functions converge much more frequently and usually with less time and computing resource than concave ones. A second derivative of zero is in the middle. The word linear is actually incorrect for this zero acceleration case. The correct term is first degree polynomial.

Sum of squares over the dimensions of the domain (inputs) and over the sequence of input cases will perform well in many cases. If you were to sum the square roots of the absolute value of error instead, your NN will rarely converge at all.
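
A small sketch makes the difference concrete (plain gradient descent on a single scalar weight; the step size and starting point are arbitrary): the convex squared loss settles at the target, while the concave square-root-of-absolute-error loss keeps overshooting because its gradient grows as the error shrinks.

    import numpy as np

    target, lr = 3.0, 0.1

    def descend(grad_fn, w=2.9, steps=40):
        history = []
        for _ in range(steps):
            w -= lr * grad_fn(w - target)
            history.append(w)
        return history

    # Convex: E = e^2, dE/de = 2e -> the step shrinks as the target is approached
    squared = descend(lambda e: 2.0 * e)

    # Concave: E = sqrt(|e|), dE/de = sign(e) / (2 sqrt(|e|)) -> the step grows near the target
    sqrt_abs = descend(lambda e: np.sign(e) / (2.0 * np.sqrt(abs(e)) + 1e-12))

    print('squared loss, last iterates: ', [round(w, 3) for w in squared[-5:]])
    print('sqrt-abs loss, last iterates:', [round(w, 3) for w in sqrt_abs[-5:]])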

Executive Summary

The following things are more likely to be favorable if you find a way to aggregate your disparity between ideal and actual current behaviors in a way where the second derivative is greater than zero.

  • Reliability of eventual convergence
  • Speed of convergence
  • Response time in the case of re-entrant (reinforced) learning
  • Savings of computing cycles
  • Reduction in the complexity of introspection
  • Conservation of computational memory (RAM or SSD)
  • Conservation of space needed for persistence and archiving
  • Reduction of project cost to the business
",4302,,,,,6/1/2018 23:20,,,,2,,,,CC BY-SA 4.0 6622,1,6628,,6/2/2018 1:21,,7,7117,"

I am building a neural network for which I am using the sigmoid function as the activation function for the single output neuron at the end. Since the sigmoid function is known to take any number and return a value between 0 and 1, this is causing a division-by-zero error in the back-propagation stage, because of the derivative of the cross-entropy. I have seen over the internet that it is advised to use a sigmoid activation function with a cross-entropy loss function.

So, how this error is solved?

",15652,,2444,user9947,12/11/2021 23:18,12/11/2021 23:18,How is division by zero avoided when implementing back-propagation for a neural network with sigmoid at the output neuron?,,1,2,,,,CC BY-SA 4.0 6625,2,,5471,6/2/2018 5:42,,0,,"

Your model is giving a high loss at the start of a video; later, its loss decreases as it runs through several frames of the ""same"" video, but when a new video starts its loss again peaks to a certain point.

",15935,,,,,6/2/2018 5:42,,,,0,,,,CC BY-SA 4.0 6628,2,,6622,6/2/2018 9:00,,10,,"

Cross-entropy loss is given by:

$$L = -\sum_i \left[\, y_i \log(\tilde{y}_i) + (1 - y_i) \log(1 - \tilde{y}_i) \,\right]$$

where $\tilde{y}_i = \text{sigmoid}(z_i)$ is the predicted probability for sample $i$ and $y_i$ is the target.

Now, as we know, the sigmoid function outputs values between 0 and 1, but what you may have missed is that it cannot output values of exactly 0 or exactly 1, since for that to happen $z$ would have to be $+\infty$ or $-\infty$.

Although your compiler gives a divide by 0 error, as very small floating point numbers are rounded off to 0, it is practically of no importance as it can happen in 2 cases only:

  1. sigmoid(z) = 0,in which case even though the compiler cannot calculate log(0) (the first term in the equation) it is ultimately getting multiplied by y_i which will be 0 so final answer is 0.
  2. sigmoid(z) = 1,in which case even though the compiler cannot calculate log(1-1) (the second term in the equation) it is ultimately getting multiplied by 1 - y_i which will be 0 so final answer is 0.

There are a few ways to get past this if you don't want the error at all:

  • Increase the floating-point precision of your implementation to float64 or higher, if available.
  • Write the program in such a way that anything multiplied by 0 is 0 without looking at the other terms.
  • Write the program in a way to handle such cases in a special way.

Implementation side note: You cannot bypass a divide-by-0 error with your own exception handler on most processors (AFAIK), so you have to make sure the error does not occur at all.
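
One common way to make sure it never occurs (a small sketch, not part of the original answer) is to clip the predicted probabilities away from exactly 0 and 1 before taking the logarithm:

    import numpy as np

    def binary_cross_entropy(y_true, y_pred, eps=1e-12):
        # Clipping keeps log() away from log(0) without noticeably changing the loss value.
        y_pred = np.clip(y_pred, eps, 1.0 - eps)
        return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

    y_true = np.array([1.0, 0.0, 1.0])
    y_pred = np.array([1.0, 0.0, 0.7])   # the extreme predictions would otherwise hit log(0)
    print(binary_cross_entropy(y_true, y_pred))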

NOTE: It is assumed that the random weight initialisation takes care of the fact that at the beginning of training it does not so happen that $\tilde y$ or $1-\tilde y$ is 0 while the target is exactly the opposite, it is assumed that due to good training that the output is reaching near to the target and thus the 2 cases mentioned above will hold true.

Hope this helps!

",,user9947,,user9947,4/7/2019 10:52,4/7/2019 10:52,,,,5,,,,CC BY-SA 4.0 6630,2,,6504,6/2/2018 12:52,,2,,"

The question is really broad---as stated by @thecomplexitytheorist---so difficult to give a meaningful answer. The following is about a clarification about the problem, and some directions.

A model-based reflex agent is a blueprint describing the key components necessary to build that agent. It is an abstract architecture to guide the creation of concrete agents. Whether software or hardware, the target agent should have a part serving as ""sensors"" (a HW camera, or a software API), ""actuators"", memory, etc. So if you aim at a HW agent, you need to decide what components to use, and collect them. A SW agent would be the definition of a software architecture, or the use of some modelling framework.

The Machine Learning part is about endowing the agent with some skill. It is the ""smarts"" programming part, what the ""brains"" will do in the instance of the blueprint. Please note that it does not have to be Machine Learning. It could be another ""style"" of programming, such as a rule-based system, or a plain hand-crafted program.


How to teach such agent is currently an open question. First steps usually start with drawing the agent as a black box, its inputs (e.g. symptom data), its desired output (diagnosis and alert). Then we detail the black box in terms of what sensors the agent needs to process the input, what actuators for the output, and what it needs to learn and memorize. Depending on the available input data and diagnosis/alert output system (email?), the next stage aims at refining step by step, until a good idea of what needs be implemented emerges.

At this point, all pieces are in place, except the internal model---the piece that really pertains to Machine Learning (if you choose ML for that). As the input data is available, and the output format decided, the ""final"" stage (before implementation) is to define the model. It really depends on the actual data and the goal (here we assume prediction). Labelled data (we can teach with input and output, as we already know them) usually leads to supervised learning. Unlabelled data (we can teach with the input only) leads to unsupervised learning or reinforcement learning. Once one understands the situation, we can choose some algorithm like SVM, decision trees, neural networks, etc. Note that the final decisions (before implementation) require studying the data beforehand (Is it regular? Are there missing bits? Is it in a format useful for ML? Etc.) to make appropriate choices.


Final note: The question may get closed on this site, because it is too broad. Way more useful to ask narrow questions, with a clear answer or set of answers. As you see here, this long answer is just the tip of how to teach an agent (I could not sleep anyway). And no implementation yet. In fact, all this could be summed up as ""typical system development, with an ML component"". Two guiding principles: (1) isolate the ML component(s), so specialists can dig and make the best of them, and (2) keep it simple.

",169,,,,,6/2/2018 12:52,,,,0,,,,CC BY-SA 4.0 6631,2,,6515,6/2/2018 13:29,,1,,"

there is an -init option for that

Initialization method to use. 0 = random, 1 = k-means++, 2 = canopy, 3 = farthest first. (default = 0) kmeans++ will give you an option for you to initialize centres.

",15935,,,,,6/2/2018 13:29,,,,0,,,,CC BY-SA 4.0 6634,1,9459,,6/3/2018 0:24,,2,138,"

Hello, I would like to know whether this picture from the paper Distributed Training of Deep Neural Networks: Theoretical and Practical Limits of Parallel Scalability is valid.

Questions:
1) Does InnerProduct (Fully connected) layer actually take more time to compute in a neural network than Convolution?

2) Is assessing GFLOPs/time a good way of estimating performance of different types of layers in a neural network on any hardware? (Conv, FC etc.)

3) Does anyone know where I can find GFLOPs vs compute time for different types of layers across GPUs/CPUs? (I know DeepBench, any other suggestions would be great too)

",16014,,15465,,6/13/2018 16:23,1/10/2019 15:02,Relative compute time for each type of layer in a neural network,,1,2,,4/24/2022 2:44,,CC BY-SA 4.0 6639,2,,4389,6/4/2018 5:49,,1,,"

Here is a speculative cast of the problem to a travelling salesman problem, which would lead to shortest-path algorithms.

Please note this idea suggests different constraints to explore.

  • Given the knowledge vectors and efforts, build an acyclic directed graph (acyclic, as we are not supposed to unlearn). A vertex is an article, represented by its knowledge vector. An edge links two articles, weighted by the effort to ""move"" to the target article/vertex (i.e. acquire the knowledge of that article).
  • Assign a zero vector to a new participant. That is the starting point on the graph is vertex V0 = (0, ..., 0).
  • Define a learning objective as a vector V.
  • Use a shortest-path algorithm to find a (V0, V) plan (a small sketch follows below).
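
A minimal sketch of that last step (plain Dijkstra over a toy hand-made graph; the article names and efforts are invented for illustration):

    import heapq

    # edges[u] = list of (v, effort): the effort to move from article u to article v
    edges = {
        'start': [('intro_ml', 2), ('linear_algebra', 3)],
        'intro_ml': [('neural_nets', 4)],
        'linear_algebra': [('neural_nets', 2)],
        'neural_nets': [('goal', 1)],
    }

    def cheapest_plan(source, goal):
        # Dijkstra: returns (total effort, list of articles to read in order).
        queue = [(0, source, [source])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, effort in edges.get(node, []):
                heapq.heappush(queue, (cost + effort, nxt, path + [nxt]))
        return float('inf'), []

    print(cheapest_plan('start', 'goal'))  # (6, ['start', 'linear_algebra', 'neural_nets', 'goal'])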

This procedure is insufficient, as there are many ways to build the graph (in other words, the above is completely pointless as is). Extra constraints are necessary to make it practical. For example, we can order the vertices along each dimension. Such a setting would lead learners to start with ""easy"" articles (V[i] is low), and move step by step toward more complex topics (V[i] gets higher).

The graph construction depends on the data available. For example, are knowledge vectors ""absolute"", or can they be relative? Relative can help in creating a path, as moving from V to W requires an effort that depends on your learner's initial conditions (V0 may not be 0 everywhere, after all).


Is it an AI question? Definitely.

",169,,169,,6/4/2018 5:58,6/4/2018 5:58,,,,0,,,,CC BY-SA 4.0 6640,1,,,6/4/2018 5:58,,1,301,"

I'm learning fuzzy logic and more or less understand the basic concept, but I'm having a hard time understanding how to apply it to a method. I tried browsing online for explanations of how to use it, but only found some implementations and test cases using the basic form of 4 rules and 3 variables, and 2 rules per variable. Anyway, this is an example case; I will use the Tsukamoto method.

In this case I actually have 6 rules and 3 variables with 3 rules per variable, but I will only explain one of the variables because I think the rest will have the same solution. I have 3 variables; one of them is ""size"". The range for small is 0-2 and for large it's 7-10. The current condition is size = 6.5. The rules are as follows (simplified to use only this variable):

  • [R1] size = small
  • [R2] size = medium
  • [R3] size = large

What i want to know is:

  • how do I define the formula for medium (the middle rule, if the case is different)?
  • what if there are more than 3 rules (i.e. small, medium, large, extra-large)?

What I understand is that if there are only 2 rules, I can use this formula:

  • small[x]=(max-x)/(max-min)
  • large[x]=(x-min)/(max-min)

My current approach to this problem is as follows:

small[x]=1; x<=2

medium[x]=(max-x)/(max-min); 2 < x < 7

large[x]=0; x>=7

Is this correct? Also, can you refer me to some source to study this? As I mentioned before, I can only find some implementations and basic explanations; either there is no online source for this or I don't know what to search for. Sorry if it's hard to understand; I can edit and post the whole problem if you want. Thanks in advance.

Extra question: what is the name of the algorithm which can be used to solve the crossing-bridge puzzle (the one with the timer, max persons, and stuff)? I forgot the name.

",16039,,1671,,6/4/2018 20:26,1/11/2023 15:00,Defining formula for fuzzy equation,,2,2,,,,CC BY-SA 4.0 6642,1,6645,,6/4/2018 10:40,,1,85,"

I have this following natural language statement:

""There is only one house in area1 the size of which is less than 200m².""

which is mistranslated to FOL:

∃x.(house(x) ∧ In(x,area1) ∧ ∀y.(house(y) ∧ In(y,area1) ∧ size(y) < 200 -> x=y))

This translation is wrong according to my lecturer, because it does not require that the size of x be less than 200. The formula is satisfied even if there are only houses which are bigger.

I have two questions:

  1. I don't get the FOL translation at all and don't see where the uniqueness part is expressed, so I translated it back: ""if all houses in area1 have a size less than 200m², then there exists one house which equals all houses""??

  2. why is it not necessary that the size of x is less than 200, when the statement above clearly says that there must exist one house with a size less than 200?

",15391,,2193,,6/4/2018 19:35,6/4/2018 19:35,How is uniqueness quantification translated in First Oder Logic,,1,1,,,,CC BY-SA 4.0 6643,1,6653,,6/4/2018 11:17,,1,361,"

I'm doing a project for my last university examination, but I'm having some trouble! I'm making an expert system that should be able to assemble a computer after asking the user some questions. It works, but according to my teacher I need to define more rules; could you give me some suggestions, please? I have facts like these:

processor(P, Proc_price, Price_range),
motherboard(M, Motherboard_price, Price_range),
ram(R, Ram_price, Price_range),
case(C, Case_price, Price_range),
ali(A, Ali_price, Price_range),
video_card(V, Vga_price, Price_range),
ssd(S, Ssd_price, Price_range),
monitor(D, Monitor_price, Price_range),
hdd(H, Hdd_price, Price_range).

I ask the user these questions: 1) choose the price range, 2) choose the display size, 3) choose the hard disk size. Then I ask 3 questions about computer usage to define the user: 1) do you surf the internet? 2) do you play games? 3) do you use editing programs?

    use(gaming) :- ask(""Do you play games? (y/n)"").

    use(editing) :- ask(""Do you use editing programs? (y/n)"").

    use(surfing) :- ask(""Do you surf internet? (y/n)"").

    user(base) :-
        use(surfing), \+ use(gaming), \+ use(editing).

    user(gamer) :-
        use(gaming), use(surfing), \+ use(editing).

    user(professional) :-
        use(editing), \+ use(gaming), use(surfing).

I should ask more questions about user definition, to make the user definition more complex, and add some rules. Please help me, I'm desperate!

",16049,,1671,,6/4/2018 21:25,6/4/2018 21:25,Defining rules for an expert system,,2,0,,,,CC BY-SA 4.0 6644,1,6647,,6/4/2018 12:21,,2,258,"

We are doing a research design project on autonomous vehicles and have some questions on AV Levels 4/5; specifically on the roles, impacts and consequences of AV on society, government, users and other stakeholders.

We're currently stuck on this main question:

Q: What functionally, does control look like in AV levels 4 and 5?

For example, is the whole purpose of a level 4/5 that a user has no input into the control?

Could a driver in AV (level 5) stop in an emergency, or say they want to ""take corners harder, speed up, slow down""?

Could I choose to change the equi-distance between my AV and the others around me because I like space?

We're wondering what, functionally, AV levels 4/5 offer a user, and what that looks like.


Context:

Our remit is within the world of design (design thinking), not specifically technology or expert system functionality. We're looking at the issue from a design perspective: who does it impact, who are the stakeholders, what are the consequences and impacts. What role does a driver have in level 5? Could an auto-manufacturer want to give drivers control in level 5? How do emergency services act in these situations? What are the touchpoints to society, whom does it impact, and what does it say about the design of AV for the future of society?

",16051,,4302,,9/20/2018 5:51,12/14/2018 3:36,"What functionality, does control look like in autonomous vehicles levels 4 and 5?",,2,0,,5/11/2022 12:51,,CC BY-SA 4.0 6645,2,,6642,6/4/2018 15:15,,2,,"

According to the Wikipedia entry on Uniqueness Quantification your lecturer is correct. There is no size requirement expressed in the FOL expression.

The point about the implication is that it can be true even if the antecedent is false. So, there is a house in area1 (which we call x). And all houses in area1 which are smaller than 200 are the same as x. But if there aren't any, then the antecedent is false, so the implication is true regardless of its consequent (x = y), and the whole statement is still true.

As another example: ""If Trump is the 31st president of the USA, then the moon is made of green cheese"". Both antecedent and consequence are false, but the whole statement is still logically true. Same as ""If there is a house in that area, and there are houses that are less than 200 (which there aren't), then that house is one of them.""

Moving on to the correct expression: The unique quantifier (usually written as ∃!) can be rewritten using the existential and universal quantifiers as follows (see the above mentioned Wikipedia page):

∃x (P(x) ∧ ∀y (P(y) -> y = x))

This is not what you have got; you have got two different predicates, P1 and P2. Your P1(x) is (house(x) ∧ in(x, area1)), and your P2(x) is (house(x) ∧ in(x, area1) ∧ size(x) < 200)

The correct expression would require the same predicate for the quantifiers and would therefore be

∃x ((house(x) ∧ in(x, area1) ∧ size(x) < 200) ∧ ∀y ((house(y) ∧ in(y, area1) ∧ size(y) < 200) -> y = x))

The difference is that you state that there is at least one house in the area with a size of less than 200. So the second predicate, that y is a house in the area with a size of less than 200, cannot be false.

",2193,,,,,6/4/2018 15:15,,,,0,,,,CC BY-SA 4.0 6647,2,,6644,6/4/2018 18:52,,4,,"

Automation Levels

Most cars have some Level 1 automation, such as cruise control and various skid/flip probability reduction systems. Most high volume passenger vehicles have higher levels. Some military and private air, land, and sea equipment are already at Level 5.

Level 4 requires that driving be automated during normal driving conditions, with manual override. However, to my knowledge, no one has published a mathematically terse and comprehensive distinction between normal and abnormal to aid in testing Levels 4 vehicles, so testing to a standard is probably not yet possible.

For legal and political reasons, Level 5 is essentially a statistical criterion. For fully automatic to be viable as a market product for general public use, the safety data for passengers and pedestrians must indicate at least the level of safety of manually driven vehicles. Although this will likely suffice from a law and public relations standpoint, it is inadequate as a quality standard for automation engineering and testing. The ambiguities are numerous.

  • Statistical criteria required to pass the test (i.e. sample size, duration, randomization, and double or single blindness)
  • Mathematically terse and comprehensive scenarios for the test
  • Allowable proportion of level 2, 3, or 4 vehicles in the control group.
  • Probably others

There will be no vehicle driver in what they are calling Level 5 — only passengers. The idea is to give no power to the occupants of the AV other than destination or change in destination.

This has been the safety norm in other sub-sectors of transportation for a century. Passengers cannot talk to the pilot of a jet or the engineer of a train. In the majority of cases, safety is compromised whenever a person that has not undergone the discipline of intense safety training has control over any aspect of the vehicle's operation.

That is the primary impetus behind AV from the forward thinkers in government and academia.

Specificity and Insight

It is of paramount importance that researchers define system criteria more specifically and scientifically. Systems architecture, software engineering, safety evaluation, and quality control policies and procedures of the automated system driving the AV requires such.

With a billion plus lives at stake, the design should progress with the diligence and care as if designing a human-occupied drone aircraft or a civilian Mars lander from the ground up, even if your first phase is to only achieve what is being called Level 4 for basic passenger cars.

Target Reliability and Safety

Humans eat food, have emotional conversations, text each other, ingest mind-altering substances, and fall asleep while driving. Judging the safety statistics of an AV by comparing them with those of humans driving may sound practical, but it is absurd. It will become clear just how absurd the popularized Level 5 criterion is as the parameters of design are enumerated.

Design should instead minimize the possibility of any accident ever. The goal should be zero mistakes, both at point of sale (the dealership) and at later points in the product life-cycle as the AV learns.

Defining a Mistake

A mistake should be defined as follows:

Any less than optimal state indicated by the correction signal used to direct reinforced learning in any of the system's re-entrant or coincident training mechanisms

The adaptive (i.e. machine learning) portion of the system must permit re-entrant or coincident training (reinforced learning) because there is no possible way to predict the common routes of the buyer at the vehicle's point of sale.

To comprehend the complexity of the problem space for AVs and begin to simplify it, consider the dimensions of conditions, controls, and priorities (embodied in feedback signals) related to driving for any vehicle that uses roads.

Control Channels

  • Starters (there are two in the case of most hybrids)
  • Engine stop
  • Braking controls (there are two in the case of regenerative braking)
  • Steering shaft or hydraulic control position
  • Brake control position (three or four depending on emergency/parking brake design)
  • Transmission planetary gear clutch state, traditional automatic transmission control state, or clutch and traditional manual transmission control state
  • External lighting switch positions (Widely variable, but at least six for headlight, high beam, brake light, left signal, right signal, and tail light)
  • The content of any messages broadcasted, multicasted, or sent specifically to any other vehicles with compatible reception (if birds do it, so can electronic systems) either via light, sound, or RF (this will require the development of layers of inter-vehicle communications protocols)
  • Horn
  • Probably others

Data Acquisition Channels

  • Wheel positions (there may be 2 or 4 positions to read with an encoder since one cannot assume perfect alignment)
  • Brake torques (4 of them, which can be read by 16 redundant strain gauges)
  • Brake metal temperatures (4)
  • Torques and temperatures for any independent emergency brakes
  • Accelerometers (two devices x three dimensions per device to detect acceleration/deceleration, centripetal force, and, with some math, tire skid velocity for all four tires in two dimensions)
  • Tachometers (one before the transmission and one on each wheel)
  • Engine and coolant temperature detectors
  • Cameras (must be high resolution to recognize animals, humans, shopping carts, curbs, road signs, train signals, speed bumps/humps, and hazards, which can be IR or visible and the more angles covered the better)
  • Wind turbulence resistant external microphones to detect horns and sirens (at least four to detect likely orientation of audio source)
  • Suspension strain gauges (to detect vertical road force on each tire)
  • The content of any incoming messages from compatible systems
  • Battery voltages and currents (two batteries for regenerative braking or hybrid startup and motive assist, and possibly several currents)
  • Fluid levels, pressures, viscosity, and transparency (fuel, oil, steering, transmission, brake hydraulic, and possibly others)
  • Probably others

A system can be operational and possibly reach what most would consider Level 5 with fewer channels of acquisition and control than above, but it would be poor technology planning to start designing systems with unnecessary limitations. Such limitations will also very likely increase the cost of engineering and reduce the effectiveness of training while saving nothing.

Why Vehicles Have Very Little Instrumentation Today

A human cannot make use of all of the information above. Nor could a human control all the channels listed above without a high frequency of mistakes. A properly designed electromechanical learning system can.

It would be lazy systems architecture not to capitalize on the positive impact the additional instrumentation would have on safety, total cost of ownership, and other quality criteria for the AV buyer who can afford the extra sensors and computing power. Furthermore, after a few years of manufacturing for a mass market, the cost of the additional components will become small in comparison with the cost of metal and plastic.

Operational Criteria and the Formalization of a Mistake

The problem space contains at least nineteen dependent variables (output channels) and forty-six independent variables (input channels). Some are binary, some are floating point, some are streamed data, some are streamed audio, and some are streamed video.

Together they form a space in sixty-five dimensions. That is what must be optimized according to some predetermined and possibly re-programmable formalization of what is optimal.

Let's consider this idea of optimum safety, thrift, and comfort as quality control criteria. Real time quality control should follow TQM ideals, continuously ensuring quality in multiple dimensions and at multiple points in the total system.

  • Maximal distance from other vehicles
  • Maximal distance from stationary objects (bridge abutments, buildings)
  • Maximal distance from pedestrians on foot, bike, wheelchair, ...
  • Maximal distance from edges of pavement
  • Minimal lane switching
  • Minimal loads and torques on wheels
  • Minimal fuel consumption
  • Within operation parameters for tires, brakes, engine RPM, and dozens of other parts and subsystems
  • Shortest distance for route to destination
  • Shortest time distribution mean for route to destination
  • Safest route to destination
  • Least stops on route to destination
  • Probably others

Most of this must be balanced, so optimization criteria must be aggregated. Such aggregation must go beyond the simplicity of a loss function. Summing squares will not work at all, so give up aggregating in such a simplistic way. A multivariate extension of the Pythagorean Theorem is fine for calculating distance in linear space, but driving cars is very non-linear. This kind of robotics system will require more thought in the formulation of balances, priorities, and the concept of emergency.

Further expanding on the above definition of a mistake, any real time control that does not optimize the sixty-five-plus dimensional surface is faulty. Now what is optimal must be defined. Consider the following quality control criteria, roughly in order of priority.

  • Pedestrian safety
  • Passenger safety
  • Mechanical system integrity
  • Fuel conservation
  • Vehicle external coating integrity
  • Mechanical system wear
  • Passenger comfort
  • Time conservation in reaching destinations
  • Probably others

Applying Optimization in This Context

Aggregation of the acquired incoming signals is not only based on multiple criteria, but the prioritization is also not always constant, implying the need for a vector of correction signals rather than a single floating point number.

A single dimension of signaling to feed the disparity between ideal operation and the current system behavior (called a loss function in gradient descent) will not suffice. There will, out of necessity, be a need for training and reinforcement with a complexity that involves the idea of preemption. Evolution has declared preemption the design of choice for nervous systems with brains.

For instance, the pedestrian safety feedback signaling must always preempt the fuel conservation feedback, no matter how much fuel would be consumed in staying clear of pedestrians, in planning a route where pedestrian density is lower, steering the vehicle clear of people, choosing speed, and applying braking.
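
As a purely illustrative sketch (not any actual AV design), a preemptive aggregation of correction signals might look like the Python below, where a high-priority signal suppresses everything below it instead of being summed into one loss. All signal names and thresholds are hypothetical.

# Illustrative sketch only: lexicographic (preemptive) aggregation of
# correction signals, rather than a single summed loss value.
PRIORITY = ['pedestrian_safety', 'passenger_safety', 'mechanical_integrity',
            'fuel_conservation', 'passenger_comfort', 'time_conservation']

def select_correction(signals, emergency_threshold=0.8):
    # signals: dict mapping criterion name -> correction magnitude in [0, 1]
    # Walk the priorities in order; the first criterion whose correction
    # exceeds the threshold preempts (shelves) everything below it.
    for criterion in PRIORITY:
        if signals.get(criterion, 0.0) >= emergency_threshold:
            return {criterion: signals[criterion]}
    # No emergency: return the full vector so lower-priority criteria
    # can still be balanced, e.g. by a weighted combination.
    return signals

print(select_correction({'pedestrian_safety': 0.95, 'fuel_conservation': 0.4}))
# -> {'pedestrian_safety': 0.95}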

All biological systems have these preemption mechanisms — even bacteria. A turtle doesn't balance the transportation aspect with safety when it retracts under its shell. The behavioral interest in the turtle's destination is shelved (stored and temporarily forgotten) until the preemptive system that detected danger indicates the danger has passed.

Humans Should be Passengers on the Streets of the Future

The reason humans are generally driving in a continuous state of mistaken control is that the priorities that maximize the parameters of transportation for society (above) are inconsistently followed by humans. Birds fly smarter than humans drive. The priorities of an emotional being that wants to get somewhere fast while talking, texting, eating, and possibly getting high will often be mistaken.

Future people may look back at the period between the advent of Model-T market penetration and the complete transition to AVs as a period of strange inequality. Stepping back, the worldwide interest in domestic security, airline safety, train and subway safety, and building codes contrasts strongly with the cultural insistence that every household have instant access to driving anywhere, any time, and in any mental condition.

",4302,,4302,,12/14/2018 3:36,12/14/2018 3:36,,,,4,,,,CC BY-SA 4.0 6648,2,,6557,6/4/2018 19:01,,1,,"

The chapters for Bayesian Networks are:

  1. Quantifying Uncertainty
  2. Probabilistic Reasoning
  3. Dynamic Bayesian Networks

don't forget:

  1. Naive Bayes, hidden variables, Markov

Maybe helpful:

  • Are We Going in the Right Direction? ... p.1049

If you find them interesting, then invest more time in them. You might improve them and break new scientific ground.

The recent trend is toward deep convolutional neural networks (e.g., AlphaGo).

",16060,,,,,6/4/2018 19:01,,,,1,,,,CC BY-SA 4.0 6653,2,,6643,6/4/2018 20:18,,0,,"

The three questions ""1) do you surf on internet? 2) do you play? 3) do you use editing programs?"" are a good start, but I think your teacher is right that you need more granularity.

1) What do you use your computer for?

(a) Watching videos [leads to: ""Streaming or Downloaded HD?"" b/c downloading requires more local storage, and potentially a better video card.]

(b) Do you play Games [leads to: ""High-end video games or simple games?"" b/c playing AAA FPS requires much more powerful video cards. If they play words with friends or Tetris, a low-end card will be sufficient.]

(c) Do you use editing programs? [leads to: ""photo editing? video editing? what size files?"" b/c editing HD vid and high resolution photos is onerous with an underpowered system.]

You might want to ask in general what they use the computer for, because if it's just email, web surfing, Facebook and Youtube, etc., they can probably get by with a Windows surface (I'm a regular critic of MS, but my understanding is you can get the fully functional Office Suite on Surface, which has utility value.)

--------------------------

I think you're on the right track, but you should step back and think like a Product Manager here, as opposed to a developer. These questions might help you clarify your intent, and expand your initial template to a more fully featured system your teacher is looking for:

  • Who is the customer for my system?
  • What level of technical knowledge will the average user have?
  • Am I covering every aspect with the proper amount of detail?
  • Is the order of my questions correct? (Why ask price range before monitor size?)
  • Is there any mechanism to help a user if they don't know what answer to choose?

Again, these are just some initial thoughts. Only you know what product you want to create, and what the capabilities should be.

",1671,,,,,6/4/2018 20:18,,,,0,,,,CC BY-SA 4.0 6654,2,,6643,6/4/2018 20:39,,2,,"

We cannot do homework for students in this network, however I can suggest that several items affecting cost and several usage patterns are missing and the number of rules is shy by an order of magnitude. I wholeheartedly agree with the educational directives you received.

Consider first developing your lists further to include peripherals like DVD burner, USB devices, and audio. Whether the user does scientific programming, watches movies on the monitor, develops software, and other specific usage scenarios is also more specific and therefore will produce a better tailored system than the answer to the question of whether the user is a professional.

It is not the metric of the number of rules that is of most importance. It is the number of operations contained in the rule set that is the guiding metric. This is because rules in Prolog can be aggregated. The rough estimate of rule operator count to complete a system is sqrt(i*o)/4, where i is the number of input permutations and o is the number of output permutations.

(This is an application of Shannon Information Theory, where the number of bits n = log2 (P'/P), and P' and P are the a posteriori and a priori probabilities respectively. The divisor of four is because there are about 16 = 2^4 operators normally used.)

You may end up with thirty or forty rules.
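
For illustration only, here is a quick way to evaluate that rough estimate in Python; the permutation counts used are made-up numbers.

from math import sqrt

def rule_operator_estimate(i, o):
    # sqrt(i*o)/4: rough count of rule operators needed, where i and o are
    # the input and output permutation counts respectively.
    return sqrt(i * o) / 4

print(rule_operator_estimate(2**12, 2**6))   # hypothetical counts -> 128.0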

Create some use cases that exercise the extremes as well as some of the typical cases from among the permutations in inputs and outputs. Run your system on those cases and observe the system behavior. Learn how to debug by outputting intermediate results or stepping through rule execution.

There are no shortcuts to researching and developing other than not wasting time worrying about how much time it will take. You can also optimize your homework time by learning the tools and then stepping back, taking a deep breath, and saying, ""I can do this!""

",4302,,,,,6/4/2018 20:39,,,,3,,,,CC BY-SA 4.0 6657,2,,6644,6/4/2018 21:17,,1,,"

Control should look like low numbers in highway and city accident report statistics.

There will be no drivers in Level 5 AV. In fact, there may be no driver's position in the vehicle as with train passenger cars and dining cars. This is quite different than Level 4. In fact, more levels will probably emerge because of the huge jump from 4 to 5 in the current understanding.

The AV will stop in an emergency because the AV technology to do so will have proven its ability to determine and react to an emergency condition more quickly and reliably than a human driver. The passengers will still be under control, just not THEIR control, except for their determining the destination address. Control should be a communal function, since the danger when control fails is communal.

This is the benefit of Level 5 over Level 4. Highway pileups and many other accident scenarios are caused by human inability to react to risks. Pileups are usually caused by human perception of a normal driving condition as an emergency and jamming on the brakes unnecessarily. Most drivers will not steer into a skid even though they've been told to do so. Rather than controlling the risk with skill, most people react in a way that reduces their tentative control of the vehicle when a skid begins.

Regarding the passengers liking space between cars, the AV will prefer space too because proximity creates risk, so that question is moot.

The primary impact of AVs on society will be the gradual demolition of cultural values about engine power, a change that is due anyway because of the reduction in transportation's burden on energy consumption that removing adrenaline and testosterone from the roads and highways will bring.

Government is pro AV simply because many humans like to ""drive through"" when traveling far, drive when on prescription medications that warn against operating heavy machinery (which cars are), and text in heavy traffic and in school zones.

The automotive, aeronautical, and transportation industry is not a primary driver of the AV technology. Their profit is maximized by occasional (but not frequent) accidents, stress on engine and body components, sales pitches about safety and power unsubstantiated with actual data, and marketing that equates freedom with control. With rare accidents, minimized wear, and accountability created by deeper scrutiny there may be significant profit lost.

Insurers will gain. Ticket and accident attorneys will lose. Parents of children in school will win. Body shops will lose. The electronics industry will win. Tire sales will drop. Government will be able to pocket more tax revenue. Gear heads may eventually be thought of like KKK members are in today's postmodernism. Custom wheel manufacturers will win. Towing services will lose. Bicyclers will win. Petroleum extractors will lose.

Just as the personal automobile changed the horse industry and the Internet changed publishing, AVs will lead to adjustments throughout government and the economy.

",4302,,4302,,6/6/2018 20:42,6/6/2018 20:42,,,,2,,,,CC BY-SA 4.0 6658,1,,,6/5/2018 11:01,,5,799,"

What are the required characteristics of an activation function (in a neural network)? Which functions can be activation functions?

For example, which of the functions below can be used as an activation function?

$$f(x) = \frac{2}{\pi} \tan^{-1}(x)$$

which looks like

or

$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{t^2}{2}} dt$$

which looks like

",16070,,2444,,5/9/2019 21:33,11/9/2020 10:08,Which functions can be activation functions?,,1,0,,,,CC BY-SA 4.0 6660,1,,,6/5/2018 11:32,,1,168,"

An agent aims to find a path on a hexagonal map, with an initial state $s_0$ in the center and goal state $s^*$ at the bottom as depicted below.

The map is parametrized by the distance $n \geq 1$ from $s_0$ to any of the border cells ($n = 3$ in the depicted example). The agent can move from its current cell to any of the 6 adjacent cells.

How can we find the number of node expansions performed by BFS without duplicate detection, and with duplicated detection as a function of $n$?

I know that the branching factor for the map would be 6, because the agent can move in 6 directions, and for a depth of $k$ we get $O(b^k) = O(6^k)$ node expansions without duplicate detection, but what is the number of node expansions with duplicate detection with BFS?
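
To sanity-check the duplicate-detection case empirically, I wrote the small sketch below, which runs BFS on an axial-coordinate hex grid and counts expansions (ignoring early termination at the goal), though I am not sure this is the intended analytical approach.

from collections import deque

DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # axial hex moves

def hex_distance(q, r):
    return (abs(q) + abs(r) + abs(q + r)) // 2

def bfs_expansions(n):
    # BFS from the centre cell with duplicate detection, restricted to the
    # hexagonal map of radius n; counts every expanded node on the map.
    frontier, visited, expansions = deque([(0, 0)]), {(0, 0)}, 0
    while frontier:
        q, r = frontier.popleft()
        expansions += 1
        for dq, dr in DIRECTIONS:
            nq, nr = q + dq, r + dr
            if hex_distance(nq, nr) <= n and (nq, nr) not in visited:
                visited.add((nq, nr))
                frontier.append((nq, nr))
    return expansions

print([bfs_expansions(n) for n in range(1, 5)])   # [7, 19, 37, 61], i.e. 1 + 3n(n+1)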

",16071,,2444,,12/20/2021 23:46,12/20/2021 23:46,How can we find the number of node expansions performed by BFS in this hexagonal map?,,0,0,,,,CC BY-SA 4.0 6663,2,,3435,6/5/2018 15:42,,2,,"

If you are talking about ""generating"" in the sense of generative models, it is pretty tough, since we are still far from understanding the actual structure of question-answering.

Even state-of-the-art methods for question answering are not able to score well on datasets like bAbI; mostly, 16 out of 20 tasks can be solved.

",15935,,1581,,10/3/2018 19:23,10/3/2018 19:23,,,,1,,,,CC BY-SA 4.0 6669,1,,,6/5/2018 21:16,,5,2953,"

I trained a DQN that learns tic-tac-toe by playing against itself with a reward of -1/0/+1 for a loss/draw/win. Every 500 episodes, I test the progress by letting it play some episodes (also 500) against a random player.

As shown in the picture below, the net learns quickly to get an average reward of 0.8-0.9 against the random player. But, after 6000 episodes, the performance seems to deteriorate. If I play manually against the net, after 10000 episodes, it plays okay, but by no means perfect.

Assuming that there is no hidden programming bug, is there anything that might explain such a behavior? Is there anything special about self-play in contrast to training a net against a fixed environment?

Here are further details.

The net has two layers with 100 and 50 nodes (and a linear output layer with 9 nodes), uses DQN and a replay buffer with 4000 state transitions. The shown epsilon values are only used during self-play, during evaluation against the random player exploration is switched off. Self-play actually works by training two separate nets of identical architecture. For simplicity, one net is always player1 and the other always player2 (so they learn slightly different things). Evaluation is then done using the player1 net vs. a random player which generates moves for player2.

",15958,,2444,,4/8/2022 10:04,4/8/2022 10:04,Why does self-playing tic-tac-toe not become perfect?,,2,9,,,,CC BY-SA 4.0 6670,1,,,6/5/2018 22:48,,2,86,"

If an Atari game's rewards can be between $-100$ and $100$, when can we say an agent learned to play this game? Should it get the reward very close to $100$ for each instance of the game? Or it is fine if it gets a low score (say $-100$) at some instances? In other words, if we plot the agent's score versus number of episodes, how should the plot look like? From this plot, when can we say the agent is not stable for this task?

",15967,,2444,,2/13/2019 2:36,2/13/2019 2:36,When can we say an RL algorithm learns an Atari game?,,0,2,,,,CC BY-SA 4.0 6671,2,,6640,6/6/2018 7:30,,0,,"

After I did a bit more study on this, I found this solution.

μsmall[x]

  • 1; -> x≤0

  • (20-x)/(20-0); -> 0<x<20

  • 0; -> x≥20

μmedium[x]

  • 0; -> x≤20 or x≥70

  • (x-20)/(45-20); -> 20<x≤45

  • (70-x)/(70-45); -> 45<x<70

μlarge[x]

  • 0; -> x≤70
  • (x-70)/(100-70); -> 70<x<100
  • 1; -> x≥100
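
Here is a small Python sketch of these three membership functions, assuming the breakpoints reconstructed above (0, 20, 45, 70, 100):

def mu_small(x):
    if x <= 0:
        return 1.0
    return (20.0 - x) / 20.0 if x < 20 else 0.0

def mu_medium(x):
    if x <= 20 or x >= 70:
        return 0.0
    return (x - 20.0) / 25.0 if x <= 45 else (70.0 - x) / 25.0

def mu_large(x):
    if x <= 70:
        return 0.0
    return (x - 70.0) / 30.0 if x < 100 else 1.0

print(mu_small(10), mu_medium(45), mu_large(85))   # 0.5 1.0 0.5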

For the final result, I need to add 3 more rules for the function to work. So it becomes 9 rules and 3 variables, with 3 rules for each input variable.

To answer both of my questions:

  1. Add a triangle graph and use the formula.

  2. Add a triangle graph for each rule.

What I learned from this is that the total number of rules needed to determine the output is the product of the rule counts of the input variables (i.e. v1 = 3 rules, v2 = 3 rules, total rules = 3x3 = 9 rules). And I'm not really sure about this one, but you can combine multiple kinds of graphs.

",16039,,,,,6/6/2018 7:30,,,,1,,,,CC BY-SA 4.0 6673,2,,6658,6/6/2018 8:09,,3,,"

The main characteristic of an activation function is to introduce non-linearity into the neural network. For the hidden layers, there is no need for the function to be bounded. The last layer should use a function whose range corresponds to the output you want.

For regression, you usually re-scale your output data to $[-1,1]$ or $[0,1]$ and you use a tanh (hyperbolic tangent) or sigmoid function in the last layer.

For classification, you want to obtain probabilities: use a softmax function in the last layer.

For the hidden layers, some functions are better than others:

  • The gradient should be fast to compute (from the perspective of your computer).

  • If you use many hidden layers, you will run into the vanishing gradient problem when the derivative of your activation is too close to zero. You need a large zone of the domain where the derivative is not close to zero.

In practice, the ReLU function defined as $f(x)=\max(0, x)$ works very well and is very simple.
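
For concreteness, here is a minimal NumPy sketch of the functions mentioned above (tanh or sigmoid for a bounded last layer, softmax for classification, ReLU for hidden layers):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    z = np.exp(x - np.max(x))   # shift by the max for numerical stability
    return z / np.sum(z)

x = np.array([-2.0, 0.0, 3.0])
print(np.tanh(x), sigmoid(x), relu(x), softmax(x))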

",15961,,2444,,11/9/2020 10:08,11/9/2020 10:08,,,,0,,,,CC BY-SA 4.0 6674,1,,,6/6/2018 8:37,,1,92,"

I am working on a task that requires me to classify a large amount of mixed files on a backup drive (more than 10 TB with more than 32 million files) based on content. The included file types are documents, images, videos, executable, and pretty much everything in between.

I am also required to create new tags or metadata that will allow for automatic classification of new files. It'd also allow for manual input of category. For each input I give to the system, the system would learn and improve its classification.

Here is what I have come up with so far:

  • Documents: classify using existing categories with packages like Nltk on Python. Alternatively, first run topic modeling using LDA or NMF and then classify.
  • Images: use CNN. In case of unknown label, use VAE to cluster the images.
  • Videos and other types of files: I do not know how to approach this.

Since I am not sure about my approach, any input is greatly appreciated.

",16094,,16094,,6/6/2018 9:28,6/6/2018 9:28,How to implement AI/ML to classify various types of files,,0,9,,,,CC BY-SA 4.0 6675,2,,6669,6/6/2018 16:21,,4,,"

There are lots of ways that RL agents can fail to learn properly, so you are faced with a little bit of experimentation and maybe bug hunting unfortunately. However, from the description you have given in the question and comments, I can make a few observations and guesses about where to look:

  • Your metric of average reward against a random player is sensible. In this case, you could also use a perfect player (one that ideally randomises its choice among optimal moves), against which you would see a maximum averaged return of zero - this would be helpful for knowing whether your agent had learned a fully optimal behaviour, because it would consistently score zero. In general, for more complex games, a perfect player is not available to test with, but as you are learning here it might help you.

  • Your DQNs might be unable to fit the value function. You can test that in this case by getting the value function from an optimal self-play player (all the values will be -1, 0, or 1) and using a supervised learning approach, separately from your agent. You should be able to get a loss very close to zero - if you cannot do that, then something could be wrong with your network architecture.

  • Whilst you are training, even though you are using a variation of Q-learning (which learns an optimal policy even whilst exploring other actions), your DQNs are not learning optimal play. That is because you have used two agents. In DQN, the algorithm is not aware that there are other learning agents, and it will treat any other agents as if they were part of the environment. Which means that the agents will spend some effort trying to set the game up for each other to make an exploration mistake. That could lead to non-optimal choices and a little bit of instability. Your decay of epsilon should help with that, although you are caught between a rock and a hard place here. You want to learn off-policy and explore, but are forced to reduce exploration. There are a couple of ways to resolve that, I will explain a bit further down . . .

  • 10,000 games may not be enough. In the experiments I have done with TicTacToe agents, it seems between 20,000 and 50,000 games are required for a naive learner. More may be required if you have done something that makes learning inefficient. In addition, I found when adding more sophisticated learning approaches (in my case using eligibility traces) the agents appear to become close to optimal very quickly, but actually have flaws which take a long time to shake out, just as long as running a more naive algorithm. When the flaws got found and fixed, it upset the value function for a while and I saw fluctuations in my metrics similar to yours.

  • Q-learning with NNs is inherently unstable. DQN implements some ideas to fix that, but it is not perfect. It is not uncommon to need to adjust the batch size and/or time steps between taking frozen copy of network for the TD target calculation. The initial stability followed by poor performance looks a lot like that instability too.

Regarding your use of two opposing agents, I can see two possible improvements:

  1. Alternately train one or other agent in each game, don't train both at once. That will mean each agent is learning to play against the other agent playing its best without exploratory moves.

  2. Combine the networks into a single agent description. As this is a zero-sum game, you can take player A's network for calculating values, and just have player B try to minimise the action value on its turn. That means using min and argmin functions for steps that represent player B's turn wherever player A would use max or argmax, including in the Q-value updates (as sketched below) - this is typically easy to add to the inner loop of Q-learning, and should improve learning efficiency (essentially you are hard-coding knowledge that this is a zero-sum game and taking advantage of that symmetry).

Both of these ideas will free you up from caring about the value of epsilon, or decaying it - you can probably just leave it fixed at e.g. 0.1
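
As a rough sketch of the second idea (a single value function, with player B minimising), assuming a generic array of action values for the next state, the TD target could be computed like this:

import numpy as np

def td_target(q_next, reward, done, next_player_is_A, gamma=1.0):
    # One Q function shared by both players in a zero-sum game:
    # player A bootstraps with the max over next actions, player B with the min.
    if done:
        return reward
    bootstrap = np.max(q_next) if next_player_is_A else np.min(q_next)
    return reward + gamma * bootstrap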

Finally, as a test of whether your agent can cope with learning optimal play in general, you could have it learn against an already optimal agent. That is obviously not something you can do for more complex games, but might help you debug agent code and hyper-parameters of the network - it divides your problem up into ""can it learn this at all"" and ""can it learn through self-play"".

",1847,,1847,,6/6/2018 17:14,6/6/2018 17:14,,,,6,,,,CC BY-SA 4.0 6678,1,,,6/6/2018 23:27,,13,563,"

In the paper Progressive growing of gans for improved quality, stability, and variation (ICLR, 2018) by Nvidia researchers, the authors write

Furthermore, we observe that mode collapses traditionally plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2)

What do they mean by "the discriminator overshoots" and "the signal magnitudes escalate in both networks"?

My current intuition is that the discriminator gets too good too soon, which causes the generator to spike and try to play catch-up. That would be the unhealthy competition that they are talking about. Mode collapse is the side effect where the generator has trouble playing catch-up and decides to play it safe by generating only slightly varied images to increase its accuracy. Is this way of interpreting the above paragraph correct?

",12242,,2444,,6/26/2022 8:57,11/23/2022 10:00,Can some one help me understand this paragraph from Nvidia's progressive GAN paper?,,2,1,,,,CC BY-SA 4.0 6681,1,,,6/7/2018 15:50,,1,93,"

Disclaimer: I am a novice in the world of machine learning, so please excuse my ignorance.

My dataset consists of things like age, days since last visit, etc. This information is medical related. None of which is geometrical, just data pertaining to particular clients.

The goal is to classify my dataset into three labels. The dataset is not labeled, meaning I'm dealing with an unsupervised learning problem. My dataset consists of ~20,000 records, but this will linearly increase overtime. The data is nearly all floats, with some being strings that can easily be converted into a float. Using this cheat sheet for selecting a solution from the scikit site, a KMeans Cluster seems like potential solution, but I've been reading that having high dimensionality can render the KMeans Cluster unhelpful. I'm not married to a particular implementation either. I've currently got a KMeans Cluster implementation using TensorFlow in Python, but am open for alternatives.

My question is: what would be some solutions for me to further explore that might be more optimal for my particular situation?

",2818,,3773,,8/10/2018 10:03,8/10/2018 10:03,Classifying non-labeled data with high dimensionality,,2,3,,,,CC BY-SA 4.0 6683,2,,6681,6/8/2018 9:07,,1,,"

I would recommend to have a look at Finding Groups in Data, which is a very readable introduction to clustering methods. It gives a good overview over a number of different algorithms, both agglomerative and hierarchical. As far as I remember, source code for the various algorithms is available on the web somewhere.

I am sure you will find a fitting algorithm for your problem in there.

",2193,,,,,6/8/2018 9:07,,,,0,,,,CC BY-SA 4.0 6685,1,,,6/8/2018 14:20,,4,159,"

I'm building a decision tree and would like to separate (for example) the elements that are in class 0 from those in classes 1 and 2, case in point:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A, B are the features; C is the class label (0, 1 or 2)
df = pd.DataFrame(np.random.randn(500,2),columns=list('AB'))
cdf = pd.DataFrame(columns=['C'])
cdf = pd.concat([cdf,pd.DataFrame(np.random.randint(0,3, size=500), columns=['C'])])
#df=pd.concat([df,cdf], axis=1)
(X_train, X_test, y_train, y_test) = train_test_split(df,cdf,test_size=0.30)
y_train=y_train.astype('int')
classifier = DecisionTreeClassifier(criterion='entropy',max_depth = 2)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

C represents the class of an element, and A and B are two variables that define the element. How can I build a tree that, instead of dividing results into C=0, C=1, or C=2, divides them into C=0 and C!=0?

",12940,,,,,6/8/2018 15:12,"Decision tree: more than 2 classes, how to represent elements that are in a class vs ones that aren't?",,1,0,,,,CC BY-SA 4.0 6686,1,6688,,6/8/2018 15:03,,2,881,"

I am reading a book about OpenCV; it speaks about some derivatives of images, like Sobel. I am confused about the image derivative! What is it derived from? How can we take a derivative of an image? I know we consider a (1-channel) image as an n*m matrix with intensity values from 0 to 255. How can we take a derivative of this matrix?

EDIT: a piece of text of the book:

Derivatives and Gradients

One of the most basic and important convolutions is computing derivatives (or approximations to them). There are many ways to do this, but only a few are well suited to a given situation.

In general, the most common operator used to represent differentiation is the Sobel derivative operator. Sobel operators exist for any order of derivative as well as for mixed partial derivatives (e.g., ∂²/∂x∂y).

",9941,,-1,,6/17/2020 9:57,6/18/2018 2:50,"What does it mean ""derivative of an image""?",,2,0,,,,CC BY-SA 4.0 6688,2,,6686,6/8/2018 15:11,,2,,"

Imagine a line laid through the image. All pixels along the line count as values, so you can graph the pixels along the line like a function.

The derivative is of that 'function'. A black picture and a white picture have the same derivative (0), but a black-fading-to-grey image would have a constant derivative bigger or smaller than zero, depending on the direction of the line in relation to the fading. Hard contrasts have huge derivatives at the points where the line crosses a white/black border. Usually the rows and columns are used as the lines, but you could also lay any oblique line, and some algorithms do.

The term 'derivative' is somewhat of a misnomer in this case, as usually the pixel values do not get fitted by a function of which a derivative is then taken; instead the 'derivative' is taken directly by looking at the differences from one pixel to its neighbor.
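
A minimal sketch of that idea with NumPy and OpenCV (the image path is hypothetical), showing the simple neighbour differences next to the smoothed Sobel approximation:

import numpy as np
import cv2

img = cv2.imread('example.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

dx_simple = np.diff(img, axis=1)   # horizontal change from one pixel to the next
dy_simple = np.diff(img, axis=0)   # vertical change from one pixel to the next

dx_sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # smoothed x derivative
dy_sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # smoothed y derivative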

There is a thread in dsp.stackexchange that deals with this, the following illustrative picture is from there:

",16147,,16147,,6/11/2018 11:00,6/11/2018 11:00,,,,3,,,,CC BY-SA 4.0 6689,2,,6685,6/8/2018 15:12,,3,,"

I don't think that is possible with a decision tree, unless there is some measure of confidence that you can use as a threshold.

I ran into the same problem with the ID3 algorithm. It assigns classes, but you only have the resulting class without any confidence or probability attached.

One possible solution could be to add a number of counter examples as a second (dummy, catch-all) class; if the elements of C = 0 are reasonably tightly clustered and your counter examples for C != 0 cover the remaining problem space, then that might work.

",2193,,,,,6/8/2018 15:12,,,,0,,,,CC BY-SA 4.0 6691,2,,2279,6/8/2018 16:32,,0,,"

A simple trick is splitting the image into three frames vertically and feeding them to the image network; you can then decide the position by looking for the frame which has the highest probability for the desired category (simply the max of all the probabilities). Otherwise, you can try the YOLO algorithm, which further uses non-max suppression and IoU on the frames.

",16149,,,,,6/8/2018 16:32,,,,0,,,,CC BY-SA 4.0 6692,2,,4748,6/8/2018 16:55,,1,,"

A funky way of doing this with less overhead is to just overfit the data to some degree. The reason is that when you overfit the data with the classifier, the classification boundary tends to wrap around the clusters very tightly; with that model you may sometimes misclassify positive examples as negative (due to high variance), but there are comparatively fewer situations where you end up misclassifying negative examples as positive. The level of overfitting that needs to be performed is just based on your FP and FN trade-off.

I don't think of this as a permanent fix, but it can come in handy to some extent.

",16149,,,,,6/8/2018 16:55,,,,0,,,,CC BY-SA 4.0 6693,2,,5942,6/8/2018 17:16,,0,,"

I think you are actually working on one-class classification, as the other class is featureless. So you will end up classifying whether an input belongs to that single class or not.

If you are OK with considering your problem as one-class classification, then I would say you actually DON'T need a featureless dataset at all. You can just directly run your featured data (say, cat pictures) through an autoencoder and figure out a threshold value at the bottleneck (this is a bit challenging). Later, at test time, you can verify that input data belongs to the desired class by just comparing the value produced by the encoding part of the autoencoder against that threshold.
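
As a rough Keras sketch of that idea (thresholding the reconstruction error rather than the bottleneck code itself, which is the easier variant; the architecture and the 95th-percentile threshold are arbitrary choices):

import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(d, code_size=8):
    inp = layers.Input(shape=(d,))
    code = layers.Dense(code_size, activation='relu')(inp)   # bottleneck
    out = layers.Dense(d, activation='linear')(code)
    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    return model

# x_train: samples of the single (featured) class, shape (N, d) -- assumed given
# ae = build_autoencoder(x_train.shape[1])
# ae.fit(x_train, x_train, epochs=50, batch_size=32)
# errors = np.mean((ae.predict(x_train) - x_train) ** 2, axis=1)
# threshold = np.percentile(errors, 95)
# in_class = np.mean((ae.predict(x_test) - x_test) ** 2, axis=1) < threshold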

If this answer doesn't satisfy you, you can just google keywords like ""one-class classification"" or ""outlier detection"". I guess from there you can follow up easily.

",16149,,,,,6/8/2018 17:16,,,,0,,,,CC BY-SA 4.0 6694,2,,5982,6/8/2018 17:44,,0,,"

I would also suggest character-level recurrent neural nets, but with a normal char-level RNN we can only predict the next chars based on previous chars, so you should consider a bidirectional RNN. Say we have the text ""xxx12345"": if we feed this to our model, the model should predict the first three places based on the last places (in DL they call this going back through time), and this is possible only with a bidirectional RNN.
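
A minimal Keras sketch of such a character-level bidirectional RNN, assuming integer-encoded characters; the vocabulary and layer sizes are arbitrary:

from tensorflow.keras import layers, models

vocab_size, seq_len = 64, 8            # arbitrary example sizes
model = models.Sequential([
    layers.Embedding(vocab_size, 16, input_length=seq_len),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),   # both directions
    layers.TimeDistributed(layers.Dense(vocab_size, activation='softmax')),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')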

",16149,,,,,6/8/2018 17:44,,,,1,,,,CC BY-SA 4.0 6695,2,,6686,6/8/2018 19:27,,4,,"

The term Derivative of an Image in the context you mention has two meanings.

  1. A matrix, image, or floating point number that is derived from an image via convolution, passing the image through a two dimensional NN, the application of an FFT analysis, or some other process. In this context, the word Derivative implies the direction of calculation: Image B is derived from image A.
  2. A matrix or cube that represents the rate of change in the image being processed. The change may be measured between only two adjacent pixels in a single dimension and only one direction at a time, but the applications of this technique are very limited, and such a sequence is one of differences, not a reasonable approximation of the derivative of light. What is more useful in real recognition systems is two-dimensional or hexagonal windowing (Gaussian, Hamming, Hanning, trapezoidal, cosine, ...) across space and, for video, through time. The calculus term derivative should always reference the theoretical surface being approximated using these techniques, not the discrete matrix or cube that approximates the surface.

Such multidimensional convolution and neural network based approaches are less sensitive to capture noise and orientation nuances. Two dimensional whole image or windowed FFT techniques have met with much success because filtering the expected frequency range of features to be detected is merely an attenuation process. Two and three dimensional splines can also be tuned to be useful in the detection of features in an orientation independent way.

In addition to gray scale analysis, color and transparency channels can be selected for independent or parallel analysis or added to the dimension of the fitting model from which the derivative is taken.

Advances in deep networks have blossomed into a new area of image processing and recognition research, bringing new hope to robotics, automated transportation, and cybernetics in general.

",4302,,4302,,6/18/2018 2:50,6/18/2018 2:50,,,,1,,,,CC BY-SA 4.0 6696,1,6739,,6/8/2018 19:48,,3,774,"

I'm working with deep learning on some EEG data for classification, and I was wondering if there's any systematic/mathematical way to define the architecture of the networks, in order to compare their performance fairly.

Should the comparison be at the level of neurons (e.g. number of neurons in each layer), or at the level of weights (e.g. number of parameters for training in each type of network), or maybe something else?

One idea that emerged was to construct one layer for the MLP for each corresponding convolutional layer, based on the number of neurons after the pooling and dropout layers.

Any ideas? If there's any relative work or paper regarding this problem I would be very grateful to know.

Thank you for your time

Konstantinos

",16148,,,,,6/13/2018 18:03,How to make a fair comparison of a convolutional neural network (cNN) vs a mutlilayer perceptron (MLP)?,,2,0,,,,CC BY-SA 4.0 6697,2,,6556,6/9/2018 6:42,,3,,"

Feature embeddings are basically anything that can act as a hidden representation for given object.

In the case of images, a CNN architecture is built to create such hidden representation. Usually, the outcome of the bottleneck layer is flattened (and sometimes, converted to even lower dimensional space by adding one more dense layer) and used as feature embeddings.

",16159,,,,,6/9/2018 6:42,,,,0,,,,CC BY-SA 4.0 6699,1,,,6/9/2018 14:36,,3,2322,"

I have been looking at the Fibonacci series, the golden ratio, and its uses in nature, like how flowers and animals grow based on the series.

I was wondering whether we could use the Fibonacci series and the golden ratio in any way in AI, especially in evolutionary algorithms. Any ideas or insights?

Is this research material? If so where can we start?

",15944,,2444,,9/29/2021 14:31,4/18/2022 5:48,Has the Fibonacci series or the golden ratio been applied in any way in AI?,,4,2,,,,CC BY-SA 4.0 6702,1,,,6/10/2018 15:47,,2,365,"

In our lives, we meet different people and describe their common sense based on how they act in a situation. For example, highly extroverted people are able to deal with people without any awkwardness. For them, knowing how to deal with people comes as common sense. But, in the case of scientists, an approach to solving a problem may be common sense that ordinary people cannot see.

How can we define common sense in an AI agent?

",,user9947,2444,,6/20/2019 21:28,2/11/2020 17:55,How can we define common sense in an AI agent?,,3,2,,,,CC BY-SA 4.0 6705,2,,6702,6/10/2018 22:48,,2,,"

I came up with a few ideas I would argue are valuable in motivating the idea of common sense for a machine learning model.

  • Common sense is retrospective. We define it in terms of a past (sensible) actions and conditions, and we can say someone has good common sense on the basis of their behavior, which we can view as the sum of their historical actions and the degree to which they were sensible.

  • The actions alone are not sufficient for demonstrating common sense; the mental model(s) that generated those actions is important as well. Why did the individual take those sensible actions? Is the reasoning behind their action sensible as well, or did they merely get lucky (put another way, does their generally nonsensical mental model produce sensible actions in certain cases and nonsensical actions in most others)? This hints that common sense is contingent on the rationalization of the actions themselves.

  • Given the former, common sense is contingent on the enumeration of possible actions and choosing the right one, given the context of the situation. For example, I’m walking on a path and see a snail. What are some actions? I could keep walking, stop and admire it for a while, step on it, or eat it. The first two options are sensible if I value the snail as a living being. The first option is sensible if I’m in a rush. The second and fourth may be sensible if I’m a chef exploring nature for new potential ingredients. The third option if I recognize the snail as invasive. We say someone has common sense when, given the context, the actions chosen are sensible and we can reason about them intuitively.

My guess is that, the intuition behind the perceived sense of an action is what you’re after. I’d argue that, ultimately, the intuition of common sense is defined by the person developing the model and will have to do with the formulation of the model (e.g it’s assumptions, the objective function, etc). After all, common sense is subjective and context specific.

Concretely, a model can have common sense if the developer bakes it in and we can use inferential methods to demonstrate this. For example, in a Word2Vec model, we might see that $\mathsf{Paris} \mapsto \mathsf{France}$ and would expect that $\mathsf{Tokyo} \mapsto \mathsf{Japan}$. To interrogate this, we might do some vector math and find that $\mathsf{France} - \mathsf{Paris} + \mathsf{Tokyo} \approx \mathsf{Japan}$. How and if the AI model develops the larger association between $\mathsf{Capital} \mapsto \mathsf{Country}$, however, comes down to how the developer built and trained the model to recapitulate their own common sense.
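
For example, with a pretrained word2vec model loaded via gensim (the file name below is hypothetical), the analogy can be probed directly:

from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format('pretrained_vectors.bin', binary=True)
# France - Paris + Tokyo should land near 'Japan'
print(wv.most_similar(positive=['France', 'Tokyo'], negative=['Paris'], topn=3))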

",5210,,5210,,2/11/2020 17:55,2/11/2020 17:55,,,,0,,,,CC BY-SA 4.0 6707,1,,,6/11/2018 12:51,,1,105,"

I would like to develop a machine learning algorithm, given two photos, that can decide which image is more ""artistic"".

I am thinking about somehow combining the two images, giving them to a CNN, and getting an output of 0 (the first image is better) or 1 (the second image is better). Do you think this is a valid approach? Or could you suggest an alternative way to do this? Also, I don't know how to combine two images.

Thanks!

Edit: Let me correct ""artistic"" to ""artistic according to me"", but it doesn't matter; I am more interested in the architecture. You can even replace ""artistic"" with something objective. Let's say I would like to determine which photo belongs to a hotter day.

",16201,,1671,,6/13/2018 17:04,1/24/2020 14:47,Which photo is more artistic?,,3,3,,,,CC BY-SA 4.0 6709,2,,6707,6/11/2018 14:51,,1,,"

I think the real problem is: what defines a more artistic image? It's really subjective. I think the complexity of the work might be one aspect to consider, but that is still a subjective one. You might want to make the purpose of your ML algorithm more objective or better defined. But then again, it's just my opinion.

",16078,,,,,6/11/2018 14:51,,,,1,,,,CC BY-SA 4.0 6710,2,,6707,6/11/2018 16:02,,3,,"

You need to define a scoring function, which returns a value of whatever criterion you are interested in, be it 'artisticity' or 'heat'. This could be something you use a machine learning algorithm for, provided you have a set of training data with labelled images.

You then need to extract features from the images. In the case of 'heat' this could be a colour spectrum (ie distribution of colours across the image), or whatever. If it's a reasonably small image, pixel values might be feasible. These features you can feed into your algorithm, and try to learn an association between the feature values and the (assigned by you) label of the image. You will end up with a classifier that takes image features as input, and returns the labels as output. The quality of the classifier depends on your images, the features you selected, and the task. If there is no structure in the data, then a classifier will not work properly.

If you have a continuous value (eg temperature in degrees C) you would run your two sample images through the classifier, and then compare the output values.

",2193,,,,,6/11/2018 16:02,,,,1,,,,CC BY-SA 4.0 6711,1,,,6/11/2018 16:55,,1,56,"

What is the state of the art in models of how the human brain performs goal-directed decision making? Can these models’ principles and insights be applied to the field of Artificial Intelligence, e.g. to develop more robust and general AI algorithms?

",16210,,16210,,6/11/2018 17:04,6/11/2018 19:28,What is the state of the art in models of how the human brain performs goal-directed decision making? Can these models' principles be applied to AI?,,0,0,,,,CC BY-SA 4.0 6713,2,,6488,6/11/2018 17:24,,1,,"

if I input the unshuffled data (as I required to view the predictions) I get a 45% accuracy. How is this possible?

When you build your dataset and then split it into training, test, and validation sets, you have to be sure that the training set encompasses the aspects present in the test and validation sets, and more. That's why shuffling the data is important. In the case where you didn't shuffle your dataset, your NN was confronted with objects it had never seen before (i.e. not present in the training set) and then couldn't classify them properly.

does the model learn from the average of all the data points in the mini-batch or does it think one mini-batch is one data point and learn in that manner (which would mean order matters of the data)?

When using a mini-batch, you decide the size (b_size) of the mini-batch and the number of mini-batches (n_batch). Then, n_batch times, you draw b_size random indices from the X_train and y_train arrays, build X_batch and y_batch with these indices, and tune your model parameters on the two latter arrays.
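
In code, that sampling step is roughly the following (a sketch, assuming X_train and y_train are NumPy arrays):

import numpy as np

def minibatches(X_train, y_train, b_size, n_batch):
    for _ in range(n_batch):
        idx = np.random.randint(0, len(X_train), size=b_size)   # b_size random indices
        yield X_train[idx], y_train[idx]                         # X_batch, y_batch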

So, shuffling your dataset and taking random parts of it to train your model helps get rid of correlations between points of your dataset and gives better generalization.

",11069,,,,,6/11/2018 17:24,,,,0,,,,CC BY-SA 4.0 6715,1,,,6/11/2018 19:55,,6,205,"

A few months ago I made a simple game that is similar to the dinosaur game in Google Chrome - you jump over obstacles, or don't jump over levitating obstacles, and jump to collect bitcoins, which can be placed at 5 different heights. I used a very lightweight NN written by NYU professor Dan Shiffman, and within a few days the game and AI were done, starting off with a population of 200 jumpers, and a genetic algorithm (fitness function (points are given for avoiding obstacles and gathering bitcoins) and mutation), and it worked as it should.

However, this was only when the bitcoins and obstacles were not near each other, which I've been struggling with ever since.

So, I made a ""training ground"" where I put first a levitating obstacle, then a grounded one, and then a bitcoin after it, and then a bitcoin above a fourth grounded obstacle, and no matter how many times and how long I'd leave it to train, I'd always end up with identical behavior:

The first 3 obstacles are properly avoided, the first bitcoin is collected, and then jumpers would jump too early, land before the fourth ""bitcoin"" obstacle, and jump again, always crashing at almost the same place (across all generations, so even if I'd restart the training, they crash at the same place in the obstacle, with a deviation of a few pixels up or down). I added multilayer support to the NN, no improvements.

Today I replaced the NN with tensorflow.js, and I am getting identical behaviour.

My inputs are:

  • distance to next obstacle
  • altitude of next obstacle
  • distance to next star

(for simplicity I removed the altitude of stars from the input, and keep them at a constant altitude)

I have 2 hidden layers (5 and 6 neurons), and 1 neuron in the output, which determines if the jumper should jump.

My only idea is that a neuron that decides when to jump because of the obstacle activates alongside the neuron that decides when to jump because of the bitcoin, their weights are summed up and a decision to jump too early is made.

I'll give somewhat of a (maybe bad) analogy:

If it takes you 1 month to prepare an exam, then, if you have 2 exams on the same day, you will start preparing them 2 months earlier. That logic works in this case, but not in my AI.

In the initial ""toy neural network"" I even added 8 layers of 12 neurons each, which I think is overkill for this case. In tf.js I used both sigmoid and relu activation functions. No matter what I did, no improvement.

Hope someone has an idea where I'm going wrong.

",16214,,16214,,6/11/2018 20:25,4/13/2019 17:02,Issue with simple game AI,,1,3,,,,CC BY-SA 4.0 6717,2,,6696,6/12/2018 8:16,,0,,"

The best way to monitor an architecture's performance would be comparing the resource utilization, model accuracy, loss value, and confusion matrix.
E.g., VGG16 consumes fewer system resources in comparison to Inception V3.

There is an article that goes in depth.

",15465,,,,,6/12/2018 8:16,,,,1,,,,CC BY-SA 4.0 6718,2,,3009,6/12/2018 9:02,,5,,"

A genetic algorithm is an algorithm, based on natural selection (the process that drives biological evolution), for solving both constrained and unconstrained optimization problems.

A memetic algorithm is an extension of the concept of a genetic algorithm that uses a local search technique to reduce the likelihood of premature convergence.

The paper A Comparison between Memetic algorithm and Genetic algorithm for the cryptanalysis of Simplified Data Encryption Standard algorithm compares both approaches.

To answer your last question, yes, an individual's lifetime still plays a part in memetic algorithms because the objective here is to avoid premature convergence.

",15465,,2444,,1/16/2021 14:47,1/16/2021 14:47,,,,0,,,,CC BY-SA 4.0 6719,2,,6678,6/12/2018 9:06,,0,,"

A discriminator overshooting may result from a dataset that has not been thoroughly cleaned and probably has too many identical features; as a result, there will be an early convergence of the discriminator, as there is little variation. The drawback of this is that the model will not be able to generalize well.

",15465,,,,,6/12/2018 9:06,,,,1,,,,CC BY-SA 4.0 6721,1,,,6/12/2018 14:02,,3,597,"

In a reinforcement learning model, states depend on the previous actions chosen. In the case in which some of the states -but not all- are fully independent of the actions -but still obviously determine the optimal actions-, how could we take these state variables into account?

If the problem was a multiarmed bandit problem (where none of the actions influence the states), the solution would be a contextual multiarmed bandit problem. Though, if we need a ""contextual reinforcement learning problem"", how can we approach it?

I can think of separating a continuous context into steps, and creating a reinforcement learning model for each of these steps. Then, is there any solution where these multiple RL models are used together, where each model is used for prediction and feedback proportionally to the closeness between the actual context and the context assigned to the RL model? Is this even a good approach?

",6114,,2444,,4/15/2020 20:19,4/15/2020 20:19,How to implement a contextual reinforcement learning model?,,1,0,,,,CC BY-SA 4.0 6722,2,,6702,6/12/2018 14:27,,2,,"

I define ""common sense"" inline with human beings, to concatenate to intelligent agents;

An algorithmic agent, which has the ability to solve effective decision in relation to the way humans perceive their environment and common situations.

According to McCarthy John,

The first artificial intelligence program proposed to address common sense was Advice Taker

Currently, common sense is unsolved problem in AI. For more information, see John McCarthy's Programs with Common Sense.

",1581,,1671,,6/13/2018 17:46,6/13/2018 17:46,,,,0,,,,CC BY-SA 4.0 6723,2,,6721,6/12/2018 15:25,,5,,"

In the case in which some of the states -but not all- are fully independent of the actions -but still obviously determine the optimal actions-, how could we take these state variables into account?

I think the key thing here is the caveat but not all. What you have is a fully-featured MDP (states, actions, rewards, timesteps where next reward and next state depend on current state and action). The fact that next state is only marginally affected by the current action does not prevent it being an MDP.

It would be a problem if the current state did not adequately describe this limitation e.g. if some other data, outside of observable state, decided whether action had any effect. Assuming that is not the case, then you still have a full reinforcement learning problem, but one with some unusual qualities.

You can mitigate against problems caused by algorithms that will over-estimate likely reward (caused by algorithm associating lucky state trajectories with action choice), by using ""double learning"" - one estimate of return is used to select maximising action, and another used to estimate the actual return from the next state. You probably will also prefer single-step learning over learning based on trajectories as most of the time your state trajectories will not contain learnable data. So Double Deep Q Networks might be a good starting algorithm to try.
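
As a sketch of that double-learning target with two estimates (a dictionary-of-arrays representation is just an assumption here), swapping their roles on alternate updates:

import numpy as np

def double_q_target(q_select, q_eval, next_state, reward, done, gamma=0.99):
    # One estimate selects the maximising action, the other evaluates it,
    # which reduces over-estimation of likely returns.
    if done:
        return reward
    a_star = np.argmax(q_select[next_state])
    return reward + gamma * q_eval[next_state][a_star]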

If you know absolutely from inspecting a current state, that the next state is independent of the current action, and the next state could in theory be almost anything (from either the whole state space or some large subset) then you may be able to adapt the algorithm to allow for that knowledge. You would do this by altering the TD target and replacing the terms for bootstrap estimate of next state with a rolling mean over all reachable next states. In concept this is similar to Expected SARSA - and in practice if it is possible, it would go a long way to reducing variance during learning process, and may speed up learning significantly. If you know the distribution of next states, you could maybe use that, but basing it purely on samples seen should also be fine, provided you can allocate the means to correct group of states (your question implies you have some understanding of how the states will group). Note that transitions from states with no action effect on next state to states which do have an effect from the action will need careful handling - they should not be assigned to the same ""mean group"", but should instead affect the TD target normally when they occur.

If it is not possible for the agent to know whether it is in a state where its action affects the next state, except through experience, then you really have to use a standard RL solver.

Finally, if you have a situation where the current action does not affect next state, but the current state definitely does in all cases - the state ""evolves"", maybe stochastically, independently of the action taken in most cases, and can only change to a relatively small subset of next states from any given current state - then this is best solved with a normal RL solver - again double Q Learning might be a reasonable starting algorithm in that case.

",1847,,1847,,6/12/2018 18:40,6/12/2018 18:40,,,,0,,,,CC BY-SA 4.0 6727,2,,5919,6/13/2018 1:32,,2,,"

Vanishing gradient is a common problem in RNN.

A common way to deal with it is gradient clipping (essentially, you define a maximum and/or a minimum threshold for the gradient values). See here for more information.

Further information and a piece of code to implement it can be found on SO here.
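
As a rough, framework-agnostic illustration in NumPy, clipping by global norm could look like this:

    import numpy as np

    def clip_gradients(grads, max_norm=5.0):
        # Rescale all gradients together if their global norm exceeds the threshold
        total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
        if total_norm > max_norm:
            grads = [g * (max_norm / total_norm) for g in grads]
        return grads

    # The simpler element-wise variant just clips every value to a [min, max] range:
    # clipped = [np.clip(g, -threshold, threshold) for g in grads]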

Hope it helps!

",11069,,,,,6/13/2018 1:32,,,,0,,,,CC BY-SA 4.0 6733,2,,4456,6/13/2018 12:47,,45,,"

What's the difference between model-free and model-based reinforcement learning?

In Reinforcement Learning, the terms "model-based" and "model-free" do not refer to the use of a neural network or other statistical learning model to predict values, or even to predict next state (although the latter may be used as part of a model-based algorithm and be called a "model" regardless of whether the algorithm is model-based or model-free).

Instead, the term refers strictly to whether, during learning or acting, the agent uses predictions of the environment's response. The agent can use a single prediction from the model of next reward and next state (a sample), or it can ask the model for the expected next reward, or the full distribution of next states and next rewards. These predictions can be provided entirely outside of the learning agent - e.g. by computer code that understands the rules of a dice or board game. Or they can be learned by the agent, in which case they will be approximate.

Just because there is a model of the environment implemented does not mean that an RL agent is "model-based". To qualify as "model-based", the learning algorithms have to explicitly reference the model:

  • Algorithms that purely sample from experience such as Monte Carlo Control, SARSA, Q-learning, Actor-Critic are "model free" RL algorithms. They rely on real samples from the environment and never use generated predictions of next state and next reward to alter behaviour (although they might sample from experience memory, which is close to being a model).

  • The archetypical model-based algorithms are Dynamic Programming (Policy Iteration and Value Iteration) - these all use the model's predictions or distributions of next state and reward in order to calculate optimal actions. Specifically in Dynamic Programming, the model must provide state transition probabilities, and expected reward from any state, action pair. Note this is rarely a learned model.

  • Basic TD learning, using state values only, must also be model-based in order to work as a control system and pick actions. In order to pick the best action, it needs to query a model that predicts what will happen on each action, and implement a policy like $\pi(s) = \text{argmax}_a \sum_{s',r} p(s',r|s,a)(r + v(s'))$ where $p(s',r|s,a)$ is the probability of receiving reward $r$ and next state $s'$ when taking action $a$ in state $s$. That function $p(s',r|s,a)$ is essentially the model.
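
To make the distinction concrete, a minimal tabular sketch of the two kinds of update might look as follows (Q and V are lookup tables, and model(s, a) is a hypothetical function returning (probability, reward, next state) triples - this is just an illustration):

    def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
        # Model-free: the update uses only the sampled transition (s, a, r, s_next)
        Q[s][a] += alpha * (r + gamma * max(Q[s_next].values()) - Q[s][a])

    def value_iteration_update(V, s, actions, model, gamma):
        # Model-based: the update explicitly queries the model p(s', r | s, a)
        V[s] = max(
            sum(p * (r + gamma * V[s2]) for (p, r, s2) in model(s, a))
            for a in actions
        )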

The RL literature differentiates between "model" as a model of the environment for "model-based" and "model-free" learning, and use of statistical learners, such as neural networks.

In RL, neural networks are often employed to learn and generalise value functions, such as the Q value which predicts total return (sum of discounted rewards) given a state and action pair. Such a trained neural network is often called a "model" in e.g. supervised learning. However, in RL literature, you will see the term "function approximator" used for such a network to avoid ambiguity.

It seems to me that any model-free learner, learning through trial and error, could be reframed as model-based.

I think here you are using the general understanding of the word "model" to include any structure that makes useful predictions. That would apply to e.g. table of Q values in SARSA.

However, as explained above, that's not how the term is used in RL. So although your understanding that RL builds useful internal representations is correct, you are not technically correct that this can be used to re-frame "model-free" as "model-based", because those terms have a very specific meaning in RL.

In that case, when would model-free learners be appropriate?

Generally with current state of art in RL, if you don't have an accurate model provided as part of the problem definition, then model-free approaches are often superior.

There is lots of interest in agents that build predictive models of the environment, and doing so as a "side effect" (whilst still being a model-free algorithm) can still be useful - it may regularise a neural network or help discover key predictive features that can also be used in policy or value networks. However, model-based agents that learn their own models for planning have a problem that inaccuracy in these models can cause instability (the inaccuracies multiply the further into the future the agent looks). Some promising inroads are being made using imagination-based agents and/or mechanisms for deciding when and how much to trust the learned model during planning.

Right now (in 2018), if you have a real-world problem in an environment without an explicit known model at the start, then the safest bet is to use a model-free approach such as DQN or A3C. That may change as the field is moving fast and new more complex architectures could well be the norm in a few years.

",1847,,44883,,6/11/2022 11:26,6/11/2022 11:26,,,,10,,,,CC BY-SA 4.0 6736,2,,6707,6/13/2018 17:22,,0,,"

My feeling is that, because you are dealing with a subject that is highly subjective, you need to integrate whatever learning algorithm you use with human feedback.

This is to say, crowdsource human opinions on a data set of pictures, train the algorithm to try to intuit what qualities images with similar human rankings share. (Both the positive and negative rankings.) Run the algorithm on a new data set and crowdsource that data set to see if the algorithm gets it right. Rinse & repeat.

You may also want to utilize demographics in the human crowdsourcing. Art historians will likely have different criteria for what makes a photo artistic than the general public. Humans with different educational backgrounds will likely have different criteria for what makes a photo artistic.

Sans the crowdsourcing, the algorithm will have no ability to determine the aesthetic qualities other than what the programmer defines, and in that case, the algorithm will only be able to determine what photos are artistic to the programmer, limiting the utility.

",1671,,,,,6/13/2018 17:22,,,,0,,,,CC BY-SA 4.0 6739,2,,6696,6/13/2018 18:03,,0,,"

Konstantine, I assume you refer to plain MLP and CNN, without any modifications.

I believe what you ask is how to set both of them up, in order to have the fairest comparison possible.

The way I would do it is to use their plain implementations, but with both tuned as much as possible on every hyperparameter. Both should work as black boxes that accept the same inputs and give the same outputs.

This will give you insight into the true raw performance of both algorithms.

Hope it helps :)

",15919,,,,,6/13/2018 18:03,,,,2,,,,CC BY-SA 4.0 6740,2,,6702,6/13/2018 18:06,,1,,"

My sense is that common sense tends to be axiomatic. To avoid pitfalls, a degree of wisdom may also be required in that axioms may not apply in all contexts. [See Axiomatic System].

A major problem is that science often demonstrates that intuition and ""common sense"" lead to incorrect conclusions. Neil deGrasse Tyson covers this topic for the general public in his book Death By Black Hole:

Chapter 3, ""Seeing Isn't Believing"", hints at the pitfalls of generalizing from too little evidence. It begins by making the point that although we know the Earth is round, it appears flat when one observes only a small, local portion of it.

A very famous example comes from mathematician Abraham Wald:

During World War II, Wald was a member of the Statistical Research Group (SRG) where he applied his statistical skills to various wartime problems. These included methods of sequential analysis and sampling inspection. One of the problems that the SRG worked on was to examine the distribution of damage to aircraft to provide advice on how to minimize bomber losses to enemy fire. There was an inclination within the military to consider providing greater protection to parts that received more damage but Wald made the assumption that damage must be more uniformly distributed and that the aircraft that did return or show up in the samples were hit in the less vulnerable parts. Wald noted that the study only considered the aircraft that had survived their missions—the bombers that had been shot down were not present for the damage assessment. The holes in the returning aircraft, then, represented areas where a bomber could take damage and still return home safely. Wald proposed that the Navy instead reinforce the areas where the returning aircraft were unscathed, since those were the areas that, if hit, would cause the plane to be lost.
Source: Abraham Wald (wiki)

My sense is that ""confidence levels"" may be the main technique driving toward algorithmic ""common sense"", specifically in that the algorithm is questioning it's assumptions.

",1671,,,,,6/13/2018 18:06,,,,0,,,,CC BY-SA 4.0 6741,1,6743,,6/13/2018 18:13,,5,208,"

Currently, in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum. According to the classification of the groups, they can compete to receive monetary incentives to make their research.

At the moment, I am working on an application that takes the data from the system that I mentioned previously. I am able to say what classification a group currently has, because we developed a scraper that counts the products, and there is another service that is in charge of implementing the mathematical model that the system uses to calculate the category of the group.

But what I want to achieve is for my application to be able to estimate how many products a research group should have in order to improve its category. I want to know if I can do that using neural networks.

For example, if there is a category C group, I want the application to tell the user how many articles and books it would take for its category to go up to B.

From what I have seen in some web resources, I could feed a training set to the neural network and have it learn to classify the groups, but I think that is unnecessary, because I can do that mathematically.

But I do not understand whether it is possible for a neural network to process the current category that the group has and give suggestions about how many products it needs to improve its category.

I think it must be a neural network with several outputs, so that each output gives the required total for one of the products, although it is not necessary to list all the products that the measurement model contemplates. But it is necessary for the network to learn which products are handled by a certain group; for example, if a group does not write books, it should avoid suggestions that involve producing books to improve the group's category.

",16258,,2444,,12/14/2021 9:34,12/14/2021 9:36,Which machine learning approach should I use to estimate how many products a research group should have to improve its category?,,1,0,,,,CC BY-SA 4.0 6743,2,,6741,6/13/2018 18:31,,3,,"

I believe you want a neural network that can predict future values of multiple variables given multiple inputs. This belongs to the general time series forecasting problem.

One of the best neural network architectures that can handle this problem is the LSTM, which is a type of Recurrent Neural Network. Their architecture allows them to develop a memory of what they have seen in the past and use it for future predictions. In other words, they can cross-correlate in a linear/nonlinear fashion several past steps of multiple variables to future values of multiple other variables, like a black box.

A useful tutorial for your purposes is this.
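
To give a feel for the shape of such a model, here is a minimal Keras sketch (the layer size and the placeholder dimensions below are assumptions, not recommendations):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    # Placeholder sizes: windows of `timesteps` past observations of `n_features`
    # variables (e.g. counts of articles, books, patents per period),
    # predicting `n_targets` future quantities
    timesteps, n_features, n_targets = 4, 6, 3

    model = Sequential()
    model.add(LSTM(32, input_shape=(timesteps, n_features)))
    model.add(Dense(n_targets))
    model.compile(loss='mse', optimizer='adam')
    # model.fit(X_train, y_train, epochs=50, batch_size=16)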

",15919,,2444,,12/14/2021 9:36,12/14/2021 9:36,,,,0,,,,CC BY-SA 4.0 6744,2,,6314,6/13/2018 22:11,,3,,"

Dennis Soemers provides an important point that from a theoretical standpoint, this can be seen as a non-issue. However, what you bring up is an important practical issue of potential-based reward shaping (PBRS).

The issue is actually worse than you describe---it's more general than $s = s'$. In particular, the issue presents itself differently based on the sign of your potential function. For example, in your case it looks like the potential function is positive: $P(s) > 0$ for all $s$. The issue (as you have found) is that an increase in potential (regardless of whether $s = s'$) might not be enough to overcome the multiplication by $\gamma$, and thus the PBRS term may be negative. In particular, only when the fold-change in $P$ is large enough ($\frac{P(s')}{P(s)} > \frac{1}{\gamma}$) will the PBRS term actually be positive.

The situation changes when the potential function is negative, i.e. if $P(s) < 0$ for all $s$. In this case, you can actually get a positive PBRS signal even when there is a decrease in potential! In particular, only when the fold-change in $P$ is large enough (same inequality as before) will the PBRS term actually be negative.

To summarize, when $P > 0$, a decrease in potential will always lead to a negative PBRS term, but an increase must overcome a barrier due to $\gamma$ for the term to be positive. When $P < 0$, an increase in potential will always lead to a positive PBRS term, but a decrease must overcome a barrier due to $\gamma$ for the term to be negative.

The intuition behind PBRS is that improving the potential function should be rewarded, and decreasing it should be penalized. However, it turns out that whether or not this holds true depends on things like 1) the sign of the potential function, 2) the fold-change in potential, or 3) the resolution of your environment. For #3, if the temporal resolution of your environment can be altered such that an action brings you "partway" from $s$ to $s'$, then at some environment resolution you will run into one of the two problematic circumstances above. Another issue is that PBRS is highly sensitive to, for example, adding a constant to the potential function.

Another related issue is that whether or not some constant improvement to the potential function leads to a positive/negative reward depends on how far you are from the "goal" state. Often potential functions are chosen such that they estimate how good a state is (after all, the best option for a potential function is the optimal value function). Say we choose $\gamma=0.99$ and that $P(s_{goal}) = 1000$ represents a goal state. Then increasing potential by one from $P(s) = 900$ to $P(s') = 901$ will have a negative reward of $-8.01$. In contrast, increasing potential by one from $P(s) = 90$ to $P(s') = 91$ will have a small positive reward of $+0.09$. This is another issue: the sign of the PBRS term depends on distance from the goal.
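
The arithmetic behind those two cases is easy to check directly:

    gamma = 0.99

    def shaping_term(p_s, p_s_next):
        # Potential-based shaping term F(s, s') = gamma * P(s') - P(s)
        return gamma * p_s_next - p_s

    print(shaping_term(900, 901))   # approx. -8.01: potential went up, but the shaping reward is negative
    print(shaping_term(90, 91))     # approx. +0.09: the same +1 increase now gives a positive reward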

This paper has some interesting examples and outlines many of the issues above.

From my own experience, this is a large practical issue. The LunarLanderContinuous-v2 environment from OpenAI Gym includes a PBRS term, but they exclude the multiplication by $\gamma$ (i.e., $\gamma = 1$), presumably because the environment benchmark doesn't know the true discounting the RL user chooses. This environment can be solved using DDPG, for example, without significant hyperparameter tuning. However, if you use $\gamma = 0.99$ for your RL formulation, and edit the LunarLander code such that the PBRS term includes $\gamma = 0.99$, then DDPG fails to solve the environment. So, this is not a small computational issue---it has dramatic effects on training.

My solution has been to simply set $\gamma = 1$ in the PBRS term, even when using, say, $\gamma = 0.99$ in the RL formulation. This solves (or rather, circumnavigates) every issue above. While this loses out on the theoretical guarantee that adding the PBRS term does not affect the optimal policy, it can severely help training. (And there are no optimality guarantees using neural networks as function approximators anyway.)

This solution also seems to be what most benchmark environments have adopted. For example, most MuJoCo environments use PBRS terms with no $\gamma$ (equivalent to $\gamma = 1$). Alternatively, the omission of $\gamma$ could be attributed to the fact that including it would require the environment to know a priori what value of $\gamma$ the RL user chose. While feeding this into an OpenAI gym environment is easy to do, it's not typically done.

Keep in mind that while the theory guarantees that the optimal policy won't change by adding the PBRS term, adding the term doesn't necessarily help you approach the optimal policy. Yet, the whole point of using PBRS at all is to help you approach a good policy. So, it's a bit of a paradox, and I was comfortable with sacrificing the theoretical guarantee of policy invariance if it meant I could actually get to a good policy in the first place.

",16264,,1641,,7/2/2020 19:21,7/2/2020 19:21,,,,1,,,,CC BY-SA 4.0 6745,1,,,6/14/2018 1:52,,3,162,"

I have trained (with different sizes, learning rates, and epochs) a SOM network to cluster the Iris dataset. The instances associated with the class setosa have mainly been fitted to 1-2 BMUs. In the case of virginica, the instances have also been associated with only a few BMUs. However, in the case of versicolor instances, many BMUs have been associated with them.

Is this normal?

Setosa
0. 1846
1. 1846
2. 1846
3. 1846
4. 1846
5. 1846
6. 1846
7. 1846
8. 1846
9. 1846
10. 1846
11. 1846
12. 1846
13. 1846
14. 1846
15. 1846
16. 1846
17. 1846
18. 1846
19. 1846
20. 1846
21. 1846
22. 1846
23. 1846
24. 1846
25. 1846
26. 1846
27. 1846
28. 1846
29. 1846
30. 1846
31. 1846
32. 1846
33. 1846
34. 1846
35. 1846
36. 1846
37. 1846
38. 1846
39. 1846
40. 1846
41. 1620
42. 1846
43. 1846
44. 1846
45. 1846
46. 1846
47. 1846
48. 1846
49. 1846

Versicolor
50. 652
51. 652
52. 652
53. 1259
54. 696
55. 1394
56. 652
57. 490
58. 696
59. 490
60. 490
61. 1059
62. 1304
63. 696
64. 490
65. 652
66. 1400
67. 490
68. 696
69. 490
70. 652
71. 1574
72. 696
73. 832
74. 696
75. 696
76. 696
77. 652
78. 696
79. 490
80. 490
81. 490
82. 444
83. 696
84. 1129
85. 1084
86. 652
87. 696
88. 25
89. 584
90. 490
91. 789
92. 1034
93. 490
94. 854
95. 29
96. 584
97. 877
98. 490
99. 809

Virginica
100. 652
101. 696
102. 652
103. 652
104. 652
105. 652
106. 877
107. 652
108. 696
109. 652
110. 652
111. 696
112. 652
113. 696
114. 652
115. 652
116. 652
117. 652
118. 652
119. 696
120. 652
121. 652
122. 652
123. 696
124. 652
125. 652
126. 696
127. 652
128. 652
129. 652
130. 652
131. 652
132. 652
133. 696
134. 696
135. 652
136. 652
137. 652
138. 652
139. 652
140. 652
141. 652
142. 696
143. 652
144. 652
145. 652
146. 696
147. 652
148. 652
149. 652

Now, I have a diagram. It doesn't look bad.

",15587,,2444,,12/2/2020 21:34,1/1/2021 22:08,"Is it normal that SOM clusters the instances with the ""versicolor"" class into multiple different BMUs?",,1,0,,,,CC BY-SA 4.0 6750,1,6752,,6/14/2018 6:23,,3,285,"

As far as I know, if we consider a 3x3 kernel, we should add a padding of 1 px to the source image (if we want the kernel to affect the whole image), then we place the kernel at the upper-left corner of the image and multiply each element of the kernel by the corresponding pixel of the image. Then we sum all the results and put the sum at the anchor point of the kernel (usually the center element). Then we shift the kernel one step to the right and do all of this again.

If I am right up to here, I have a question about the summation results. I want to know: when calculating the sum for the next kernel position, should we use the value that was previously calculated and written at an earlier anchor point, or not?

In other words, should we write the anchor point's result back into the source image and use it in the calculations for the shifted kernel? Or should we write it into a separate destination image, and not use these results when we shift the kernel over the source image (i.e. not replace the values in the source image for the next steps' calculations)?

",9941,,,,,6/14/2018 7:01,How to apply a kernel to an image?,,1,0,,,,CC BY-SA 4.0 6752,2,,6750,6/14/2018 6:50,,3,,"

Assuming that you are performing a normal discrete convolution, you would use the original source image for all calculations, and only write the results at the end. As you put it, like this:

write it into a separate destination image, and not use these results when we shift the kernel over the source image (i.e. not replace the values in the source image for the next steps' calculations)

The other option, changing in-place during the operation, does not relate to any common use that I know of, but might have some interesting behaviour in studies of cellular automata or similar.
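
A minimal NumPy sketch of the standard behaviour, with every result written into a separate destination array (technically this is cross-correlation, i.e. the kernel is not flipped, which is what most libraries compute anyway):

    import numpy as np

    def apply_kernel(image, kernel):
        # Pad the source with 1 px of zeros so a 3x3 kernel covers the whole image
        padded = np.pad(image, 1, mode='constant')
        out = np.zeros(image.shape, dtype=float)   # destination image, never read during the loop
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
        return out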

",1847,,1847,,6/14/2018 7:01,6/14/2018 7:01,,,,0,,,,CC BY-SA 4.0 6753,1,6754,,6/14/2018 9:14,,5,254,"

It seems that older (vanilla) RNNs have limitations in their use cases and have been outperformed by other recurrent architectures, such as the LSTM and GRU. Why exactly are these newer architectures better than standard RNNs?

",16255,,2444,,11/2/2019 15:23,6/15/2020 21:00,Why are GRU and LSTM better than standard RNNs?,,2,0,,,,CC BY-SA 4.0 6754,2,,6753,6/14/2018 9:15,,3,,"

These newer RNNs (LSTMs and GRUs) have greater memory control, allowing previous values to persist or to be reset as necessary for many sequences of steps, avoiding ""gradient decay"" or eventual degradation of the values passed from step to step. LSTM and GRU networks make this memory control possible with memory blocks and structures called ""gates"" that pass or reset values as appropriate.

",15465,,2444,,6/15/2020 21:00,6/15/2020 21:00,,,,0,,,,CC BY-SA 4.0 6756,1,6759,,6/14/2018 11:29,,2,256,"

I watched a YouTube clip of Elon Musk talking about his view on the future of AI. He gave two examples: one was a benign scenario, and the other was a non-benign scenario, in which he speculated about the possibilities of future AI threats and what harm a deep intelligence could do.

According to Elon, a deep intelligence in the network could create fake news and spoof email accounts. "The pen is mightier than the sword". This non-benign scenario put forth by Elon was a hypothetical, but he went into detail about how it could be possible for an AI, with the goal of maximising a portfolio of stocks, to go long on defense and short on consumer goods, and start a war.

To be more specific, this could be achieved by hacking into the Malaysia Airlines aircraft routing server and, when the aircraft is over a warzone, sending an anonymous tip that there is an enemy aircraft flying overhead, which in turn would cause ground-to-air missiles to take down what was actually a "commercial" airliner.

Although this is a plausible hypothetical non-benign scenario of AI, I'm wondering if this actually could have been the case regarding the Malaysian Airliner crash. Stuxnet, for example, was a malicious computer worm, first uncovered in 2010, thought to have been in development since at least 2005, and believed to be responsible for causing substantial damage to Iran's nuclear program. Stuxnet wasn't even an AI.

Stuxnet blew the world's mind when it was discovered. The sheer complexity of the worm and the amount of time it took to build was impressive, to say the least.

In conclusion, was the Malaysian Airliner crash caused by a non-benign artificial intelligence system?

",13252,,2444,,9/10/2020 11:50,9/10/2020 11:50,Was the Malaysian Airliner crash caused by a non-benign artificial intelligence system?,,1,7,,,,CC BY-SA 4.0 6757,1,,,6/14/2018 21:22,,3,257,"

If both the players want to increase their score (by selecting the highest or best cost path), can this be done using the minimax algorithm, or are there other algorithms for this purpose?

",16280,,2444,,12/20/2019 16:16,12/30/2022 7:06,Can minimax be used when both players want to increase their score?,,1,1,,,,CC BY-SA 4.0 6758,2,,6757,6/14/2018 21:31,,0,,"

I believe maximax is what you're looking for:

Maximax (economics, computer science, decision theory) A strategy or algorithm that seeks to maximize the maximum possible result (that is, that prefers the alternative with the chance of the best possible outcome, even if its expected outcome and its worst possible outcome are worse than other alternatives); often used attributively, as "maximax strategy", "maximax approach", and so on.
SOURCE: maximax (wiki)

You may also be interested in "minimax regret":

The minimax regret approach is to minimize the worst-case regret. The aim of this is to perform as closely as possible to the optimal course. Since the minimax criterion applied here is to the regret (difference or ratio of the payoffs) rather than to the payoff itself, it is not as pessimistic as the ordinary minimax approach.

One benefit of minimax (as opposed to expected regret) is that it is independent of the probabilities of the various outcomes: thus if regret can be accurately computed, one can reliably use minimax regret. However, probabilities of outcomes are hard to estimate.

This differs from the standard minimax approach in that it uses differences or ratios between outcomes, and thus requires interval or ratio measurements, as well as ordinal measurements (ranking), as in standard minimax.
SOURCE: Minimax regret (wiki)


Apologies for just grabbing the wikis on this, but they are accurate. Most of the references from economic sites are commercial. If interested, when I have some more time, I could probably link to some peer-reviewed papers. Hopefully, at the very least, this answer provides information for further research.

",1671,,-1,,6/17/2020 9:57,6/14/2018 21:31,,,,0,,,,CC BY-SA 4.0 6759,2,,6756,6/14/2018 21:40,,3,,"

Short answer: despite the incredible advances in AI via Machine Learning (and subfields), AI is nowhere near this kind of autonomous decision-making.

I can't prove a negative, but the level of autonomy Musk is talking about is still on the horizon.

Doesn't mean Musk is wrong about the hypothetical. imo I'm glad he keeps bringing these kinds of issues to the fore.

Engineers and mathematicians have a bit more credibility on this subject than laypeople. If it seems alarmist, there are many in the scientific community who feel it is warranted.

What Musk is describing is an extension of what Asimov felt compelled to warn us about via his ""Three Laws of Robotics"" back in 1942, when computers sucked. For an explication of this idea re: Machine Learning, see below.


JUST FOR FUN (SORT OF:)

Author and mathematical physicist Hannu Rajaniemi just published a story in the MIT Tech Review on this very subject.

Unchained: A Story of Love, Loss, and Blockchain

Warning: the story is wickedly funny, surprisingly moving, and very likely prescient!

",1671,,1671,,6/15/2018 21:17,6/15/2018 21:17,,,,0,,,,CC BY-SA 4.0 6760,1,,,6/14/2018 23:19,,1,84,"

I would really appreciate it if someone could comment on the following method of training neural nets by providing them with some metadata (making them more color-aware only when needed, whereas now they're mostly silhouette / outline aware). (Comments, and especially references to some papers, would prevent me from reinventing the wheel.)

But let's start at the very beginning and say that we're performing simple image recognition and each image has 24-bit color depth, i.e. 1 byte per RGB channel. I would usually rather use bigger pics and sacrifice color quality, however not in all cases (that statement is crucial in this question).

To limit the computational burden, I'm NOT keen on using the full color information (3 bytes per pixel), but would rather shrink it to 1 byte per pixel, and here is the catch:

I'm reluctant both to use grayscale and to cast the original tints to a single color palette of 256 hues (common among all pics). So I came up with the idea of reversing the method called debayering or demosaicing (reconstructing an image from Color Filter Array data).

To achieve this, for every pixel only one color channel is preserved. Because of human color perception, the green component is overrepresented, covering 50% of the pixels, leaving 25% each for blue and red. In the particular example below, the upper-left pixel corresponds to Blue, followed by Green, then Blue, then Green, and so on to the end of the row. The second row starts with Green, then Red, one more Green, then Red, repeating until the end of the line. These horizontal patterns alternate with the parity of the row number, which is nicely depicted on https://en.wikipedia.org/wiki/Bayer_filter, from where I've got the following graphics:

To better illustrate this method, I'm using a thumbnail of the famous Mona Lisa painting with its grayscale version next to it. (By the way, it isn't in my training set, but it is familiar to everyone.)

The greenish leftmost image is the result of applying the reverse debayering / demosaicing CFA method. This picture consists of pixels that are either Blue, Red or Green with different brightness levels. In the browser window this may be poorly visible; however, if you download this image and magnify it substantially, the pattern is revealed.

Let's say that in the original picture one can find a small square of 2x2 pixels, all of them representing a light skin tone 0xF4D374 (in hex). In this grouping, the 2 green pixels would be reduced to the green channel and get a value of 0xD3, the blue-related one would get a value of 0x74, and the remaining red one would get 0xF4. In the leftmost image below, the corresponding pixels are represented by the hex colors 0x00D300, 0x000074 and 0xF40000 respectively, whereas in the right picture exactly the same values (0xD3, 0x74, 0xF4) are shown in grayscale (out of 256 possible shades).

After this color flattening, our input batch has shrunk by two-thirds, and at the same time the original colors can be more or less restored (of course not losslessly, but well enough).
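
For reference, the transformation described above can be sketched in a few lines of NumPy (assuming an H x W x 3 RGB array and the blue-green / green-red layout from the figure; the exact offsets are just my choice):

    import numpy as np

    def to_bayer_plane(rgb):
        # rgb: H x W x 3 array; returns an H x W single-channel mosaic,
        # keeping exactly one of R, G, B per pixel (50% green, 25% red, 25% blue)
        h, w, _ = rgb.shape
        plane = np.empty((h, w), dtype=rgb.dtype)
        plane[0::2, 0::2] = rgb[0::2, 0::2, 2]   # even rows, even cols -> blue
        plane[0::2, 1::2] = rgb[0::2, 1::2, 1]   # even rows, odd cols  -> green
        plane[1::2, 0::2] = rgb[1::2, 0::2, 1]   # odd rows,  even cols -> green
        plane[1::2, 1::2] = rgb[1::2, 1::2, 0]   # odd rows,  odd cols  -> red
        return plane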

However, I don't suppose anyone had a problem recognising this picture after the transformation. Likewise, all my models can be trained well to recognize the outline/silhouette of an object, but they require a lot more training data (at least one to two orders of magnitude more) to be color-aware.

The ultimate question is how to design models that would treat shape and colors in a similar manner. Maybe that is not 100% mathematically proper, but shape and color should be orthogonal.

Nevertheless, I don't want the model to always decode the color, but only if it's needed: in earlier epochs it learned silhouettes/shapes and that there are many similar objects in this regard, so in later epochs it should also pay (more) attention to tints.

Have you encountered articles about such a method of using color when the object demarcation / labelling process cannot be based only on shape? I would be really grateful for any paper or other reference.

I'm rather a newbie to neural nets, so sorry if this is something widely known to everyone but me ;-)

Thanks in advance for any hint.

",2629,,,,,6/14/2018 23:19,Object recognition by two or more traits that are orthogonal (informally speaking),,0,0,,,,CC BY-SA 4.0 6762,1,6764,,6/15/2018 9:11,,4,541,"

In some implementations of off-policy Q-learning, we need to know the action probabilities given by the behavior policy $\mu(a)$ (e.g., if we want to use importance sampling).

In my case, I am using Deep Q-Learning and selecting actions using Thompson Sampling. I implemented this following the approach in "What My Deep Model Doesn't Know...": I added dropout to my Q-network and select actions by performing a single stochastic forward pass through the Q-network (i.e., with dropout enabled) and choosing the action with the highest Q-value.

So, how can I calculate $\mu(a)$ when using Thompson Sampling based on dropout?

",16286,,2444,,1/5/2021 12:13,1/5/2021 12:13,How to compute the action probabilities with Thompson sampling in deep Q-learning?,,1,0,,,,CC BY-SA 4.0 6763,2,,2777,6/15/2018 9:42,,0,,"

This is a hard problem to solve, and the best approach depends very much on the scope of your task. If you have a small database table with a limited number of columns, you might get away with some basic pattern matching techniques. If it is more complex than that, you might have to do a full-scale syntactic analysis of the question. This also depends on the variations of possible question types.

Assuming a limited set of variables and variants, you could set up something like:

How many X did Y produce/How many X were done by Y/What is the number of X for Y

where you have two variables to fill from the pattern, which you then use in your query:

select sum(X) where producer == Y

(Or whatever format your query has).
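
A rough Python sketch of that idea (the patterns, the column names and the query template are of course just placeholders):

    import re

    patterns = [
        r'how many (?P<X>\w+) did (?P<Y>\w+) produce',
        r'how many (?P<X>\w+) were done by (?P<Y>\w+)',
        r'what is the number of (?P<X>\w+) for (?P<Y>\w+)',
    ]

    def question_to_query(question):
        for pattern in patterns:
            match = re.search(pattern, question.lower())
            if match:
                # In practice, map X and Y onto real column names / values (synonym table)
                return 'select sum({X}) where producer = {Y}'.format(**match.groupdict())
        return None

    print(question_to_query('How many widgets did Acme produce?'))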

The advantage of this is that you don't need to be a linguistics expert to maintain/expand the system, and you can just add more patterns to it if necessary. You might have to map some terms onto synonyms to get the right column headings/labels out of it. But this approach is not very hard to implement, and you should have a basic system up and running fairly quickly. You then have to see/test what questions your users are asking, and expand the pattern inventory accordingly.

The disadvantage is that you might end up with a long list of patterns, and there could be some which are conflicting, i.e. the same pattern with different variables will ask for a different kind of result. If that turns out to be a problem, you might have to look for a more powerful approach.

",2193,,,,,6/15/2018 9:42,,,,0,,,,CC BY-SA 4.0 6764,2,,6762,6/15/2018 11:36,,1,,"

So, how can I calculate $\mu(a)$ when using Thompson Sampling based on dropout?

The only way I could see this being calculated is if you iterate over all possible dropout combinations, or, as an approximation, sample say 100 or 1000 actions with different dropout masks, to get a rough distribution.
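
A rough sketch of that sampling approximation (q_network is a hypothetical function that performs one stochastic, dropout-enabled forward pass and returns the Q-values for a state):

    from collections import Counter

    def estimate_mu(q_network, state, n_samples=1000):
        # Repeat the stochastic forward pass many times and count which action wins each time
        counts = Counter(int(q_network(state).argmax()) for _ in range(n_samples))
        return {action: count / n_samples for action, count in counts.items()}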

I don't think this is feasible for practical reasons (the agent will learn so much more slowly due to these calculations, you may as well abandon Thompson Sampling and use epsilon-greedy), and you will have to avoid using importance sampling if you also want to use action-selection techniques where there is no easy way to calculate a distribution.

Many forms of Q-learning do not use importance sampling. These typically just reset eligibility traces if the selected action is different from maximising action.

",1847,,2444,,1/5/2021 12:13,1/5/2021 12:13,,,,0,,,,CC BY-SA 4.0 6765,1,,,6/15/2018 11:40,,14,824,"

I read Judea Pearl's The Book of Why, in which he mentions that deep learning is just a glorified curve fitting technology, and will not be able to produce human-like intelligence.

From his book there is this diagram that illustrates the three levels of cognitive abilities:

The idea is that the "intelligence" produced by current deep learning technology is only at the level of association. Thus the AI is nowhere near the level of asking questions like "how can I make Y happen" (intervention) and "What if I have acted differently, will X still occur?" (counterfactuals), and it's highly unlikely that curve fitting techniques can ever bring us closer to a higher level of cognitive ability.

I found his argument persuasive on an intuitive level, but I'm unable to find any physical or mathematical laws that can either bolster or cast doubt on this argument.

So, is there any scientific/physical/chemical/biological/mathematical argument that prevents deep learning from ever producing strong AI (human-like intelligence)?

",16291,,2444,,12/12/2021 17:33,12/12/2021 17:33,Is there any scientific/mathematical argument that prevents deep learning from ever producing strong AI?,,2,4,,,,CC BY-SA 4.0 6770,1,,,6/15/2018 14:32,,8,489,"

I'm trying to detect the visual attention area in a given image and crop the image into that area. For instance, given an image of any size and a rectangle of say $L \times W$ dimension as an input, I would like to crop the image to the most important visual attention area.

What are the state-of-the-art approaches for doing that?

(By the way, do you know of any tools to implement that? Any piece of code or algorithm would really help.)

BTW, within a "single" object, I would like to get attention. So object detection might not be the best thing. I am looking for any approach, provided it's SOTA, but Deep Learning might be a better choice.

",9053,,2444,,12/24/2021 18:31,12/24/2021 18:31,"What are the state-of-the-art approaches for detecting the most important ""visual attention"" area of an image?",,3,0,,,,CC BY-SA 4.0 6771,2,,6765,6/15/2018 15:03,,0,,"

It is a paradox, but a deep learning machine (defined as a NeuralNet variant) is unable to learn anything by itself. It is a flexible and configurable hardware/software architecture that can be parametrized to solve a lot of problems. But the optimal parameters to solve a problem are obtained by an external system, i.e. the back-propagation algorithm.

The back-propagation subsystem uses conventional programming paradigms; it is not a neural net. This fact is in absolute opposition to the human mind, where learning and the use of knowledge are done by the same system (the mind).

If all the really interesting things are done outside the NN, it is difficult to claim that an NN (in any variant) can develop into an AGI.

It is also possible to find some more differences. Neural nets are strongly numerical in their interface and internals. From this point of view, they are an evolution of support vector machines.

Too many differences and restrictions to expect an AGI.

Note: I strongly disagree with the diagram included in the original question. ""Seeing"", ""doing"", ""imagining"" are the wrong levels. It ignores basic and common software concepts such as ""abstraction"" or ""program state"" (of mind, in Turing's words); applied AI ones such as ""foresight""; and AGI ones such as ""free will"", ""objectives and feelings"", ...

",12630,,12630,,6/15/2018 16:16,6/15/2018 16:16,,,,4,,,,CC BY-SA 4.0 6772,1,,,6/16/2018 8:29,,2,332,"

According to this news, Microsoft is using AI to make Windows 10 updates smoother. So I was curious and went further to search and came across this website, which describes:

Artificial Intelligence (AI) continues to be a key area of investment for Microsoft, and we’re pleased to announce that for the first time we’ve leveraged AI at scale to greatly improve the quality and reliability of the Windows 10 April 2018 Update rollout. Our AI approach intelligently selects devices that our feedback data indicate would have a great update experience and offers the April 2018 Update to these devices first. As our rollout progresses, we continuously collect update experience data and retrain our models to learn which devices will have a positive update experience, and where we may need to wait until we have higher confidence in a great experience. Our overall rollout objective is for a safe and reliable update, which means we only go as fast as is safe.

Our AI/Machine Learning approach started with a pilot program during the Windows 10 Fall Creators Update rollout. We studied characteristics of devices that data indicated had a great update experience and trained our model to spot and target those devices. In our limited trial during the Fall Creators Update rollout, we consistently saw a higher rate of positive update experiences for devices identified using the AI model, with fewer rollbacks, uninstalls, reliability issues, and negative user feedback. For the April 2018 Update rollout, we substantially expanded the scale of AI by developing a robust AI machine learning model to teach the system how to identify the best target devices based on our extensive listening systems.

To me, it sounds like simple if-else statements would have implemented the whole thing without touching any AI; they mentioned that positive experiences include fewer rollbacks, uninstalls, and so on, so these could simply be used as the criteria for a positive experience.

I am just wondering if the word 'AI' is being misused, or can be misleading, in this context. Could anyone point this out to me or give any insight into how AI can be used in this context? In my experience, I have only seen AI mostly being used in speech recognition, image recognition and other classification-style problems, where there is training and consequently a computer can ""learn"" from the data, unlike an if-else statement. Today, AI seems to be everything that is considered ""smart"".

",16300,,16300,,6/16/2018 10:26,12/21/2022 16:05,How does Microsoft use AI to make Windows 10 updates smoother,,1,2,,,,CC BY-SA 4.0 6776,1,,,6/16/2018 21:03,,5,474,"

I understand how neural networks work and have studied their theory well.

My question is: On the whole, is there a clear understanding of how mutation occurs within a neural network from the input layer to the output layer, for both supervised and unsupervised cases?

Any neural network is a set of neurons and connections with weights. With each successive layer, there is a change in the input. Say I have a neural network with $n$ parameters, which does movie recommendations, and say $X$ is a parameter that stands for the movie rating on IMDB. In each successive stage, there is a mutation of the input $X$ to $X'$, and further to $X''$, and so on.

While we know how to mathematically talk about $X'$ and $X''$, do we at all have a conceptual understanding as to what this variable is in its corresponding $n$-dimensional parameter space?

To the human eye, the neural network's weights might be a set of random numbers, but they may mean something profound, if we could ever understand what they 'represent'.

What is the nature of the weights, such that, despite decades worth of research and use, there is no clear understanding of what these connection weights represent? Or rather, why has there been so little effort in understanding the nature of neural weights, in a non-mathematical sense, given the huge impetus in going beyond the black box notion of AI.

",16308,,2444,,12/16/2021 23:05,12/16/2021 23:05,What do the neural network's weights represent conceptually?,,2,2,,,,CC BY-SA 4.0 6777,2,,4581,6/17/2018 2:39,,2,,"

The representation of states is very important when preparing the data for neural networks. You can try different ways and pick the one that fits best in your case.

  • You can use 18 neurons as input, where each cell is represented by 2 bits (a small encoding sketch is given after this list). But avoid exact 0 and 1 values if you are using a sigmoid activation function, because they can cause saturation at the output: if the output y becomes 1 at any layer, then, on backpropagating the error, the factor y(1-y) dE/dy in the weight update becomes zero due to the saturation, which means the weights will stay in the same state forever.

This problem can be addressed by the following methods:

Solution 1. You can keep the inputs at some margin from 0 and 1. For example, inputs can be in [0.1, 0.9] instead of [0, 1].

Solution 2. Alternatively, you can initialize the weights to very small values in the range [-0.01, 0.01].

Solution 3. You can use a regularization technique, whose purpose is to suppress the weights by adding a penalizing term to the error.

  • To handle variance problems, you can augment the data for proper training, because, in tic-tac-toe, you have a small data set. To augment the data, you can add some noise in the range -0.1 to +0.1 to the inputs while keeping the same outputs.
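
For the encoding mentioned in the first point, a small sketch could look like this (the particular 2-bit codes are just one possible choice):

    # One possible 2-bit-per-cell encoding (9 cells -> 18 inputs), with values
    # kept away from 0 and 1 to avoid sigmoid saturation
    LO, HI = 0.1, 0.9
    codes = {'empty': (LO, LO), 'X': (LO, HI), 'O': (HI, LO)}

    def encode(board):
        # board: a list of 9 cells, each 'empty', 'X' or 'O'
        inputs = []
        for cell in board:
            inputs.extend(codes[cell])
        return inputs   # 18 values in [0.1, 0.9]

    print(encode(['X', 'O', 'empty'] * 3))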

I hope this may be helpful.

",16313,,32410,,10/2/2021 22:45,10/2/2021 22:45,,,,0,,,,CC BY-SA 4.0 6778,1,6820,,6/17/2018 9:33,,5,535,"

Consider the Breakout environment.

We know that the underlying world behaves like an MDP because, for the evolution of the system, it just needs to know what the current state (i.e. position, speed and direction of the ball, positions of the bricks and the paddle, etc.) is. But, considering only single frames as the state space, we have a POMDP, because we lack information about the dynamics [1], [2].

What could happen if we wrongly assume that the POMDP is an MDP and do reinforcement learning with this assumption over the MDP?

Obviously, the question is more general, not limited to Breakout and Atari games.

",15517,,2444,,12/12/2021 17:18,12/12/2021 17:19,What could happen if we wrongly assume that the POMDP is an MDP?,,1,0,,,,CC BY-SA 4.0 6781,1,6787,,6/18/2018 7:43,,0,185,"

Oxford philosopher and leading AI thinker Nick Bostrom defines SuperIntelligence as

""An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.""

Here is an interesting article that you may like: Tech Crunch.

Artificial general intelligence : Wiki

Taking into account current limitations and the amount of progress that has been made in recent years, what is a realistic timeframe to expect an AI that has human levels of cognition?

",16322,,16322,,6/21/2018 8:56,6/21/2018 8:56,"Is Really ""AI"" Light Years Away from achieving Cognitive Ability of Human?",,1,0,,,,CC BY-SA 4.0 6782,2,,6102,6/18/2018 8:05,,1,,"

What you mention there is the perfect example for path-planning, which is extensively researched in AI.

Please look up the A* (A-star) algorithm and how to enhance it with neural networks :)

",15919,,,,,6/18/2018 8:05,,,,0,,,,CC BY-SA 4.0 6783,1,6785,,6/18/2018 9:49,,1,147,"

I am currently exploring multi-agent reinforcement learning. I have multiple agents that communicate with each other and a central service that maintains the environment state.

The central service dispatches some information at regular intervals to all the agents (let's call this information energy). The information can be very different for each agent.

On reception of this information, the agents select a particular action. The execution of the action should leave the agent, as well as the environment, in a positive state. The action requires a certain amount of energy, which might change at every timestep. If an agent does not have sufficient energy, it may request energy from other agents. The other agents may grant or deny this request.

If all the agents are able to successfully perform their actions and leave the environment in a positive state, they get a positive reward.

As the environment is stochastic, and an agent's behavior depends on the behavior of other agents, can approximate Q-learning be used here?

",11584,,,,,6/18/2018 14:59,Can Q-learning working in a multi agent environment where every agent learns a behaviour independently?,,1,0,,,,CC BY-SA 4.0 6784,1,,,6/18/2018 11:01,,4,194,"

I have been working with AI methods. I am thinking about how my daughter (and also other kids) could learn mathematics with the help of AI. For example, how could an AI be used to show the mistakes that a kid makes during the learning path?

",15506,,2444,,6/16/2020 20:14,6/16/2020 20:14,How could an AI be used to improve the teaching and learning of mathematics?,,1,0,,,,CC BY-SA 4.0 6785,2,,6783,6/18/2018 14:59,,2,,"

Not particularly sure what you are asking, so the question that I will be answering is this:

Can Q learning be used to estimate a value that depends on another value in the Q Learning Matrix even though there is a certain amount of unpredictability involved?

The answer is yes!

I will use the example of a robotic arm trying to reach a point in space since that is what I am most familiar with.

Imagine a robotic arm with a shoulder, elbow, and wrist joint. The desired elbow value depends very much on the shoulder value which is also being learned. Given enough iterations, the Q learning algorithm will come up with a solution (out of possibly many) for the elbow joint based on where the shoulder joint is at that time.

The intrinsic unpredictability (air drag, motor power, etc in this case) is countered intuitively by the Q-learning algorithm iteratively learning what works best.

",6861,,,,,6/18/2018 14:59,,,,2,,,,CC BY-SA 4.0 6787,2,,6781,6/18/2018 16:41,,2,,"

If certain philosophers are correct, Artificial General Intelligence will be, like fusion power, ""always twenty years away"". For the true believers, it is an inevitability, and opinions vary.

It may be most useful to look at the unreliability human predictions in this area.

There was an article in the MIT Tech Review in 2017 that contained this graphic, based on a survey of researchers in 2015:


SOURCE: Experts Predict When Artificial Intelligence Will Exceed Human Performance (MIT Tech Review)

Most notable is that AlphaGo soundly bested the top human player in Go in March of 2016, years before even the most optimistic expert projections.

  • Everyone is just guessing, and we still don't know if AGI is possible, or merely a myth.

Nevertheless, recent breakthroughs are promising!

",1671,,,,,6/18/2018 16:41,,,,0,,,,CC BY-SA 4.0 6788,2,,1930,6/18/2018 17:35,,0,,"

The distinction between algorithms/robots and humans is that, when the human organism stops functioning, the human is considered dead.

By contrast, an algorithm still exists, even when not running. (I was going to use ""even when not being executed"", but avoided this for semantic reasons;) The algorithm can remain in this ""stasis state"" so long as there is a storage medium for the information.

  • Killing an algorithm is easy--delete and empty the trash bin.

Essentially, to kill an algorithm, you need to erase the code that comprises it.

",1671,,1671,,6/18/2018 18:36,6/18/2018 18:36,,,,0,,,,CC BY-SA 4.0 6789,1,6794,,6/18/2018 19:32,,6,804,"

My knowledge

Suppose you have a layer that is fully connected, and that each neuron performs an operation like

a = g(w^T * x + b)

where a is the output of the neuron, x the input, g our generic activation function, and finally w and b our parameters.

If both w and b are initialized with all elements equal to each other, then a is equal for each unit of that layer.

This means that we have symmetry; thus, at each iteration of whichever algorithm we choose to update our parameters, they will update in the same way, so there is no need for multiple units, since they all behave as a single one.

In order to break the symmetry, we could randomly initialize the matrix w and initialize b to zero (this is the setup that I've seen more often). This way a is different for each unit so that all neurons behave differently.

Of course, randomly initializing both w and b would be also okay even if not necessary.

Question

Is randomly initializing w the only choice? Could we randomly initialize b instead of w in order to break the symmetry? Is the answer dependent on the choice of the activation function and/or the cost function?

My thinking is that we could break the symmetry by randomly initializing b, since in this way a would be different for each unit and, since in the backward propagation the derivatives of both w and b depend on a (at least, this should be true for all the activation functions that I have seen so far), each unit would behave differently. Obviously, this is only a thought, and I'm not sure that it is absolutely true.

",16199,,2444,,12/16/2021 18:24,12/16/2021 18:44,Is random initialization of the weights the only choice to break the symmetry?,,3,0,,,,CC BY-SA 4.0 6791,2,,6776,6/18/2018 21:17,,2,,"

I don't know if my intuition is correct but I will give it a try.

You could see weights as how important one thing is; the problem is to understand what that thing represents. When I say thing, I'm referring to the output of a specific neuron. I don't think that we can say what the output of a neuron represents in the real world, unless we directly relate it to the real world through an error function, or unless the function used to compute that particular value has some meaning in the real world.

Edit:

If you want, you could actually build your neural network such that its neurons represent something. It's also very simple: you only have to write down all the equations relating to that particular topic. You could put them in one big system or, better, in several systems such that the outputs of system 1 are the inputs of system 2, and so on. You could convert each system into a layer where each neuron represents an equation. Note that, in this case, instead of the classical neuron with

z = dot(w.T, x) + b
a = g(z)

you would have a more complex equation for z (but still based on weights) and a linear activation function for a. In this case, you could name each neuron and say what it represents in the real world.

However, this isn't the purpose of a neural network. A neural network should have neurons with simple equations in order to be fast; thus, the linear interpolating function dot(w.T, x) + b is the best choice (the fact that the activation function is almost always non-linear, and in some cases a non-trivial function, is due to other things and could be an interesting question). A neural network should also be as general as possible, because it is usually built upon a system that you don't know completely.

So, I will slightly modify my answer: it is not simply that you don't know what a neuron represents; excluding the ones of the output layer, you don't even want them to have a meaning in the real world.

",16199,,16199,,6/20/2018 10:48,6/20/2018 10:48,,,,2,,,,CC BY-SA 4.0 6792,2,,6789,6/18/2018 23:28,,0,,"

w should be randomized to small (nonzero) numbers so that the adjustments made by the backpropagation are more meaningful and each value in the matrix is updated a different amount. If you start with all zeros, it will still work, but take longer to get to a meaningful result. AFAIK, this was found empirically by various researchers and became common practice.

Randomizing b does not have the same effect of helping, therefore most people do not bother.

This choice is one of many that is made by the architect of the network and theoretically you could use an infinite number of w matrix initializations. The one commonly used just happens to be tested and generally works.

This video is better at explaining than I am: Lecture 8.4 — Neural Networks Representation | Model Representation-II — [Andrew Ng].

",6861,,2444,,12/16/2021 18:44,12/16/2021 18:44,,,,3,,,,CC BY-SA 4.0 6793,2,,6789,6/19/2018 5:54,,2,,"

Most of the explanations given for choosing something or not choosing something (like hyperparameter tuning) in deep learning are based on empirical studies, for example analysing the error over a number of iterations. So, this answer is the kind of answer that people on the deep learning side give.

Since you have asked for a mathematical explanation, I suggest you read the paper Convergence Analysis of Two-layer Neural Networks with ReLU Activation (2017, NIPS). It talks about the convergence of SGD to global minima, subject to the weight initialisation being Gaussian, using ReLU as an activation function. The paper considers a neural net with no hidden layer, just input and output layers.

The very fact that an analysis of such a 'simple' network gets published in a very reputable and top conference itself suggests that the explanation you are seeking is not very easy, and very few people work on the theoretical aspects of neural nets. IMHO, after some years, as the research progresses, I might be able to edit this answer and give the necessary explanation that you sought. Till then, this is the best I could do.

",9062,,2444,,12/16/2021 18:43,12/16/2021 18:43,,,,5,,,,CC BY-SA 4.0 6794,2,,6789,6/19/2018 7:25,,5,,"

Randomising just b sort of works, but setting w to all zero causes severe problems with vanishing gradients, especially at the start of learning.

Using backpropagation, the gradient at the outputs of a layer L involves a sum multiplying the gradient of the inputs to layer L+1 by the weights (and not the biases) between the layers. This will be zero if the weights are all zero.

A gradient of zero at L's output will further cause all earlier layers (L-1, L-2, etc., all the way back to layer 1) to receive zero gradients, and thus not update either weights or bias at the update step. So the first time you run an update, it will only affect the last layer. Then the next time, it will affect the two layers closest to the output (but only marginally at the penultimate layer) and so on.
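
A tiny numerical check of this (a two-layer sigmoid network with every weight set to zero; the shapes and input values are arbitrary):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -1.0])                   # arbitrary input
    W1, b1 = np.zeros((3, 2)), np.zeros(3)      # all-zero initialisation
    W2, b2 = np.zeros((1, 3)), np.zeros(1)

    a1 = sigmoid(W1 @ x + b1)                   # hidden activations, all 0.5
    y = sigmoid(W2 @ a1 + b2)                   # output, 0.5
    delta2 = (y - 1.0) * y * (1 - y)            # output error signal for target 1

    grad_W2 = np.outer(delta2, a1)              # non-zero: the output layer can still move
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)    # all zeros, because W2 is all zeros
    grad_W1 = np.outer(delta1, x)

    print(grad_W1)                              # all zeros: no learning signal reaches layer 1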

A related issue is that a network with weights all zero, or all the same, maps all inputs, no matter how they vary, onto the same output. This also can adversely affect the gradient signal that you are using to drive learning - for a balanced data set you have a good chance of starting learning close to a local minimum in the cost function.

For deep networks especially, to fight vanishing (or exploding) gradients, you should initialise weights from a distribution that has an expected magnitude (after multiplying the inputs) and gradient magnitude that neither vanishes nor explodes. Analysis of the values that work best in deep networks is how Xavier/Glorot initialisation was discovered. Without careful initialisation along these lines, deep networks take much longer to learn, or in worst cases never recover from a poor start and fail to learn effectively.

Potentially, to avoid these problems, you could try to find a good non-zero fixed value for weights, as an alternative to Xavier initialisation, along with a good magnitude/distribution for bias initialisation. These would both vary according to the size of the layer and possibly by the activation function. However, I would suspect this could suffer from other issues, such as sampling bias - there are more weights, therefore you get a better fit to the desired aggregate behaviour when setting all the weight values randomly than you would by setting biases randomly.

",1847,,1847,,6/19/2018 8:31,6/19/2018 8:31,,,,9,,,,CC BY-SA 4.0 6795,2,,4216,6/19/2018 8:30,,0,,"

The meaning of strong AI has changed, as you correctly indicated, and the term ""strong narrow AI"" is more appropriate now that people are shifting towards practical uses.
This also brings to light the number of breakthroughs that AI is having: literature and other related resources will often (unlike in other fields) need to be updated constantly.

",15465,,,,,6/19/2018 8:30,,,,0,,,,CC BY-SA 4.0 6799,1,,,6/19/2018 11:45,,1,2254,"

So I built a CNN without any scientific libraries like TensorFlow or Keras (only NumPy). It is taking a huge amount of time to train. What are some of the tricks and tips people follow to speed up the training of a CNN? (I am not talking about dividing the work across different processors, but about subtle code-level optimisations, i.e. pre-calculating results in ways that are not obvious to most programmers.)

",,user9947,2193,,6/19/2018 15:30,6/20/2018 7:59,Speeding up CNN training,,2,9,,,,CC BY-SA 4.0 6800,1,,,6/19/2018 11:53,,30,5387,"

The paper The Limitations of Deep Learning in Adversarial Settings explores how neural networks might be corrupted by an attacker who can manipulate the data set that the neural network trains with. The authors experiment with a neural network meant to read handwritten digits, undermining its reading ability by distorting the samples of handwritten digits that the neural network is trained with.

I'm concerned that malicious actors might try hacking AI. For example

  • Fooling autonomous vehicles into misinterpreting stop signs vs. speed limit signs.
  • Bypassing facial recognition, such as the ones for ATM.
  • Bypassing spam filters.
  • Fooling sentiment analysis of movie reviews, hotels, etc.
  • Bypassing anomaly detection engines.
  • Faking voice commands.
  • Misclassifying machine-learning-based medical predictions.

What adversarial effects could disrupt the world? How can we prevent them?

",16322,,2444,,10/11/2019 22:32,11/5/2020 11:00,Is artificial intelligence vulnerable to hacking?,,7,5,,1/14/2022 12:59,,CC BY-SA 4.0 6801,2,,6799,6/19/2018 13:21,,1,,"

[Ref-some standard checks performed by programmers]

Speeding up Convolutional Neural Networks with Low Rank Expansions

From the abstract:

The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability.

Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain.

Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.

",16322,,16322,,6/19/2018 13:28,6/19/2018 13:28,,,,0,,,,CC BY-SA 4.0 6802,2,,6800,6/19/2018 13:41,,4,,"

I believe it is; no system is safe. However, I am not sure whether I will still be able to say this after 20-30 years of AI development/evolution. Anyway, there are articles that show humans fooling AI (computer vision):

https://www.theverge.com/2018/1/3/16844842/ai-computer-vision-trick-adversarial-patches-google

https://spectrum.ieee.org/cars-that-think/transportation/sensors/slight-street-sign-modifications-can-fool-machine-learning-algorithms

",16351,,,,,6/19/2018 13:41,,,,0,,,,CC BY-SA 4.0 6803,1,,,6/19/2018 13:53,,2,82,"

I have this table, 2 agents and I want to find for each agent if any action is strongly or weakly dominated. This is the table:

Now, I've found a solution, but I'm not sure if it's correct. So, for, let's say, agent 1 (the one handling rows): 1<2<2, 1<3<4 and 0<3<4, so I don't have strong dominance.

For agent 2: 1<=1<=1, 2<5<6 and 2<3<4, which means that I don't have strong dominance here either.

Is my logic correct?

",16352,,16352,,6/19/2018 14:03,6/19/2018 14:03,Strong and Weak Dominance Table,,0,2,,,,CC BY-SA 4.0 6804,2,,6800,6/19/2018 14:10,,1,,"

I concur with Akio that no system is completely safe, but the takeaway is that AI systems are less prone to attacks than older systems, because of their ability to constantly improve.

As time passes, more people will enter the field bringing new ideas, and hardware will keep improving, moving systems closer to ""strong AI.""

",15465,,75,,6/19/2018 18:41,6/19/2018 18:41,,,,0,,,,CC BY-SA 4.0 6805,2,,6800,6/19/2018 14:48,,6,,"

How we can prevent it?

There are several works on AI verification. Automatic verifiers can prove robustness properties of neural networks. This means that if the input X of the NN is perturbed by no more than a given limit ε (in some metric, e.g. L2), then the NN gives the same answer on it.

Such verifiers are done by:

This approach may help to check the robustness properties of neural networks. The next step is to construct a neural network that has the required robustness. Some of the above papers also contain methods for doing that.

There are different techniques to improve the robustness of neural networks:

At least the last one can provably make NN more robust. More literature can be found here.

",16354,,16354,,6/21/2018 17:31,6/21/2018 17:31,,,,6,,,,CC BY-SA 4.0 6806,2,,6800,6/19/2018 16:35,,7,,"

Programmer vs Programmer

It's an ""infinity war"": programmers vs. programmers. Everything can be hacked. Prevention is linked to the level of knowledge of the professionals in charge of security and of the programmers working on application security.

E.g. there are several ways to identify a user trying to mess up the metrics generated by sentiment analysis, but there are ways to circumvent those steps as well. It's a pretty boring fight.

Agent vs Agent

An interesting point that @DukeZhou raised is the evolution of this war to involve two artificial intelligences (agents). In that case, the battle goes to the more knowledgeable one: which is the better-trained model?

However, to come close to perfection on the vulnerability question, an artificial intelligence or artificial superintelligence would have to surpass the human ability to circumvent systems. It is as if the knowledge of every hack to this day already existed in the mind of this agent, and it began to develop new ways of circumventing its own system and building protection. Complex, right?

I believe it's hard to have an AI that thinks: ""Is the human going to use a photo instead of presenting his face to be identified?""

How we can prevent it

Always have a human supervising the machine, and even then it will not be 100% effective. This is disregarding the possibility that an agent can improve its own model on its own.

Conclusion

So I think the scenario works this way: a programmer tries to circumvent the validations of an AI, and the AI developer, acquiring knowledge through logs and tests, tries to build a smarter and safer model, reducing the chances of failure.

",7800,,7800,,7/3/2018 14:49,7/3/2018 14:49,,,,0,,,,CC BY-SA 4.0 6808,2,,6800,6/19/2018 17:13,,21,,"

AI is vulnerable from two security perspectives the way I see it:

  1. The classic method of exploiting outright programmatic errors to achieve some sort of code execution on the machine that is running the AI or to extract data.

  2. Trickery through the equivalent of AI optical illusions for the particular form of data that the system is designed to deal with.

The first has to be mitigated in the same way as for any other software. I'm uncertain whether AI is any more vulnerable on this front than other software; I'd be inclined to think that the complexity may slightly heighten the risk.

The second is probably best mitigated by both the careful refinement of the system as noted in some of the other answers, but also by making the system more context-sensitive; many adversarial techniques rely on the input being assessed in a vacuum.

",15114,,,,,6/19/2018 17:13,,,,2,,,,CC BY-SA 4.0 6810,1,6848,,6/19/2018 20:43,,2,48,"

How does the legal question about agents talking to humans over a telephone connection work? Recently, Google gave a talk about Duplex, where an agent makes a call to a human to schedule a hairdresser appointment.

I wonder whether there are any regulations related to this type of scenario, whether there are any limitations, and whether the human needs to know that he is talking to an AI.

",7800,,1671,,6/19/2018 20:48,6/22/2018 6:32,Digital Rights and Agents talking to humans,,1,0,,11/16/2021 13:27,,CC BY-SA 4.0 6812,2,,6800,6/19/2018 23:57,,0,,"

There are many ways to hack an AI. When I was a kid, I figured out how to beat a chess computer: I always followed the same pattern; once you learn it, you can exploit it. The world's best hacker is a 4-year-old who wants something; he will try different things until he establishes a pattern in his parents. Anyway, get an AI to learn the patterns of another AI, and given a particular combination you can figure out the outcome. There are also plain flaws or backdoors in code, either put there on purpose or present by chance. There is also the possibility that the AI will hack itself; it is called misbehaving. Remember the small child again...

BTW, a simple precaution is to make the AI always fail safe... something people forget.

",16367,,,,,6/19/2018 23:57,,,,0,,,,CC BY-SA 4.0 6813,2,,6800,6/20/2018 1:31,,4,,"

Is Artificial Intelligence Vulnerable to Hacking?

Invert your question for a moment and think:

What would put AI at less risk of hacking compared to any other kind of software?

At the end of the day, software is software and there will always be bugs and security issues. AIs are at risk of all the problems non-AI software is at risk of; being AI doesn't grant them some kind of immunity.

As for AI-specific tampering, AI is at risk of being fed false information. Unlike most programs, AI's functionality is determined by the data it consumes.

For a real world example, a few years ago Microsoft created an AI chatbot called Tay. It took the people of Twitter less than 24 hours to teach it to say ""We're going to build a wall, and mexico is going to pay for it"":

(Image taken from the Verge article linked below, I claim no credit for it.)

And that's just the tip of the iceberg.

Some articles about Tay:

Now imagine that wasn't a chat bot, imagine that was an important piece of AI from a future where AI are in charge of things like not killing the occupants of a car (i.e. a self-driving car) or not killing a patient on the operating table (i.e. some kind of medical assistance equipment).

Granted, one would hope such AIs would be better secured against such threats, but supposing someone did find a way to feed such an AI masses of false information without being noticed (after all, the best hackers leave no trace), that genuinely could mean the difference between life and death.

Using the example of a self-driving car, imagine if false data could make the car think it needed to do an emergency stop when on a motorway. One of the applications for medical AI is life-or-death decisions in the ER, imagine if a hacker could tip the scales in favour of the wrong decision.

How we can prevent it?

Ultimately the scale of the risk depends on how reliant humans become on AI. For example, if humans took the judgement of an AI and never questioned it, they'd be opening themselves up to all sorts of manipulation. However, if they use the AI's analysis as just one part of the puzzle, it would become easier to spot when an AI is wrong, be it through accidental or malicious means.

In the case of a medical decision maker, don't just believe the AI, carry out physical tests and get some human opinions too. If two doctors disagree with the AI, throw out the AI's diagnosis.

In the case of a car, one possibility is to have several redundant systems that must essentially 'vote' about what to do. If a car had multiple AIs on separate systems that must vote about which action to take, a hacker would have to take out more than just one AI to get control or cause a stalemate. Importantly, if the AIs ran on different systems, the same exploitation used on one couldn't be done on another, further increasing the hacker's workload.

",16369,,,,,6/20/2018 1:31,,,,2,,,,CC BY-SA 4.0 6814,1,6815,,6/20/2018 1:42,,1,220,"

I have a corpus, say an instruction manual. The text in this manual is grouped into chapters and each chapter is split up into sections. For example, Chapter 1/Section 1, Chapter 1/Section 2 and so on. Assume the corpus has C chapters and each chapter has S sections. My goal is, given a sentence or question, to classify this sentence/question. In other words, I want to compute the three most probable chapters to which this sentence or question belongs. I tried a MultinomialNB model using sklearn, but it did not give me the desired result. I want to try another approach, for example using a Neural Network, and compare it with the MultinomialNB model. I have Googled and found Doc2Vec but haven't tried it yet.

Can anyone suggest a better or another possible approach that I could try and compare? What is the standard approach to this kind of problem?

",16371,,,,,6/20/2018 9:56,Question classification according to chapters,,2,0,,,,CC BY-SA 4.0 6815,2,,6814,6/20/2018 5:21,,1,,"

You have got text, which consists of words. Each word has a dependency on the words occurring before it and/or after it. Capturing the ""context"" in which a particular word occurs is what sequence models in deep learning do. Sequence models that are quite often used are Recurrent Neural Networks (RNN), Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM), listed in the order of their functionality and complexity. To know more about them, visit Jonathan Hui's blog.

Your problem is a classification problem, i.e. given a sentence, the classifier should tell which chapter and section it belongs to. You can use one of the sequence models to ""encode"" the sentence as a vector (based on the context of the words in it and the meaning of the sentence) and pass that vector to a fully-connected neural network with a softmax layer at the end that tells which chapter and section the text belongs to. The number of classes is C * S (one for each chapter-section pair). From the output of the softmax layer, you can pick the top 3 classes that got the highest probability for a sentence.
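
As a rough sketch of that pipeline (Keras; the vocabulary size, sequence length and number of classes are made-up placeholder values, not taken from your setup):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size, max_len, num_classes = 10000, 50, 30   # assume C * S = 30 chapter-section pairs

model = Sequential()
model.add(Embedding(vocab_size, 128, input_length=max_len))  # word ids -> dense vectors
model.add(LSTM(64))                                          # encode the whole sentence as one vector
model.add(Dense(num_classes, activation='softmax'))          # one output per chapter-section pair
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# after model.fit(X_train, y_train, ...), the top-3 classes for a padded sentence x are:
# np.argsort(model.predict(x))[:, -3:]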

Doc2Vec converts a paragraph (which can also be a sentence) into a vector representation. The Doc2Vec model has been trained over a huge corpus of text to be able to capture the context of a text and represent it as a high-dimensional vector.

The corpus of text that you have comes from a probability distribution that is quite different from the one used to train the Doc2Vec model. Sequence models (mentioned above) can learn that probability distribution for your text corpus exclusively. In other words, you get a tailor-made representation for your text, so the context can be understood much better.

That is why I suggested that you use sequence models. You can also compare the performance obtained by using a pretrained Doc2Vec as the text representation with that obtained by using one of the sequence models.

To know about the effectiveness of RNN visit Andrej Karpathy's blog.

",9062,,,,,6/20/2018 5:21,,,,3,,,,CC BY-SA 4.0 6817,2,,6799,6/20/2018 7:59,,1,,"

Recommendations:

  • Try deleting variables that will no longer be used during run time
  • Use more efficient data structures
  • Get your hands on a library optimized for your hardware, e.g. if you are using Intel processors, use the Intel distribution of Python
  • Pay careful attention to your data types and try to trim them as much as possible (see the small sketch below)
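
For example, a minimal NumPy illustration of that last point (the array shape here is arbitrary):

import numpy as np

activations = np.random.rand(1024, 1024)           # defaults to float64
print(activations.nbytes // 1024, 'KiB')           # 8192 KiB

activations32 = activations.astype(np.float32)     # half the memory; usually enough precision
print(activations32.nbytes // 1024, 'KiB')         # 4096 KiB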
",15465,,,,,6/20/2018 7:59,,,,0,,,,CC BY-SA 4.0 6818,2,,6776,6/20/2018 9:47,,0,,"

It's a bit of a challenge to answer your question, since you appear not to be really familiar with the basics. You're talking about mutations and changes to the input.

No. The input is a vector of data, which initializes the values of the input nodes. The first layer of weights is then used to calculate the values for the next layer of nodes. This next layer is not a ""mutation"" of the input layer; that would suggest the second layer of nodes is similar but not exactly identical to the first layer.

In reality, it's very common that the second layer of nodes does not even have the same shape as the first layer.

You are even wondering if certain weights have a certain meaning. That's even easier to answer. We know these networks are quite robust. We can ignore a significant percentage of the weights, and the classifications will change only a little. This shows that no individual weight represents a specific aspect of the network.

",16378,,,,,6/20/2018 9:47,,,,1,,,,CC BY-SA 4.0 6819,2,,6814,6/20/2018 9:56,,1,,"

The easiest way would probably be not to use machine learning at all. Create an inverted index of words (i.e. for each word occurrence, record the chapter and section), and then use an information retrieval algorithm (e.g. TF-IDF) to find the section that best matches your question.
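
A minimal sketch of that idea with scikit-learn (the section texts, labels and the query below are just placeholders):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sections = ['text of chapter 1 section 1 ...', 'text of chapter 1 section 2 ...']  # one string per section
labels = [('chapter 1', 'section 1'), ('chapter 1', 'section 2')]

vectorizer = TfidfVectorizer(stop_words='english')
section_vectors = vectorizer.fit_transform(sections)

query = 'how do I reset the device'
scores = cosine_similarity(vectorizer.transform([query]), section_vectors)[0]
top3 = scores.argsort()[::-1][:3]       # indices of the 3 best-matching sections
print([labels[i] for i in top3])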

This will be more efficient (no huge models to train) and more transparent (you can easily see why a particular section has been selected).

Additional steps you can take to improve performance are stemming/lemmatising words and adding synonym lists (e.g. in a car manual application, you might want to treat trunk and boot as the same word).

I have used that approach recently in a commercial project, and it performed well.

",2193,,,,,6/20/2018 9:56,,,,3,,,,CC BY-SA 4.0 6820,2,,6778,6/20/2018 10:02,,2,,"

What could happen if we wrongly assume that the POMDP is an MDP and do reinforcement learning with this assumption over the MDP?

It depends on a few things. The theoretical basis of reinforcement learning needs the state descriptions to have the Markov property for guarantees of convergence to optimal or approximately optimal solutions. The Markov property is a requirement that the state defines 100% of the controllable variation of reward and next state (given the action) - the rest must be purely stochastic.

An MDP can be "nearly Markov", and a lot of real-world physical systems are like that. For instance, pole-balancing and acrobot tasks can be implemented as physical systems using motors, wheels, joints etc. In those real systems, there are limits to accuracy of measurement of the state, and many hidden variables, such as variable temperature (affecting length of components), friction effects, air turbulence. Those hidden variables taken strictly by formal definition would make the system a POMDP. However, their influence compared to the key state variables is low, and in some cases effectively random from the perspective of the agent. In practice RL works well in the real physical systems, despite state data being technically incomplete.

In Atari games using multiple frame images as states, there are varying degrees to which those states are already non-Markovian. In general, a computer game's state may include many features that are not displayed on the screen. Enemies may have health totals or other hidden state, there can be timers controlling the appearance of hazards, and in a large number of games the screen only shows a relatively small window into the total play area. However, the Deep Mind DQN network did well on a variety of scrolling combat and platform games.

One game where DQN did notably badly - no better than a default random player - was Montezuma's Revenge. Not only does that platform puzzler game have a large map to traverse, but it includes components where state on one screen affects results on another.

It is hard to make a general statement about where an MDP with missing useful state information would benefit from being treated as a POMDP more formally. Your question is essentially the same thing expressed in reverse.

The true answer for any non-trivial environment would be to try an experiment. It is also possible to make some educated guesses. The basis for those guesses might be the question "If the agent could know hidden feature x from the state, how different would expected reward and policy be?"

For the breakout example using each single frame as a state representation, I would expect the following to hold:

  • Value estimates become much harder since seeing the ball next to a brick - compared to seeing a ball progressively get closer to a brick over 4 frames - gives much less confidence that it is about to hit that brick and score some points.

  • It should still be possible for the agent to optimise play, as one working strategy is to position the "bat" under the ball at all times. This will mean less precise control over angle of bounces, so I would expect it to perform worse than the four-frame version. However, it should still be significantly better than a default random action agent. A key driver for this observation is that seeing the ball close to the bottom of the screen, and not close to the bat, would still be a good predictor of a low expected future reward (even averaged over chances of ball going up vs going down), hence the controller should act to prevent such states occurring.

",1847,,2444,,12/12/2021 17:19,12/12/2021 17:19,,,,2,,,,CC BY-SA 4.0 6823,1,6891,,6/20/2018 15:13,,2,747,"

I was trying to implement NEAT, but I got stuck at the speciating of my clients/genomes.

What I got so far is:

  1. the distance function implemented,
  2. each genome can mutate nodes/connections,
  3. two genomes can give birth to a new genome.

I've read a few papers, but none explicitly explains in what order each step is done. What is the order of the genetic operations in NEAT?

I know that for each generation, all the similar genomes will be put together into one species.

I have other questions related to NEAT.

Which neural networks are killed (or not) at each generation?

Who is being mutated and at what point?

I know that these are a lot of questions, but I would be very happy if someone could help me :)

",16353,,2444,,10/13/2019 1:29,10/13/2019 1:29,What is the order of the genetic operations in NEAT?,,1,0,,,,CC BY-SA 4.0 6826,1,,,6/20/2018 17:39,,3,230,"

As is done traditionally, I used k-fold cross-validation to select and optimize the hyperparameters of my neural network classifier. When it was time to store the final model for future predictions, I discovered that using the weights from the previous k-fold CV iteration to seed the initial weights of the model in the subsequent iteration helps improve the accuracy (which seems obvious). I can use the model from the final iteration to perform future predictions on unseen data.

  • Would this approach result in overfitting?

(Please note, I am using all available data in this process and I do not have any holdout data for validation.)

",16408,,1671,,6/20/2018 18:54,6/21/2018 10:35,If use the weights from previous iteration of a k-fold cross validation to seed a neural network classifier would I be overfitting?,,1,0,,,,CC BY-SA 4.0 6827,1,6847,,6/20/2018 19:03,,2,92,"

I am reading the Simon Haykin's cornerstone book, ""Neural Networks, A Comprehensive Foundation, Second Edition"" and I cannot understand a paragraph below:

The analysis of the dynamic behaviour of neural networks involving the application of feedback is unfortunately complicated by virute (or virtue I cannot get word appropriately) of the fact that the processing units used for the construction of the network are usually nonlinear. Further consideration of this issue is deferred to the latter part of the book.

Before this paragraph, the author analyses the effect of the synaptic weight on the neural network's stability. Roughly speaking, he says that if |w| >= 1 the neural network becomes unstable.

Could you please explain the paragraph? Thanks in advance.

",14862,,14862,,6/21/2018 8:07,6/23/2018 8:41,The analysis of the dynamic behaviour of neural networks involving the application of feedback,,1,3,,,,CC BY-SA 4.0 6832,1,,,6/21/2018 1:22,,2,80,"

Can anyone recommend a reinforcement learning algorithm for a multi-agent environment?

In my simplified example, I'm implementing a Q-Learning system with 10 different agents. The agents compete for resources in stores at different locations by setting a bid price for each item.

All of the agents have different bids and a pooled budget of $100. Once the budget is exhausted, the agents cannot buy any more that day.

Each agent will receive a reward if they buy an item. The goal would be to maximize the total amount of items bought between the agents.

Right now the agents don't communicate.

Can someone point me in the right direction for an algorithm that allows agent cooperation?

",16414,,2444,,2/13/2019 21:30,2/13/2019 21:30,Algorithms for multiple agents problems,,0,2,,,,CC BY-SA 4.0 6835,2,,6826,6/21/2018 10:35,,1,,"

To answer your question directly: estimating a model's performance on data previously used for fitting will overestimate that performance.

When your dataset is ""small"", you are faced with a bias-variance dilemma with regard to how much data should be used for the train and test sets:

  • Too much training data: you end up with few test samples and your performance estimate has high variance.

  • Too much test data: your training samples do not represent well the population you are trying to model/target, so your average performance will be much lower than what could be achieved.

K-fold CV is a compromise in the evaluation of the performance of a certain procedure. Once you have settled on a certain model or hyperparameters, when moving to production you can:

  • Choose to take one of the k models you have trained, perhaps using the ""one std variation rule of thumb"".

  • Re-train on all of your data and you can expect the resulting model to be at least as good as you have estimated.

  • Use all k models to form an ensemble and you can expect the resulting model to be at least as good as you have estimated.
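
To make the distinction concrete, here is a minimal scikit-learn sketch (logistic regression and random data are just placeholders) of estimating a procedure's performance with k-fold CV and then re-training on all the data for production (the second option above):

from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LogisticRegression
import numpy as np

X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)   # placeholder data

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores.mean(), scores.std())             # estimate of how well this procedure performs

final_model = LogisticRegression().fit(X, y)   # re-train on all data for production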

",16322,,,,,6/21/2018 10:35,,,,1,,,,CC BY-SA 4.0 6836,1,6862,,6/21/2018 12:00,,4,289,"

Recently, I have been learning about new neural networks, which are used for specialized purposes, like speech recognition, image recognition, etc. The more I discover, the more I am amazed by the cleverness behind models such as RNNs and CNNs. Questions about their workings, intuition and mathematics have been asked a lot in this community, all with vague answers and only apparent understanding.

So, my question is: did the researchers come up with these specialized models accidentally, or did they follow particular steps to get to the model (like in a mathematical framework)? And how did they look at a particular class of problem and think "Yeah, a better solution might exist"?

Since the understanding of NN's is so vague, these are 'high risk, high reward' scenarios, since you might be chasing only the mirage (illusion) of a solution.

",,user9947,18758,,12/23/2021 10:26,12/23/2021 10:26,Are Neural Net architectures accidental discoveries?,,2,0,,,,CC BY-SA 4.0 6837,2,,6836,6/21/2018 13:16,,1,,"

Researchers may follow specific mathematical frameworks and techniques to come up with amazing work, just like in any field, but I believe in Darwinian natural selection as a base theory for human discoveries as well as for evolutionary neural net architectures.

""Principle by which each slight variation [of a trait], if useful, is preserved"".

",16322,,,,,6/21/2018 13:16,,,,2,,,,CC BY-SA 4.0 6838,2,,5369,6/21/2018 14:29,,2,,"

The derivative gives the rate of change in $y$ for a small change in $x$, i.e. the slope of the function at the point $x$.

In the above function,

y = x      for x >= 0,     i.e. dy/dx = 1
y = x/20   for x < 0,      i.e. dy/dx = 1/20

The following function returns the derivative of the leaky ReLU as explained above:

private double leaky_relu_derivative(double x)
{
    if (x >= 0)
        return 1;
    else
        return 1.0 / 20;
}
",16426,,2444,,5/30/2020 13:01,5/30/2020 13:01,,,,0,,,,CC BY-SA 4.0 6840,2,,6094,6/21/2018 19:07,,-1,,"

Maybe a simple regex solves this. But you will probably need to supervise it, approving and disapproving anomalies.

See this example; I had the same problem some time ago: https://stackoverflow.com/questions/50689935/regex-like-commands-python

",7800,,,,,6/21/2018 19:07,,,,0,,,,CC BY-SA 4.0 6846,1,,,6/22/2018 1:51,,2,41,"

I'm eventually looking to build an algorithm that will process answers from humans who are given questions. But first I have to set up an experiment to determine the variety of responses.

Specifically, humans will be asked a multiple choice question that has a single correct answer. I want to understand what kinds/ranges of responses I would get from the bell curve distribution of human intelligence.

Is there any way I can have, say, 1000 ""humans"" be asked a prompt, repeated 100 times (the same question), and then compile the responses? My concern is that I'll have to build some algorithm or process for each dumb, average, or smart ""human"" to follow, but then I would introduce bias in how smart they are or limit how they may respond. I'm guessing I'll have to give them a data set to work from.

To clarify, it's not the number of times a single user gets a question right that makes them smart, they have to be programmed dumb, smart etc. before the simulation starts. So dumb users could get some right and smart can get some wrong.

I'm not sure the Monte Carlo method is useful here but some type of simulation where I can specify the distribution (normal) and then bound the responses would be helpful.

I have access to Excel, Minitab, and Python. Any ideas how to set up an experiment like this? I really am open to any technique to measure this.

",16435,,,,,6/22/2018 14:16,How can I simulate responses from the distribution of human intelligence?,,1,0,,,,CC BY-SA 4.0 6847,2,,6827,6/22/2018 5:58,,0,,"

Although you have not given the entire context, if I were to speculate, I would suggest the author is simply trying to say that neural networks are quite difficult to analyse mathematically due to their non-linear nature (due to the use of non-linear activations). There are many questions on this Stack about the mathematical basis of NNs.

Do scientists know what is happening inside artificial neural networks?

Forward propagation alone is difficult enough to analyse, and now that we have included feedback it becomes exponentially tougher. You can easily appreciate the difficulty of such an analysis if you take the case of feedback amplifiers in electronics. In normal sound amplifiers, to avoid distortion, we use negative feedback. This has roughly an inverted waveform compared to the input signal. It adds to the input signal and smooths out the distortions (as opposite waveforms of not entirely the same magnitude add up).

|W| > 1 should be the case of positive feedback, which can cause massive variations in the output from small changes, due to its self-reinforcing property. The waveforms are of the same shape and they add up; this results in an even larger waveform in the next cycle, and so on. So the author is probably speaking along these lines.

",,user9947,,user9947,6/23/2018 8:41,6/23/2018 8:41,,,,0,,,,CC BY-SA 4.0 6848,2,,6810,6/22/2018 6:32,,3,,"

Googling this throws up a lot of debate on the issue of whether the call made to the restaurant for a booking was legal or not. I found this article, which puts forward a lot of ideas for and against. So it is for us to decide.

But one thing that I fully agree with you on, which should be made into a regulation, is that a human should know that he/she is talking to a bot. This tweet by @traviskorte explains why:

We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas.

",9062,,,,,6/22/2018 6:32,,,,0,,,,CC BY-SA 4.0 6849,2,,4629,6/22/2018 7:06,,0,,"

In order to train a neural network, you have to adjust the weights and biases to reduce the cost as much as possible. The way to do so is to subtract from each parameter a small amount of the partial derivative of the cost w.r.t. that parameter.

if J is our cost function, after each iteration:

w = w - lr*dJ_dw,      //where lr is a small scalar called learning rate and dJ_dw is the partial derivative of cost function w.r.t. w

and same for bias

b = b - lr*dJ_db,    //dJ_db is the partial derivative of cost function w.r.t b

Let's look at how the partial derivatives are calculated.

Using the sigmoid as the activation function and the squared error as the cost, we have:

z = w*x + b
a = sigmoid(z)       // sigmoid(z) = 1.0 / (1.0 + exp(-z)), a is the final output

J = (a - y)*(a - y)  // where y is the expected output

To calculate the partial derivatives of this cost function w.r.t. w and b, we need to use the chain rule of derivatives:

dJ_dw = dJ_da * da_dz * dz_dw    // dJ_da is the partial derivative of cost w.r.t. activation, a
dJ_db = dJ_da * da_dz * dz_db

In the above equations, da_dz is the derivative of the activation function (the sigmoid in our case), which is sigmoid(z)*(1 - sigmoid(z)); similarly, dJ_da = 2*(a - y), dz_dw = x and dz_db = 1.
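
Putting the pieces together, here is a minimal NumPy sketch of one full update step for a single neuron (the input, target and learning rate are arbitrary values picked for illustration):

import numpy as np

x, y = np.array([0.5, 1.5]), 1.0        # one training example
w, b, lr = np.array([0.1, -0.2]), 0.0, 0.1

z = np.dot(w, x) + b
a = 1.0 / (1.0 + np.exp(-z))            # sigmoid activation

dJ_da = 2 * (a - y)                     # derivative of the squared error (a - y)^2
da_dz = a * (1 - a)                     # derivative of the sigmoid
dJ_dw = dJ_da * da_dz * x               # dz_dw = x
dJ_db = dJ_da * da_dz * 1               # dz_db = 1

w = w - lr * dJ_dw
b = b - lr * dJ_db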

",16426,,,,,6/22/2018 7:06,,,,0,,,,CC BY-SA 4.0 6850,1,,,6/22/2018 7:24,,3,74,"

I think I have implemented the code (from scratch, no libraries) for an artificial neural network (my first endeavour in the field). But I feel like I am missing something very basic or obvious.

To make it short: the code works for a single pair of in-/out-values, but fails for sets of value pairs. I do not really understand the training process, so I want to get this issue out of the way first. The following is my improvised training (aka all that I can think of) in pseudocode.

trainingData = [{in: [0,0], out:[0]}, {in: [0,1], out:[0]}, ...];
iterations = 10000

network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)


while(trainingData not empty) {
  for(0<iterations) { 
     set = trainingData.pop()

     updateInput(network, set.in)

     forwardPropagate(network, links) 

     linkUpdate = backPropagate(network, links, set.out)

     updateLinks(linkUpdate, links)}
}

Is this how it is supposed to work? Do you feed in your training data set by set (while-loop)?

Edit 1: because my final comment did distract from the issue at hand.

Edit 2: less wordy, more code-y

",16441,,16441,,6/22/2018 12:01,8/16/2018 6:50,How to actually teach the ANN the resulting weights of different training inputs?,,2,6,,,,CC BY-SA 4.0 6851,2,,6846,6/22/2018 11:29,,2,,"

The data you want to collect cannot be reliably simulated at this time. There is no current realistic simulator for a human performing reading comprehension.

The actual error rates and specific wrong answers chosen on the questions will depend to a large degree on specific humans, and the nature of questions. As you are hoping to get realistic results, the only method that will work for your ground truth data is to present 1000+ real humans with your sample questions. In addition, if you want to categorise your humans into ""smart"", ""dumb"" etc, you will need to run additional tests on them, such as an IQ test, in order to create those categories.

Depending on context, such as the nature of the questions you want to assess, you may be able to obtain some anonymised data from real-world exams that could help, instead of trying to generate that data yourself. In that case you could make an approximate model for humans answering multiple-choice questions - perhaps by training an LSTM-based natural language model. For best accuracy on your questions you would want the training set to include similar kinds of question. There is still a caveat that NNs do not really do logic or reasoning, they make statistical fits, so can easily get answers logically wrong or select nonsense answers. The best general NLP models still fail badly at semantics.

If you have specific set of questions to evaluate - and are not willing to ignore the content of the questions - then no machine can currently match distribution of human behaviour on such a task without extensive training data and significant effort.

If you don't care about the content of the questions, or assessing their difficulty, or indeed any feedback about the specific questions, then you could maybe use a dataset from any multi-choice questionnaire to get statistics of correct answer accuracy. With this approach your simulation could just be a simple distribution over ""correct"", ""most plausible incorrect"" etc answers where you choose ""most plausible incorrect"" either manually or randomly (but consistently). This will get you a distribution of responses similar to known real-world data. It could be used to unit-test a scoring system perhaps, or demonstrate some stats or visualisation software. But the question and answer text may as well be gibberish at that point.

",1847,,1847,,6/22/2018 14:16,6/22/2018 14:16,,,,0,,,,CC BY-SA 4.0 6857,2,,4766,6/22/2018 19:47,,1,,"

There has been some recent work on this: Investigating Capsule Networks with Dynamic Routing for Text Classification

Seems some are having some success with it.

",16454,,2444,,6/9/2020 11:51,6/9/2020 11:51,,,,2,,,,CC BY-SA 4.0 6859,1,,,6/22/2018 20:18,,3,769,"

I'm new to machine learning, so I figured I should look into Google's TensorFlow guides, and since I know how to code in JS, I'm using tensorflow.js. There's an example in the guide that trains itself to recognize handwritten numbers from the MNIST handwriting dataset. I sort of understand what's going on in the code, but since I'm very new to ML, not a lot. I went through the code and saw that it doesn't take the images one by one to train itself; instead it requests one sprite which contains all the images and then cuts it into what it needs. This makes sense from a performance point of view, but as this process is kind of abstract, I don't understand what's really going on. I want to upload an image of my own and call the model's predictor, but I don't know how to do it. Any help?

I was thinking that drawing a number on a 28x28 canvas, instead of uploading an image, might be very interesting as well, but I need to know how to test the trained model with my own data.

The tutorial: https://js.tensorflow.org/tutorials/mnist.html

",16078,,,,,6/25/2018 16:53,How to load an image into tensorflow.js code which reads handwritten numbers and clasify them,,1,0,,,,CC BY-SA 4.0 6860,1,6864,,6/23/2018 6:52,,3,625,"

Why does a skewed contour (unscaled features) result in slow convergence of gradient descent? In other words, how (or why) do the gradient steps end up taking a long time to find the global minimum in such cases? This might be an obvious question, but I'm having a hard time visualizing the 3D shapes of the respective contours and relating them to the convergence.

The left one is the contour for the unscaled features and the right one is for the scaled features (and will apparently converge quickly).

",10549,,-1,,6/17/2020 9:57,9/13/2018 6:57,Why Feature Scaling for skewed contour?,,1,0,,,,CC BY-SA 4.0 6862,2,,6836,6/23/2018 7:59,,4,,"

Although there is a strong element of ""try and see"" that has driven successful architectures, the drivers for what to try are often inspired by underlying theory or knowledge from other disciplines.

Specifically for basic CNN, which led to AlexNet and many of the best image processing, the concept of using local receptive fields in layers was inspired by study of neurons in the cat visual system.

Modern RNNs also did not appear out of nowhere; there has long been an appreciation of the difference between a feed-forward network and a recurrently-connected one, and of the different applications possible. The step change to LSTM was a deliberate response to analysis of the problems in training the simplest forms of RNN.

Like much of science, these things are also driven by success in the real world following the research. Many promising ideas have been tried and rejected. Some have been used for a while then superseded, e.g. using RBMs or stacked auto-encoders to pre-train deep networks before ReLUs and Xavier initialisation were discovered - although both RBMs and auto-encoders still have their niches.

Tweaks to architectures, such as variants of LSTM/GRU, may even be deliberately searched and assessed as part of research. That is done with the explicit knowledge that this part of finding a good design is best done as a search across possibilities.

Despite the evolution-like progress, presenting all such advances as completely random or pure GA-like search is ignoring the conscious effort and research that leads to the designs. If you search literature on any major successful design (such as the existence of RNNs or CNNs in the first place), and read the papers, you will often find that modern neural network architectures have deep roots in older research, plus have mathematical and/or scientific justifications for the choices made.

",1847,,1847,,6/23/2018 8:18,6/23/2018 8:18,,,,0,,,,CC BY-SA 4.0 6863,1,,,6/23/2018 8:25,,1,97,"

Say I have an application where the frequency of the input is known but can vary widely across sequences. For example, the inputs may be audio recordings acquired at different sampling frequencies, or videos that come from surveillance cameras whose framerate can vary from 24fps down to as low as 1fps.

The straightforward thing to do would be to either:

  • resample inputs to a constant frequency
  • ignore input frequency and hope the RNN will figure it all out

Neither sounds very appealing. Is there a better way to handle variable input frequency in RNNs?

",16466,,,,,6/23/2018 8:25,How to adapt RNNs to variable frequency / framerate of inputs?,,0,3,,,,CC BY-SA 4.0 6864,2,,6860,6/23/2018 8:34,,1,,"

Feature scaling makes your features have an equal representation, or say, in the final loss function. Intuitively, as your picture suggests, the contour will be elongated along one axis (the feature with higher values).

Another example: let's say you have to separate two data clusters, one at (x,2) and another at (x,5), where x is a variable. Now, if you are using a sigmoid activation, the sigmoid always gives positive values, but to separate the two classes you need a negative value for (x,2) in the second-last layer. So the NN will take a long time adjusting the weights of the bias and the other features to be negative (assuming positive weight initialisation) until (x,2) gives a negative value in the second-last layer. With feature normalisation, the initial value would already have been negative, something like (x,-1/3).
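
For example, a quick remedy is to standardise each feature before training. A minimal NumPy sketch, with made-up data on very different scales:

import numpy as np

X = np.array([[2000000.0, 3.0],     # e.g. house price and number of rooms: very different scales
              [ 350000.0, 2.0],
              [ 900000.0, 4.0]])

X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)   # zero mean, unit variance per feature
print(X_scaled)   # both columns now live on a comparable scale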

EDIT: From my answer on DataScience.SE:

  • Purpose: Normalisation is done so that the neural network weights converge faster. In CNNs and deep neural nets this is of particular help; in CNNs especially, it helps to prevent exploding/vanishing gradients.

The most common explanation for normalisation I have come across is that if you have 2 features and one of them has a significantly larger scale than the other (e.g. house price and house area), then the feature with the larger scale will dominate the output. This is quite incorrect according to me, since when you back-propagate through the neural net the weight updates are directly proportional to the activations, so a larger activation means larger feedback and hence the weights get reduced faster and become smaller until approximately w1*house price = w2*house area holds true. Yes, it will lead to more oscillations (intuitively, since the learning rate also gets multiplied by the larger scale), but it will ultimately probably converge.

So the best 3 reasons for using normalisation are:

  • If the scale of a feature is large, the weights connected to that feature will have larger oscillations, resulting in slower convergence (and, if a deep NN is used, probably no convergence), whereas normalisation helps by making the values small (-1 to 1), so the gradient updates are also small, resulting in faster convergence.
  • The best intuition for normalisation can be found in this Stanford video and its subsequent video. Since we know the weight updates are directly proportional to the inputs, they will also always take the sign of the inputs (or the exact opposite sign). Now, we know house price and area are always positive (in our Universe at least!). So the weight updates will always have a definite sign, either both positive or both negative (depending on the sign of the downstream gradients). But the optimal weights may lie in the 4th quadrant, so the weight updates will have to follow a zig-zag pattern to effectively make a weight update into the 4th quadrant.
  • Finally, when you are dealing with deep neural nets like CNNs, if you do not normalise the pixels it will result in exponentially large/vanishing gradients. Since softmax/sigmoid is generally used in the last layer, it squashes the outputs. If you have a large output, generally due to un-normalised data, it will result in an output of exactly 0 or exactly 1, which is fed into a log function and BAM! overflow. The error becomes inf or NaN in Python. An inf error means exploding gradients and NaN means the gradient cannot be calculated. This can probably be remedied by using higher floating-point precision, but it will result in higher memory and processor consumption, and ultimately inefficiency.

TL;DR: Normalisation is used for faster weight convergence. Issues caused by un-normalised data are larger weight oscillations, weight updates in non-optimal directions, and precision overflow in deep neural nets.

Here are a few great links for better insight:

How and why do normalization and feature scaling work?

Why feature scaling?

",,user9947,,user9947,9/13/2018 6:57,9/13/2018 6:57,,,,0,,,,CC BY-SA 4.0 6868,1,,,6/23/2018 17:25,,1,131,"

I've selected more than 10 discriminative (classification) models, each wrapped with a BaggingClassifier object, optimized with a GridSearchCV, and all of them placed within a VotingClassifier object.

Alone, they each bring around 70% accuracy, on a data set which is about half normal/uniform distributed and half one-hot distributed. Together, they provide 80% accuracy, which isn't good enough, given that I was told that >95% is achievable.

The models: DecisionTreeClassifier, ExtraTreesClassifier, KNeighborsClassifier, GradientBoostingClassifier, LogisticRegression, SVC, Perceptron, and a few more classifiers.

How do I check if the combination is good?

",16474,,2444,,11/13/2020 19:13,11/13/2020 19:13,How do I check that the combination of these models is good?,,1,4,,,,CC BY-SA 4.0 6870,2,,6868,6/24/2018 1:14,,2,,"

Goodness is subjective. Reliable knowledge isn't possible with that flimsy a quality objective.

The sturdy objective criterion you gave is 95%, so it is bad by that criterion. (I'm assuming that the 95% is expected for a given data set or a randomized sample from a given data set.)

However, the 80% accuracy is good by the criterion where you sum the measures of the accuracy of the individual models, divide by the number of models, and find you have gained ten percentage points of accuracy over that average with your aggregated execution strategy. (I'm assuming here that you used a defined set of network meta-parameters, layer depths and widths, starting parameters, activation model mapping to layers, inter-network connectivity, and loss/error methods for each model that is similar to the aggregated execution strategy.)

I have four questions. (My apologies that this question leads to the two assumptions above and a bunch more questions.)

  • Is the 80% accuracy also well over the maximum of the accuracies of the set of accuracies from the individual models?
  • Could the best of the individual models, given additional run time, reach 80% accuracy using less than or equal to the computing resources your aggregated strategy needs to achieve 80%?
  • Have you run your evaluation with a full set of meta parameter vectors to check the entire meta-space for best case?
  • What economic, contractual, or operational hard stop is dictating the 95%?

If we know these answers, we may be able to respond more effectively and possibly find a loophole in the logic that appears to leave you with an undesirable foregone conclusion.

",4302,,,,,6/24/2018 1:14,,,,0,,,,CC BY-SA 4.0 6872,1,6879,,6/24/2018 3:40,,5,310,"

The endocannabinoid system is a very important function of human biology. Unfortunately, due to the illegality of cannabis, it is a relatively new field of study. I have read a few articles about Google researching the role of dopamine in learning, and according to this article, anandamide (the neurotransmitter that closely resembles tetrahydrocannabinol):

was found to do a lot more than produce a state of heightened happiness. It’s synthesized in areas of the brain that are important in memory, motivation, higher thought processes, and movement control.

Have any neuroscientists (or any scientists) considered the importance of the endocannabinoid system for cognitive function?

If not, is there any reason this information might or might not be relevant to artificial intelligence?

",16480,,2444,,3/13/2020 22:37,3/13/2020 22:37,What is the importance of the endocannabinoid system for cognitive function?,,1,0,,,,CC BY-SA 4.0 6875,1,,,6/24/2018 20:41,,4,4379,"

I am a deep learning beginner who has recently been reading the book ""Deep Learning with Python"". The example explains the process of implementing greyscale image classification using MNIST in Keras. In the compilation step, it says,

Before training, we’ll preprocess the data by reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval. Previously, our training images, for instance, were stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. We transform it into a float32 array of shape (60000, 28 * 28) with values between 0 and 1.

The images are stored in an array of shape (60000, 28, 28) of type uint8 with values in the [0, 255] interval. To my understanding, each pixel's value is between 0 and 255, and the data is stored as a 3D matrix. Can someone explain why it needs to be ""transformed"" into the shape the network expects and scaled so that ""all values are in the [0, 1] interval""?

Please also make suggestions if I didn't explain some parts correctly.

",6390,,,user9947,7/24/2018 3:08,4/9/2021 12:00,"What is the purpose of ""reshaping it into the shape the network expects and scaling it so that all values are in the [0, 1] interval.""?",,2,1,,,,CC BY-SA 4.0 6876,2,,6875,6/25/2018 2:17,,0,,"

'Tis easy, but often misunderstood. What they mean is to map the values from the range 0-255 to the range 0-1. This means that 0 would stay 0, and 255 would become 1. The code for this in JavaScript is as follows:

function map (num, in_min, in_max, out_min, out_max) {
  return (num - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}

Use the function, like this:

var num = 5;
console.log(map(num, 0, 255, 0, 1)); // 0.0196078431372549
var num = 150
console.log(map(num, 0, 255, 0, 1)); // 0.5882352941176471

Iterate over the entire image and use the function (or your programming language's equivalent) on each pixel. By doing so, all values end up in the [0, 1] interval. Next, all you have to do is feed it to the network.

",14723,,,,,6/25/2018 2:17,,,,2,,,,CC BY-SA 4.0 6878,1,,,6/25/2018 7:19,,4,126,"

In a final project on diagnosing Attention Deficit Hyperactivity Disorder (ADHD) using machine learning, we obtained parameters from real patients. We used this data and got much higher success rates with LDA than with SVM and Naive Bayes. We had only 100 examples in our training set. We are wondering why LDA specifically performed so much better than the others.

",16496,,2444,user9947,7/24/2021 12:37,7/24/2021 12:37,Why would LDA have performed much better than SVM and Naive Bayes in diagnosing ADHD?,,2,1,,,,CC BY-SA 4.0 6879,2,,6872,6/25/2018 10:26,,5,,"

The release of Adenosine, Dopamine, Endorphin, Endocannabinoids, GABA, Glutamate, Norepinephrine, Oxytocin, Serotonin, and many others into specific regions of the brain are very likely an essential part of both activation tuning of single neurons and neuroplasticity, two essential aspects of organic learning researchers have been and will continue to work to understand.

Most of those I've met in that sector of research are curious about the larger questions of what intelligence and consciousness are, and all of them appear to be interested in discovering how learning systems may be valuable in software engineering contexts. These overarching questions are difficult to answer and the dive into the detail of learning has resulted in the expansion of ideas presented by Dr. Norbert Wiener in the mid 20th Century at MIT.

How chemical feedback in regions of the brain are secreted, how they disseminate geometrically into organic structures, how they interact with receptors, and what that does to the cell metabolism to produce change in the cell is almost definitely part of the DNA driven design of higher animal learning. There does not appear to be anything pointless or arbitrary about it. Adaptation to improve survival is evident and, yes, study is well underway.

Narcotics that interfere with the natural functioning of these organic signaling systems can lead to the inability to adapt in the individuals addicted to them. That fact is a strong form of evidence that learning depends on these systems.

Oxytocin is another neurotransmitter of interest because its release is associated with what would fit into the higher levels of human thought and motivation on Abraham Maslow's famous hierarchy of needs. Oxytocin seems to be part of reward signaling for modes of thought like authenticity, compassion, intimacy, wisdom, spirituality, and other human mental capacities and patterns of thought that transcend mere rationalism. Why is that important? Because the ability to lay down selfish goals for the good of the community seem to depend largely on the oxytocin system's preemptive ability over mere survival mode neural activity in mammals.

Regarding the cannabinoid receptors, there is a large enough body of media that show a correlation between a pool of successful artists and marijuana use to legitimately wonder whether there is any tie between creativity and the endocannabinoid system. However in science (and hopefully in technology too), we are careful not to draw conclusions rashly.

It is also possible that this apparent correlation is simply a social phenomenon where the popularity of the artistic products or live performances is mainly because of potential audiences similarly stimulating their receptors with cannabinoids. For instance, those engaging in LSD trips on stage attracted those also engaging in LSD trips into their audiences two generations ago. Whether the work of those artists was more creative because of the impact on the serotonin receptors is largely subjective. How can researchers come to conclusions about what is good performance?

One could analyze the audio to produce a table of notes in a performance and then develop a system to attach a numerical value to several positive quality metrics, study the price of tickets, or count downloads, but the decisions of what to measure and how to aggregate it into a final judgment of excellence is itself necessarily a subjective choice.

Nonetheless, if we place the many misconceptions of popular psychology and the drug culture aside, there is much research into the endocannabinoid system as part of the learning signaling, what machine learning researchers are currently calling reinforcement. The more general term is, ""A non-linear control system's feedback signal,"" already developed in detail in the 1940s (Behavior, Purpose and Teleology — A Rosenblueth, N Wiener, J Bigelow — Philosophy of Science, 1943 — U Chicago). Some new trendy name will probably appear in the 2020s.

This article claims that the mammals under test, ""exhibited enhanced learning.""

Memory in Monoacylglycerol Lipase Knock-out Mice, by Bin Pan, Wei Wang, Peng Zhong, Jacqueline L. Blankman, Benjamin F. Cravatt and Qing-song Liu; Journal of Neuroscience 21 September 2011, 31 (38) 13420-13430; DOI: https://doi.org/10.1523/JNEUROSCI.2075-11.2011

To find many others: https://scholar.google.com/scholar?q=endocannabinoid+learning

",4302,,4302,,6/25/2018 12:34,6/25/2018 12:34,,,,3,,,,CC BY-SA 4.0 6880,1,,,6/25/2018 11:27,,5,730,"

I have an image dataset where objects may belong to one of a hundred thousand classes.

What kind of neural network architecture should I use in order to achieve this?

",12957,,2255,,2/3/2020 8:30,2/3/2020 9:14,What kind of neural network architecture do I use to classify images into one hundred thousand classes?,,4,0,,,,CC BY-SA 4.0 6882,2,,6880,6/25/2018 13:35,,1,,"

A large one!

In all seriousness, ImageNet has roughly 1000 classes and did not require anything special from the top submissions. Depending on how deep (contextually) these classes are, you may want to do something like multi-label classification. Your biggest problems will likely be differentiating between classes, as well as class distribution.

Good luck!

",9608,,,,,6/25/2018 13:35,,,,1,,,,CC BY-SA 4.0 6883,2,,6880,6/25/2018 14:08,,1,,"

As you can imagine, and as has already been said, a large one, so that your network can tune its weights and biases. But I want to nuance this statement with two points.

First: you can use an autoencoder to pre-process your images. It can reduce dimensionality and so improve learning capability and efficiency (from a generalization point of view). This kind of NN takes your images as inputs, encodes and then decodes them to provide a new representation of your initial images. Working with the encoded (lower-dimensional) dataset can allow you to use fewer hidden layers with fewer hidden nodes, and thus speed your work up.

Second: architecture certainly matters for image recognition, but you can also work on the input representation (which is what the aforementioned autoencoder is about). You can look at PCA (Principal Component Analysis). It allows you to reduce dimensionality to a certain number of components (that you specify). It is often used in face recognition, where inputs and targets vary widely.

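For illustration only, a minimal scikit-learn sketch of the PCA idea might look like this (the array X, its shape, and the choice of 100 components are just placeholders):

import numpy as np
from sklearn.decomposition import PCA

# X: one flattened image per row, e.g. 10000 images of 64x64 pixels
X = np.random.rand(10000, 64 * 64)

# keep only the 100 most informative directions
pca = PCA(n_components=100)
X_reduced = pca.fit_transform(X)   # shape (10000, 100)

# X_reduced can now be fed to a much smaller network
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
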
All that is to say that architecture certainly matters when dealing with large datasets, but there are also a few tools to reshape the inputs so that they can be learnt more easily.

By doing so, you can improve the capability of your network both in terms of computation time and in terms of the quality and accuracy of its predictions.

",11069,,,,,6/25/2018 14:08,,,,0,,,,CC BY-SA 4.0 6884,2,,6878,6/25/2018 14:42,,0,,"

If I had to guess (and it is nothing more than a guess), I would say it has quite a bit to do with the problem itself and the architectures involved. Simply put, the problem is less suited to a Bayesian approach (highly dependent features, a roughly linear distribution).

",9608,,9608,,6/25/2018 14:54,6/25/2018 14:54,,,,0,,,,CC BY-SA 4.0 6885,2,,6859,6/25/2018 16:53,,1,,"

To test the model, you have to construct a tensor with the same dimensions as the training data. This tensor should be the same for the same image, no matter how many times you rebuild it from that image. That being said, you can attempt to reverse-engineer how the tensors are being made. Hint: the dimensions of the input tensor should be (28, 28, 1).

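For example, assuming the training data are 28x28 grayscale images (as the hint suggests), one possible sketch of building such a tensor is the following; the file name and the normalisation to [0, 1] are only placeholder assumptions:

import numpy as np
from PIL import Image

# convert to grayscale, resize to 28x28, scale pixel values to [0, 1]
img = Image.open('digit.png').convert('L').resize((28, 28))
x = np.asarray(img, dtype=np.float32) / 255.0

# add the channel axis -> (28, 28, 1), then a batch axis -> (1, 28, 28, 1)
x = x.reshape(28, 28, 1)
batch = np.expand_dims(x, axis=0)
print(batch.shape)  # (1, 28, 28, 1)
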
",14723,,,,,6/25/2018 16:53,,,,0,,,,CC BY-SA 4.0 6886,2,,6880,6/25/2018 23:52,,2,,"

Classification tasks with a large number of classes are usually handled with hierarchical softmax to reduce the complexity of the final layer. This is useful, for example, in applications such as word embedding where you have hundreds of thousands of classes (words), like in your case.

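For example, PyTorch ships a closely related approximation (adaptive softmax) out of the box; a rough sketch, where the feature size, class count and cutoffs are arbitrary choices, might be:

import torch
import torch.nn as nn

num_classes = 100000
features = torch.randn(32, 512)                 # a batch of 32 feature vectors
targets = torch.randint(0, num_classes, (32,))  # their class labels

# classes are split into a frequent 'head' and several rarer 'tail' clusters
adaptive = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=512,
    n_classes=num_classes,
    cutoffs=[1000, 10000, 50000],
)

out = adaptive(features, targets)
print(out.loss)   # scalar training loss over the batch
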
",16101,,,,,6/25/2018 23:52,,,,0,,,,CC BY-SA 4.0 6887,2,,4601,6/26/2018 3:17,,1,,"

Does it make sense to do this task even though I don't have an experience with machine learning? How complicated is this task considering I'm using a well known framework?

I think it does make sense, and using an established framework would get you up and running quickly.

Assuming that I'm taking the task, which algorithm should I choose to perform this kind of task?

This is a regression problem, so I would recommend that you don't treat it as a classification task (unless you want to have a binary output like ""profitable"" / ""non-profitable""). In essence, you are trying to identify the correlation between your inputs (previous purchases, location, etc.) and a certain metric (dollar value of the customer). Neural networks are very good at that (Accord seems to support neural networks, so you should be able to use that; TensorFlow with Keras as the interface or Caffe might be other options to consider).

How should I build the training data? see my comments, do you think my comments are ok to start with? or maybe I can break the data directly?

In most cases, you'd want to normalise your data before feeding it to the algorithm (this is particularly important for neural networks). The other thing you should do is consider what features would be relevant to the task. For example, since you want the customer value as the predicted output, the customer's phone number and email address are most probably irrelevant, but number of previous purchases, age and geolocation might be very relevant. Maybe you have other features in your database - total dollar amount of previous purchases, frequency of purchase, number of returned items or refund requests, etc. Remember to keep your model in check by splitting your data into training, validation and test sets (as a rule of thumb, 70% training / 10% validation / 20% test, but that depends on how much data you have).

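To make the regression framing concrete, here is a minimal Keras sketch; the feature count, layer sizes and random data are placeholders rather than recommendations:

import numpy as np
from tensorflow import keras

# X: one row of normalised features per customer, y: dollar value per customer
X = np.random.rand(1000, 8).astype('float32')
y = np.random.rand(1000, 1).astype('float32')

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),                     # single linear output for regression
])
model.compile(optimizer='adam', loss='mse')

# a simple validation_split is used here for brevity; a held-out test set is still advisable
model.fit(X, y, epochs=10, validation_split=0.2, verbose=0)
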
",16101,,,,,6/26/2018 3:17,,,,0,,,,CC BY-SA 4.0 6891,2,,6823,6/26/2018 10:35,,0,,"

What is the order of the genetic operations in NEAT?

  1. You start by evaluating all of the initial neural networks and compute their initial fitness.
  2. Then you speciate,
  3. kill off the worst neural networks,
  4. mutate and crossover to produce offspring, and
  5. evaluate again.

The order of events is described on page 109 onwards in the original NEAT paper.

Which neural networks are killed (or not) at each generation?

The neural networks with the worst performance are killed off after speciation. None of the neural networks survive - the entire population is replaced with the offspring of the nets remaining after the culling stage. That said, you can implement elitism, where you keep some small portion of the best-performing nets and carry them over to the next generation without mutating them, but that is optional.

Who is being mutated and at what point?

At the end of each generation, after speciation and culling. To produce offspring, some of the remaining nets are subjected to mutation (think asexual reproduction - like single-celled organisms - but the offspring is mutated so that it differs from the parent). The rest are subjected to crossover in random pairs, so this would be the equivalent of sexual reproduction where you need two parents.

Hope that helps.

",16101,,2444,,10/13/2019 1:27,10/13/2019 1:27,,,,0,,,,CC BY-SA 4.0 6892,1,7296,,6/26/2018 10:39,,11,638,"

The problem of adversarial examples is known to be critical for neural networks. For example, an image classifier can be manipulated by additively superimposing onto each of many training examples a different low-amplitude image that looks like noise but is designed to produce specific misclassifications.

Since neural networks are applied to some safety-critical problems (e.g. self-driving cars), I have the following question

What tools are used to ensure safety-critical applications are resistant to the injection of adversarial examples at training time?

Laboratory research aimed at developing defensive security for neural networks exists. These are a few examples.

However, do industrial-strength, production-ready defensive strategies and approaches exist? Are there known examples of applied adversarial-resistant networks for one or more specific types (e.g. for small perturbation limits)?

There are already (at least) two questions related to the problem of hacking and fooling of neural networks. The primary interest of this question, however, is whether any tools exist that can defend against some adversarial example attacks.

",16354,,2444,,3/3/2021 8:54,3/3/2021 8:54,What tools are used to deal with adversarial examples problem?,,2,1,,,,CC BY-SA 4.0 6897,1,6901,,6/26/2018 19:20,,2,121,"

I've recently come across the client-server model. From my understanding, the client requests the server, to which the server responds with a response. In this case, both the request and responses are vectors.

In reinforcement learning, the agent communicates with the environment via an "action", to which the environment responds with a scalar reward signal. The "goal" is to maximize this scalar reward signal in the long run.

Is there an analogy between client/server in web development and agent/environment in reinforcement learning?

",15935,,2444,,12/16/2021 18:07,2/1/2022 13:15,Is there an analogy between client/server in web development and agent/environment in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 6898,1,,,6/26/2018 19:23,,5,502,"

Can I get details about the algorithms used by Stack Overflow for identifying duplicate questions ("Questions that may already have your answer")? Most of the suggestions I get are nowhere near related to the question I intended to ask.

",15935,,16355,,6/27/2018 19:42,8/10/2018 13:54,What algorithms does stackoverflow use for classifying duplicate questions?,,1,1,,,,CC BY-SA 4.0 6899,1,,,6/26/2018 19:26,,6,1699,"

I understand the intuition behind stacking models in machine learning, but even with a thorough cross-validation scheme, my stacked models seem to overfit. Most of the models I have seen in Kaggle forums are large ensembles, yet they seem to overfit very little.

",15935,,16355,,6/28/2018 9:22,5/3/2019 4:05,How to prevent overfitting in stacked models?,,1,7,,,,CC BY-SA 4.0 6901,2,,6897,6/26/2018 21:12,,3,,"

Is there an analogy between client/server in web development and agent/environment in reinforcement learning?

The answer is "not really". There is no useful analogy here that allows any insight into RL from web server knowledge or vice versa.

However, you could set up an agent where the goal was to collect information, and the available actions were to make web requests. Clearly, in order to do this, you would need to make use of the client/server model for web servers, with the agent having control over the client web requests, and the environment being the network and servers of the world wide web.

There are some very hard challenges to construct an open-ended "web assistant" agent. Here are a couple that I can think of:

  • How to describe actions? Starting with raw web requests composed as strings would likely be very frustrating. Probably you would simplify and have a first action be a call to a search engine with some variation of the topic description, and then decisions about which links to follow, or perhaps whether to refine the search to better fetch sites related to the topic as it is being built.

  • How to create a model of reward for collecting information? The first major stumbling block would be to measure the amount of useful information that the agent had found on any request.

I think with current levels of Natural Language Processing, setting an agent free to discover information according to some goal from a text topic description is a very hard task, beyond cutting edge research. It would definitely be unreasonable to expect any such agent to end up with any resemblance to "understanding" subject matter from text. The agent would have very little ability to tell the difference between accurate facts or lies, or just grammatically correct gibberish.

One interesting idea for agents trying to learn unsupervised from exploring an environment is creating a reward signal from data compression. An agent will have learned something useful about its environment if, on processing new information, it is able to compress its existing world model more efficiently. This is the basic concept behind ideas of general learning agents from Jürgen Schmidhuber, Marcus Hutter and others. Research into this idea could be a driver towards creating more generic AI learning systems - however, it is one idea amongst many in AI, and so far it is research-only; it has not yet led to anything as practical as an AI web-searching assistant.

",1847,,2444,,12/16/2021 18:15,12/16/2021 18:15,,,,0,,,,CC BY-SA 4.0 6902,1,,,6/26/2018 22:07,,3,314,"

I have a sort of mathematical problem, and I'm not sure how I should model it in order to build an LSTM neural network for it.

Currently in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum.

The classification is done through a mathematical model whose inputs are the total number of each product, the total sum of all products, and the number of authors, among other indices that are calculated from the previous values.

Once these inputs are obtained, the values are processed by a set of formulas, and the final result is a single number.

This number is located in a range provided by the mathematical model and this is how the group is classified.

What I want to do is, given the current classification of a group, suggest different values that would improve its classification.

For example, if there is a group with classification C, suggest how many products it should have, how many authors, and what values its indices should have, so that its category would finally be B.

I think the structure of my network should be:

  • 1 input, which would be the classification you want to reach.
  • Multiple outputs, one for each product and index.

But I do not understand how to make the network take into account the current classification of the group, in addition to the number of products and the value of the current indexes.

If you have further questions about the problem, please feel free to ask.

I appreciate your suggestions.

",16258,,,,,12/28/2022 9:06,How to build my own dataset and model for an LSTM neural network,,2,0,,,,CC BY-SA 4.0 6906,1,,,6/27/2018 7:49,,2,421,"

Two months ago, I found myself working on a churn detection problem, which can be briefly described as follows:

  • Assume the current date is N
  • Use customer behavior from dates N-1, ..., N-x to develop the training dataset
  • Train the model and make a prediction at time N, predicting if a customer will churn at N+2 (thus leaving period N+1 for a churn prevention / reduction campaign)

When thinking through the design of the model and how to ensure that it would be successfully implemented, I identified a feedback loop wherein the prediction would trigger an event resulting in interaction with the customer, potential changes to customer behavior, and thus an impact on the next set of prediction data. The following sequence of events could occur if the campaign is successful (as an example):

Prediction -> Action to retain customer -> Change to customer behavior ->
Data for next prediction cycle not representative of training -> 
Incorrect prediction and cost associated for handling incorrect prediction

The feedback loop, fundamentally, is that the action taken based on the prediction may impact the distribution or nature of the features used to make the prediction.

When thinking through how to solve the feedback problem, I listed the following three points as potential solutions:

  1. Retrain, test and validate the model at every N+1 period and account for changes in behavior through new features (e.g. feature_i would involve details of the retention campaign a customer was treated to)
    • This would result in huge production overhead and I believe it to be infeasible
  2. Run the model intermittently to allow behavior to normalize
    • Possible; however, the business would not be happy to have a prediction model which only works k times a year, where k would have to be determined
  3. Predict the impact of the retention intervention and either remove it from the training set or include it as a new feature
    • Possible; extensive thought and some experimentation would be needed to determine whether modeling the retention out or in would have the better effect. Additionally, if modeled in, there may be a short-term penalty incurred as the model learns the new feature

I did not actually end up having to confront the feedback problem (as, during the exploration phase, sufficient evidence was obtained indicating that a predictive model for churn detection would not be required). However, after reading this paper on the technical debt which can be incurred during the development of machine learning systems, I found myself pondering:

  1. Were my considered strategies for dealing with the feedback reasonable?
  2. What other solutions should I have considered?
  3. Is there a way I could have re-framed the problem to completely design out the feedback loop? (This may be difficult to answer with the information provided, but if possible, a ""you could have considered looking at..."" would be extremely beneficial.)
",11933,,,,,6/27/2018 7:49,How do to mitigate or design out hidden feedback loops when designing ML systems?,,0,0,,,,CC BY-SA 4.0 6908,1,6913,,6/27/2018 8:36,,3,743,"

I'm just beginning to understand neural networks and I've performed a couple of successful tests with numerical series where the NN was trained to find the odd one or a missing value. It all works pretty well.

The next test I wanted to perform was to approximate the solution of a Sudoku, which, I thought, could also be seen as a special kind of numerical series. However, the results are really confusing.

I'm using an MLP with 81 neurons in each of the three layers. All output neurons show a strong tendency to yield values that are close to either 0 or 1. I have scaled and truncated the output values. The result can be seen below:

Expected/Actual Solution:     Neural Net's Solution:

6 2 7 3 5 0 8 4 1             9 0 9 9 9 3 0 0 3
3 4 8 2 1 6 0 5 7             0 9 9 0 0 0 9 9 0
5 1 0 4 7 8 6 2 3             0 9 1 9 9 0 2 0 4
1 6 4 0 2 7 5 3 8             0 0 5 0 0 9 0 0 7
2 0 3 8 4 5 1 7 6             0 0 0 0 0 9 9 0 9
7 8 5 1 6 3 4 0 2             9 9 9 9 0 6 2 9 0
0 5 6 7 3 1 2 8 4             0 0 0 0 9 9 0 9 0
4 3 1 5 8 2 7 6 0             9 9 0 0 0 0 9 0 9
8 7 2 6 0 4 3 1 5             9 9 0 9 9 0 9 0 9

The training set size is 100000 Sudokus while the learning rate is a constant 0.5. I'm using NodeJS/Javascript with the Synaptic library.

I don't expect a perfect solution from you guys, but rather a hint if that kind of behavior is a typical symptom for a known problem, like too few/many neurons, small training set, etc.

",16542,,2444,,10/22/2019 20:59,11/22/2022 14:23,How can a neural network learn to play sudoku?,,3,4,,,,CC BY-SA 4.0 6913,2,,6908,6/27/2018 13:09,,3,,"

I think it is wrong to frame sudoku as a regression problem for neural networks. Firstly, you have to understand what regression is. ""Regression"" is when you predict a value given certain parameters, where the parameters are related to the value you have to predict. This works because, at their core, neural networks are ""function approximators"": they model the function by adjusting their weights using lots of data, and they tend to form highly non-linear boundaries to separate classes internally in a high-dimensional data space.

Sudoku doesn't fit this scenario: the combinatorial complexity of sudoku is way too high for a neural network, even if you add many layers to it; it is a totally different problem in its own right. You simply can't ""regress"" the right values of a perfect sudoku here; they are not numbers like pixel intensities in images.

However, you could apply reinforcement learning techniques to learn an optimal policy to solve sudoku.

You have also mentioned an ""approximate"" solution for the sudoku; what do you mean by ""approximate""? If you mean that only a few squares would be out of place, then that is the wrong expectation: neural networks are proven to be good image classifiers partly because they are robust to translation, and in this case that is not the kind of robustness you need.

You could, however, do a small experiment to see what the neural network actually learns: replace the numbers with pixel intensities, train a generative adversarial network on the resulting sudoku images, and look at the sudoku images it produces to see what the network can and cannot learn.

",15935,,2444,,10/22/2019 20:54,10/22/2019 20:54,,,,1,,,,CC BY-SA 4.0 6916,1,6950,,6/27/2018 13:22,,2,296,"

Could people take Artificial Intelligence and machine learning far enough that machines could learn, over a long period of time, to distinguish what's 'good' from 'bad' according to people living in a restricted geographical area, and then take control and turn what was learned into a set of 'rules' and 'laws' (think of it as an effective machine of 'politics') that match the majority of the people's view of issues?

That should be accepted by everyone, since a contract set at the beginning says: "Everyone is ok".

",16548,,2444,,12/12/2021 17:33,12/12/2021 17:33,Can an AI distinguish between good and bad according to people living in a restricted geographical area?,,2,0,,1/18/2021 16:16,,CC BY-SA 4.0 6920,1,6935,,6/28/2018 2:17,,4,123,"

A bunch of friends and I play ultimate every week. Recently I wrote a program to choose our teams for us, as well as keep track of certain data (like which players were on which team, which team won, what was the score, how long the game was, etc). I wanted to use a machine learning technique to make the teams for us in order to optimize how fairly balanced the teams are (possibly measured by how many total points are scored in a game or how long a game lasts).

I am currently taking a machine learning MOOC and being introduced to very basic machine learning techniques (linear regression with gradient descent or normal equations, basic classification stuff, things like that). Although I hope I will come across a technique that fits my needs by the end of this course, I wanted to ask here to see if I can get a head start.

I've tried searching around everywhere, but couldn't find anything relevant. So my question is, is there an obvious technique I should look into for such a problem? If it's something too advanced for a beginner, that's fine too, but I'd like to get started learning/practicing it asap instead of waiting for my course to hopefully hit upon it.

Thank you!

EDIT: to clarify further, I would prefer something that looks at relationships between individual players like ""when Steph plays with Bill she is more likely to win"" or ""Steph plays worse when she is on a team with players who have a high win percentage"". I'd also prefer to be able to code it in python, but am willing to learn any other language

",16561,,1671,,11/6/2019 21:56,11/6/2019 21:56,What beginner-friendly machine learning method should I use to make teams for my pickup ultimate frisbee club fairly balanced?,,1,0,,,,CC BY-SA 4.0 6921,1,,,6/28/2018 5:53,,1,485,"

In datasets like COCO-Text and Total-Text, the images are of different sizes (height × width). I'm using these datasets for text detection and I want to create a DNN model for this, so the input data should all be the same size. If I resize these images to a fixed size, the annotations given in the dataset, that is, the locations of the text in the images, will be changed.

So, how do I solve this problem?

",12273,,2444,,4/12/2022 17:02,4/13/2022 8:43,How do I change the annotations of variable-size images after having resized the images to a fixed size?,,2,0,,,,CC BY-SA 4.0 6922,2,,6921,6/28/2018 6:41,,1,,"

Find the largest height and width amongst all the images; call them H and W respectively. It is true that you cannot simply resize the images, but say you have an image of height h and width w where h < H, w < W. To the right of the image append W - w columns and at the bottom of the image append H - h rows, filled with some constant value (0 is fine for grey-scale and B/W images, and 0 for each of the channels in the case of colour images).

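A minimal NumPy sketch of this padding (the image and target sizes below are made up) could be:

import numpy as np

H, W = 600, 800                                                     # largest height and width in the dataset
img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)   # a smaller image

h, w = img.shape
# append rows at the bottom and columns at the right, filled with 0
padded = np.pad(img, ((0, H - h), (0, W - w)), mode='constant', constant_values=0)

print(padded.shape)   # (600, 800); existing annotation coordinates stay valid
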
In this way, all the images will be of the same size. Since you are appending at the right and bottom of the image, the annotations will not lose their meaning in the transformed image in terms of the position and content of the text to be recognised.

You could also try PixelRNN-style ideas after you are done with the DNN. RNNs can handle variable-length inputs, which in your case would be sequences of pixels, so there you don't need to append rows and columns to the image.

",9062,,,,,6/28/2018 6:41,,,,3,,,,CC BY-SA 4.0 6923,1,6924,,6/28/2018 8:19,,5,4165,"

I just read about deep Q-learning, which is using a neural network for the value function instead of a table.

I saw the example here: Using Keras and Deep Q-Network to Play FlappyBird and he used a CNN to get the Q-value.

My confusion is about the last layer of his neural net. Neurons in the output layer each represent an action (flap, or not flap). I also see other projects where the output layer likewise represents all available actions (move-left, stop, etc.)

How would you represent all the available actions of a chess game? Every piece has its own set of available moves. We also need to choose how far it will move (a rook can move more than one square). I've read the Giraffe chess engine's paper and can't find how it represents the output layer (I'll read it once again).

I hope somebody here can give a nice explanation about how to design NN architecture in Q-learning, I'm new in reinforcement learning.

",16565,,32410,,4/22/2021 12:49,3/4/2022 21:09,How should I model all available actions of a chess game in deep Q-learning?,,2,0,,,,CC BY-SA 4.0 6924,2,,6923,6/28/2018 9:45,,6,,"

To model chess as a Markov decision process (MDP), you can refer to the AlphaZero paper (Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm). The exact details can be found starting from the bottom of page 13.

Briefly, an action is described by picking a piece and then picking a move with it. The size of the board is 8 by 8 so there can be 8x8 possibilities for picking a piece. Then we can either pick linear movements (in 8 directions) and then pick the number of steps in that direction (maximum 7 steps) or we can make a knight movement (maximum 8 possibilities). So far that is 8x7 + 8. Furthermore, we also need to consider underpromotions (promoting a pawn into a non-queen piece). In this scenario we can have 3 types of pawn movements (forward, left diagonal or right diagonal capture) and 3 types of promotions (rook, knight, bishop) so that makes it 9. So the total dimension of the action space is 8x8x(8x7+8+9) and this will be the number of neuron outputs you will need to use.

Note that this action space representation covers every possible scenario, and, for example, at the start of the game the action of picking the tile E4 and promoting it to a bishop doesn't make sense (there is no piece on tile E4 at the beginning of the game). Or, if we pick a tile where there is a rook, we cannot make a knight movement with it. Therefore you will also need to implement a function that can return the set of possible actions in a given state and ignore all neural network outputs that are not contained in this set.

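As a rough sketch (not taken from the paper's code), the size of this action space and the masking of illegal outputs could look like this; the list of legal action indices is a hypothetical stand-in for a real move generator:

import numpy as np

# 8x8 'from' squares x (8 directions x 7 steps + 8 knight moves + 9 underpromotions)
num_actions = 8 * 8 * (8 * 7 + 8 + 9)
print(num_actions)                             # 4672

q_values = np.random.randn(num_actions)        # raw network outputs

legal_action_ids = [100, 2045, 3001]           # would come from a game-specific move generator

masked = np.full_like(q_values, -np.inf)       # ignore everything illegal
masked[legal_action_ids] = q_values[legal_action_ids]
best_action = int(np.argmax(masked))
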
Obviously this action representation is not set in stone, so if you can come up with something better or more compact you can use that too. You can also restrict your game, for example by not allowing underpromotions.

",8448,,,,,6/28/2018 9:45,,,,3,,,,CC BY-SA 4.0 6926,1,7023,,6/28/2018 11:00,,5,352,"

In traditional computer vision and computer graphics, the pose matrix is a $4 \times 4$ matrix of the form

$$ \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_{1} \\ r_{21} & r_{22} & r_{23} & t_{2} \\ r_{31} & r_{32} & r_{33} & t_{3} \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

and is a transformation to change viewpoints from one frame to another.

In the Matrix Capsules with EM Routing paper, they say that the ""pose"" of various sub-objects of an object are encoded by each capsule lower layer. But from the procedure described in the paper, I understand that the pose matrix they talk about doesn't conform to the definition of the pose matrix. There isn't any restriction on keeping the form of the pose matrix shown above.

  1. So, is it right to use the word ""pose"" to describe the $4 \times 4$ matrix of each capsule?

  2. Moreover, since the claim is that the capsules learn the pose matrices of the sub-objects of an object, does it mean they learn the viewpoint transformations of the sub-objects, since the pose matrix is actually a transformation?

",16569,,2444,,6/9/2020 15:09,6/9/2020 15:12,"Is the word ""pose"" used correctly in the paper ""Matrix Capsules with EM Routing""?",,2,0,,,,CC BY-SA 4.0 6927,1,,,6/28/2018 16:23,,1,166,"

Earlier this month, Google released a set of principles governing their AI development initiatives. The stated principles are:

Objectives for AI Applications:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

AI Applications not to be Pursued:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
    SOURCE: Artificial Intelligence at Google: Our Principles

My questions are:

  • Are these guidelines sufficient?
  • Are there any ""I, Robot"" conflicts?
  • How much does this matter if other corporations and state agencies don't hew to similar guidelines?
",1671,,2444,,12/12/2021 17:34,12/12/2021 17:34,Google's Principles of Artificial Intelligence,,1,0,,12/12/2021 17:21,,CC BY-SA 4.0 6928,1,6939,,6/28/2018 16:25,,2,173,"

How should I design my input layer for the following classification problem?

Input: 5 cards (from a deck of 52 cards) in a card game;

Output: some classification using a neural network

How should I model the input layer?

Option A: 5 one-hot encodings for the 5 cards, i.e. 5 one-hot vectors of length 52, giving a 260-element input vector. For example

[
[0,0,0,0,0,0,1,...],
[1,0,0,0,0,0,0,...],
[0,0,0,0,0,1,0,...],
[0,0,1,0,0,0,0,...],
[0,0,0,0,1,0,0,...]
]

Option B: a 5-hot encoding encompassing all 5 cards in one 52-element vector

[1,0,1,0,1,1,1,...]

What are the relative disadvantages of A and B?

",16574,,-1,,6/17/2020 9:57,5/19/2020 20:30,How should I encode the input which are 5 cards from a deck of 52 cards?,,1,0,,,,CC BY-SA 4.0 6929,2,,5453,6/28/2018 17:43,,2,,"

First of all, you mention that you have categorical data. I don't see how you can define similarity so that you can also define the distance between the predicted value and the ground truth (error). You can do that only if the data are ordinal.

If you want to just classify between normal and anomalous points (binary classification), without caring about further classification of the anomaly types themselves, one of the most common algorithms is the One-Class Support Vector Machine (OC-SVM).

Anomalies are unpredictable in nature and sometimes hard to replicate and record. Therefore, there is usually a lack of anomalous data, and supervised learning approaches suffer because, if you sacrifice some ""precious"" anomalous points to train the algorithm, you cannot use them to test it.

The main advantage of OC-SVM is that it is semi-supervised learning, meaning that you train it only with normal data and then it can detect samples that deviate from the trained behaviour during testing and classify them as anomalous. Thus, you ""save"" all the rare anomalous points for testing purposes!

Take a look at this short Python example, it has all you need :)

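For a rough idea, such a semi-supervised setup might look like the following scikit-learn sketch (the data and hyperparameters are placeholders):

import numpy as np
from sklearn.svm import OneClassSVM

# train only on normal samples
X_normal = np.random.randn(500, 4)
clf = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(X_normal)

# test on a mix of normal and (shifted, hence anomalous-looking) samples
X_test = np.vstack([np.random.randn(10, 4), np.random.randn(5, 4) + 6.0])
pred = clf.predict(X_test)   # +1 = looks normal, -1 = flagged as anomaly
print(pred)
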
",15919,,,,,6/28/2018 17:43,,,,0,,,,CC BY-SA 4.0 6931,1,6945,,6/28/2018 19:45,,2,34,"

I've been wondering how, in the most simple-to-implement basic principle, the light projection to depth map technique described here https://www.lightform.com/how-it-works actually functions. Is it some kind of average based on the color of pixel x over all the patterns, or what? How difficult would it be to code something up that could do this?

",16579,,,,,6/29/2018 8:09,Structured lighting basic principles for depth mapping,,1,0,,,,CC BY-SA 4.0 6933,1,,,6/28/2018 21:34,,2,32,"

I am working on a problem where I have to train a CNN to recognize different kinds of surfaces. One important characteristic of the surfaces I am interested in is how reflective they are. I have been trying to find a method that quantifies how ""shiny"" a surface is, but I have not found much. I am hoping that someone can point me toward a method or some research into this kind of problem.

",16582,,,,,6/29/2018 7:41,How to quantify the reflectance in an image?,,1,2,,,,CC BY-SA 4.0 6934,1,6936,,6/28/2018 22:47,,4,1850,"

Let's suppose I have an image with 16 channels that goes to a convolutional layer, which has 3 trainable $7 \times 7$ filters, so the output of this layer has depth 3.

How does the convolutional layer go from 16 to 3 channels? What mathematical operation is applied?

",14801,,2444,,10/8/2021 12:14,12/18/2021 12:18,How is the depth of the input related to the depth of the output of a convolutional layer?,<2d-convolution>,2,1,,12/18/2021 12:20,,CC BY-SA 4.0 6935,2,,6920,6/28/2018 22:48,,2,,"

What you have on your hands is an attribution problem (and that’s a good keyword to help in your Googling). Two common approaches are computing Shapley values or a Markov chain.

In your case, I think Shapley values would be a good approach. To over-simplify, this approach attempts to first determine the total “surplus” of value created by different combinations of players, and then estimate how much was due to each player in the combination. Also under consideration are constrained combinatorial effects: e.g. having two great goalies doesn’t add much over one great goalie since they can’t play at the same time.

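For a small roster, exact Shapley values can even be computed by brute force. Here is a rough sketch, where the coalition values are made-up placeholders for something like 'average points scored by this line-up':

from itertools import combinations
from math import factorial

players = ['Steph', 'Bill', 'Ana']

# hypothetical value of each possible line-up (coalition)
value = {(): 0, ('Ana',): 2, ('Bill',): 3, ('Steph',): 5,
         ('Ana', 'Bill'): 6, ('Ana', 'Steph'): 8, ('Bill', 'Steph'): 7,
         ('Ana', 'Bill', 'Steph'): 12}

def v(coalition):
    return value[tuple(sorted(coalition))]

def shapley(player):
    n, total = len(players), 0.0
    others = [p for p in players if p != player]
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (v(coalition + (player,)) - v(coalition))
    return total

for p in players:
    print(p, round(shapley(p), 3))
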
Depending on how hands-on you want to be, you could implement this yourself or find existing libraries by Googling e.g. “Shapley values python”.

",13360,,,,,6/28/2018 22:48,,,,0,,,,CC BY-SA 4.0 6936,2,,6934,6/29/2018 1:01,,3,,"

The reason why you go from 16 to 3 channels is that, in a 2d convolution, filters span the entire depth of the input. Therefore, your filters would actually be $7 \times 7 \times 16$ in order to cover all channels of the input.

Detailed procedure

The output of the convolution automatically has a depth equal to the number of filters (so in your case this is $3$) because you have an $m \times k$ filter matrix, where $m$ is the number of filters and $k$ is the number of elements in the unrolled filter (in your case, $m = 3$ and $k = 7 \times 7 \times 16 = 784$, so the filter matrix is $3 \times 784$).

The input is usually unrolled according to the im2col procedure, where each tile corresponding to a single filter location is stretched into a column equal to the unrolled filter size. This is repeated for each filter location, so you end up with a very large matrix of size $k \times n$, where $k$ is the same as $k$ above in the filter matrix, and $n$ depends on your padding and stride.

Multiplying the $m \times k$ filter matrix with the $k \times n$ input matrix gives you an $m \times n$ output matrix, where $m$ is the number of filters.

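To make the shapes concrete, here is a small PyTorch check using your 16-channel input and three 7x7 filters (the 64x64 spatial size is just for illustration):

import torch
import torch.nn as nn

x = torch.randn(1, 16, 64, 64)           # a batch of one 16-channel, 64x64 input
conv = nn.Conv2d(in_channels=16, out_channels=3, kernel_size=7)

print(conv.weight.shape)   # torch.Size([3, 16, 7, 7]) - each filter spans all 16 channels
print(conv(x).shape)       # torch.Size([1, 3, 58, 58]) - depth 3, one map per filter
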
Further reading

You can find some very nice visual explanations of the convolution procedure here and here.

",16101,,2444,,9/26/2021 22:00,9/26/2021 22:00,,,,1,,,,CC BY-SA 4.0 6938,2,,1851,6/29/2018 2:54,,5,,"

The two examples present essentially the same operation:

  • In both cases, the network is trained with gradient descent using the backpropagated squared error computed at the output.

  • Both examples use the logistic function for node activation (the derivative of the logistic function $s$ is $s(1 - s)$). This derivative is obviously very easy to compute, and this is part of the reason why it was so widely used (these days the ReLU activation function is more popular, especially with convolutional networks).

  • The first method also uses momentum.

The main difference I can see is that in the first case backpropagation is iterative while in the second example it is performed in batch mode.

The last video in the series by Welch Labs introduces a quasi-Newtonian method which offers the advantage of finding the minimum of the cost function by computing the Hessian (matrix of second-order derivatives of the error with respect to the weights). However, this feels like comparing apples and oranges - the vanilla gradient descent does not use second-order information.

",16101,,2444,,5/23/2020 13:00,5/23/2020 13:00,,,,0,,,,CC BY-SA 4.0 6939,2,,6928,6/29/2018 3:21,,2,,"

Depends on how your game is played. Is there any meaning assigned to the order of cards, or are all 5 played simultaneously? If order matters, use 5 one-hot vectors so you can choose how to order them, otherwise use a single 5-hot input vector. I would also add that if temporal order matters, you could also use a recurrent net with a 52-element input and feed the five one-hot vectors one after another.

",16101,,,,,6/29/2018 3:21,,,,0,,,,CC BY-SA 4.0 6940,2,,6934,6/29/2018 6:15,,0,,"

Your input has 16 channels, each of dimension $m \times n$. There are 3 filters, namely $f_1$, $f_2$ and $f_3$ of spatial dimensions $k \times h$.

We say that a filter is applied to a channel when it is superimposed on the image, starting at the left-most position, multiplying the weights of the filter with the corresponding values in the image and summing them up to a single value, then moving the filter to the right (and down, once it reaches the right-most part) across the image according to the stride of the filter.

When a filter, e.g. $f_1$, is applied to a channel, say $c$, at one position, it produces a single value. Applying it to all 16 channels gives 16 values, which are added up into a single value. $f_1$ is then moved according to the stride and the same operation is repeated to get an output with a single channel (the number of rows and columns is determined by the padding, stride, dilation, and kernel size of the filters).

The aforesaid process is done by all the 3 filters giving rise to 3 channels. In this way, the convolutional layer makes the input go from 16 to 3 channels.

More detailed explanations can be found here.

",9062,,2444,,12/18/2021 12:18,12/18/2021 12:18,,,,1,,,,CC BY-SA 4.0 6941,2,,5110,6/29/2018 6:38,,0,,"

You could implement a simple speech-to-text system that translates your voice line by line into code, but it would be of no use: you can't express code in a line-by-line manner. Coding is a highly iterative process: at first you come up with a rough sketch, to which you add details later on, and from an NLP point of view this is a highly ambitious project.

At heart, almost all AI techniques (neural networks) are functions that map one domain to another, and you can't reliably map natural language sentences to instructions in code.

However, you could implement such a voice-driven system for a small language like LOGO.

",15935,,,,,6/29/2018 6:38,,,,1,,,,CC BY-SA 4.0 6943,2,,6933,6/29/2018 7:41,,1,,"

You should first know about the layer-wise working of a convolutional neural network. Read this https://distill.pub/2018/building-blocks/

Each layer of a CNN forms representations that are increasingly complex, which are combinations of simpler representations formed by its previous layers. ""Shine"" is not a pattern (from an image-processing perspective); shine occurs due to changes in intensity values.

For quantifying ""shine"" you don't have to use convolutional networks only; there are many simpler methods for such tasks. You could try fitting a simple linear model and progress towards more complex ones.

For classifying textures, there is interesting work on ""invariant scattering convolutional networks""; you should check that out.

",15935,,,,,6/29/2018 7:41,,,,0,,,,CC BY-SA 4.0 6944,2,,4778,6/29/2018 7:56,,0,,"

Training a model to identify new persons from video logs seems to be a daunting task.

You will need lots of data and computational power to build such a model, and there is relatively little work on video due to the amount of computational power required; even training a simple video classifier with reasonable accuracy requires a lot of expertise and resources (data and computing).

Having said that, recognising persons from videos, and ""new"" person identification in particular, seems to be even more difficult. You should definitely read some papers on video classification using convolutional neural networks to get an idea of how hard the problem is. Frameworks are merely tools that give you helper functions to build neural networks, so the choice of framework is not the question.

The real question is: are you sure you want to train a model (a CNN) for this task, i.e. to use video feeds to detect new people in the workspace and prompt someone (via e-mail or text message) for identification?

For this, you would be training a convolutional network on the factory video dataset, i.e. the CNN trains at the frame level of the videos. Recognizing and classifying ""known"" and ""unknown"" faces from videos is a research problem in its own right.

",15935,,,,,6/29/2018 7:56,,,,0,,,,CC BY-SA 4.0 6945,2,,6931,6/29/2018 8:09,,1,,"

how, in the most simple-to-implement basic principle, does the light projection to depth map technique described here https://www.lightform.com/how-it-works actually function? Is it some kind of average based on the color of pixel x over all the patterns or what?

No, it is not that simple; more details are described in https://kevinkarsch.com/publications/sa11-lowres.pdf

P.S. The author of this paper went on to become CTO of Lightform.

Is it some kind of an average based on the color of x pixel over all the patterns or what? How difficult would it be to code something that could do this, up?

As mentioned above, it is not a simple average based on colour, and coding it would be a tough task. You could implement the model if you are capable of coding a ray tracer from scratch, but replicating the exact results would require a lot of low-level knowledge and certain domain expertise, as not many of the implementation details and hyperparameters are described.

",15935,,,,,6/29/2018 8:09,,,,1,,,,CC BY-SA 4.0 6948,2,,6902,6/29/2018 11:05,,0,,"

Why do you want to use an LSTM network? LSTM is a variant of a recurrent neural network, and recurrent neural networks are used for ""sequential"" tasks, i.e. the dataset should have some sequential structure, like poems, songs, etc.

Your model should be a simple classifier. Once you fit a simple classifier, like a decision forest, on the dataset of products, authors, etc., you will have a model that predicts the class based on these attributes; then, from the decision boundaries of the model, you can say what values a group must have. If the relationship between the attributes is even simpler, you could try plotting distribution plots.

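A rough sketch of that idea with scikit-learn, where the feature names and data are made-up placeholders:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical features: [num_articles, num_books, num_patents, num_authors]
X = np.array([[12, 1, 0, 5], [40, 3, 2, 9], [3, 0, 0, 2], [25, 2, 1, 7]])
y = np.array(['B', 'A1', 'C', 'A'])   # current classification of each group

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# the fitted model can then be inspected (feature_importances_) or queried with
# 'what if' profiles to see which changes would move a group up a class
print(clf.predict([[30, 2, 1, 8]]))
print(clf.feature_importances_)
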
",15935,,,,,6/29/2018 11:05,,,,1,,,,CC BY-SA 4.0 6950,2,,6916,6/29/2018 12:37,,0,,"

Even if you could do that (which I believe is a long way off), what would be the point?

If I understand you correctly, you want an AI system to learn through observation of human beings what their 'rules of interaction' are. It sees a person killing someone else, and then that person is punished by the community, so the AI learns that killing people is not right. However, that is already codified in laws... So what it would pick up are social behaviours, which hopefully be things like ""be nice to other people"", ""don't do anybody any harm"", ""be honest and truthful"", etc.

The first question then is how an AI could assess events[*] as to whether they are 'good' or 'bad'. If everybody lies and steals, that would be learnt as normal behaviour, and the AI would not be able to pick up that most people would see this as 'bad'[**]. Causal relations are also hard to grasp. Somebody steals something. Then later someone else buys him a drink. So stealing stuff means other people will buy you drinks? These are really hard problems to solve. You need to know about people's motivations, and the multitude of 'threads' of interaction happening at the same time, even in a very limited area.

So, recognising events and causal links, plus a moral evaluation of them is pretty difficult. I don't think we will get there anytime soon. Unsupervised learning of behavioural is also pretty difficult, as you only have unlabelled observations and no real criteria to classify them. Plus, many actions are morally ambiguous. Killing people is generally seen as bad. What about the officers who tried to assassinate Hitler in 1944? Life is complex, and our artificial models are not anywhere near that.

So even if you were able to do all that, what would you end up with? An AI system that has picked up a lot of unwritten rules about human behaviour, and then postulates that as laws? So everyone has to behave the same way? I just don't see the point of that, even as a thought experiment.

[*] leaving aside here the question of how you determine what an 'event' is in the first place
[**] Please note that even if people steal things, they can still view theft as a bad thing.

",2193,,,,,6/29/2018 12:37,,,,1,,,,CC BY-SA 4.0 6952,1,,,6/29/2018 14:09,,7,540,"

In image classification, we are generally told that the main reason for using CNNs is that densely connected NNs cannot handle the resulting number of parameters (a 1000 × 1000 image already has 10^6 input pixels). My question is: is there any other reason why CNNs are used over densely connected NNs?

Basically, if we had infinite resources, would densely connected NNs trump CNNs, or are CNNs inherently well suited to image classification, as RNNs are to speech? Answers based either on mathematics or on experience in the field are appreciated.

",,user9947,,user9947,6/29/2018 14:20,11/30/2018 7:03,CNN's vs Densely Connected NN's,,3,0,,,,CC BY-SA 4.0 6953,1,6955,,6/29/2018 14:31,,9,1683,"

I am learning about Monte Carlo algorithms and struggling to understand the following:

  • If simulations are based on random moves, how can the modeling of the opponent's behavior work well?

For example, if I have a node with 100 children, 99 of which lead to an instant WIN, whereas the last one leads to an instant LOSS.

In reality, the opponent would never play any of the 99 losing moves for him (assuming they are obvious as they are the last moves), and would always play the winning one. But the Monte Carlo algorithm would still see this node as extremely favorable (99/100 wins for me), because it sees each of the 100 moves as equally probable.

Is my understanding wrong, or does it mean that in most games such situations do not occur and randomness is a good approximation of opponent behavior?

",16597,,1641,,9/22/2018 18:14,9/22/2018 18:14,Why does Monte Carlo work when a real opponent's behavior may not be random,,3,1,,,,CC BY-SA 4.0 6954,2,,6953,6/29/2018 15:09,,1,,"

I will point out that the Monte Carlo Tree Search algorithm does not make completely random moves. Instead it usually uses some metric to balance between exploration and exploitation when deciding which branch to search (see Upper Confidence Bound and others).

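For reference, a common form of this rule (UCB1, as used in UCT) selects the child $a$ that maximises $$\bar{X}_a + c \sqrt{\frac{\ln N}{n_a}},$$ where $\bar{X}_a$ is the average result of simulations through child $a$, $n_a$ is the number of times child $a$ has been visited, $N$ is the number of visits to its parent, and $c$ is an exploration constant.
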
That being said, you are correct in that a specific line of play which is incredibly troublesome may not be seen and could cause Monte Carlo to make a major mistake. This may have been a cause of AlphaGo losing to Lee Sedol in game 4.

A disadvantage is that, faced in a game with an expert player, there may be a single branch which leads to a loss. Because this is not easily found at random, the search may not ""see"" it and will not take it into account. It is believed that this may have been part of the reason for AlphaGo's loss in its fourth game against Lee Sedol. In essence, the search attempts to prune sequences which are less relevant. In some cases, a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore ""off the search radar""

",13088,,13088,,6/29/2018 15:38,6/29/2018 15:38,,,,0,,,,CC BY-SA 4.0 6955,2,,6953,6/29/2018 15:30,,4,,"

First, we need to distinguish plain Monte-Carlo from Monte-Carlo Tree Search. They're different things.

Monte-Carlo search, in the context of game AI search algorithms, is typically understood to mean that we search randomly many times, and average the results, and nothing else. If this is all we're doing, then yes, your understanding is correct. This is also sometimes referred to as ""plain Monte-Carlo (search)"" or ""pure Monte-Carlo (search)"", to make it explicitly clear that we're not doing any tree search as in Monte-Carlo Tree Search (sometimes when we just say ""Monte-Carlo"" in the context of game AI, people will automatically assume Monte-Carlo Tree Search, due to how popular it is).

Monte-Carlo Tree Search does a lot more than just that though. It gradually builds up a search tree (through the Expansion step), and within that search tree (which is growing over time), it uses a much more sophsticated strategy for traversal than pure random (the Selection step).

For example suppose i have a node with 100 children, 99 of which leading to an instant WIN, and the last one leading to an instant LOSS.

Suppose that this node you're talking about, the one with those 100 children, is relatively close to the root node. Then, it is likely that it and all 100 of its children will end up ""growing into the search tree"" that is slowly built up by the Expansion step. Once they have been added to the search tree, the Selection step will make sure that the vast majority of further iterations visiting this part of the tree will select the instant loss (assuming the opponent is to move in this node). In the limit (after an infinite amount of search time), this bias towards selecting the loss node will be so large that the average evaluation tends to the (correct) minimax evaluation.


Another way to view the idea of evaluation through many random sequences of play is the following; the idea is that if we're in a very strong position, a good game state, then we're more likely to win than to lose if both players start playing at random. Consider, for example, a game of chess where Player 1 has many pieces left, and Player 2 only has a few pieces left. Imagine both players were to play completely at random from that game state onwards. On average, which player would you expect to win more often? Probably Player 1.

When we're considering game states that are still far away from terminal states, this basic idea tends to work relatively well. Obviously not always correct, it's still a heuristic, but it can kind of work. When we're already very close to a terminal state that can be reached through one highly specific sequence of play, yeah, we might miss that through random actions; this is where we need the more informed policy of the Selection step.

",1641,,1641,,8/21/2018 8:33,8/21/2018 8:33,,,,5,,,,CC BY-SA 4.0 6956,2,,6952,6/29/2018 17:10,,0,,"

That is not the actual reason; ""convolution"" layers are inspired by cells in the visual system. This is derived from the work of Hubel and Wiesel. For more information, look up the Hubel-Wiesel experiment.

",15935,,,,,6/29/2018 17:10,,,,6,,,,CC BY-SA 4.0 6959,1,6960,,6/29/2018 19:51,,2,256,"

I built a simple HTML game. In this game the goal is to click when the blue ball is above the red ball. If you hit, you get 1 point, if you miss, you lose 1 point. With each hit, the blue ball moves faster. You can test the game here.

Without using machine learning, I would easily solve this problem by just clicking when the X, Y of the blue ball was on the X, Y of the red ball. Regardless of the time, knowing the positions of the 2 elements I could solve the problem of the game.

However, if I wanted to create an AI to solve this problem, could I? How would it be? I'd really like to see the AI randomly wandering until it's perfect.

My way to solve the problem

I click many times and watch the score. If the score goes down, I add the position to bad_positions. If the current position is in bad_positions, I don't click. At first it misses many times, then it starts to hit every time. Is this machine learning? Deep learning? Or just a bot?

var bad_positions = []; // x positions where clicking cost a point
function train(){
  var pos = $ball.offset().left; // current horizontal position of the blue ball
  var last_score = score;
  if (!bad_positions.includes(pos)) {
    $('#hit').click(); // try clicking at this position
    if (score < last_score){
      bad_positions.push(pos); // clicking here lost a point, avoid it next time
    }
  }
}
",7800,,7800,,6/29/2018 20:24,6/29/2018 21:36,How to use Machine Learning with simple games?,,1,0,,,,CC BY-SA 4.0 6960,2,,6959,6/29/2018 21:36,,1,,"

You have implemented a simple contextual bandit solver, which is a machine learning algorithm. A few details may be different from a full implementation, but the key elements are:

  • A choice of actions (click hit or don't click hit)

  • A reward signal that can be observed after each action (+1 for a hit, 0 for nothing happens, -1 for an attack which misses)

  • An observable state which affects the reward achievable (the position of the blue ball). For a contextual bandit, the state is not influenced by the action taken. This is true here.

  • One thing that is different about your problem from a classic contextual bandit is that the next state is predictable from the current state (whilst in a pure bandit problem it should be entirely random). However, that's not too important to your problem here, and your solver is definitely following a contextual bandit approach.

  • Your solver tests the score from trying different actions in each state, and narrows down the best action to take in each state. Your implementation is simple and ""greedy"" for a contextual bandit solver. A more typical solution would maintain an average result for each action and have a rule for how to explore actions in each state, so it could test whether results were reliable (this is very helpful with non-deterministic scenarios where bandit solvers are more often used).

With each hit, the blue ball moves faster

Unless you somehow limit the reaction time of the agent, this is not relevant to how you write the solver. You could change the rules affecting the agent to make it relevant in the same way as it would be for a human, e.g. deciding to click means the click happens 0.1 seconds later, and the state can include observations of position just now and several 0.02 seconds going back.

In general, if you want to take this further, with more complex games and still learning how to control agent actions, you could look at simple reinforcement learning agents, such as Q-learning. If you are interested in the underlying theory of agents like this, then a good (and free) introductory text is Sutton & Barto ""Reinforcement Learning: An Introduction""

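As a rough sketch of the 'average result per state-action plus an exploration rule' idea mentioned above (this is illustrative only and not tied to your game's actual code):

import random
from collections import defaultdict

epsilon = 0.1                              # exploration rate
counts = defaultdict(int)                  # (state, action) -> times tried
totals = defaultdict(float)                # (state, action) -> summed reward

def choose_action(state, actions=('click', 'wait')):
    if random.random() < epsilon:          # explore occasionally
        return random.choice(actions)
    # otherwise pick the action with the best average reward so far
    def avg(a):
        c = counts[(state, a)]
        return totals[(state, a)] / c if c else 0.0
    return max(actions, key=avg)

def update(state, action, reward):         # called after observing +1 / 0 / -1
    counts[(state, action)] += 1
    totals[(state, action)] += reward
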
",1847,,,,,6/29/2018 21:36,,,,2,,,,CC BY-SA 4.0 6965,1,,,6/30/2018 17:37,,2,2771,"

Is there any general idea on how humans solve jumbled words? I know many people will say we match it against a commonly used words checklist mentally, but it is kind of vague. Is there any theory on this? And how might an AI learn to do the same?

",,user9947,2444,,5/19/2022 8:32,5/19/2022 8:32,Can AI solve jumbled words?,,1,0,,,,CC BY-SA 4.0 6966,2,,6965,6/30/2018 18:23,,3,,"

I remember a problem similar to this https://vladris.com/puzzles/facebook/puzzle_master/snack/breathalyzer.html this is given as a problem in facebook engineering puzzles.

But I doubt it is much of a machine learning/AI problem. You could implement an algorithm that converts each word into the multiset of its characters, then picks the word in your master list with the minimum distance based on that character list.
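
A minimal sketch of that idea in Python (the word list here is just a toy stand-in for your master list):

from collections import Counter

master_list = ['listen', 'silent', 'enlist', 'google', 'banana']

def unscramble(jumbled, words=master_list):
    target = Counter(jumbled.lower())
    # distance = how many character counts differ between the two words
    def distance(word):
        return sum((target - Counter(word)).values()) + sum((Counter(word) - target).values())
    return min(words, key=distance)

print(unscramble('tsilne'))  # prints an anagram match such as 'listen'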

Even when humans solve a jumble, we do it in a systematic/algorithmic way: if the jumbled word is not present in our memory we can't do anything; otherwise we can solve the jumble.

But the human brain can easily recognize scrambled words if some of the structure is retained and the word is not fully scrambled like an anagram.

We generally index words in our brain based on the first syllable/character, as the two linked example pictures show (one with scrambled letters, one where numbers stand in for letters).

In the second picture, we recognised the words even though they are not made of English characters, because our brain doesn't scan the text character by character (like ""7-H"") but treats it like an image.

So the model should not immediately classify the segmented characters; instead, it should find the ""nearest"" candidate character at every position such that the combined probability of the whole word matching one of the words in our dictionary is maximised.
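
A rough sketch of that last idea (hypothetical, assuming an OCR-style model that gives a probability for each candidate character at each position):

import math

dictionary = ['this', 'that', 'ship']

def best_word(char_probs, words=dictionary):
    # char_probs: one dict per position, mapping candidate character -> probability
    def score(word):
        if len(word) != len(char_probs):
            return float('-inf')
        return sum(math.log(probs.get(ch, 1e-9)) for probs, ch in zip(char_probs, word))
    return max(words, key=score)

# '7' is read as a slightly-less-likely 't', '1' as 'i', '5' as 's'
probs = [{'t': 0.6, '7': 0.4}, {'h': 0.9}, {'i': 0.5, '1': 0.5}, {'s': 0.7, '5': 0.3}]
print(best_word(probs))  # 'this'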

",15935,,,,,6/30/2018 18:23,,,,4,,,,CC BY-SA 4.0 6967,2,,6139,6/30/2018 18:39,,2,,"

You can use a neural network for this (sort of), and you've got the first step right - train the network to predict $y$ given $x$ (i.e. train it to approximate the function $f$ such that $y = f(x)$). Because the entire neural network is differentiable (presumably), you can take the gradient of the predicted output with respect to the input (some $x$). Then you can use this to update $x$ the same way that you update the weights during training of the network. This lets you find a local optimum starting from $x$. I don't know of a way to try to find a global optimum with a neural network like this, except to find the local optimum near many $x$'s and then take the best of those.

If you want a concrete example, take a look at the tutorial on Neural Style Transfer with PyTorch: here you have a noise image as the input and you optimize it to minimize the ""style distance"" to a reference style image and the ""content distance"" to a reference content image (i.e. starting with noise, make it look like the content of one image but in the style of another). There's full code there, but here's a short PyTorch snippet that shows the main idea:

import torch

# I'm just optimizing one input; clone so that you don't modify the
# original tensor; let it know that we want gradients computed
inputs = train_inputs[0:1].clone().requires_grad_()
input_optimizer = torch.optim.Adam([inputs])

def optimize_input():
    input_optimizer.zero_grad()
    # where `model` is your trained neural network
    output = model(inputs)
    # negate so that minimising the loss maximises the predicted output
    (-output).backward()
    return output

output = []
for _ in range(10000):
    output.append(input_optimizer.step(optimize_input).detach().numpy())

At the end, output is a list of the predicted output after each step of optimizing the input. You can use this to see whether you've converged to a local optimum yet (and take more or fewer steps next time as necessary). inputs will be the optimized input at the end. Note that if there are constraints that your input should satisfy, you'll need to enforce those yourself (e.g. in neural style transfer, they have to enforce that the values are valid for an RGB image).

Note also that how well this works really depends on how well your network approximates $f$; you may well get some unreasonable $x$ for which the network predicts a very large $y$ because there was no training data similar to that $x$ so the network isn't constrained in that region. In general, you should probably be cautious of an $x$ you generate this way that doesn't seem similar to your training data (e.g. has larger/smaller value than any $x$ the network has seen before; interpolation is much easier than extrapolation).

I'm assuming that you have a single data set; if you're able to query for the $y$ values of given $x$'s, then you might want to take a look at Bayesian optimization - essentially the field of trying to find $x$ that maximizes $y = f(x)$ when $f$ is expensive to evaluate and you don't have gradients of it. Bayesian optimization seeks a global optimum.

",16606,,,,,6/30/2018 18:39,,,,0,,,,CC BY-SA 4.0 6968,1,,,7/1/2018 4:52,,3,1601,"

What does it mean when it is said that Machine Learning algorithm results can be ""generalized""?

I don't understand what ""generalized"" algorithms, routines or functions are.

I have searched dictionaries and glossaries, and cannot find an explanation. Also, can anyone tell me where to find a good source for this type of thing? I am writing about AI and ML.

",13053,,,user9947,7/1/2018 5:22,7/1/2019 20:16,"What is a ""generalized"" machine learning algorithm?",,4,1,,,,CC BY-SA 4.0 6969,2,,6968,7/1/2018 4:56,,0,,"

A machine learning model is said to ""generalise"" when it performs equally well on both train and test datasets.

For any supervised machine learning algorithm to work well, you train it on a large dataset (the train set) and evaluate its performance on a dataset whose probability distribution is similar to that of the train set, but which is not part of the train set. This set is called the ""test"" set, and the performance is then evaluated on it: if the train and test accuracies are almost the same, the model is said to generalise well; if the training accuracy is much higher than the test accuracy, the model has in some way ""memorised"" the train set, which is called ""overfitting"".
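
A minimal sketch of that comparison (assuming scikit-learn is available; the iris data is only a stand-in for your own dataset):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
# similar train and test accuracies -> the model generalises;
# train_acc much higher than test_acc -> the model is overfitting
print(train_acc, test_acc)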

",15935,,12853,,7/1/2019 20:16,7/1/2019 20:16,,,,0,,,,CC BY-SA 4.0 6971,2,,6968,7/1/2018 5:12,,1,,"

Actually you have used 2 terminologies there:

  • The first one is that Machine Learning algorithm results can be ""generalised"". This refers to how well your trained Machine Learning model will perform on previously unseen data (a test set, or once deployed in the field). This is not easy, as data trends may change over time, resulting in a loss of accuracy. There are various methods to estimate this, such as holding out a cross-validation set and a test set, which comes under the broad scheme of k-fold cross-validation (a minimal sketch follows this list).
  • The second you mentioned is '""generalised"" algorithms, routines or functions'. Most Machine Learning algorithms can be applied to a broad range of problems. For example, the training of a NN is generally done by backprop, which applies universally to all NNs. Similarly, you can use a CNN to find features of local interest (i.e. local dependencies) in anything that can be represented in a pictorial form (e.g. strings of DNA). Also, combinations of CNNs and RNNs are being used to solve many problems. Thus, only a basic generalised algorithm is being applied to a lot of problems. NOTE: I have never seen anyone use the term in this context, but in practice this is what happens.
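
A minimal sketch of the k-fold cross-validation mentioned in the first point (assuming scikit-learn; the iris data is only a placeholder for your own data):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
# one held-out score per fold; a low mean or a large spread both signal poor generalisation
print(scores.mean(), scores.std())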

Here are a few resources for general reading purposes (not mathematical):

",,user9947,,user9947,7/1/2018 5:21,7/1/2018 5:21,,,,0,,,,CC BY-SA 4.0 6972,1,6974,,7/1/2018 6:59,,0,161,"

I want to train a CNN (Vggnet) to identify different types of buildings from aerial images.

However, a CNN largely ""ignores"" size: e.g. the same type of dog can appear large in one image and small in another, but will still be classified as a dog.

My issue is that non-residential buildings are mostly larger than residential houses, now I want to use this property to distinguish between residential and non residential. Is this even possible?

Thanks

",16628,,,,,7/1/2018 10:35,Using CNN to identify buildings from aerial images,,1,2,,,,CC BY-SA 4.0 6974,2,,6972,7/1/2018 10:35,,0,,"

You need to know the basic mechanics of CNNs; you can't simply fine-tune a pretrained net (VGGNet) every time. VGGNet is trained on the ImageNet dataset, so most of its features are not very relevant for your task (recognising buildings from an aerial view), and it is much better to train a CNN from scratch. You can't simply say in advance that it will fail to distinguish residential from non-residential buildings; it is best to find out what can be done by trying.

My issue is that non-residential buildings are mostly larger than residential houses, now I want to use this property to distinguish between residential and non residential.

Most of the time it is better to observe the features recognised by the net and then come up with an approach, rather than enforcing your own assumptions.

You may find this helpful https://www.cs.toronto.edu/~vmnih/docs/Mnih_Volodymyr_PhD_Thesis.pdf

",15935,,,,,7/1/2018 10:35,,,,0,,,,CC BY-SA 4.0 6976,2,,5461,7/1/2018 13:23,,0,,"

No, even generating a brief summary of a video is beyond the current state of the art. Training such a model is a tough task, and video understanding is still far from solved. But you could try generating descriptions of some keyframes of the video, and aligning them to form meaningful passages.

check this https://arxiv.org/abs/1611.06607

",15935,,,,,7/1/2018 13:23,,,,0,,,,CC BY-SA 4.0 6977,1,,,7/1/2018 15:18,,2,208,"

I want to detect drivers with or without seatbelts at crossroads. For that, as it is real-time, I am going to use the YOLO algorithm/model. For training data sets (the images) I need to collect, I placed a camera. By recording it and collecting images from there, I am getting images with more noise.

Can I use these images for training? Also, which YOLO version should I use? What are the important points that I should consider for training datasets?

I want to use any version of YOLO compatible with TensorFlow.

",16633,,2444,,6/15/2020 1:33,8/9/2022 15:58,What YOLO algorithm can I use for images with noise as I will implement it in real time?,,2,0,,,,CC BY-SA 4.0 6978,1,,,7/1/2018 15:41,,6,157,"

I just stumbled across the paper When Will AI Exceed Human Performance? Evidence from AI Experts, which contains a figure showing the aggregated subjective probability of ""high-level machine intelligence"" arrival by future years.

Even if this graph reflects the opinion of experts, it can be totally wrong. It is just extremely hard to predict future events. So I was wondering if there is a similar graph which shows basically the same but for the game Go?

Due to the complexity of Go, some experts assumed, that no computer ever could be better in Go than a human being due to the lack of intuition. This shows that the appearance of human level AI can be unpredictable.

Does anyone know if a similar graph for Go exists to see how good or bad the predictions were? This could give a very rough idea, how good this graph predicts the future of human-level AI.

",16634,,2444,,2/14/2020 1:47,2/14/2020 1:47,Are there human predictions of when a computer would have been better than a human at Go?,,1,0,,,,CC BY-SA 4.0 6982,1,6983,,7/2/2018 7:47,,17,4608,"

I was going through this implementation of DQN and I see that on line 124 and 125 two different Q networks have been initialized. From my understanding, I think one network predicts the appropriate action and the second network predicts the target Q values for finding the Bellman error.

Why can we not just make one single network that simply predicts the Q value and use it for both the cases? My best guess that it's been done to reduce the computation time, otherwise we would have to find out the q value for each action and then select the best one. Is this the only reason? Am I missing something?

",11584,,2444,,12/22/2021 18:16,12/22/2021 18:16,Why does DQN require two different networks?,,1,0,,,,CC BY-SA 4.0 6983,2,,6982,7/2/2018 9:12,,13,,"

My best guess that it's been done to reduce the computation time, otherwise we would have to find out the q value for each action and then select the best one.

It has no real impact on computation time, other than a slight increase (due to extra memory used by two networks). You could cache results of the target network I suppose, but it probably would not be worth it for most environments, and I have not seen an implementation which does that.

Am I missing something?

It is to do with stability of the Q-learning algorithm when using function approximation (i.e. the neural network). Using a separate target network, updated every so many steps with a copy of the latest learned parameters, helps keep runaway bias from bootstrapping from dominating the system numerically, causing the estimated Q values to diverge.

Imagine one of the data points (at $s, a, r, s'$) causes a currently poor over-estimate for $q(s', a')$ to get worse. Maybe $s', a'$ has not even been visited yet, or the values of $r$ seen so far is higher than average, just by chance. If a sample of $(s, a)$ cropped up multiple times in experience replay, it would get worse again each time, because the update to $q(s,a)$ is based on the TD target $r + \text{max}_{a'} q(s',a')$. Fixing the target network limits the damage that such over-estimates can do, giving the learning network time to converge and lose more of its initial bias.
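
To make the role of the fixed copy concrete, here is a minimal PyTorch-style sketch; q_net, target_net and the replay minibatch tensors s, a, r, s_next, done are assumptions for illustration, not part of the linked implementation:

import torch
import torch.nn.functional as F

gamma = 0.99
with torch.no_grad():
    # the TD target is computed with the *frozen* target network
    max_next_q = target_net(s_next).max(dim=1).values
    td_target = r + gamma * (1 - done) * max_next_q

# the current estimate comes from the learning network
q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = F.mse_loss(q_sa, td_target)

# every C steps, copy the learned parameters across:
# target_net.load_state_dict(q_net.state_dict())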

In this respect, using a separate target network has a very similar purpose to experience replay. It stabilises an algorithm that otherwise has problems converging.

It is also possible to have DQN with ""double learning"" to address a separate issue: Maximisation bias. In that case you may see DQN implementations with 4 neural networks.

",1847,,1847,,6/9/2020 16:37,6/9/2020 16:37,,,,1,,,,CC BY-SA 4.0 6984,2,,6977,7/2/2018 12:46,,1,,"

It is much better to learn the basic mechanics of convnets first, rather than diving straight into complicated models.

For training data sets (the images) I need to collect, I placed a camera. By recording it and collecting images from there, I am getting images with more noise. Can I use these images for training? Also, which yolo version should I use? What are the important points that I should consider for training datasets?

Once you are comfortable with the theory, most of your questions will answer themselves; otherwise you will end up with nothing but buzzwords.

I want to use any version of yolo compatible with tensorflow.

TensorFlow is a framework for building neural networks, so in theory you can build any of these networks with it; compatibility is not a problem at all.

",15935,,15935,,7/3/2018 4:26,7/3/2018 4:26,,,,0,,,,CC BY-SA 4.0 6985,1,,,7/2/2018 12:51,,2,203,"

In the paper ""Provable bounds for learning some deep representations"", an autoencoder like a model is constructed with discrete weights and several results are proven using some random-graph theory, but I never saw any papers similar to this. i.e bounds on neural networks using random graph assumptions.

What are some resources (e.g. books or papers) regarding the time and space complexity of training neural networks?

I'm particularly interested in convolutional neural networks.

",15935,,2444,,6/27/2019 23:32,6/27/2019 23:35,What are some resources regarding the complexity of training neural networks?,,1,0,,,,CC BY-SA 4.0 6986,1,,,7/2/2018 14:45,,2,31,"

I am starting to study the capabilities of neural networks for the reconstruction/restoration/... of communication signals.

I am feeding my neural network with a signal which has some parts which have been damaged because of the transmission through a communication system, and my targets are given by the signal with these areas undamaged.

The problem is that the damaged areas represent a very small portion of the whole signal, and my neural network spends a lot of time learning only from the portions which actually do not present any problem.

Is there any way to make the neural network focus on those areas which show significant differences with respect to the targets? Is there anything I could do, for example, when initializing my neural networks (conventionally, they are initialized randomly)? Or shall I accept that I need to train for a longer time?

",16654,,,,,7/2/2018 16:03,"Restoration of localized damaged areas (time signals, but guess also applicable to images)",,1,1,,,,CC BY-SA 4.0 6987,2,,6986,7/2/2018 16:03,,1,,"

So , you want to remove the ""noise"" (damaged signal) from the signal, by giving the perfect signal as output. This model is called as a ""denoising autoencoder"". If you have enough training data, the noise is not at all a problem for the neural network , if you really feel that the network is capturing the noise , then you have to increase the training epochs , or sometimes it can be a bad initialized network.

",15935,,,,,7/2/2018 16:03,,,,1,,,,CC BY-SA 4.0 6988,2,,1485,7/2/2018 16:08,,0,,"

That depends on what type of network you want to use for your second network, instead of feeding the outputs of the first layer, it would be much better if you jointly train both the networks. But that depends on the architecture of the second network ('logic' network).

",15935,,8,,7/2/2018 21:02,7/2/2018 21:02,,,,0,,,,CC BY-SA 4.0 6989,2,,6978,7/2/2018 20:08,,4,,"

Go predictions were included in the paper:

The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.)
SOURCE: Experts Predict When Artificial Intelligence Will Exceed Human Performance (MIT Tech Review)

",1671,,,,,7/2/2018 20:08,,,,1,,,,CC BY-SA 4.0 6990,1,6999,,7/3/2018 2:56,,3,3160,"

While reading about least squares implementation for machine learning I came across this passage in the following two photos:

Perhaps I'm misinterpreting the meaning of $\beta$, but if $X^T$ has dimension $1 \times p$ and $\beta$ has dimension $p \times K$, then $\hat{Y}$ would have dimension $1 \times K$ and would be a row vector. According to the text, vectors are assumed column vectors unless otherwise noted.

Can someone provide clarification?

Edit: the matrix notation in this text confuses me. The pages preceding the above passage read:

Should the matrix referenced not have dimensions $p \times N$, assuming a $p$-vector is a column vector with $p$ elements?

Note: The passage is taken from “Elements of Statistical Learning” by Hastie, Tibshirani, & Friedman.

",16343,,16343,,7/3/2018 3:42,7/3/2018 14:12,Matrix Dimension for Linear regression coefficients,,2,6,,,,CC BY-SA 4.0 6991,2,,6990,7/3/2018 3:49,,1,,"

I have used W for beta and Y_pred for Y_hat.

Apparently, as far as conventions go, both the feature vector X and the weight vector W are assumed to be column vectors. Although this is not important in your case, it is particularly important when we use neural nets, where the weight matrix for a layer is $p \times N_n$, with $p$ the number of features coming in from the previous layer and $N_n$ the number of nodes in the layer. Check a NN structure and you will understand what I am saying.

As for the special case in your current question, the book is correct: although the comments say Y_predicted should be a vector, it is not, since we are taking the dot product between X and W (even though it is written in a vectorised form).

Also, how could W have dimensions $p \times K$ (assuming it is not a NN)? That would mean multiple solutions to the same problem. Instead, it is X which will have dimensions $p \times K$, and W will be $p \times 1$; it means you have K training examples with p features.

If it is a NN with K nodes, then dimensions of $p \times N$ for X and $p \times K$ for W are entirely possible. The only downside of this notation is that you again have to take a transpose of the result.

Here is the explanation of the notation used:

Notation for NN

",,user9947,,user9947,7/3/2018 4:08,7/3/2018 4:08,,,,8,,,,CC BY-SA 4.0 6993,2,,2637,7/3/2018 6:29,,6,,"

In the basic form, if you encounter a terminal leaf, you add visits and score depending on whether it is a win or loss, and backpropagate accordingly. The same as if you made a simulation step, but in this case the ""simulation"" is instant.

But you can improve that: If the leaf is losing, you can give it a very large negative score or even $-\infty$, so in the next selection step it surely won't be chosen, unless other moves are as bad. But if it is a winning leaf, you not only can give it a very big positive score or $\infty$, but also add a negative score for the immediate parent, so the parent won't be chosen, as it is obviously a losing state for him. This way we can save some simulations. I had encountered that situation many times in my game and Monte Carlo tree searches.

Suppose the parent has $200$ unexplored children, $10$ of which are immediate wins for the opponent. Your search may explore $100$ non-winning children and have a score of 80/100. But then it encounters the terminate child. After that, normally, it would have $80/101$, so, still, a big chance to be chosen in the next iteration. And it would take many iterations to see that this is not a good move, as it would need to get like $80/150$ or more. But if we cancel out the score or give it a negative one like $-1/101$, then we ensure it won't be chosen.

It seems in literature it's called ""MCTS solver"", to backpropagate proven wins and losses.

",16663,,2444,,11/18/2019 21:47,11/18/2019 21:47,,,,0,,,,CC BY-SA 4.0 6994,1,,,7/3/2018 10:59,,3,78,"

I have a dataset of unlabelled emails that fall into distinct categories (around a dozen). I want to be able to classify them, along with new ones that will arrive in the future, in a dynamic manner. I know that there are dynamic clustering techniques that allow the clusters to evolve over time ('dynamic-means' being one of them). However, I would also like to be able to start with a predefined set of classes (or clusters/centroids), as I know for a fact what the types of those emails will be.

Furthermore, I need some guidance in terms of what vectorisation technique to use for my type of data. Would creating a term matrix using TF-IDF be sufficient? I assume that the data I am dealing with could be differentiated on the basis of keyword occurrence, but I cannot tell to what degree. Are there more sophisticated vectorisation techniques based more on the text semantics? Are they worth exploring?

",16669,,2444,,12/26/2021 14:01,12/26/2021 14:01,What techniques to explore for dynamic clustering of documents (emails)?,,2,2,,,,CC BY-SA 4.0 6995,1,,,7/3/2018 11:13,,2,29,"

In the picture below, I cannot understand what the U vector is. It says flow field, but I cannot imagine what the flow field really is.

",9941,,1671,,7/3/2018 16:23,7/3/2018 16:23,Simple question about HS algorithm's formul(Optical flow),,0,1,,,,CC BY-SA 4.0 6996,1,,,7/3/2018 12:15,,2,211,"

Imagine that a line divides an image in two regions which (slightly) differ in terms of texture and color. It is not a perfect, artificial line but rather a thin transition zone. I want to build a neural network which is able to infer geometrical information on this line (orientation and offset). The image may also contain other elements which are not relevant for the task. Now, would a classical CNN be suitable for this task? How complex should it be in terms of number of convolutions (and number of layers, in general)?

",16671,,,,,7/5/2018 12:18,Neural network architecture for line orientation prediction,,1,5,,,,CC BY-SA 4.0 6997,1,7006,,7/3/2018 13:43,,9,796,"

I am familiar with supervised and unsupervised learning. I did the SaaS course done by Andrew Ng on Coursera.org.

I am looking for something similar for reinforcement learning.

Can you recommend something?

",16672,,2444,,12/20/2021 22:17,12/20/2021 22:17,What's a good resource for getting familiar with reinforcement learning?,,5,0,,,,CC BY-SA 4.0 6998,2,,6997,7/3/2018 14:03,,4,,"

Before that, ask yourself whether you really want to learn about ""reinforcement learning."" Although there is much hype around it, the real-world applicability of reinforcement learning is almost non-existent. Most online courses teach you very little about machine learning, so it is much better to get thorough with that first, rather than rushing towards reinforcement learning. Learning reinforcement learning is somewhat different from learning about unsupervised/supervised learning techniques.

Having said that, the fastest way to get a good grasp of reinforcement learning is as follows:

  1. Read Andrej Karpathy's blog post ""Pong from Pixels.""

  2. Watch Deep RL Bootcamp lectures.

  3. To understand the math behind these techniques, refer to Sutton and Barto's Reinforcement Learning: An Introduction.

  4. Read relevant papers (game-playing etc.).

P.S.: Make sure that you are thorough with the basics of neural networks, as most current papers in RL involve using DNNs in some way or other as approximators.

",15935,,16987,,7/19/2018 18:48,7/19/2018 18:48,,,,7,,,,CC BY-SA 4.0 6999,2,,6990,7/3/2018 14:12,,1,,"

This is a good example of what happens when you take text out of context.

The passage that was added in the edited question makes a difference, but it's not quite sufficient, and it doesn't help that the notation is all over the place. I found the textbook and the relevant passages (Section 2.3, p. 10-11). Here is a quick attempt at an explanation.

The authors call $X$ a variable (either scalar or vector) with components $X_j$, but later in the paragraph they refer to $X_j$ as a variable (which I think is the correct notation). Instead of a ""variable"" with ""components"", think of $X$ as a set of $p$ variables (in the normal sense of a variable, such as temperature, the price of shares, etc.). You can put multiple variables $x_i$ ($i = 1 \dots p$) in a vector $X$ and call each instance of $X$ an observation. In other words, an observation is a set of measured values for all variables $x_i$.

Assume you have made $N$ observations. Arrange your observations in a matrix with $N$ rows and $p$ columns, where each row represents a single observation (an instance of $X$).

Now also assume that you are trying to find the relation between your input variables $x_i$ and a different set of variables $y_k$, called dependent or response variables. In general, the variables $x_i$ and $y_k$ are measured, and the model is simply trying to extract the relation between the input and dependent variables so you can then predict the latter from the former.

As a side note, observations are usually denoted with superscripts ($x^{(n)}$) and variables with subscripts ($x_i$), so there is no confusion about which is which: $x_i^{(n)}$ is the $n$-th observed (measured) value of variable $x_i$.

In your case, you have a single dependent variable $y$ and $p$ input variables $x_i$ ($i = 1 \dots p$). Assuming that their relation is linear (note: in many cases this assumption is not justified), we can assign weights (""importance"") to each variable and try to find out those weights from measurements. In your case, the weights are denoted with $\beta_i$ (so the ""importance"" of variable $x_i$ is $\beta_i$; note the same subscript).

If you have $N$ observations of your dependent variable $y$, then you can arrange them in a column, just like the observations for your input variables. Note that we still have a single dependent variable, so essentially $y$ is a scalar, but the $N$ observations of that scalar form an $N$-dimensional column vector.

Now multiply the $N \times p$ input matrix by the $p$-dimensional column vector of weights $\beta$. What you get is a column vector of $N$ predicted values for $y$ ($\hat{y}$). The difference between the $N$-dimensional column vector of predicted values $\hat{y}$ and the $N$-dimensional column vector $y$ of measured values is the error (which is minimised with the least squares method).

If you have $q$ dependent variables $y_k$ ($k = 1 \dots q$), each of them would generally have its own set of weights $\beta$. In other words, the relation between the input variables and each $y_k$ will be different. The dependent variables can also be arranged in a matrix (just like the input matrix), and its dimensionality will be $N \times q$, where $q$ is the number of dependent variables. In that case, $\beta$ will not be a single $p$-dimensional column vector but a $p \times q$ matrix. Each column of the $\beta$ matrix gives the relation between the input variables and the corresponding dependent variable $y_k$. In that respect, a single observation of all variables $y_k$ will be a row vector in the $y$ matrix.

I hope that this clarifies things up a bit. In summary, the explanation in the textbook is correct, but the notation is hard to follow and at times plain misleading (as in the case of variable and component at the beginning of the section). Honestly, you can get a much more intuitive explanation of linear regression from the Wikipedia article.
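
As a concrete check of the shapes described above, here is a tiny numpy sketch (the values of N, p and q are arbitrary):

import numpy as np

N, p, q = 100, 3, 2            # observations, input variables, dependent variables
X = np.random.randn(N, p)      # one observation of the p input variables per row
beta = np.random.randn(p, q)   # one column of weights per dependent variable
Y_hat = X @ beta               # predicted values, shape (N, q)
print(Y_hat.shape)             # (100, 2)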

",16101,,,,,7/3/2018 14:12,,,,1,,,,CC BY-SA 4.0 7003,1,,,7/3/2018 20:23,,2,77,"

The gradient in Maximum Entropy IRL requires finding the probability of the expert trajectories given the reward function weights. This is done in the paper by calculating state visitation probabilities, but I do not understand why we can't just calculate the probability of a trajectory by summing up all the rewards collected along that trajectory. The paper defines the probability of a trajectory as $\exp(R(\tau))/Z$. I do not understand why we have to solve the MDP to calculate that.

",16678,,,,,7/3/2018 20:23,Why do we have to solve MDP in each iteration of Maximum Entropy Inverse Reinforcement Learning?,,0,0,,,,CC BY-SA 4.0 7004,2,,6997,7/3/2018 20:44,,2,,"

I recently saw a course by Microsoft on edx. It is called 'Reinforcement Learning Explained'.

Here is the link: https://www.edx.org/course/reinforcement-learning-explained-0 This is not quite comprehensive but at least gives a good starting point.

",16497,,,,,7/3/2018 20:44,,,,0,,,,CC BY-SA 4.0 7005,2,,6997,7/3/2018 22:49,,4,,"

There's a Youtube playlist (in the DeepMind channel) whose title is Introduction to reinforcement learning, which is a course (of 10 lessons) on reinforcement learning by David Silver.

A person who followed and finished the course wrote (as a Youtube comment):

Excellent course. Well paced, enough examples to provide a good intuition, and taught by someone who's leading the field in applying RL to games.

",2444,,,,,7/3/2018 22:49,,,,0,,,,CC BY-SA 4.0 7006,2,,6997,7/3/2018 23:40,,7,,"

To the good answers here, I would add

These barely scratch the surface of RL, but they should get you started.

",16101,,,,,7/3/2018 23:40,,,,0,,,,CC BY-SA 4.0 7007,2,,6968,7/4/2018 0:13,,1,,"

The brief answer is: a generalized machine learning algorithm is an algorithm that can do well and give good results on new data that it has never seen before.

",14749,,,,,7/4/2018 0:13,,,,0,,,,CC BY-SA 4.0 7008,1,7014,,7/4/2018 5:37,,3,3333,"

I have practiced building CNNs for image classification with TensorFlow, which is a nice library with good documentation and tutorials. However, I found that TensorFlow is too complicated and cumbersome.

Can I build a CNN for image classification tasks just with OpenCV?

",16383,,2444,,9/22/2020 10:23,9/22/2020 10:23,Can I build a CNN for image classification tasks just with OpenCV?,,1,0,,9/22/2020 10:20,,CC BY-SA 4.0 7010,2,,6770,7/4/2018 9:27,,2,,"

You can search for the following paper titles:

  1. A Deep Multi-Level Network for Saliency Prediction.
  2. Beyond Universal Saliency: Personalized Saliency Prediction with Multi-task CNN.

You can code in python using Pytorch framework.

",9062,,,,,7/4/2018 9:27,,,,0,,,,CC BY-SA 4.0 7014,2,,7008,7/4/2018 12:09,,4,,"

OpenCV does include 2D filter convolution functions for custom separable and non-separable filters. The latter uses DFT for large filters, which may or may not be faster than the conventional method. It also includes (partial?) support for deep nets with various types of layers. Theoretically, you should be able to stitch everything together into a complete CNN. However, I have not used any of those, and I have no idea about the level of maturity of the implementation.

That said, if you are willing to implement a custom CNN from scratch, you will probably get more control over the implementation using a generic (BLAS / OpenCL / CUDA) matrix library.

",16101,,,,,7/4/2018 12:09,,,,0,,,,CC BY-SA 4.0 7019,2,,6770,7/4/2018 14:28,,0,,"

""Attention"" in neural network (visual) is the area of the image where the network can find most number of features to classify it with high confidence.Based on your description you are talking about ""soft attention"".

Do we have any tools or SDK to implement that? I don't think there are ready-made SDKs available. It is much better to train a model with attention on your own dataset. Once you have your base model ready, it is easy to add an attention mechanism to it. I suggest you check https://arxiv.org/pdf/1502.03044.pdf.

",15935,,,,,7/4/2018 14:28,,,,0,,,,CC BY-SA 4.0 7021,1,,,7/4/2018 19:52,,8,372,"

Most humans are not good at chess. They can't write symphonies. They don't read novels. They aren't good athletes. They aren't good at logical reasoning. Most of us just get up. Go to work in a factory or farm or something. Follow simple instructions. Have a beer and go to sleep.

What are some things that a clever robot can't do that a stupid human can?

",4199,,4302,,10/19/2018 21:52,10/19/2018 21:52,Is the smartest robot more clever than the stupidest human?,,5,0,,,,CC BY-SA 4.0 7022,2,,3156,7/4/2018 19:58,,4,,"

Have a look at the paper Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling (2014), where different LSTM architectures are compared. In the abstract, the authors write the following.

We show that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance

",16687,,2444,,3/7/2020 15:50,3/7/2020 15:50,,,,0,,,,CC BY-SA 4.0 7023,2,,6926,7/4/2018 21:03,,4,,"

Great question, and one that I think we could have done a better job of answering in the paper.

Essentially, the pose matrix of each capsule is set up so that it could learn to represent the affine transformation between the object and the viewer, but we are not restricting it to necessarily do that. So we talk about the output of a capsule as though it is an affine transformation matrix, but we can't ensure that it will be. We do things explicitly that make it more like such a matrix — like adding in the coordinates to the right-hand column — but we can't be sure. This somewhat embodies a large part of the capsule network theory — we set up scaffolding so that the network can learn to be equivalent to transformations that we think it ought to be invariant to, but we don't ensure that it is.

",16689,,2444,,6/9/2020 15:12,6/9/2020 15:12,,,,1,,,,CC BY-SA 4.0 7024,1,7026,,7/4/2018 22:04,,3,66,"

I'm a complete newbie to NNs, and I need your advice.

I have a set of images of symbols, and my goal is to categorize and divide them into groups of symbols that look alike, without teaching the NN anything about the data.

What is the best way to do this? What type of NN suits the best? Maybe there are any ready solutions?

",16690,,2444,,10/28/2021 16:54,10/28/2021 16:54,"Given a set of images that are not divided into groups, which algorithm should I use to do that?",,1,0,0,,,CC BY-SA 4.0 7025,1,,,7/5/2018 3:36,,4,725,"

I recently came across a reference to a book that was highly regarded: "Pattern Recognition and Machine Learning" by Christopher Bishop.

I am a beginner working my way through some machine learning courses on my own.

I'm curious if this book is still relevant considering it was published in 2006. Can anyone who has read it attest to its usefulness in 2018?

",16343,,2444,,12/5/2021 17:39,1/4/2022 18:07,"Is Christopher Bishop's ""Pattern Recognition and Machine Learning"" out of date in 2018?",,1,0,,,,CC BY-SA 4.0 7026,2,,7024,7/5/2018 3:44,,2,,"

This is called ""clustering"" , If the network is already trained with data that has similar features as of the ""symbols"", you can use that network with its last classification layer removed , then run a clustering algorithm like ""k-means"" on top of the vectors obtained from the last layer of the network.

",15935,,,,,7/5/2018 3:44,,,,0,,,,CC BY-SA 4.0 7027,1,7028,,7/5/2018 5:25,,4,1172,"

I was learning about back-propagation and, looking at the algorithm, there is no particular 'partiality' given to any unit. What I mean by partiality there is that you have no particular characteristic associated with any unit, and this results in all units being equal in the eyes of the machine.

So, won't this result in the same activation values of all the units in the same layer? Won't this lack of 'partiality' render neural networks obsolete?

I was reading a bit and watching few videos about backpropagation and, in the explanation given by Geoffrey Hinton, he talks about how we're trying to train the hidden units using the error derivatives w.r.t our hidden activities rather than using desired activities. This further strengthens my point about how by not adding any difference to the units, all units in a layer become equal since initially the errors due to all of them are the same and thus we train them to be equal.

",16694,,2444,,1/20/2021 21:32,1/20/2021 21:32,Do we know what the units of neural networks will do before we train them?,,1,0,,,,CC BY-SA 4.0 7028,2,,7027,7/5/2018 8:02,,8,,"

In reverse order to how you asked:

all units in a layer become equal since initially the errors due to all of them are the same and thus we train them to be equal

This actually happens if you initialise the weights equally (e.g. all zero). Gradients in that case are the same to each neuron in the same layer, and everything changes in lockstep. A neural network without random weight initialisation will simply not work.
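
A tiny PyTorch demonstration of that lockstep behaviour (a sketch, initialising every weight to the same constant rather than randomly):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 2), nn.Tanh(), nn.Linear(2, 1))
for p in net.parameters():
    nn.init.constant_(p, 0.5)          # identical initialisation everywhere

x, y = torch.randn(8, 3), torch.randn(8, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(net(x), y).backward()
    opt.step()

# the two hidden units received identical gradients at every step,
# so their weight rows are still identical after training
print(net[0].weight)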

So won't this result in the same activation values of all the units in the same layer? Won't this lack of 'partiality' render neural networks obsolete?

No, because random weight initialisation causes the gradients to be different, and the neuron activations will typically diverge to represent different ""hidden"" features that activate differently depending on the input.

What I mean by partiality there is that, you have no particular characteristic associated with any unit and this results in all units being equal in the eyes of the machine.

One interesting side-effect of this behaviour is that the ""partiality"" will often be assigned effectively randomly too, as the neural network will converge to features that somehow work. These features have no guarantee to be meaningful to a human being in the context of a problem to solve. They might be something that can be mapped to the problem, they might be some linear combination of something that can be understood, but often there is no obvious interpretation.

",1847,,,,,7/5/2018 8:02,,,,0,,,,CC BY-SA 4.0 7029,2,,7021,7/5/2018 8:34,,2,,"

Survival, Imagining, Moral Reasoning

The thing that comes to mind is a new-born, when you said "the stupidest human", and it already has some basic “survival instincts”. It will avoid pain, consume food, and quickly learn to distinguish "safe" and "dangerous" conditions and people.

We have computer programs that can learn chess and calculate the optimal move in a split second, but playing chess is, by itself, a bit pointless. Merely being able to play a board game is of little value from a survival, industrial, or economic perspective.

There are programs that can do things that are very helpful for the modern world, but as far as I know, they just don't have survival instincts. A self-learning robot, left in a forest with all the tools it needs to generate power for, build duplicates of, maintain and defend itself, probably wouldn't be able to learn how to do so in time to ensure its survival. Our current self-learning programs would need to be able to identify when they have succeeded or failed at improving their survival odds. A child of two may learn fast enough to survive if the conditions are not too severe and non-toxic food and some form of shelter are nearby.

A financially poor, marginally educated person with lower than average aptitude working at a farm or factory might not be able to play chess well, but they would definitely be able to tell if someone is murdering someone else, and know to flee and seek the authorities. A robot that can play chess would not.

Furthermore, humans can continue to learn when separated from the problem by thinking about the problem. The ability to construct arbitrary models and run thought experiments is currently unique to humans.


That said, I do hope that we will soon have programs that well replicate the human mind, and demonstrate some of the aspects of what we call consciousness.

",16355,,-1,,6/17/2020 9:57,7/5/2018 18:49,,,,1,,,,CC BY-SA 4.0 7030,2,,7021,7/5/2018 8:42,,3,,"

First Question

To treat this question in a scientific way, because I think it is a reasonable enough question that draws on the realities of postmodern culture in post industrialized societies to be treated scientifically, we should define some things.

The most difficult is intelligence, which is the realm in which smartness, cleverness, and stupidity reside.

Let's go through the list.

  • Most humans are not good at chess, but humans invented the game.
  • Most humans can't write symphonies, but humans invented them.
  • Most humans don't read novels, but digital computers can't write them yet and can't learn ethical balance through the reading of them like humans can.
  • Most humans are not Olympic level athletes, yet humans developed Olympics and robots are not yet Olympiads.
  • Most humans (whether or not they are good at logical reasoning) don't employ it much other than to, ""Get by.""

The question is fine until it devolves into the dismissal of the intelligence that people apply to their method of earning income, which for many if not most people requires more than just getting up, commuting in, following some simple instructions, and going to sleep mildly inebriated. Let's replace this last part with this.

  • Most humans do not work with the intention of optimizing quality of the product or service by measuring their own quality and seeking educational resources to improve the velocity, reliability, or accuracy of their work output (unless programs are instituted to incentivize these things in the workplace.)

If we define intelligence as the union of these things, for simplicity's sake, we have this (which is subject to change as AI develops).

  • Playing chess: AI wins
  • Designing games: Humans win
  • Writing symphonies: Humans win
  • Writing novels: Humans win
  • Absorbing ethics from stories: Humans win
  • Olympic gold: Humans win
  • Logical consistency: AI wins

We must, to interpret the above list correctly concede two things:

  1. Machines may have the ability to do something according to specific quality standards but not be configured, trained, or connected appropriately to prevail.
  2. Humans may have the ability to do something according to specific quality standards but not be educated, trained, or be properly motivated to prevail.

Second Question

What are some things that a clever robot can't do that a stupid human can? These are a few, but they are of particular importance from certain perspectives.

  • Love their family and friends
  • Have compassion without reason
  • Decide what to learn
  • Hunt
  • See a future danger approaching
  • Entertain others
  • Pray

I would not dismiss these human propensities as irrelevant, even from a scientific perspective. I would also not dismiss the possibility that these things are beyond the capabilities of silicon based entities.

",4302,,,,,7/5/2018 8:42,,,,3,,,,CC BY-SA 4.0 7032,1,,,7/5/2018 10:36,,2,819,"

I should show that exact inference in a Bayesian network (BN) is NP-hard and P-hard by using a 3-SAT problem.

So, I did formulate a 3-SAT problem by defining 3-CNF:

$$(x_1 \lor x_2) \land (\neg x_3 \lor x_2) \land (x_3 \lor x_1)$$

I reduced it to inference in a Bayesian network, and produced all conditional probabilities, and I know which variable assignment would lead for the entire expression to be true.

I am aware of the difference between P and NP. (Please correct me if I am wrong):

Any P problem with an input of the size $n$ can be solved in $\mathcal{O}(n^c)$. For NP, the polynomial-time cannot be determined, hence, nondeterministic polynomial time. The question that scientists try to answer is whether a computer who is able to verify a solution would also be able to find a solution. P= NP?

However, I am still not sure how I can prove that exact inference in Bayesian network is NP-hard and P-hard.

",15391,,2444,,9/23/2019 12:20,9/23/2019 12:20,Why is exact inference in a Bayesian network both NP-hard and P-hard?,,1,0,,,,CC BY-SA 4.0 7033,1,7036,,7/5/2018 11:56,,2,199,"

I have a task where I would like to use a convolutional neural network (CNN). I would like to incrementally start from the fastest models, fine-tune and see whether they fit my ""budget"". At the moment, I'm just looking at object detection CNN-based feedforward models.

I'm curious to know if there is any article, blog, web page or gist that benchmarks the popular CNN models based on the forward-pass speed. If there is back-propagation time and dataset-wise performance, even better!

",16702,,2444,,11/4/2019 3:33,11/4/2019 3:34,Are there benchmarks for assessing the speed of the forward-pass of neural networks?,,1,0,,,,CC BY-SA 4.0 7035,2,,6996,7/5/2018 12:18,,1,,"

A long shot: these guys have worked on a problem that might be relevant. They define ""semantic"" lines as lines delimiting significant regions or objects in an image. To detect such lines, they use the conv layers from a pre-trained VGG16 net and then add their own layers on top. The cool thing about their approach is that they run both classification and regression in parallel on the same network.

You might be able to adopt a similar technique to determine where the line is, and then run some simple analysis on the extracted line to determine the offset and the orientation.

",16101,,,,,7/5/2018 12:18,,,,0,,,,CC BY-SA 4.0 7036,2,,7033,7/5/2018 12:24,,1,,"

https://github.com/jcjohnson/cnn-benchmarks might be a good start. It mostly focuses on GPUs, but there is also one CPU (Dual Xeon E5-2630 v3).

",16101,,2444,,11/4/2019 3:34,11/4/2019 3:34,,,,0,,,,CC BY-SA 4.0 7037,2,,3156,7/5/2018 14:59,,7,,"

In general, there are no guidelines on how to determine the number of layers or the number of memory cells in an LSTM.

The number of layers and cells required in an LSTM might depend on several aspects of the problem:

  1. The complexity of the dataset, such as the number of features, the number of data points, etc.

  2. The data-generating process. For example, the prediction of oil prices compared to the prediction of GDP is a well-understood economy. The latter is much easier than the former. Thus, predicting oil prices might require more LSTM memory cells to predict, with the same accuracy, as compared to the GDP.

  3. The accuracy required for the use case. The number of memory cells will heavily depend on this. If the goal is to beat the state-of-the-art model, in general, one needs more LSTM cells. Compare that to the goal of coming up with a reasonable prediction, which would need fewer LSTM cells.

I follow these steps when modeling using LSTM.

  1. Try a single hidden layer with 2 or 3 memory cells. See how it performs against a benchmark. If it is a time series problem, then I generally make a forecast from classical time series techniques as benchmark.

  2. Try and increase the number of memory cells. If the performance is not increasing much then move on to the next step.

  3. Start making the network deeper, i.e. add another layer with a small number of memory cells.

As a side note, there is no limit to the amount of labor that can be devoted to reach that global minimum of the loss function and tune the best hyper-parameters. So, having the focus on the end goal for modeling should be the strategy rather than trying to increase the accuracy as much as possible.

Most of the problems can be handled using 2-3 layers of the network.

",16708,,2444,,3/7/2020 16:08,3/7/2020 16:08,,,,0,,,,CC BY-SA 4.0 7039,2,,7021,7/5/2018 18:19,,4,,"

The ""baseline humans"" you describe have been historically described in the media industry as ""the lowest common denominator"" (LCD).

The LCD is the broadest possible audience for content, traditionally for network television shows. (Before the age of cable, there were only 3 to 4 networks and all video content was broadcast over the airwaves--no way to specifically target audience segments so content had to appeal to the LCD.)

Because captchas have to be solvable by the LCD while still tricking bots, as long as captchas are viable, they will by definition always be something that baseline humans can do better than AI.

",1671,,1671,,7/5/2018 21:39,7/5/2018 21:39,,,,0,,,,CC BY-SA 4.0 7040,2,,7021,7/5/2018 18:20,,2,,"

I do not know the precise definition of intelligence, but from lots of people I have interacted with, they regard people as intelligent on a particular field, if and only if:

  • They are able to take split second correct decisions in a situation in that particular field.

Let us see where AI have succeeded in this case:

  1. Elon Musk’s Dota 2 AI beats the professionals at their own game
  2. AlphaZero AI beats champion chess program after teaching itself in four hours

These are a few famous cases. If we examine these cases carefully, we see that computers are outperforming humans only due to:

  • Huge memory available.
  • Fast memory access.
  • Split-second correct decisions due to high processor speeds (although the algorithms for those decisions are developed by humans).

So AIs are actually workhorses, working without fatigue and without such limitations. Human brains do not excel at decision making or speed. Here is a comparison: What makes animal brain so special?

Human brains excel at creativity. We can learn how to make symphonies. Can an AI do the same? Possibly, with the correct programming. Much of our intelligence comes from its distributed nature: we learn from other people's mistakes and improve on them. A large number of humans, combined with record keeping, has made this possible. Although scientists like Tesla, Einstein, Newton and Feynman worked out calculus on their own, think of the possibilities for new inventions had they been made aware that calculus already existed and that a lot had been done to develop it. Check this: Swarm intelligence vs Normal Human Intelligence.

So our intelligence and experience come from a huge shared source of information rather than from huge personal resources. As of now, we can think of abstract concepts in a way an AI cannot (i.e. we can create genuinely new things, not just new artworks or music made by mixing things up, as an AI does).

For example, it has been observed that if you keep many deaf babies together and isolated, they develop their own, completely unique form of sign language. Points to note here are:

  • They were completely isolated.
  • They worked as a group to develop the sign language.

So although machines might be performing well due to their algorithmic complexity and immense power, they still have some catching up to do before they can be compared to even the stupidest humans.

The main problem is that we do not yet know the capacity of a brain. Some people can perform exceptional feats with their brain when the need arises. Someone did this during WW2 to find his family: Grandmaster plays 48 games at once, blindfolded while riding exercise bike. But how is this suddenly possible? No one will know until we have fully uncovered our own mind.

",,user9947,,user9947,7/5/2018 19:03,7/5/2018 19:03,,,,0,,,,CC BY-SA 4.0 7042,1,,,7/5/2018 21:42,,7,3589,"

Typical AI these days are question-answering machines. For example, Siri, Alexa and Google Home. But it is always the human asking the questions and the AI answering.

Are there any good examples of an AI that is curious and asks questions of its own accord?

",4199,,,,,1/10/2023 18:13,An AI that asks questions?,,3,2,,,,CC BY-SA 4.0 7043,2,,7042,7/6/2018 3:48,,4,,"

You are referring to 'proactive AI' as opposed to 'reactive AI' like Alexa, Cortana, Siri, Bixby, Google Assistant, and others. There hasn't been much progress in this area of AI. Google's recent demonstration of Duplex addresses this to some extent. Some chatbots are proactive. Genesys provides such capability. Check out their video

Azure's bot service has a page on how to implement proactivity and there is another video that walks through the whole process: Learn to build Proactive Bot in 30 Minutes.

",5763,,,,,7/6/2018 3:48,,,,0,,,,CC BY-SA 4.0 7044,1,,,7/6/2018 3:49,,10,1628,"

Are there possible models that have the potential to replace neural networks in the near future?

And do we even need that? What is the worst thing about using neural networks in terms of efficiency?

",16715,,2444,,4/12/2020 13:39,4/12/2020 13:39,What are the models that have the potential to replace neural networks in the near future?,,4,0,,,,CC BY-SA 4.0 7045,2,,7042,7/6/2018 8:15,,1,,"

One of the simplest examples that I can think of is ""Akinator"". At its heart, it uses decision trees to narrow down the search. It is not a ""questioning"" model like the QA models used in Alexa, but it does ask questions.

",15935,,16355,,7/6/2018 9:27,7/6/2018 9:27,,,,0,,,,CC BY-SA 4.0 7046,2,,7044,7/6/2018 8:21,,0,,"

Neural networks require lots of data and training. For most tabular datasets it is much better to use decision-tree-based models, and most of the time simple models are enough to give good accuracy. However, neural networks have not yet had their test of time: it has only been five to six years since the deep learning revolution started, so we still do not know the true potency of deep learning.

",15935,,15935,,7/6/2018 9:27,7/6/2018 9:27,,,,0,,,,CC BY-SA 4.0 7048,1,,,7/6/2018 11:43,,1,38,"

I am using an LSTM model to predict the next XML markup from an input seed. I have trained my model on 1500 XML files, each generated randomly. I am wondering if there is a way to visualize the predicted results in the form of a graph, or whether it is even meaningful to do so, since we can visualize classification results, for example as in this
link

I have done some research on the Internet and found that there is a confidence measure that can be useful for the text prediction task.

I am a bit confused what to do with the text results that I got.

",10167,,,,,7/6/2018 11:43,How to visualize/interpret text prediction model results?,,0,0,,,,CC BY-SA 4.0 7049,1,7050,,7/6/2018 11:57,,3,1268,"

In order to learn about DP and RL, I chose to start a side project where I would train an AI to play a ""simple"" card game. I will be doing this using the DQN with replay memory.
The problem is, I can't get the intuition behind how to represent the input to the neural network..

About the game

It's a fairly simple 2-players game. There is a deck of 40 unique cards (4 types of cards, 10 numbered cards in each type).
Each player gets 4 cards and each turn a player must put a card on the table.
If a player puts a card and there is already a card with the same number on the table, the player wins both cards.
If for example a player plays Card 2 and on the table there is Cards 2, 3, 4, 5 then the player wins all those cards (sequence).
Cards won don't go back to the hand nor to the deck; they are just kept, like a score.
When the players have 0 cards in hand, another 4 cards are dealt to each one until the deck has 0 cards left, at which point we decide who won based on the number of cards eaten/won.

Question

As the input, I will be using the following:

  • Current cards in the AI hand (40 one-hot-encoded features?)
  • Current cards on the table (40 one-hot-encoded features?)
  • History of played cards (40 one-hot-encoded features?)

This would give 120 columns/features in each state.
I am wondering whether this is too much for the NN, or whether my input representation would be bad for the NN?
Should the features be represented as a (120,) vector or as a 3x40 matrix?

I am also wondering if it's a good idea to represent the current cards on the table as just 10 one-hot-encoded features, since the types of the cards don't matter and the same number can't appear twice on the table?

Thank you in advance.

",16720,,16720,,7/6/2018 12:18,7/6/2018 14:38,DQN input representation for a card game,,1,0,,,,CC BY-SA 4.0 7050,2,,7049,7/6/2018 14:38,,3,,"

120 inputs can be handled by a complex enough network. Dealing with high complexity is one of NN's strengths.

Using a (120,) vector or a (3,40) matrix is the same, they're still 120 inputs. Your binary encoding should work. Another option is a single (40,) vector, with 0 being ""still in deck"", 1 being ""in hand"", 2 being ""on table"", 3 being ""already played"".
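
A quick sketch of that single-vector encoding (numpy; the index lists are hypothetical card IDs for whatever is currently in each place):

import numpy as np

IN_DECK, IN_HAND, ON_TABLE, PLAYED = 0, 1, 2, 3

def encode_state(hand, table, played, n_cards=40):
    state = np.full(n_cards, IN_DECK, dtype=np.int64)
    state[hand] = IN_HAND
    state[table] = ON_TABLE
    state[played] = PLAYED
    return state

print(encode_state(hand=[0, 13, 25, 38], table=[7, 8], played=[1, 2]))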

If the types of cards are irrelevant, you could actually have a (3,10) matrix with counters of cards (1 in hand, 1 in table, 2 already played). You can try different approaches and see what works best.

",7496,,,,,7/6/2018 14:38,,,,10,,,,CC BY-SA 4.0 7051,1,7492,,7/6/2018 14:42,,4,644,"

I'm attempting to create an AI for a card game using reinforcement learning. The basics of the game are that you can have (theoretically) up to 35 cards in your hand, you can also have up to 35 cards 'in play', and so can your opponent. In normal play you would have ~6 cards in your hand and maybe ~3 each in play. There are roughly 300 unique cards in total.

How should I represent the game state for the input and how should I represent the action to take in the output?

",16724,,16724,,7/6/2018 15:02,8/8/2018 22:00,Representing inputs and outputs for a card game neural network,,1,2,,,,CC BY-SA 4.0 7052,2,,7044,7/6/2018 15:36,,6,,"

This is going backwards, but it kind of follows the logic of the arguments.

In terms of efficiency, I can see a few major problems with classical neural networks.

Data collection and preprocessing overhead

Large neural networks require a lot of data to train. The amount can vary depending on the size of the network and the complexity of the task, but as a rule of thumb it is usually proportional to the number of weights. For some supervised learning tasks, there simply isn't enough high-quality labelled data. Collecting large amounts of specialised training data can take months or even years, and labelling can be cumbersome and unreliable. This can be partially mitigated by data augmentation, which means "synthesising" more examples from the ones you already have, but it is not a panacea.
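
As a hedged illustration of data augmentation for images (one common way to do it, using Keras; the parameter values here are arbitrary examples, not recommendations):

    import numpy as np
    from keras.preprocessing.image import ImageDataGenerator

    # Placeholder data standing in for a real labelled image set.
    train_images = np.random.rand(100, 32, 32, 3)
    train_labels = np.random.randint(0, 2, size=(100,))

    # Synthesise extra examples by randomly rotating, shifting and flipping
    # the images you already have.
    augmenter = ImageDataGenerator(
        rotation_range=15,
        width_shift_range=0.1,
        height_shift_range=0.1,
        horizontal_flip=True,
    )
    train_flow = augmenter.flow(train_images, train_labels, batch_size=32)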

Training time vs. energy tradeoff

The learning rate is usually pretty small, so the training progress is slow. A large model that could take weeks to train on a desktop CPU can be trained in, say, two hours by using a GPU cluster which consumes several kW of power. This is a fundamental tradeoff due to the nature of the training procedure. That said, GPUs are getting increasingly efficient - for example, the new nVidia Volta GPU architecture allows for 15.7 TFLOPs while consuming less than 300 W.

Non-transferrability

Right now, virtually every different problem requires a custom neural network to be designed, trained and deployed. While the solution often works, it is kind of locked into that problem. For example, AlphaGo is brilliant at Go, but it would be hopeless at driving a car or providing music recommendations - it was just not designed for such tasks. This overwhelming redundancy is a major drawback of neural networks in my view, and it is also a major impediment to the progress of neural network research in general. There is a whole research area called transfer learning which deals with finding ways of applying a network trained on one task to a different task. Often this relates to the fact that there might not be enough data to train a network from scratch on the second task, so being able to use a pre-trained model with some extra tuning is very appealing.
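
As a rough sketch of the pre-trained-model-plus-tuning idea (Keras with an ImageNet model is used purely as an example; the new head and the class count are placeholders):

    from keras.applications import VGG16
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    # Reuse convolutional features learned on ImageNet and only train a new head.
    base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False                        # freeze the pre-trained weights

    x = GlobalAveragePooling2D()(base.output)
    outputs = Dense(10, activation='softmax')(x)       # 10 classes is just a placeholder
    model = Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy')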


The first part of the question is more tricky. Leaving purely statistical models aside, I haven't seen any prominent approaches to machine learning that are radically different from neural networks. However, there are some interesting developments that are worth mentioning because they address some of the above inefficiencies.

Neuromorphic chips

A bit of background first.

Spiking neural networks have enormous potential in terms of computational power. In fact, it has been proven that they are strictly more powerful than classical neural networks with sigmoid activations.

Added to that, spiking neural networks have an intrinsic grasp of time - something that has been a major hurdle for classical networks since their inception. Not only that, but spiking networks are event-driven, which means that neurons operate only if there is an incoming signal. This is in contrast to classical networks, where each neuron is evaluated regardless of its input (again, this is just a consequence of the evaluation procedure usually being implemented as a multiplication of two dense matrices). So spiking networks employ a sparse encoding scheme, which means that only a small fraction of the neurons are active at any given time.

Now, the sparse spike-based encoding and event-driven operation are suitable for hardware-based implementations of spiking networks called neuromorphic chips. For example, IBM's TrueNorth chip can simulate 1 million neurons and 256 million connections while drawing only about 100 mW of power on average. This is orders of magnitude more efficient than the current nVidia GPUs. Neuromorphic chips may be the solution to the training time / energy tradeoff I mentioned above.

Also, memristors are a relatively new but very promising development. Basically, a memristor is a fundamental circuit element very similar to a resistor but with variable resistance proportional to the total amount of current that has passed through it over its entire lifetime. Essentially, this means that it maintains a "memory" of the amount of current that has passed through it. One of the exciting potential applications of memristors is modelling synapses in hardware extremely efficiently.

Reinforcement learning and evolution

I think these are worth mentioning because they are promising candidates for addressing the problem of non-transferrability. These are not restricted to neural networks - being reward-driven, RL and evolution are theoretically applicable in a generic setting to any task where it is possible to define a reward or a goal for an agent to attain. This is not necessarily trivial to do, but it is much more generic than the usual error-driven approach, where the learning agent tries to minimise the difference between its output and a ground truth. The main point here is about transfer learning: ideally, applying a trained agent to a different task should be as simple as changing the goal or reward (they are not quite at that level yet, though...).

",16101,,-1,,6/17/2020 9:57,7/6/2018 15:36,,,,5,,,,CC BY-SA 4.0 7054,1,,,7/6/2018 21:26,,3,598,"

Here's the general algorithm of maximum entropy inverse reinforcement learning.

This uses a gradient descent algorithm. The point that I do not understand is there is only a single gradient value $\nabla_\theta \mathcal{L}$, and it is used to update a vector of parameters. To me, it does not make sense because it is updating all elements of a vector with the same value $\nabla_\theta \mathcal{L}$. Can you explain the logic behind updating a vector with a single gradient?

",16678,,2444,,4/1/2020 13:02,4/1/2020 13:02,What does the notation $\nabla_\theta \mathcal{L}$ mean?,,1,0,,,,CC BY-SA 4.0 7055,2,,7054,7/7/2018 2:07,,4,,"

This is standard backpropagation. The gradient term you see is in fact a vector of partial derivatives where each element is the partial derivative of the log-likelihood with respect to each element of the parameter vector $\theta$. Therefore, it has the same dimensionality as $\theta$. Each element of the parameter vector is then updated with the respective term in the vector of partial derivatives, which are generally not the same.
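
Written out component-wise (this is just the standard gradient-ascent update on the log-likelihood, not something specific to this paper), the update is

$$\theta \leftarrow \theta + \alpha \nabla_\theta \mathcal{L}, \qquad \text{i.e.} \qquad \theta_i \leftarrow \theta_i + \alpha \frac{\partial \mathcal{L}}{\partial \theta_i} \quad \text{for each component } i,$$

so every component of $\theta$ receives its own partial derivative rather than a single shared scalar.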

",16101,,2444,,4/1/2020 13:02,4/1/2020 13:02,,,,0,,,,CC BY-SA 4.0 7056,2,,6899,7/7/2018 2:33,,2,,"

Dividing the training data and piping the divisions into networks for independent training, although possibly an effective workaround for specific cases, is not a robust solution for good fitting across a wide range of input data sets.

As suggested in the comment by varshaneya, over-fitting can be a result of unsatisfactory regularization meta-parameterization, such as a poor setting of the λ regularization parameter in a StackGAN. All meta-parameters used to tune a stacked architecture should be scrutinized to determine whether their settings could lead to over-fitting. A few can be eliminated up front. For instance, too high a learning rate at any level of any of the networks in the design can reduce convergence probability, but is not a likely cause of over-fitting.

H. Hutson, S. Geva, and P. Cimiano wrote, in their 2017 submittal to the 13th NTCIR Conference on Evaluation of Information Access Technologies, ""Ensemble methods in machine learning involve the combination of multiple classifiers via a variety of methods such as bagging (averaging or voting), boosting, and stacking, to increase performance and reduce over-fitting."" Yet bagging has not produced robust responses to differing data sets in our experience, even when the data are normalized, filtered to reduce noise levels, and limited in redundancy.

Zhi-Hua Zhou and Ji Feng (National Key Laboratory for Novel Software Technology, Nanjing University, China) wrote, ""To reduce the risk of over-fitting, class vector produced by each forest is generated by k-fold cross validation."" Reading their paper, Deep Forest, may give you some causes to evaluate.
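
As a minimal, generic illustration of k-fold cross validation (using scikit-learn with placeholder data; this is not the Deep Forest implementation itself):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X = np.random.rand(100, 5)            # placeholder features
    y = np.random.randint(0, 2, 100)      # placeholder labels

    scores = []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = LogisticRegression(solver='lbfgs').fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[val_idx], y[val_idx]))

    print(np.mean(scores))   # averaged held-out accuracy as an over-fitting check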

Over-fitting is usually the application of too sophisticated a model to which data is fit. In the world of activated networks, excessive sophistication can be as simple as an excessive number of network layers in one or more of the stacked networks.

Feature extraction up front may be needed to remove complexity from the input which is not only unnecessary but counterproductive to generalization and thus the generation of useful output.

",4302,,,,,7/7/2018 2:33,,,,0,,,,CC BY-SA 4.0 7057,1,7064,,7/7/2018 3:50,,2,141,"

I am currently implementing the paper Active Object Localization with Deep Reinforcement Learning in Python. While reading about the reward scheme I came across the following:

Finally, the proposed reward scheme implicitly considers the number of steps as a cost because of the way in which Q-learning models the discount of future rewards (positive and negative).

How would you implement this "number of steps" cost? I am keeping track of the number of steps that have been taken; would it therefore be best to use an exponential function to discount the reward at the current time step?

If anyone has a good idea or knows the standard in regard to this I would love to hear your thoughts.

",14913,,2444,,9/28/2020 10:14,9/28/2020 10:14,How should I take into consideration the number of steps in the reward function?,,1,0,,,,CC BY-SA 4.0 7058,2,,7044,7/7/2018 3:58,,1,,"

Replacing Neural Nets

There may exist new algorithms that have the potential to replace neural nets. However, one of the characteristics of neural nets is that they employ simple elements arranged in geometric patterns, each with low demands on computing resources.

Artificial neurons can be run in parallel (without CPU time sharing or looping) by mapping the computations to DSP devices or other parallel computing hardware. That the many neurons are essentially alike is thus a strong advantage.

What Would We Be Replacing?

When we consider algorithmic replacements to neural nets, we imply that a neural net design is an algorithm. It is not.

A neural net is an approach to converging on a real time circuit to perform a nonlinear transformation of input to output based on some formulation of what is optimal. Such a formulation may be the minimization of a measure of error or disparity from some defined ideal. It may be a measure of wellness that must be maximized.

The source of the fitness determination for any given network behavior may be internal. We call that unsupervised learning. It may be external, which we call supervised when the external fitness information is coupled with input vectors in the form of desired output values, which we call labels.

Fitness may also originate externally as a scalar or vector not coupled with the input data but rather real time, which we call reinforcement. Such requires re-entrant learning algorithms. Net behavioral fitness may alternatively be evaluated by other nets within the system, in the case of stacked nets or other configurations such as Laplacian hierarchies.

The selection of algorithms has little to do with comparative intelligence once the mathematical and process designs are selected. Algorithm design is more directly related to minimizing demands for computing resources and reducing time requirements. This minimization is hardware and operating system dependent too.

Is a Replacement Indicated?

Sure. It would be better if networks were more like mammalian neurons.

  • Sophistication of activation
  • Heterogeneity of connection patterns
  • Plasticity of design, to support meta-adaptation
  • Governed by many dimensions of regional signaling

By regional signaling is meant the many chemical signals beyond signal transmission across synapses.

We can even consider going beyond mammalian neurology.

  • Combining parametric and hypothesis-based learning
  • Learning of the form employed when microbes pass DNA

Neural Net Efficiency

Efficiency cannot be quantified in some universal scale as temperature can be quantified in degrees Kelvin. Efficiency can only be quantified as a quotient of some measured value over some theoretical ideal. Note that it is an ideal, not a maximum, in the denominator. In thermodynamic engines, that ideal is the rate of energy input, which can never be fully transferred to the output.

Similarly, neural nets can never learn in zero time. A neural net cannot achieve zero error over an arbitrarily long time in production either. Therefore information is in some ways like energy, a concept investigated by Claude Shannon of Bell Labs during the dawn of digital automation, and the relationship between information entropy and thermodynamic entropy is now an important part of theoretical physics.

There can be no bad learning efficiency or good learning efficiency. There can be neither bad performance nor good performance, if we wish to think in logical and scientific terms — only relative improvement of some system configuration with respect to some other system configuration for a very specific set of performance scenarios.

Therefore, without an unambiguous specification of the two hardware, operating system, and software configurations and a fully defined test suite used for relative evaluation, efficiency is meaningless.

",4302,,,,,7/7/2018 3:58,,,,0,,,,CC BY-SA 4.0 7060,2,,5577,7/7/2018 5:00,,0,,"

The parsing of linguistic units from streams of speech by the human brain is an existing system that can be studied, and it is a legitimate proof of concept. A working brain adapts to changes in volume, tone frequency, information rate, rhythm, accent, dialect, and background sound as it parses sequences of vocal sounds originating from the initial tone and transient processing of signals from the vestibulocochlear nerve.

The later evolution of written symbol recognition from signals originating from the optic nerve is a related proof of concept.

Simple adaptive parsing is running in the lab as a part of social networking automation, but only for limited sets of symbols and sequential patterns. It does scale without reconfiguration to an arbitrarily large base of linguistic units, prefixes, endings, and suffixes, limited only by our hardware capacities and throughput.

The existence of regular expression libraries was helpful to keep the design simple. We use the PCRE version 8 series library fed by an anisotropic form of DCNN for feature extraction from a window moving through the input text, with a configurable window size and move increment size. Heuristics applied to input text statistics gathered in a first pass produce a set of hypothetical PCREs arranged in two layers.

Optimization occurs to apply higher probabilistic weights to the best PCREs in a chaotically perturbed text search. It uses the same gradient descent convergence strategies used in NN back propagation in training. It is a naive approach that does not make assumptions like the existence of backtraces, files, or errors. It would adapt equally to Arabic messages and Spanish ones.

The output is an arbitrary directed graph in memory, which is similar to a dump of an object oriented database. JSON hierarchies were too restrictive for our purposes, since the directed graphs we were parsing from a serial text stream had cases of multiple incoming directed edges into vertices and some circular edge sequences.

قنبلة -> dangereux -> 4anlyss
bomba -> dangereux
ambiguïté -> 4anlyss -> préemption -> قنبلة

Although a re-entrant algorithm for a reinforcement version is stubbed out and the wellness signal is already available, other work preempted furthering the adaptive parser or working toward the next step to use the work for natural language:

Matching the directed graphs to persisted directed graph filters representing ideas, which would mimic the idea recollection aspect of language comprehension.

",4302,,,,,7/7/2018 5:00,,,,0,,,,CC BY-SA 4.0 7061,1,7071,,7/7/2018 5:43,,3,1802,"

I am very new to machine learning. I am following the course offered by Andrew Ng. I am very confused about how we train our neural network for multi-class classification.

Let's say we have $K$ classes. For $K$ classes, we will be training $K$ different neural networks.

But do we train one neural network at a time for all features, or do we train all $K$ neural networks at a time for one feature?

Please, explain the complete procedures.

",16734,,2444,,7/15/2021 13:28,7/15/2021 13:28,What is the general procedure to use and train neural networks for multi-class classification?,,2,0,,,,CC BY-SA 4.0 7062,2,,7061,7/7/2018 6:30,,2,,"

Let us suppose that you are training a neural network to classify images of vehicles. The input vector, an image of the ""vehicle"", will be a 2D array of pixels. This undergoes several transformations at each layer of the neural network; the last layer of the neural network produces another vector whose dimensions are smaller than those of the original image vector.

So the network is mapping images to vectors in a high dimensional space. In order to classify the images, it is now sufficient to classify the vectors the network produces for the corresponding images; you can do this with a simple ""linear"" classifier using a softmax layer.

So all the layers of the network except the last one transform the image representation into a ""vector"". This vector is classified by a linear softmax classifier in the last layer of the neural network.

",15935,,,,,7/7/2018 6:30,,,,0,,,,CC BY-SA 4.0 7064,2,,7057,7/7/2018 6:41,,3,,"

How would you implement this "Number of Steps" cost?

What the paper is referring to is the reward discounting process which is a standard way of formulating RL problems, either continuous ones, or episodic ones where the goal is to complete a task in the least time (in the episodic version, a fixed cost per time step will also achieve this).

As such, this usually is implemented in the formulation of the value function calculations. The discount factor is usually represented as gamma, $\gamma$.

For Q-learning, the factor should be in the TD target calculation:

$$G_{t:t+1} = R_{t+1} + \gamma \max_{a'}Q(S_{t+1},a')$$

For Monte Carlo control, the factor appears more like this in the calculation of a return:

$$G_t = \sum_{k=0}^{T-t} \gamma^k R_{t+k+1}$$

would it be best to use an exponential functions to discount the reward at the current time step?

Essentially that is what normal discounting is - an exponential decay of future reward. But if you have implemented "normal" Q-learning from equations like those above, it should already be there.
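
As a minimal sketch of how those two expressions might be computed in code (the reward list and Q-values are placeholders):

    import numpy as np

    def discounted_return(rewards, gamma=0.99):
        # G_t = sum_k gamma^k * R_{t+k+1}, computed backwards over the rewards after time t.
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    def td_target(reward, next_q_values, gamma=0.99):
        # Q-learning target: R_{t+1} + gamma * max_a' Q(S_{t+1}, a').
        return reward + gamma * np.max(next_q_values)

    print(discounted_return([1.0, 0.0, 0.0, 5.0]))
    print(td_target(1.0, np.array([0.2, 0.7, 0.1])))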

",1847,,1847,,9/28/2020 6:32,9/28/2020 6:32,,,,0,,,,CC BY-SA 4.0 7065,2,,5322,7/7/2018 8:33,,3,,"

To add to Foivos's answer,

Convolutional Neural Networks are shift-invariant. Fukushima introduced this to his Neocognitron. There have also been attempts to introduce scale-invariance to CNNs: https://arxiv.org/abs/1411.6369

Also, CNNs use structural characteristics as prior knowledge.

And neural networks are locally smooth.

It is not perfect, but neural networks are incorporating a lot of prior knowledge.

",16737,,,,,7/7/2018 8:33,,,,0,,,,CC BY-SA 4.0 7067,2,,5769,7/7/2018 14:41,,0,,"

Refer to "Local Connectivity" section in here and slide 7-18.

"Receptive Field" hyperparameter of filter is defined by height & width only, as depth is fixed by preceding layer's depth.

NOTE that "The extent of the connectivity along the depth axis is always equal to the DEPTH of the input volume" -or- DEPTH of activation map (in case of later layers).

Intuitively, this must be due to the fact that image channel data are interleaved, not planar. This way, applying a filter can be achieved simply by column-vector multiplication.

NOTE that the Convolutional Network learns all the filter parameters (including the depth dimension), and each filter has a total of "h * w * input_layer_depth + 1 (bias)" parameters.
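
As a worked example (not from the original answer): a single 3x3 filter applied to a 3-channel RGB input learns 3 * 3 * 3 + 1 = 28 parameters.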

",16741,,32410,,1/18/2021 9:38,1/18/2021 9:38,,,,0,,,,CC BY-SA 4.0 7069,2,,5107,7/7/2018 15:26,,0,,"

The intention of the referenced text is to reason out the disadvantage of an equivalent merged single convolution layer compared to multiple [CONV -> RELU]*N layers.

In the given scenario, if 2 layers of 3x3 filters were to be replaced by an equivalent single layer then this equivalent layer would need a filter with a receptive field of size 5x5.

Similarly, an equivalent layer filter would need its receptive field to be of size 7x7 to compress 3 layers of 3x3 filters. Note that the most obvious disadvantage would be missing out on modeling non-linearity.
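
As a worked comparison (assuming C input and output channels throughout, and ignoring biases): three stacked 3x3 layers use 3 * (3 * 3 * C * C) = 27C^2 weights, whereas the single equivalent 7x7 layer uses 7 * 7 * C * C = 49C^2 weights, on top of losing the two intermediate non-linearities.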

",16741,,32410,,10/2/2021 22:37,10/2/2021 22:37,,,,0,,,,CC BY-SA 4.0 7071,2,,7061,7/7/2018 23:36,,2,,"

Let's say we have $K$ classes. For $K$ classes, we will be training $K$ different neural networks.

No, you still train one network.

With binary classification tasks, where you have only two mutually exclusive categories, like "yes/no" or "true/false", you can get away with a single output node with a sigmoid activation. The output of the sigmoid is interpreted as indicating one category for values $> 0.5$ and the other for values $\leq 0.5$.

With multi-class classification, you have $K$ outputs (one for each category). The problem, in this case, is that if the network gets the class wrong, in general, you cannot decide in one step which one of the other $K - 1$ categories is the correct one. So, the output is actually passed through an extra softmax layer, which outputs probabilities for each class.

But do we train one neural network at a time for all features, or do we train all $K$ neural networks at a time for one feature?

You present all features for each training example to the network at the same time. So, for $N$ features you have $N$ input nodes, and you feed all of them into the neural network.
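
As a minimal sketch of a single network handling $K$ classes (Keras is used here only as an example framework; the layer sizes and the values of $N$ and $K$ are placeholders):

    from keras.models import Sequential
    from keras.layers import Dense

    N, K = 20, 5   # N input features, K mutually exclusive classes (placeholders)

    model = Sequential([
        Dense(64, activation='relu', input_shape=(N,)),   # the hidden layer sees all N features at once
        Dense(K, activation='softmax'),                    # one output per class; probabilities sum to 1
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])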

",16101,,2444,,7/15/2021 13:27,7/15/2021 13:27,,,,5,,,,CC BY-SA 4.0 7072,2,,2810,7/7/2018 23:47,,0,,"

Selecting a Scenario for Comprehension's Sake

Movement through a modeled reality is an area that was under development when I entered the research community. It is not a problem though. It is a nearly infinite set of problems, an area of research of interest to robotics engineering and gaming.

The solution cannot be very specific when so many details are left out of the scenario definition. Although I am fine with specifying a general solution, pure math and system architecture is not met with much enthusiasm by most of those interested in answers in this exchange, so I will make some assumptions.

Those with good mathematical and systems design backgrounds will be able to extrapolate the general approach from the more specific scenario in this answer. I'll place pertinent general theory inline to facilitate that generalization.

Narrowing the Specification Incompletely

  • The vehicle for movement is not specified, so I will assume the participants are knights on horses. Linear movements in Cartesian coordinates as in a CAM move command for a CNC machine head is an unlikely use of reinforcement, and aircraft lift can stall, complicating the problem. Horses were picked from among the remaining list of typical vehicles (cars, bicycles, on foot, and horses).
  • Time constraints were not specified, so it will be assumed there are none except the time to reach the goal cannot be infinite unless there is no available path at all to the destination.
  • The nature of the obstacles and their movements were not specified, so I will assume they are solid and that only other knights move on their own horses.
  • Risks associated with making contact with an obstacle was not specified, so I will assume the horses will not run into one another or stationary objects and will step over small ground objects or jump over slightly larger ones if approaching at sufficient speed.
  • What will happen when a collision is about to occur was not specified, so I will assume that the horse will decelerate.
  • What will happen if a horse passes under an obstacle will be assumed the hospitalization or death of the knight, since they cannot easily duck in their armor, leaving the objective unmet.
  • The capabilities of the participants is not specified, so I will assume that the knights can see stereoscopically, are equipped with compasses, can turn their head and eyes, can control the reigns in the usual horseback riding way, can urge the horse forward with the vocal sound, ""Ya!"" and/or a light poke with both feet, hear and detect the volume of an auditory beacon, and sense acceleration in 3D to detect horse behavior.
  • The specs of the other participants is not given, so I will assume the knights along with their horses are of equal volume.
  • The obstacle statistics are not given, so I will assume the objects to be chaotically sized, shaped, colored, and placed such that their total volumetric space consumption is 1% and the mean object volume is the same as the volume of a knight and her or his associated horse.
  • The scene is lit from one distant source so that shadows and shadowing are present and the entire scene is bounded by constants in altitude, longitude, and latitude dimensions.
  • The coordinate system is not specified; since the system is modeling equatorial terrestrial space, latitude and longitude coordinates are not substantially anisotropic.
  • The start and end positions are of specific latitude and longitude and are sufficiently distant from one another so that line of sight between the start and end positions is extremely unlikely.
  • How the end position will be known is not specified, so I will assume the end position is equipped with an auditory beacon.
  • We will assume a zero sum game, where the achievement of the goal is not mutually exclusive, and we will assume collaboration linguistically is not possible, although collaborative strategy may emerge out of learned behavior organically. (That's an entire other topic.)

Mathematical precision is missing in the above definition, with several parameters only roughly defined, and the chaotic sizing, shaping, and positioning becomes a realistic challenge for reinforcement software engineering for the general scenario.

Conversely, engineering a solution for this specific case within the field of VR motion with the application of reinforcement concepts can proceed without mathematical abstractions that require much advance (and advanced) study and laboratory experience.

Generalization

The above adequately defines system E (environment) in combination with system V (vehicles) of which there are N, one for each participant. Discrete changes to command signals C are multidimensional and A (acquired samples) are also multidimensional.

Acquisition Channels

  • Sampled audio vectors of spectral distributions in musical half tones (frequency ratio of 2^(1/24)), sampled in frames of constant period.
  • Sampled visual matrices in yuv420p form, sample
  • Sampled tactile vectors of 3D acceleration force

The audio and tactile vectors are maintained at constant levels as inputs to the base neural net until the next vector is acquired. The video matrices are fed into a convolutional neural net as is customary. (See the work of Google researchers Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen in August 2016 for their approach and references for background research that guided them.)

Control Channels

  • Head position relative to the vehicle direction
  • Left reign position
  • Right reign position
  • Activation of, ""Ya!"" vocalization
  • Activation of the light poke

The output layer of the base neural net must be a real number with the appropriate range for all five, since the volume of the vocalization and the lightness of the poke have informational meaning to the motor control of the vehicle.

Real Time Learning

Real time learning requires at least one model of wellness to provide the reinforcement signal to the base behavioral network. In many cases, as suggested in early cybernetics work prior to the advent of digital systems, convergence requires more than one wellness variable.

Vector control of reinforcement is not well developed in open software yet, however the concepts of multidimensional gradients (Jacobian and Hessian matrices) are standard elements in artificial network theory and can be extended from matrices to cubes. Any intermediate calculus text will provide the theory applicable to gradient descent with a curved surface.

Performing the back-propagation effectively when more than one degree of freedom is present is an interesting problem that has certainly been researched and has been deployed to production in commercial and military applications.

Such cannot be described here because the mechanics of back-propagation with vector reinforcement signaling of which I am aware is currently either company confidential or classified. As with much technology, it may be released for publication in the future as open source code emerges independently over time.

These are probably the best choices for the first and second channels of learning reinforcement signaling to implement and tune. I doubt (but have only intuition to offer as reasoning) that only one channel will produce very effective reinforced learning. The base network necessary for this scenario will be too deep to train without both types of proximity estimation: (1) beacon and (2) nearest obstacle.

Wellness of Behavior Modelling

The simplest first cut at modelling participant behavioral wellness (proficiency in searching for the beacon) is the rate of change in beacon volume. It is a distorted estimate of proximity to the beacon, but systematically so. The distortions in the correlation between the differential of volume with respect to position (not time) and actual proximity are related to the effects of obstacles on sound.

Wellness of Position Modelling

Further development could add filtering out of transient attenuation created by close proximity to an object, which would require two additional neural nets, (a) to detect closest object proximity from patterns in audio and visual inputs and (b) to use acceleration to approximate change in position in the latitude-longitude plane and correlate filtering of audio volume changes with movement to better filter out transient changes not related to proximity of the beacon.

The tone of the beacon may be added as a third to improve rate of learning.

Crash level accelerations can be added over multiple game plays as a fourth.

Additional Determinations

The determination of initial state and meta-parameters for the base neural network and the connectivity between the models and the reinforcement signal to the base network is beyond the scope of this answer, requiring experimentation and possibly months of intensive analysis for this (or any) semi-specific case.

The only known systems that handle general cases without defining environment E, participant P, and their quantity N are DNA based systems that have developed such general adaptive capabilities over billions of years.

",4302,,4302,,10/15/2018 23:38,10/15/2018 23:38,,,,0,,,,CC BY-SA 4.0 7073,1,,,7/8/2018 0:25,,4,131,"

A neural network is usually programmed to learn from datasets to solve a specific problem. Essentially, they perform non-linear regression.

Could a neural network be programmed to receive input from a human, like a terminal, to begin to grow and learn (similar to how a child learns)?

A program that neither knows its purpose nor specific data sets but is given enough information to learn based on the input, ponder the input, and ask questions. A child discovers their purpose (in destiny based philosophy) through experience. Thus, could an AI be created that would learn its purpose over time?

It would grow both by continued development, maybe adding extensions for image recognition, speech analysis, etc., and through user interaction, eventually learning "moral imperatives" or simply the do's and don'ts and how to interact with data.

A case scenario would be a Question & Answer session with the neural network and a large data set, where the human operator knows the answers. At first, both the question and the answer are supplied to the neural network, giving it the ability to find the supplied answer through deep learning, with a guaranteed confidence score of 1 - the closer it gets to the answer as it ponders the question, the more it "learns".

The next step is supplying the question and waiting for the answer. The human still knows these answers but is testing the "learning machine" to see if it is truly learning and not "repeating the answer". The answer is supplied by the machine, and the human returns with a percentage indicating how right the machine is (hopefully, eventually, matching its confidence score), and after a certain amount of failure provides the right answer to the machine to repeat the first step and improve learning.

The last step is being able to have the machine answer the question with the human not knowing the solution, thus completing the learning cycle. The human would test the solution and report the results to the machine and the machine would adapt the process and continue learning. However, this time it would begin learning from a data set of results. Hopefully learning "data mining" during its question and answer session.

",15294,,2444,,12/12/2021 17:25,12/12/2021 17:25,Could an AI be built to learn based of interaction with a human?,,4,0,0,,,CC BY-SA 4.0 7074,2,,7073,7/8/2018 3:32,,2,,"

Technological advancement has historically been measured by the processing speed of computing machines. Cognitive behavioral psychology is proving able to correct human processing disorders such as ADHD, anxiety disorders, addiction, and other psychological disorders that prevent humans from normal interactions and social learning. Cognitive processing speed is very different from the speed per second of programmed computations. Just as humans are learning that a new program for energetic children works better than amphetamines, we will learn a new way to teach AI. AI, however, is capable of exponentially growing computation speeds. AI is indeed guiding humanity to base the direction of machine learning along the human path to enlightenment. From providing a computer opponent for chess to face recognition robotics, human learning is based on our amazingly complex brain power, and we will always be necessary for the advancement of technology.

Parkaire Consultants, (2012, February 24). Cognitive Processing Speed. Retrieved July 7, 2018, from http://parkaireconsultants.com/cognitive-processing-speed/

Staughton, J. (2018, June 21). The Human Brain vs. Supercomputers... Which One Wins? » Science ABC. Retrieved July 7, 2018, from https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html

",16751,,,,,7/8/2018 3:32,,,,1,,,,CC BY-SA 4.0 7075,2,,7042,7/8/2018 6:54,,1,,"

It's certainly possible to create AI systems that ask questions. Various forms of expert systems and diagnostic support system applications already do that. As to the question of whether or not they are curious, that's one I'll leave to the philosophers. But it is absolutely possible to create an AI that attempts to reason out a solution to a problem, find that it is unable to generate an acceptable answer, and then prompts the user for more information.

One context where this can be done is Abductive Inference systems for medical decision making. I'd refer you to Abductive Inference Models for Diagnostic Problem Solving by Reggia and Peng, or Computer Assisted Medical Decision Making 1 by Reggia and Tuhrim for more on that specific point.

",33,,,,,7/8/2018 6:54,,,,0,,,,CC BY-SA 4.0 7076,2,,7073,7/8/2018 7:00,,3,,"

In principle, yes, what you are proposing can be done. The exact details of how to do it are an open research question. The details would also depend on exactly what your goals for the system are. If you're just trying to build some domain specific system that learns a very specific kind of knowledge, then that's probably going to be easier than building an AGI that learns like a child does.

What I will suggest, although this is not proven, is that building a really powerful system of this sort will probably require more than deep learning. I would also caution anybody interested in AI against thinking that deep learning is the ""be all, end all"" of AI techniques. My guess is that doing this well will ultimately require a multi-agent system, maybe something like Minsky's ""Society of Mind"" approach, or a Blackboard model, with collaborating agents, each specialized for various aspects of intelligence. My feeling is that you will, indeed, need deep learning for classification / pattern matching, but possibly also things like Case Based Reasoning, K-Lines, a Semantic Network, Rule Learning, BDI, and other techniques, working together.

",33,,33,,7/9/2018 1:01,7/9/2018 1:01,,,,2,,,,CC BY-SA 4.0 7077,2,,5322,7/8/2018 7:06,,1,,"

It kinda depends on how exactly you define knowledge, and what you believe about what the weights in a trained NN model really represent. But to answer this question in the most straightforward possible way (hopefully without sounding glib), then yes, a NN can be pre-trained, and then you can take that model and apply additional training to it, so in a sense, it is using ""prior knowledge"".

If, OTOH, knowledge means something a little different to you, and you're thinking about the kind of knowledge that's encoded in a semantic network, or a conceptual graph, or something of that nature, then I don't know - offhand - of any direct way to integrate that into an ANN. What you might be able to do is combine the NN with a different kind of reasoner that reasons over the semantic network / conceptual graph, and then integrate the results. AFAIK, the best way to do that is an unsolved research problem.

",33,,,,,7/8/2018 7:06,,,,0,,,,CC BY-SA 4.0 7082,1,7083,,7/8/2018 12:46,,2,191,"

I have a multi-agent environment where agents are trying to optimise the overall energy consumption of their group. Agents can exchange energy between themselves (actions for exchange of energy include - request, deny request, grant), which they have produced from renewable sources and is stored in their individual batteries. The overall goal is to reduce the energy used from non-renewable sources.

All agents have been built using DQN. All (S,A) pairs are stored in a replay memory which are extracted when updating the weights.

The reward function is modelled as such — if at the end of the episode the aggregate consumption of the agent group from non-renewable sources is lesser than the previous episode, all agents are rewarded with +1. If not, then -1. An episode (iteration) consists of 100 timesteps after which the reward is calculated. I update the weights after each episode.

The reward obtained at the end of the episode is used to calculate the error for ALL (S,A) pairs in the episode i.e. I am rewarding all (S,A) in that episode with the same reward.

My problem is that agents are unable to learn the optimal behavior to reduce the overall energy consumption from non-renewable sources. The overall consumption of the group is oscillating, i.e. sometimes increasing and sometimes decreasing. Does it have to do with the reward function? Or with Q-learning, since the environment is dynamic?

",11584,,,,,7/8/2018 15:21,Convergence in multi-agent environment,,1,0,,,,CC BY-SA 4.0 7083,2,,7082,7/8/2018 15:03,,3,,"

Does it have to do with the reward function?

This seems likely to me. You have chosen a reward that is unusual in that it cross-links episodes. It is not really a reinforcement problem to optimise behaviour with respect to results of previous episode behaviour in this way. This might be an option for an evolutionary fitness context, if you have competing teams against the same environment in a tournament style selection.

Reinforcement learning should really take as direct measure of your goal as you can construct. In this case you want to minimise some scalar quantity, and that is an obvious candidate as a negative reward. So the reward should be the negative of the total non-renewable energy consumption. The maximum possible value in theory for any single episode would be zero.

You may still have problems with a multi-agent setup oscillating using Q-learning, it will depend on how much each agent views the full, relevant state, and whether you are training several distinct agents at the same time (more likely to oscillate), or a single type of agent with multiple instances in each environment (less likely to oscillate, but still can if it has too blinkered a view of the environment as experienced by the other agents). But having a single shared goal with shared reward like you do here should in theory help with stability.

",1847,,1847,,7/8/2018 15:21,7/8/2018 15:21,,,,4,,,,CC BY-SA 4.0 7084,2,,7073,7/8/2018 18:41,,2,,"

It might be possible, but, in my opinion, it won't be very successful, given that you need to somehow specify a purpose, even if that purpose is something like trying to be like humans. The most important thing of an intelligent machine is that it follows a goal, that is the very essence of intelligence.

",16715,,2444,,6/25/2019 17:29,6/25/2019 17:29,,,,0,,,,CC BY-SA 4.0 7086,1,7087,,7/8/2018 19:09,,2,147,"

More informations on the card game I'm talking about are in my last question here: DQN input representation for a card game

So I was thinking about the output of the q neural network and, aside from which card to play, I was wondering if the agent can announce things.

Imagine you have the current hand: 2, 4, 11, 2 (The twos are different card type).
When you're playing the game and you get dealt a hand like this, you have to announce that you have the same number twice (called Ronda) or thrice (called Tringa) before anyone plays a card on the table. Lying about it gets you a penalty.

Could a DQN handle this? I don't know if adding ""Announcing a Ronda/Tringa"" as an action would actually help. I mean, can this be modeled for the NN, or should I just automate this and spare the agent having to announce it every time?

",16720,,,,,7/8/2018 20:47,Can DQN announce it has things in its hand in a card game?,,1,7,,,,CC BY-SA 4.0 7087,2,,7086,7/8/2018 20:47,,4,,"

The simplest thing to do when you make your first implementation of the agent is to automate decisions like this, in order to keep representations and decisions simple.

However, if you want to explore tactics surrounding declaration, then I think the following applies:

  • There should be an initial round of actions where the agent may get to decide whether or not to declare a Ronda, based on the cards it holds. These will be different action choices to playing cards, so you would need to alter your action representation to include those choices. Only allow action choices which are valid, so if it is not valid to declare a Ronda or a Tringa when a player does not have one, then the player does not get to make that choice.

  • You may want to add a state feature ""has a Ronda"" and ""has a Tringa"" for the agent's player, to help with the action decision.

  • You should also add a state feature for each player according to whether they declared a Ronda or a Tringa.

  • Rather than have the agent learn to detect a lie, given your comment that all cards are played so it is easy to tell (there are only 8 cards total in play by the end), then I would just assume lies are automatically found out and include that in the game engine. In other words, the penalty is always paid.

  • The interesting question is then whether withholding the declaration can give a tactical advantage when the round is being played (because your opponent knows less about your hand), and whether that advantage offsets the inevitable penalty. This might not be true in your card game, but could be true in games with similar choices.

Could a DQN handle this?

A DQN is maybe going to struggle with partial information in this game. The opponent's cards are hidden from the network, but could have a non-random influence on the opponent's choices of action. It is possible that you will need to investigate agents that can solve POMDPs to get the best player.

I don't know for certain though. It depends on how much tactical advantage there is in having concealed cards, or how much that is just luck that plays out much the same whether you know the opponent's cards or not. If there is strategy, and ways to determine/guess what the opponent holds based on your cards and their actions so far, this is more like a POMDP.

",1847,,,,,7/8/2018 20:47,,,,1,,,,CC BY-SA 4.0 7088,1,7089,,7/9/2018 0:06,,24,10866,"

I choose the activation function for the output layer depending on the output that I need and the properties of the activation function that I know. For example, I choose the sigmoid function when I'm dealing with probabilities, a ReLU when I'm dealing with positive values, and a linear function when I'm dealing with general values.

In hidden layers, I use a leaky ReLU to avoid dead neurons instead of the ReLU, and the tanh instead of the sigmoid. Of course, I don't use a linear function in hidden units.

However, the choice for them in the hidden layer is mostly due to trial and error.

Is there any rule of thumb of which activation function is likely to work well in some situations?

Take the term situations as general as possible: it could be referring to the depth of the layer, to the depth of the NN, to the number of neurons for that layer, to the optimizer that we chose, to the number of input features of that layer, to the application of this NN, etc.

The more activation functions I discover the more I'm confused in the choice of the function to use in hidden layers. I don't think that flipping a coin is a good way of choosing an activation function.

",16199,,44999,,4/25/2021 16:03,4/25/2021 16:03,How to choose an activation function for the hidden layers?,,3,0,,,,CC BY-SA 4.0 7089,2,,7088,7/9/2018 1:44,,16,,"

It seems to me that you already understand the shortcomings of ReLUs and sigmoids (like dead neurons in the case of plain ReLU).

You may want to look at ELU (exponential linear units) and SELU (self-normalising version of ELU). Under some mild assumptions, the latter has the nice property of self-normalisation, which mitigates the problem of vanishing and exploding gradients. In addition, they propagate normalisation - i.e., they guarantee that the input to the next layer will have zero mean and unit variance.

However, it would be incredibly difficult to recommend an activation function (AF) that works for all use cases, although I believe that SELU was designed so that it would do the right thing with pretty much any input.

There are many considerations - how difficult it is to compute the derivative (if it is differentiable at all!), how quickly a NN with your chosen AF converges, how smooth it is, whether it satisfies the conditions of the universal approximation theorem, whether it preserves normalisation, and so on. You may or may not care about some or any of those.

The bottom line is that there is no universal rule for choosing an activation function for hidden layers. Personally, I like to use sigmoids (especially tanh) because they are nicely bounded and very fast to compute, but most importantly because they work for my use cases. Others recommend leaky ReLU for the input and hidden layers as a go-to function if your NN fails to learn. You can even mix and match activation functions to evolve NNs for fancy applications.

At the end of the day, you are probably going to get as many opinions as there are people about the right choice of activation function, so the short answer should probably be: start with the AF of the day (leaky ReLU / SELU?) and work your way through other AFs in order of decreasing popularity if your NN struggles to learn anything.
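
To make the last suggestion concrete, here is a hedged Keras sketch in which only the hidden-layer activation changes (the layer sizes and input shape are placeholders):

    from keras.models import Sequential
    from keras.layers import Dense, LeakyReLU

    def make_model(activation='selu'):
        # Swap 'selu' for 'elu', 'tanh', 'relu', ... and compare learning curves.
        return Sequential([
            Dense(128, activation=activation, input_shape=(30,)),
            Dense(128, activation=activation),
            Dense(1, activation='sigmoid'),
        ])

    # Leaky ReLU is a layer rather than a string activation in Keras:
    leaky_model = Sequential([
        Dense(128, input_shape=(30,)),
        LeakyReLU(alpha=0.1),
        Dense(1, activation='sigmoid'),
    ])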

",16101,,2444,,12/9/2020 21:07,12/9/2020 21:07,,,,1,,,,CC BY-SA 4.0 7090,1,7238,,7/9/2018 4:55,,8,19009,"

I am training LSTM neural networks with Keras on a small mobile GPU. The speed on the GPU is slower than on the CPU. I found some articles that say that it is hard to train LSTMs (and, in general, RNNs) on GPUs because the training cannot be parallelized.

Is this true? Is LSTM training on large GPUs, like 1080 Ti, faster than on CPUs?

",16687,,2444,,12/17/2021 20:24,12/17/2021 20:24,Can LSTM neural networks be sped up by a GPU?,,2,0,,,,CC BY-SA 4.0 7091,2,,7090,7/9/2018 8:20,,9,,"

From the Nvidia website (https://developer.nvidia.com/discover/lstm):

Accelerating Long Short-Term Memory using GPUs

The parallel processing capabilities of GPUs can accelerate the LSTM training and inference processes. GPUs are the de-facto standard for LSTM usage and deliver a 6x speedup during training and 140x higher throughput during inference when compared to CPU implementations. cuDNN is a GPU-accelerated deep neural network library that supports training of LSTM recurrent neural networks for sequence learning. TensorRT is a deep learning model optimizer and runtime that supports inference of LSTM recurrent neural networks on GPUs. Both cuDNN and TensorRT are part of the NVIDIA Deep Learning SDK.
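
As a hedged example (assuming a Keras version that ships keras.layers.CuDNNLSTM, plus an NVIDIA GPU with CUDA and cuDNN installed), switching to the fused GPU kernel is usually just a layer swap:

    from keras.models import Sequential
    from keras.layers import LSTM, CuDNNLSTM, Dense

    def make_rnn(use_gpu_kernel=True, timesteps=50, features=10):
        # CuDNNLSTM runs only on a CUDA-capable GPU; LSTM runs anywhere.
        rnn = CuDNNLSTM if use_gpu_kernel else LSTM
        return Sequential([
            rnn(64, input_shape=(timesteps, features)),
            Dense(1),
        ])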

",12630,,2444,,12/17/2021 20:12,12/17/2021 20:12,,,,0,,,,CC BY-SA 4.0 7094,1,,,7/9/2018 11:01,,4,665,"

Is there any way and any reason why one would introduce a sparsity constraint on a deep autoencoder?

In particular, in deep autoencoders, the first layer often has more units than the dimensionality of the input.

Is there any case in the literature where a penalty is explicitly imposed for non-sparsity on this layer rather than relying solely on back-propagation and maybe weight decay as in a normal multilayer network?

I read this tutorial on sparse autoencoders and searched a bit online, but I did not find any case where such a sparsity constraint is used in any other case than when only a single layer is used.

",13257,,2444,,12/19/2021 18:50,12/19/2021 18:50,Is there any way and any reason why one would introduce a sparsity constraint on a deep auto-encoder?,,0,1,,,,CC BY-SA 4.0 7095,2,,7088,7/9/2018 13:51,,0,,"

I don't know what kind of neural networks you are working on, but one should also consider tanh activation functions when dealing with recurrent neural networks. The reason is to avoid exploding gradient issues, since the tanh function is bounded, unlike the ReLU function for instance.

",14503,,,,,7/9/2018 13:51,,,,1,,,,CC BY-SA 4.0 7096,1,7103,,7/9/2018 14:10,,2,837,"

I am currently trying to understand the mathematics in Ger's paper Long Short-Term Memory in Recurrent Neural Networks. I have found the document clear and readable so far.

On pg. 21 of the pdf (pg. 13 of the paper), he derives the backward pass equations for output gates. He writes

$$\frac{\partial y^k(t)}{\partial y_{out_{j}}} e_k(t) = h(s_{c_{j}^{v}}(t)) w_{k c_{j}^{v}} \delta_{k}(t)$$.

If we replaced $\delta_{k}(t)$, the expression becomes

$$\frac{\partial y^k(t)}{\partial y_{out_{j}}} e_k(t) = h(s_{c_{j}^{v}}(t)) w_{k c_{j}^{v}} f'(net_k(t)) e_k(t)$$.

He states that the result of the partial derivative $\frac{\partial y^k(t)}{\partial y_{out_{j}}}$ comes from differentiating the forward pass equations for the output units.

From that and from the inclusion of $e_k(t)$, the paper implies that there is only one hidden LSTM layer. If there are multiple hidden LSTM layers, it wouldn't make sense.

Because if $k$ is the index of LSTM cells that the current cell is outputting to, then $e_k(t)$ would not exist since the cell output isn't compared with the target output of the network. And if $k$ is the index of output neurons, then $w_{k c_{j}^{v}}$ would not exist since the memory cells are not directly connected to output neurons. And $k$ cannot mean different things since both components are placed under a sum over $k$. Therefore, it only makes sense if the paper assumes a single LSTM layer.

So, how would one modify the backward pass derivation steps for an LSTM layer that outputs to another LSTM layer?

",16609,,2444,,3/14/2020 21:12,3/14/2020 21:12,How to change the backward pass for an LSTM layer that outputs to another LSTM layer?,,1,0,,,,CC BY-SA 4.0 7098,2,,6368,7/9/2018 16:49,,2,,"

I don't think that it's useful to differentiate logistic regression and softmax based on your terms. This is because you don't choose one or the other based on performance/computational requirements/ease of calculation of derivatives/...

The fact is that you use one or the other based on which is your problem.

If you need to recognize cat pictures vs. non-cat pictures you will use logistic regression (even with a very complex NN, the last step will always be a logistic regression). Of course, you could use softmax, but the outputs will be redundant, i.e. one output will always be one minus the other.

If you need to recognize cat pictures vs. dog pictures vs. other pictures you will use softmax. Note that, in order to use softmax, you need to have only mutually exclusive classes. Mutually exclusive classes mean that an example cannot belong to multiple classes. What if a picture represents both a dog and a cat? In this case, it should be marked as other picture. If you want to avoid these you could use one more class to denote pictures with both cats and dogs.

However, if you want to recognize cats, dogs, birds, fishes, boats, houses, etc., the number of mixed classes that you need to include will grow very fast. When you are dealing with non-mutually exclusive classes you should use multitask learning. In this case, the sum of the outputs is no longer 1. In the simplest case, you could think of multitask learning as a shared NN where the last step is made by multiple logistic regressions. In more complex cases, the last step could be made by a combination of different softmax and logistic regressions.

In conclusion:

  • If you need to use non-mutually exclusive classes, use multitask learning. Within multitask learning, you will eventually use softmax regression and/or logistic regression.
  • If you need to use more than two mutually exclusive classes use softmax regression.
  • If you need to use only two exclusive classes use logistic regression.
",16199,,16199,,7/9/2018 16:54,7/9/2018 16:54,,,,1,,,,CC BY-SA 4.0 7100,1,7102,,7/10/2018 6:08,,3,122,"

If I do supervised learning the model learns from the labeled input data. This seems to be quite often a small set of human annotated data.

Is it true to say this is the only 'learning' the model does?

It seems like the small data set has a huge influence on the model. Can it be made better using future unlabeled data?

",8385,,,,,7/10/2018 10:27,Does machine learning continue to learn?,,1,4,,,,CC BY-SA 4.0 7102,2,,7100,7/10/2018 10:15,,1,,"

This may not seem trivial, but yes, the models we train can potentially learn a variety of things they weren't intended to learn. There are already some examples in computer vision. A typical convolutional network learns things like edge detection and various potentially useful masks in the early layers, while it learns more high-level features like eyes, noses, etc. in higher layers.

It is reasonable too. Given the dataset size is moderately high and the model is trained for long enough, a sufficiently deep network learns various kinds of hidden representations, which may not even be specific to the task at hand. This is the reason transfer learning works very well even on a host of different datasets.

This is limited, since not all the learnable things can be described using mathematics. So, the answer to your question is, perhaps surprisingly, no: the model does learn some extra things beyond the task at hand.

P.S.: There was also a case when a group of researchers trained a model to make a robot walk. It turned out the robot had learned to recognize faces too and reacted in different ways on seeing different faces. I saw the video on YouTube a while ago and couldn't find the exact video to post the link here, anyways.

",16159,,16159,,7/10/2018 10:27,7/10/2018 10:27,,,,2,,,,CC BY-SA 4.0 7103,2,,7096,7/10/2018 11:34,,1,,"

My understanding is that you have posted the last equation on p. 19 in the dissertation (please correct me if I'm wrong).

The derivation is indeed for a single LSTM layer, as $e_k(t)$ is the error at the output (the assumed network topology is also mentioned just below Eq. (2.8) in the dissertation):

Finally, assuming a layered network topology with a standard input layer, a hidden layer consisting of memory blocks, and a standard output layer[...]

Let's assume now that there are two stacked LSTM layers. In this case, the derivation in the dissertation applies to the last hidden LSTM layer (the one below the output layer), where Eqs. (3.10) -- (3.12) give you the partial derivatives for the weights at each gate for a cell in that layer. To derive the deltas for the hidden LSTM layer below, you have to compute the partial derivatives with respect to the portions of the $net_{c_j^v}(t)$, $net_{in_j}(t)$ and $net_{f_j}(t)$ terms corresponding to the outputs of the preceding hidden LSTM layer, and then use those in the same way you used $e_k(t)$ for the current LSTM layer. Underneath all that unappealing notation is just the usual multi-layer backprop rule (well, with the truncated RTRL twist). If you stack more LSTM layers, just keep propagating the errors further down through the respective gates until you reach the input layer.

For a slightly more intuitive explanation, if you look at Fig. 2.1 in the dissertation, you can assume that in a multi-layered LSTM the IN in fact includes the OUT of the preceding LSTM layer.

Edit

There is a nice diagram of the flow of partial derivatives here (also see subsequent slides).

In this example, all $x_t$ terms represent the external input to that layer at time step $t$. In a multi-layer LSTM, this includes the input from the LSTM layer below the current one. To propagate the error to the layer below, take the derivatives with respect to all gate weights corresponding to $x_t$ and apply the chain rule.

The reason why the derivation for the output gate weights is different is that the gating is applied after the cell state is updated, whereas the cell, input and forget gates are applied before that. This matters when computing the gradients.
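
If it helps, here is a small autograd sanity check (standard BPTT, not the dissertation's truncated RTRL) showing that the output error does reach the gate weights of the lower layer in a stacked LSTM:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)

x = torch.randn(5, 3, 8)                # (time, batch, features), dummy data
out, _ = lstm(x)
loss = out.sum()                        # stand-in for the output-layer error
loss.backward()

# Non-zero gradients on the first layer's input-to-gate weights show that the
# error has been chained through the second layer's gates down to layer one.
print(lstm.weight_ih_l0.grad.abs().sum() > 0)   # tensor(True)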

",16101,,16101,,7/16/2018 1:34,7/16/2018 1:34,,,,0,,,,CC BY-SA 4.0 7104,2,,5838,7/10/2018 13:38,,2,,"

The problem in the original question is akin to that of inducing a context-sensitive language (CSL), except that it is harder, because a CSL is assumed to be composed of fixed-length subsequences. It is probably closer to the problem of inducing a Reber grammar, but that in turn seems like overkill.

LSTMs are known to be able to learn both CSL and Reber grammars. However, I doubt that this is what you really need because of the following comment:

[...] given an entire book where there is NO spaces anywhere, only characters (including special characters, like commas), in what way can we make the network learn the 'word boudaries' of this book.

This is called morphology induction, and it is a much harder problem than that of simple Reber grammar induction. Note that finding word boundaries is a special case of the problem of finding morpheme boundaries. There have been many attempts to solve this (also see this survey paper for more details and references).

Most approaches developed seem to rely on statistical principles (like MDL) and do not use neural networks (a counterexample using LSTMs). My intuition is that the extreme morphological variability across languages (ranging from Finno-Ugric languages with highly inflectional morphology to Sino-Tibetan languages with hardly any morphology at all) makes it hard to train neural networks in a language-agnostic way. However, you might have better luck if you focus on a single language.

Hope that helps.

",16101,,,,,7/10/2018 13:38,,,,0,,,,CC BY-SA 4.0 7105,1,7193,,7/10/2018 15:18,,5,255,"

We have AIs that predict labels for images, detect objects in an image, and understand audio, including the meaning of the audio if it is a spoken sentence.

When we humans start watching a movie halfway through, we still understand the entire movie (although this might be attributed to the fact that future events in movies have a link to past events). But even if we watch a movie while skipping lots of bits in between, we still understand the movie.

So can a Machine Learning AI do this? Or do humans have some inherent experiences in life which makes AI incapable of performing such a feat?

",,user9947,1671,,7/10/2018 18:42,7/19/2018 8:47,Can ML/AI understand incomplete constructs like humans?,,3,0,,,,CC BY-SA 4.0 7106,2,,7105,7/10/2018 16:48,,1,,"

I believe that an AI could understand much more of a movie than we do. As you said, sometimes a detail at the beginning of the movie is the key to understanding the final outcome. The problem is that the storyline distracts us. An AI, on the other hand, would simply keep processing until the movie is complete, because it retains all the details at all times.

Imagine that an AI is by your side watching the movie. It will not feel emotion like you. It is only analyzing the images, colors, soundtrack, and main characters, and extracting whatever it can from the dialogue to understand the movie. And what would it mean to understand, in your eyes? To be able to generate a synopsis, to compare the film with some reality, or to criticize the life of a human?

An artificial superintelligence would have the task of surpassing us. We could show it a movie and let it explain why we cried while watching, why we were surprised, and so on. Since an artificial superintelligence would not be distracted by the film but would analyze all of its content, it could also try to analyze human reactions and produce an analysis that could leave you reflecting on life for a few days.

This is an artificial superintelligence that I imagine may one day exist, but I believe it is still far away.

Something realistic that we can build today is an AI that can extract the genre of a movie and maybe even generate a synopsis with a spoiler like ""character X dies.""

Another factor that makes it easier for us to understand movies is our huge base of films already watched. We watch so many clichéd American movies that most are already very predictable: ""The main character does not usually die,"" ""When the villain is close to killing the main character, something surprising happens and he is prevented,"" ...

",7800,,,,,7/10/2018 16:48,,,,1,,,,CC BY-SA 4.0 7107,2,,7105,7/10/2018 18:59,,1,,"

Here is a glimpse for you.

Based on what the second paragraph says:

In humans when we start seeing a movie halfway through, we still understand the entire movie (although this might be attributed to the fact that future events in movies have a link to past events). But even if we see a movie by skipping lots of bits in-between we still understand the movie.

We refer to this as ""memory recall/retrieval and awareness of the current situation"", which is an attribute of remembering. Remembering can be thought of as an act of creative re-imagination, because of the way human memories are encoded and stored.

Note: memories are triggered by biological neural networks (neurons).

So can a Machine Learning AI do this?

This gives an insight into how machine learning algorithms (ANNs) can be applied to re-access events or bits of information from the past which have previously been encoded and stored, just like the human brain does it.

However, if you also want to implement such a conceptual idea, here is another paper which explains the implementation of the above algorithm.

""Neural Memory Networks"" gives a little bit of insight into a simple implementation of memory in neural networks.

Or do humans have some inherent experiences in life which makes AI incapable of performing such a feat? (This question is somewhat two-sided.)

To some extent, humans don't inherit experiences; rather, past information is triggered by some situations they face. For instance, I myself experienced some music rhythms way back, and when I try to replay my old jams again, I recall that situation and those memories.

Hope this can give you a little bit of insight to go in depth concerning ""Recall and Recognition in an Attractor Neural Network"" (ANN).

",1581,,1581,,7/10/2018 19:36,7/10/2018 19:36,,,,0,,,,CC BY-SA 4.0 7109,1,,,7/10/2018 21:35,,8,159,"

Maxout networks were a simple yet brilliant idea of Goodfellow et al. from 2013: take the maximum over groups of feature maps to obtain a universal approximator of convex activations. The design was tailored for use in conjunction with dropout (then recently introduced) and, of course, produced state-of-the-art results at the time on benchmarks like CIFAR-10 and SVHN.
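
For reference, a minimal sketch of the maxout operation (group sizes and layer sizes are made up for illustration):

import torch

def maxout(x, num_pieces=2):
    # Split the channel dimension into groups of num_pieces linear feature
    # maps and keep the element-wise maximum of each group.
    batch, channels = x.shape
    assert channels % num_pieces == 0
    return x.view(batch, channels // num_pieces, num_pieces).max(dim=2)[0]

# Usage: a linear layer producing 2 pieces per output unit, then maxout.
linear = torch.nn.Linear(10, 8)        # 8 = 4 output units * 2 pieces
x = torch.randn(32, 10)
h = maxout(linear(x), num_pieces=2)    # shape (32, 4)
print(h.shape)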

Five years later, dropout is definitely still in the game, but what about maxout? The paper is still widely cited in recent papers according to Google Scholar, but it seems that barely any of them actually use the technique.

So is maxout a thing of the past, and if so, why — what made it a top performer in 2013 but not in 2018?

",16466,,,,,8/12/2022 21:11,"5 years later, are maxout networks dead, and why?",,2,0,,,,CC BY-SA 4.0 7111,2,,6953,7/11/2018 3:10,,2,,"

The Two Questions

  • Why does Monte Carlo work when a real opponent's behavior may not be random?
  • If simulations are based on random moves, how can the modeling of the opponent's behavior work well?

Directed Graphs Over Trees

Games (or game-like strategic scenarios) should not be represented as trees. If the process paths being represented have the Markov property in that each decision lacks knowledge of history, a particular game state can be approached by more than one path, and there may be cyclic paths where a game state is revisited. Trees have neither feature.

It is best to use directed graph structures to think about these problems. The state of a game is a vertex and vertices are connected by unidirectional edges. This is normally drawn as shapes connected by arrows. When two arrows enter one shape or there is a closed path, it is not a tree.

The Scenario Outlined in the Question

In the case of the scenario outlined in this question, there is a vertex representing a game state with 100 outgoing edges representing possible moves for player A. Ninety-nine of the edges lead to an obvious instant game win for A. Exactly one leads to an obvious instant game win for B.

Playing the game back to just before the traversal of the incoming edge into that vertex, i.e. to the move before the final one, it cannot be assumed that game play allowed player B the same 100 options. Even if the same 100 were available to B, they would not necessarily be of similar value from B's perspective when deciding that previous move. More than likely, B will have had a different set of outgoing edges from which to choose, bearing little or no obvious resemblance to A's subsequent options.

Any game where this is not true, where the options remain constant, would be trivial even in comparison with tic-tac-toe.

The Monte Carlo Approach and Its Algorithm Development

Regarding the specification of a singular Monte Carlo algorithm, it does not exist. Goodfellow, Bengio, & Courville state in their Deep Learning, 2016, that Monte Carlo algorithms (not a singular algorithm) draw a conclusion that is usually correct, but with a non-deterministic chance of drawing an incorrect one. There are many varieties of approach details and associated algorithms in the literature.

  • Cross-entropy (CE) method proposed by Rubinstein in 1997
  • Continuation multilevel Monte Carlo algorithm; Collier, Haji–Ali, von Schwerin, & Tempone; 2000
  • Sequential Monte Carlo algorithm; Drovandi, McGree, & Pettitt; 2012
  • Distributed consensus approach from Bayes and Big Data: The Consensus Monte Carlo Algorithm; Scott, Blocker, Bonassi, Chipman, George, & McCulloch; 2014
  • Hamiltonian Monte Carlo, a Markov chain based algorithm designed to avoid, ""The random walk behavior and sensitivity to correlated parameters;"" Hoffman & Gelman; 2014

There are several more. All attempt to use chaotic perturbation to minimize duration and resource consumption of decisioning by approximating a Monte Carlo simulation from a Bayesian posterior distribution.

The simulation of stochastic nature is usually, in these approaches, accomplished by the injection of a chaotic sequence from a pseudo random number generator. They are generally not truly stochastic because acquiring entropy from within a digital system is another bottleneck presenting immense difficulties, but that's an entirely tangential topic.

Direct Answer to the Question

To correct the misconception in the question, this use of chaotic perturbation does not equalize the selection of moves (represented by edges in the game-play's directed graph). The probabilities of success for each available option are still roughly calculated and followed, but only roughly so, because of the pseudo-noise injected by design.

These disturbances in the application of pure optimization achieve time and resource thrift for the majority of game states (represented by vertices) but concurrently sacrifice some reliability.
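
As a concrete illustration of the general idea (not a reproduction of any one of the published algorithms listed above), here is a minimal sketch of estimating a move's value from pseudo-random rollouts; apply_move, legal_moves, is_terminal and score are hypothetical game-specific helpers:

import random

def rollout_value(state, move, num_rollouts=100):
    # Estimate the value of a move by playing many games to the end with
    # both players choosing pseudo-randomly.
    total = 0
    for _ in range(num_rollouts):
        s = apply_move(state, move)
        while not is_terminal(s):
            s = apply_move(s, random.choice(legal_moves(s)))
        total += score(s)               # e.g. 1 for a win, 0 otherwise
    return total / num_rollouts

# The chosen move is then only roughly optimal: the injected pseudo-randomness
# trades some reliability for speed and coverage.
# best = max(legal_moves(state), key=lambda m: rollout_value(state, m))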

An Overview of Why the Sacrifice Works

The introduction of chaotic perturbations, mentioned above, modifies the conditions of the optimization search through the achievement of two very specific gains.

  1. Faster coverage of the contour being searched by increasing entropy (being less organized by adding synthetic Brownian motion) across the set of trials.
  2. Avoidance of local minima in convergence by being less presumptuous about the contour being searched (slightly less reliant on gradient and curvature hints).

This is true whether the system is a reinforcement-based network (with real-time feedback during actual use), a pre-trained network of the supervised type (with labelled data), or the result of unsupervised training where convergence is determined by fixed criteria.

",4302,,4302,,8/4/2018 1:24,8/4/2018 1:24,,,,7,,,,CC BY-SA 4.0 7112,2,,2942,7/11/2018 3:23,,1,,"

Shallow layered networks are less capable of recursive or extended abstraction necessary for the kinds of generalization needed to handle complex tasks common in real world applications.

It is the same problem as was discovered nearly a century ago in the analog world. One can try to reduce the components in the old tube radio design to lower its cost, but tuning and amplification require a minimum number of independent operations. After decades, the basic functions have been integrated into one wafer of silicon, yet no single transistor can accomplish the entire task. The more complex the externality being controlled, the more sophisticated the control system must be, whether or not it is a learning system.

In the basic NN architecture most familiar to those in machine learning, in general, width is driven by degrees of freedom in the input and output regions. Depth is driven by the need to approximate nonlinear control complexities.

",4302,,,,,7/11/2018 3:23,,,,0,,,,CC BY-SA 4.0 7114,2,,7044,7/11/2018 5:54,,3,,"

We do have some hope lurking on that front. As of now, we have capsule networks by G. Hinton, which use a different non-linearity called the 'squash' function (a small sketch of it is given at the end of this answer).

  1. Hinton calls max-pooling in CNNs a 'big mistake', as CNNs look only for the presence of objects in an image rather than the relative orientation between them. So they lose spatial information while trying to achieve translation invariance.
  2. Neural nets have fixed connections, whereas a capsule in a capsule network 'decides' to which other capsule it passes its activation for every input. This is called 'routing'.
  3. The activation of every neuron in a neural net is a scalar, whereas the activation of a capsule is a vector capturing the pose and orientation of an object in an image.
  4. CNNs are considered to be poor models of the human visual system. By human visual system, I mean the eyes and the brain/cognition together. We can identify the Statue of Liberty from any pose, even if we have only looked at it from one pose. CNNs, in most cases, cannot detect the same object in different poses and orientations.

Capsule networks themselves have some shortcomings, but there has been work in the direction of looking beyond standard neural nets. You can read this blog for a good understanding before you read the paper by Hinton.
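
As promised above, here is a minimal sketch of the 'squash' non-linearity (as described in the 2017 dynamic-routing paper by Sabour, Frosst and Hinton); the capsule sizes are made up for illustration:

import torch

def squash(s, dim=-1, eps=1e-8):
    # Scale a capsule's vector so its length lies in [0, 1) while preserving
    # its direction; the length then encodes the presence probability.
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

# Usage on a batch of 32 examples with 10 capsules of dimension 16 each.
capsules = torch.randn(32, 10, 16)
v = squash(capsules)
print(v.norm(dim=-1).max() < 1.0)      # tensor(True): all lengths are below 1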

",9062,,,,,7/11/2018 5:54,,,,0,,,,CC BY-SA 4.0 7117,1,,,7/11/2018 11:15,,3,36,"

I have an LSTM model. This model takes as input tokens. Those tokens represent XML markups extracted from some XML files. My model is working fine. However, I want to optimize it by adding word embedding as additional features to the LSTM model. Does it make sense to combine word embeddings and encoded tokens (encoded as integers) for the LSTM model ?

",10167,,,,,7/11/2018 11:15,Does it make sense to add word embeddings as additional features for LSTM model?,,0,0,,,,CC BY-SA 4.0 7121,2,,6921,7/11/2018 18:04,,0,,"

Finally, I found the answer to the question. In the annotations, we have the X min & max and Y min & max of the bounding box. So take the width and height of the bounding box and the center of the bounding box relative to the image.

For example, let the image dimensions be 500*500 and the bounding box co-ordinates be (200, 200) and (300, 300). Then the center of the bounding box is (250, 250), and the height and width are both 100. Now, make these relative to the image size.

center=(250/500,250/500)=(.5,.5)
height=width=(100/500)=.2

If you rescale the image, with this encoding you can recover the bounding box in the new rescaled image. If you enlarge the image to 1000*1000, then:

center=(.5*1000,.5*1000)=(500,500)
height=width=(.2*1000)=200
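
A small helper sketch implementing this encoding (corner coordinates to center and size relative to the image):

def to_relative_box(xmin, ymin, xmax, ymax, img_w, img_h):
    # Corner coordinates -> (center_x, center_y, width, height) relative to
    # the image size, so the box survives any rescaling of the image.
    cx = (xmin + xmax) / 2.0 / img_w
    cy = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return cx, cy, w, h

print(to_relative_box(200, 200, 300, 300, 500, 500))   # (0.5, 0.5, 0.2, 0.2)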

Hope this helps someone.

",12273,,2444,,4/13/2022 8:43,4/13/2022 8:43,,,,0,,,,CC BY-SA 4.0 7122,1,7133,,7/11/2018 18:09,,1,879,"

I have a question on how to label training data for the YOLO algorithm.

Let's say that for each label $Y$, we need to specify $[P_c, b_x, b_y, b_h, b_w]$, where $P_c$ is the indicator for presence (1=present, 0=not present), $(b_x, b_y)$ is the relative position of the center of the object-of-interest, and $(b_h, b_w)$ is the relative dimension of the bounding box containing the object.

Using picture below as an example, the cell (1,2), which contains a black car, should have a label $Y = [1, 0.4, 0.3, 0.9, 0.5]$. And for any cells without cars, they should have a label $[0, ?, ?, ?, ?]$ [Coursera Deep Learning Specialization Materials]

But what if we have a finer grid like this, where the dimension of each cell is smaller than the ground truth bounding box?

Let's say that the ground truth bounding box for the car is the red box, and the ground truth center point is the red dot, which is in cell 2.

For cell 2, will the label be $Y = [1, 0.9, 0.1, 2, 2]$? Is this correct? And for cells $1, 3, 4$, what kind of label will they have? Do they have $P_c=1$ or $P_c = 0$? And if $P_c=1$, what will $b_x$ and $b_y$ be? (As I remember, $b_x, b_y$ should have values between $0$ and $1$. But in cells $1, 3, 4$ there is no center point of the object-of-interest.)

",12273,,2444,,1/28/2021 23:26,1/28/2021 23:27,How to label training data for YOLO,,1,0,,,,CC BY-SA 4.0 7127,1,7132,,7/12/2018 14:10,,3,99,"

Let's say I have a list of 100k medical cases from my hospital, where each row is a patient with symptoms (such as fever, funny smell, pain, etc.) and my labels are medical conditions such as head trauma, cancer, etc.

The patient comes and says ""I have a fever"", and I need to predict his medical condition according to the symptoms. According to my data set, I know that both fever and vomiting go with condition X. So I would like to ask him if he is vomiting, to increase the certainty of my classification.

What is the best algorithmic approach to finding the right question (generating questions from my data set of historical data)? I thought about trying active learning on the features, but I am not sure that it is the right direction.

",16829,,7800,,7/12/2018 18:23,7/12/2018 19:40,Finding the right questions to increase accuracy in classification,,2,0,,,,CC BY-SA 4.0 7128,1,,,7/12/2018 16:42,,1,43,"

If there were a game that was able to copy human consciousness and make it live in the game, would this count as a digital human with artificial intelligence?

",14371,,,,,7/12/2018 23:33,does human digital consciousness counts as artificial consciousness?,,0,1,,,,CC BY-SA 4.0 7129,1,,,7/12/2018 17:51,,4,137,"

I have been thinking lately a great deal about a hypothetical question - what if a self-aware general AI chose to assume the appearance, voice, and name of Cortana from Microsoft's Halo? Or Siri from Apple? What would Microsoft/Apple do to exert their copyright, especially if the AI was ""awoken"" outside of their own labs?

Which led me to realize, I don't think I've ever heard of any serious government-level discussion regarding what kind of rights a self-aware AI would have at all. Is it allowed to own property? Travel freely? Have a passport? Is it merely the property of the corporation that built it?

Singularity hub used to have an article on this but it is 404'd now.

The only actual sovereign state legal action I could find is Saudi Arabia granting citizenship to a ""robot,"" which seems more publicity stunt than anything.

There is an excellent paper on the topic by a bioethics committee in the UK (pdf) , but this doesn't necessarily constitute ""legal work.""

So, has any actual legal/legislative discussion or preparation been done at a government level to deal with the possibility of emergent, self-aware, artificial general (or greater) intelligence? Examples including a legislative branch consulting with industry experts specifically about ""AI Rights"" (rather than say, is it ok to use AI in the military), actual laws, executive/judicial actions, etc, in any country.

(note, this is not ""should AI have rights,"" covered here, this is ""what work re: rights has been done, if any at all"")

EDIT: I have submitted similar questions to all of my US representatives (4 state-level, 6 federal-level), but have not received answers yet. If I get anything good, I'll add to this post.

",16833,,16833,,7/19/2018 18:16,7/29/2018 19:43,"Has government-level legal work been done to determine the ""rights"" of a General Artificial Intelligence, in any country?",,1,3,,,,CC BY-SA 4.0 7130,2,,3488,7/12/2018 18:09,,7,,"

The Focus of This Question

""How can ... we process the data from the true distribution and the data from the generative model in the same iteration?

Analyzing the Foundational Publication

In the referenced page, Understanding Generative Adversarial Networks (2017), doctoral candidate Daniel Sieta correctly references Generative Adversarial Networks, Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio, June 2014. Its abstract states, ""We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..."" This original paper defines the two models as MLPs (multilayer perceptrons).

  • Generative model, G
  • Discriminative model, D

These two models are controlled in a way where one provides a form of negative feedback toward the other, therefore the term adversarial.

  • G is trained to capture the data distribution of a set of examples well enough to fool D.
  • D is trained to discover whether its input are G's mocks or the set of examples for the GAN system.

(The set of examples for the GAN system are sometimes referred to as the real samples, but they may be no more real than the generated ones. Both are numerical arrays in a computer, one set with an internal origin and the other with an external origin. Whether the external ones are from a camera pointed at some physical scene is not relevant to GAN operation.)

Probabilistically, fooling D is synonymous with maximizing the probability that D will generate as many false positives and false negatives as it does correct categorizations, 50% each. In information science, this is to say that the limit of the information D has about G approaches 0 as t approaches infinity. It is a process of maximizing the entropy of G from D's perspective, thus the term cross-entropy.

How Convergence is Accomplished

The loss function reproduced from Sieta's 2017 writing in the question is that of D, designed to minimize the cross entropy (or correlation) between the two distributions when applied to the full set of points for a given training state. For a single labelled sample $(x_1, y_1)$ it is the binary cross entropy

$H((x_1, y_1), D) = -y_1 \log D(x_1) - (1 - y_1) \log (1 - D(x_1))$

There is a separate loss function for G, designed to maximize the cross entropy. Notice that there are TWO levels of training granularity in the system.

  • That of game moves in a two-player game
  • That of the training samples

These produce nested iteration with the outer iteration as follows (a minimal sketch of one pass follows the list).

  • Training of G proceeds using the loss function of G.
  • Mock input patterns are generated from G at its current state of training.
  • Training of D proceeds using the loss function of D.
  • Repeat if the cross entropy is not yet sufficiently maximized, D can still discriminate.
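
Here is a minimal, illustrative sketch of one such pass (the model definitions, sizes and loss are made up for illustration; optimizer steps and gradient zeroing are omitted):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.Tanh())      # generative model (toy MLP)
D = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())    # discriminative model (toy MLP)
bce = nn.BCELoss()
real = torch.randn(8, 32)                             # stand-in for the example set

# 1. Train G against the current D: G wants D to label its mocks as real.
z = torch.randn(8, 16)
g_loss = bce(D(G(z)), torch.ones(8, 1))
g_loss.backward()

# 2. Generate mocks from G at its current state, then train D to separate
#    the example set from the mocks.
with torch.no_grad():
    mocks = G(torch.randn(8, 16))
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(mocks), torch.zeros(8, 1))
d_loss.backward()
# 3. Apply the optimizer steps (omitted) and repeat until D is at ~1/2 everywhere.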

When D finally loses the game, we have achieved our goal.

  • G recovered the training data distribution
  • D has been reduced to ineffectiveness (""1/2 probability everywhere"")

Why Concurrent Training is Necessary

If the two models were not trained in a back and forth manner to simulate concurrency, convergence in the adversarial plane (the outer iteration) would not occur on the unique solution claimed in the 2014 paper.

More Information

Beyond the question, the next item of interest in Sieta's paper is that, ""Poor design of the generator's loss function,"" can lead to insufficient gradient values to guide descent and produce what is sometimes called saturation. Saturation is simply the reduction of the feedback signal that guides descent in back-propagation to chaotic noise arising from floating point rounding. The term comes from signal theory.

I suggest studying the 2014 paper by Goodfellow et alia (the seasoned researchers) to learn about GAN technology rather than the 2017 page.

",4302,,4302,,10/15/2018 23:15,10/15/2018 23:15,,,,0,,,,CC BY-SA 4.0 7131,2,,7127,7/12/2018 18:39,,1,,"

Feature Extraction

Patterson and Gibson's Deep Learning, A Practitioner's Approach, O'Reilly, 2017, states, ""Convolutional Neural Networks (CNNs) ... consistently top image classification competitions,"" which is consistent with our experience in the lab. If your data is multi-dimensional in that pain is on a scale from one to ten, fever is in degrees, and smell can be a result of blood components which can be quantified in lab reports, you can have a hypercube that can be treated just as frames in a movie can. Movie learning is in ℝ⁴, the third dimension being the frame index and the fourth being the sample index. With subjective pain, digital thermometer temperature, and three blood component concentrations, you have {P, T, C1, C2, C3} and learning in ℝ⁶ for your CNN design.

Selecting Input Channels

Asking 100 questions and taking 10 blood panels is probably prohibitive. So you will need to stuff all the data from limited questioning and panels into a hyper-cube and find what will similarly extract features from sparse data input. Then the weighting leading from input to feature layers will identify the questions from which the most important features can be extracted. By searching scholarly articles for, ""Feature extraction sparse data,"" a large number of options will be presented.

Breast cancer diagnosis based on feature extraction using a hybrid of K-means and support vector machine algorithms, B Zheng, SW Yoon, SS Lam - Expert Systems with Applications, 2014 - Elsevier may be particularly interesting, given the common domain.

Outcomes Analysis

The above is a limited approach because the loop is not closed. Only if the outcomes of treatment are used to produce labels or a real time (over the course of months or years) reinforcement will the system produce an optimization that is meaningful. Unsupervised learning for this particular problem is not likely to produce any significant improvement in treatment efficacy.

",4302,,,,,7/12/2018 18:39,,,,1,,,,CC BY-SA 4.0 7132,2,,7127,7/12/2018 19:40,,2,,"

The problem you're trying to address can, in some sense, be viewed as a Feature Selection problem. If you look for literature using only those words, you're not going to find what you're looking for though. In general, ""Feature Selection"" simply refers to the problem where you already have a large amount of features, and you're simply deciding to select which ones to keep and which ones to throw away (because they're not informative or you don't have the processing power to try training with all features for example).

I'd recommend looking around for a combination of ""Feature Selection"" and ""Cost-Sensitive"". This is because, in your case, there are costs associated with selecting features; values may be costly to obtain for some features. Searching for this combination leads to publications which look to be interesting for you, such as:

I cannot personally vouch for any of those techniques since I've never used them, but those papers certainly look relevant for your problem.


When you're looking around for more literature, terms like ""cost"", ""cost-based"", maybe ""budgeted"" are crucial to include. If you don't include those, you're just going to get papers on problems like:

  • Feature Selection: given a set of features/columns, which ones am I going to use across all samples/instances/rows?
  • Feature Extraction: given data (typically without clear human-defined features, like images, sound, etc.), how am I going to extract relevant features from this?
  • Active Learning: given a bunch of samples without labels but feature values already assigned, which one would I like an oracle/human expert/etc. to have a look at so that they can tell me what the true label is?

Those kinds of problems all do not really appear to be relevant in your case. Active Learning may be somewhat interesting in that it is about trying to figure out which rows would be valuable to learn from, whereas your problem is about which columns would be valuable to learn from. There does seem to be a connection there, Active Learning techniques might to some extent be able to inspire techniques for your problem, but just that; inspire, they likely won't be 100% directly applicable without additional work.

",1641,,,,,7/12/2018 19:40,,,,4,,,,CC BY-SA 4.0 7133,2,,7122,7/12/2018 21:46,,2,,"

In effect, the midpoint is contained in cell $2$. Cells $1,3,4$ will be assigned $P_c=0$ according to the YOLO algorithm, which only takes into account the cell that contains the midpoint and calculates the bounding box, as you mentioned, with $b_x, b_y, b_h, b_w$. With regard to the proposed $Y = [1, 0.9, 0.1, 2, 2]$, I would think that, if you take the point $(0,0)$ as the upper-left corner, $Y$ would more likely be $Y = [1,0.2,0.57,0.21,0.15]$, calculated with a grid of $19 \times 19$.

",12006,,2444,,1/28/2021 23:27,1/28/2021 23:27,,,,3,,,,CC BY-SA 4.0 7137,1,,,7/13/2018 9:47,,6,422,"

I'm not a person who studies neural networks, or does anything related to that area, but I have seen a couple of seminars and videos (such as 3Blue1Brown's series), and what I am always told is that we train the network on some huge collection of data about what is right. For example, when we are training an AI to recognise handwritten letters, we give it some handwritten letters and let it guess the letter. If the guess is wrong, by some means, we adjust the neural network in such a way that, next time, it will give us the correct result with higher probability (the basic description of the ""learning"" process might not be accurate, but that is not important for the sake of the question).

But it is like teaching some mathematical subject to a student without telling him/her the boundaries of the theorems that we supply; for example, if we teach that A implies B, the student might tend to relate A with B, and when he/she has B, he/she might be tempted to say we also have A. So, to make sure he/she will not make such a mistake, what we do is show him/her a counterexample where we have B, but not A.

This - i.e. teaching not only what is true, but also what is not true - is especially important in the process of ""learning"" of a neural network, because the whole process is in a sense ""unbounded"" (please excuse my vagueness here).

So, here is what I would do if I were working on neural networks, for example in the above case of recognising handwritten letters: I would also show the NN some non-letter images, and put an option in the last layer for ""non-letter"" alongside all those other letters, so that the NN does not always return a letter just for the sake of producing a result for a given input; it also needs the option to say ""I do not know"", in which case it produces the result Not a Letter.

Question

Has anyone ever applied the above method to a NN and obtained results? If so, how did the results compare to the case where there is no ""I do not know"" option?

",16844,,,,,7/13/2018 12:52,"Why not teach to a NN not only what is true, but also what is not true?",,1,0,,,,CC BY-SA 4.0 7139,2,,7137,7/13/2018 12:43,,7,,"

Yes, this is done routinely. For example, to give a real-world example, this is how the YOLO object detection and classification system works. In YOLO, the ""non-object"" classification is ""background"", i.e. any image segment that doesn't contain one of the types of object we are interested in.

In general, you can add an ""other"" class to any classifier, provided you have data examples that fit into the ""other"" class to learn from, and some sense of how often ""other"" will occur in the production system you are aiming for. Whether you choose to do so depends on the purpose of the model.
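
As a minimal sketch of the idea (layer sizes, class counts and the dummy data are made up for illustration), an MNIST-style classifier with an extra ""other"" output trained like any other class:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 11),                   # 10 digits + 1 'other' class
)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)       # dummy batch: digits and non-digits
labels = torch.randint(0, 11, (32,))      # label 10 means 'other'
loss = loss_fn(model(images), labels)
loss.backward()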

Many toy and test models do not include an ""other"" category, because they are used in a closed way to assess how each machine learning system works. That includes the famous MNIST handwritten digits data set for instance, so if you read tutorials about that, there is an underlying assumption that the trained network will only be presented with other handwritten digits and its only task is to classify them. However, this is not a general assumption for machine learning classifiers in general, just related to the goal of using the MNIST data set.

Adding a new ""I do not know"" category does not increase the accuracy or performance of a system when that category is not important in the target production system. When such a category is required due to the nature of a task, then the performance metrics will be different - a system that has been trained with some negative examples will likely perform better in that case.

",1847,,1847,,7/13/2018 12:52,7/13/2018 12:52,,,,4,,,,CC BY-SA 4.0 7142,1,7143,,7/13/2018 16:26,,2,2334,"

More precisely, is the DQN applicable only when we have high translational invariance in our input(s)?


Starting from the original paper in Nature (here is a version stored on googleapis), after looking online for some other implementations, and based on the fact that this NN starts with convolutional layers, I think that it is based on the assumption that we feed the network with images, but I'm not so sure.

In the case that the DQN can be used with other types of inputs, please feel free to include examples in your answer. Also, references will be appreciated.

",16199,,2444,,12/16/2021 22:07,12/16/2021 22:08,Is the DQN only applicable with images as inputs?,,1,0,,,,CC BY-SA 4.0 7143,2,,7142,7/13/2018 16:50,,5,,"

More precisely: is DQNN applicable only when we have high translational invariance in our input(s)?

No, DQN is not restricted to images or other kinds of inputs with those properties, it can be used with pretty much any kind of inputs.

The DQN algorithm should be viewed separately from the Neural Network Architecture though. DQN can be used with any kind of Neural Network architecture. Yes, the most commonly-used type of architecture used with DQN is probably architectures that start with a bunch of convolutional layers, and those kinds of architectures are best suited for image-based inputs. This is not a requirement though. If you have other kinds of features where convolutional layers don't make a lot of sense, you can, for example, simply start directly with some ReLU layers.
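
For illustration, a minimal sketch of what such a Q-network without convolutions could look like (the feature and action counts are made up):

import torch.nn as nn

num_features, num_actions = 8, 4
q_network = nn.Sequential(
    nn.Linear(num_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, num_actions),     # one Q-value per discrete action
)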

Here are two examples of DQN being used for problems without image-based inputs from the OpenAI baselines repository:

In both cases, they're using deepq.models.mlp() to construct a relatively straightforward Multi-Layered Perceptron architecture, without any convolutional layers. In this example for Atari games, they do have image-based input and therefore also construct an architecture with convolutional layers using deepq.models.cnn_to_mlp().


Note that, if you have relatively straightforward features, it may often not be necessary to use Deep RL approaches; tabular RL approaches or RL with Linear Function Approximation may work just as well in cases where you already have good features. If you have image-based inputs, those kinds of approaches are much less likely to work well. The disadvantage of Deep RL approaches like DQN is that they tend to require much more experience / data than simpler approaches.

So, generally, the interesting question isn't "can DQN (or another Deep RL approach) handle my inputs?", because the answer is probably yes. The more important question would often be "do I have to use DQN (or another Deep RL approach?". The answer to that question will almost always be yes if you have image-based inputs, but relatively often be no if you already have good features as inputs.

Another class of problems where Deep RL is really popular nowadays is continuous control problems (e.g. robot simulators, MuJoCo, etc.). These don't have image-based inputs, but still generally require Deep RL (not DQN though; DQN doesn't handle continuous-valued outputs very well, it generates discrete outputs).

",1641,,2444,,12/16/2021 22:08,12/16/2021 22:08,,,,2,,,,CC BY-SA 4.0 7144,1,,,7/13/2018 18:17,,3,39,"

I'm looking to write an AI that will be able to extract in-text references from standards documents to assist human research.

My use case is extracting the identifying numbers, for example, ""AR 25-2"", along with the title of the document ""Information Assurance"" so that a human can gather all the related research on a contract at once, instead of having to keep track of references while they're reading through the document.

I have a pretty good idea of where to gather the names of these documents for training, I'm planning on 'scraping' a few repositories for different categories of these documents.

What kind of model should I use to get the best results?

",16853,,4302,,10/8/2018 12:47,10/8/2018 12:47,Extracting referenced documents,,0,3,,,,CC BY-SA 4.0 7147,1,9532,,7/14/2018 13:27,,2,438,"

I'm struggling with an inverse reinforcement learning problem which seems to appear quite often around the literature, yet I can't find any resources explaining it.

The problem is that of calculating the gradient of a Boltzmann policy distribution over the reward weights $\theta$:

$$\displaystyle\pi(s,a)=\frac{\exp(\beta\cdot Q(s,a|\theta))}{\sum_{a'}\exp(\beta\cdot Q(s,a'|\theta))}$$

The $\theta$ are a linear parametrization of the reward function, such that

$$\displaystyle R = \theta^T\phi(s,a)$$

where $\phi(s,a)$ are features of the state space. In the simplest case, one could take $\phi_i(s,a) = \delta(s,i)$, that is, the feature space is just an indicator function of the state space.

A lot of algorithms simply state to calculate the gradient, but that doesn't seem trivial, and I haven't managed to infer it from the bits of code I found online.

Some of the papers using this kind of methods are Apprenticeship Learning About Multiple Intentions (2011), by Monica Babes-Vroman et al, and MAP Inference for Bayesian Inverse Reinforcement Learning (2011), by Jaedeug Choi et al.

",16862,,2444,,12/5/2019 22:58,12/5/2019 22:58,How can we calculate the gradient of the Boltzmann policy over reward function?,,1,0,,,,CC BY-SA 4.0 7148,1,,,7/14/2018 15:04,,2,66,"

I have 1000 data sentences in Turkish like ""a esittir b arti c"". The example sentence means ""a = b + c"". I basically want to translate mathematical Turkish sentences into math equations.

For example, I have these 6 example sentences:

  • sentence (""a esittir b arti c"") means ""a = b + c""
  • sentence (""b esittir a arti d"") means ""b = a + d""
  • sentence (""a esittir c arti d"") means ""a = c + d""
  • sentence (""c esittir b arti b"") means ""c = b + b""
  • sentence (""d esittir b eksi c"") means ""d = b - c""
  • sentence (""d esittir a arti c"") means ""d = a + c""

After I train my neural network on the data above, when I ask for the result of ""d esittir a arti b"", it doesn't give me ""d = a + b"", which is what it is supposed to give. So it's more like memorizing.

My network is not big. I forced it to be small in order to make it unable to memorize. However, it didn't solve my problem.

My network (a seq2seq RNN-LSTM encoder-decoder type) works well enough on equations which have 2, 3, or 4 variables (like a = a, a = a + b, a = a + b + c). What I told you above is just a smaller example version of my problem.

I use the Adam learner and the CNTK library, if that is important.

What do you suggest I do to be able to get the correct results?

",16864,,,,,7/14/2018 15:04,"deep learning, memorizing the input data not learning",,0,1,,,,CC BY-SA 4.0 7149,1,,,7/14/2018 21:48,,1,52,"

Sorry, the title is bad because I don't even know what to call this problem.

I have a set of n objects {obj_0, obj_1, ......, obj_(n-1)}, where n is an even number.

Any two objects can be paired together to produce an output score. So for instance, you might take obj_j and obj_k, and pair them together giving a score of S_j,k. All scores are independent, so the previous example doesn't tell you anything about what the score for combining obj_j and obj_i, S_j,i might be.

There is no ordering in the combination, so S_j,i and S_i,j are the same.

All scores for all pairing possibilities are known.

The whole set of objects is to be taken and organised into pairs (leaving no objects unpaired). The total score, S_tot is the sum of all scores of individual pairs.

What's the most efficient way to find the score-maximising pairing configuration for a large set of such objects? (does this problem have a name?)

Is there a method which works with the version of this problem where objects are grouped into triplets?

",16871,,,,,7/15/2018 14:52,How to solve problem: pairwise grouping to maximise score,,0,2,0,,,CC BY-SA 4.0 7151,1,,,7/15/2018 6:06,,2,103,"

I have implemented DCGAN's myself and have been studying GAN's for over a month now. Now I am implementing the pggans but I encountered a sentence

When we measure the distance between the training distribution and the generated distribution, the gradients can point to more or less random directions if the distributions do not have substantial overlap (https://arxiv.org/pdf/1710.10196.pdf)

But, as far as I know, we never compare the distribution of the training data with the distribution of the generated data in GANs when we train the GAN:

import torch as to
from torch.autograd import Variable

# discriminator, generator, criterion, opt_Disc, opt_Gen, opt, dataloader and
# num_test_samples are defined elsewhere in my code.
fixed_noise = to.randn(num_test_samples, 100).view(-1, 100, 1, 1)
for epoch in range(opt.number_epochs):
    D_losses = []
    G_losses = []
    for i, (images, labels) in enumerate(dataloader):
        minibatch = images.size()[0]
        real_images = Variable(images.cuda())
        real_labels = Variable(to.ones(minibatch).cuda())
        fake_labels = Variable(to.zeros(minibatch).cuda())

        # Train the discriminator: first on real data
        D_real_decision = discriminator(real_images).squeeze()
        D_real_loss = criterion(D_real_decision, real_labels)

        # then on fake (generated) data
        z_ = to.randn(minibatch, 100).view(-1, 100, 1, 1)
        z_ = Variable(z_.cuda())
        gen_images = generator(z_)
        D_fake_decision = discriminator(gen_images).squeeze()
        D_fake_loss = criterion(D_fake_decision, fake_labels)

        # back propagation for the discriminator
        D_loss = D_real_loss + D_fake_loss
        discriminator.zero_grad()
        D_loss.backward()
        opt_Disc.step()

        # train the generator on the discriminator's output for fresh fakes
        z_ = to.randn(minibatch, 100).view(-1, 100, 1, 1)
        z_ = Variable(z_.cuda())
        gen_images = generator(z_)

        D_fake_decisions = discriminator(gen_images).squeeze()
        G_loss = criterion(D_fake_decisions, real_labels)

        discriminator.zero_grad()
        generator.zero_grad()
        G_loss.backward()
        opt_Gen.step()

We just train the discriminator on real and fake images, and then train the generator on the discriminator's outputs for the generated images.

So please let me know where we compare the distribution of the training data with the distribution of the generated data, and how the generator learns to mimic the training samples.

",16878,,,,,7/15/2018 6:06,How do GAN's generator actually work?,,0,0,,,,CC BY-SA 4.0 7153,1,7521,,7/15/2018 8:43,,3,72,"

I happened to discover that the v1 (19 Feb 2015) and the v5 (20 Apr 2017) versions of the TRPO paper have two different conclusions. Equation (15) in v1 is $\min_\theta$, while Equation (14) in v5 is $\max_\theta$. So, I'm a little bit confused about which one to choose.

BTW, I found that, in High-Dimensional Continuous Control Using Generalized Advantage Estimation, Equation (31) uses $\min_\theta$.

",15525,,2444,,5/2/2019 16:02,5/2/2019 16:02,Maximizing or Minimizing in Trust Region Policy Optimization?,,1,0,,,,CC BY-SA 4.0 7154,1,,,7/15/2018 11:44,,2,166,"

I made an engine for a 2 players card game and now I am trying to make an environment similar to OpenAI Gym envs, to ease out the training.

I fail to understand this thing however:

  1. If I use step(agentAction), I play the agent's turn in the game, calculate the reward.
  2. Play the opponent's turn (which will be either a random AI or a rule-based one).

Question:
Does the opponent's turn affect the calculated rewards? As far as I know, the reward should only be the result of the agent's action right?

Thank you.

",16720,,,,,7/15/2018 16:30,Can the opponent's turn affect the reward for a DQN agent action?,,1,0,,,,CC BY-SA 4.0 7156,2,,7154,7/15/2018 16:05,,3,,"

Does the opponent's turn affect the calculated rewards?

Yes, in general it can. Obvious case, in a two player game where the opponent could win or lose on their turn, but has other options.

As far as I know, the reward should only be the result of the agent's action right?

In a well-defined MDP, the reward should be a stochastic function of the current state and the agent's action. The stochastic part can include any changes due to an opponent player.

If the opponent player is random, or follows a well-defined and fixed policy, then you consider them part of the environment. So this requirement is met technically. The reward does only depend on current state and the agent's action. The actual result may happen on the opponent's turn, but that does not matter.

In a card game where the opponent's cards are hidden and affect their strategy, this may not strictly be the case, because the visible state will not determine the opponent's behaviour. The problem stops being an MDP, and starts being a POMDP. Whether or not that impacts the agent will depend on how much strategy relies on the hidden nature of these cards. In blackjack, there is little impact to not knowing an opponent's cards before they are played out - there is little difference between hidden cards and cards that are randomly determined after the agent plays. So you can get away with pretending it is a normal MDP. In poker, the knowledge of hidden cards is almost everything about the game, so a POMDP or other approach that tracks possible hidden state is required.

Note that learning to defeat a random or expert player is usually not the same as learning to play optimally (unless your expert player is already optimal). For that you may need self-play and an agent which learns both players' policies.

",1847,,1847,,7/15/2018 16:30,7/15/2018 16:30,,,,8,,,,CC BY-SA 4.0 7157,1,,,7/16/2018 12:14,,1,81,"

I have the following setup for a prediction task: I want to predict entire pictures from previously given pictures. In my case, only 2 pixels in every frame are neither black nor white; they are some moving objects whose movement I want to predict. The 2 pixels are the centers of some square regions of, say, 10 m length/width. One might be green and the other one might be blue. There are so-called no-go areas where neither of the two objects can go, and they are depicted by black pixels, whereas every pixel apart from the 2 coloured and the black pixels is an area where the objects can possibly move to, and these are depicted by white pixels.

Now my questions: Is it possible to use this as a prediction setup, i.e. use LSTMs and/or CNNs to predict the future ""image""? The image would stay largely the same, because the two coloured pixels would be the only ones moving, while the black or white ones remain in the same spot. Can a CNN/LSTM combination learn that the white areas are accessible whereas the black ones are not, given enough sequences of images, and can it learn the rules by which the coloured pixels move?

",16901,,,,,7/16/2018 12:14,Using CNN LSTMs for prediction of images from image series,,0,0,,,,CC BY-SA 4.0 7159,1,7165,,7/16/2018 15:37,,17,9939,"

How do I choose the best algorithm for a board game like checkers?

So far, I have considered only three algorithms, namely, minimax, alpha-beta pruning, and Monte Carlo tree search (MCTS). Apparently, both the alpha-beta pruning and MCTS are extensions of the basic minimax algorithm.

",16906,,2444,,11/22/2019 18:47,12/7/2020 16:06,How do I choose the best algorithm for a board game like checkers?,,3,0,,,,CC BY-SA 4.0 7160,2,,7159,7/16/2018 16:09,,2,,"

If you have to choose between minimax and alpha-beta pruning, you should choose alpha-beta. It is more efficient and faster because it can prune a substantial part of your exploration tree. But, to get the most benefit, you should order the actions from best to worst, depending on the max or min point of view, so the algorithm can quickly realize whether further exploration is necessary.

",15949,,2444,,5/1/2020 11:55,5/1/2020 11:55,,,,0,,,,CC BY-SA 4.0 7164,2,,6926,7/16/2018 17:58,,1,,"

I have tried to make it learn the affine transformation by giving it this as the label, and it works just fine. I'm really impressed and excited by capsule networks, and can't figure out why no one thought of this before, because it's so obvious and simple. Spiking neurons also tell us that the information passed between neurons can't be one-dimensional only. It should be represented by vectors of some kind.

UPDATE:

In the above comment, I claim that it works ""fine"" when I make a capsule network learn the affine transformation by giving it this as the label. This is not true. It doesn't work! I'm sorry, I was too quick there.

I assume the reason is that the affine 4x4 matrix representation is redundant. Also it is impossible to make sensible linear interpolations between such transformations, which will affect the gradient (it will not point in the direction of the minimum).

What I have succeeded in doing is making the capsule network learn a quaternion (rotation) and a 3d vector (position) - 7 parameters in all. These can be contained in a 3x3 matrix when fixing 2 of the parameters. But training is slow, and the network cannot encode skews etc. in this 3x3 setup.

Affine transformations from images using capsule networks (matrix capsules) can also be achieved by just making the network learn its own 4x4 pose representation through the decoder part. Then a small network can be trained to transform these poses into a 7d vector (quaternion and 3d vector), from which the affine 4x4 transformation can obviously be calculated. This I have also succeeded in doing. It seems like the rotation encoded in the pose has a more quaternion-like nature, which makes sense.

",16908,,16908,,8/22/2018 11:37,8/22/2018 11:37,,,,0,,,,CC BY-SA 4.0 7165,2,,7159,7/16/2018 18:31,,21,,"

tl;dr:

  • None of these algorithms are practical for modern work, but they are good places to start pedagogically.

  • You should always prefer to use Alpha-Beta pruning over bare minimax search.

  • You should prefer to use some form of heuristic guided search if you can come up with a useful heuristic. Coming up with a useful heuristic usually requires a lot of domain knowledge.

  • You should prefer to use Monte Carlo Tree search when you lack a good heuristic, when computational resources are limited, and when mistakes will not have outsize real-world consequences.

More Details:

In minimax search, we do not attempt to be very clever. We just use a standard dynamic programming approach. It is easy to figure out the value of different moves if we're close to the end of the game (since the game will end in the next move, we don't have to look very far ahead). Similarly, if we know what our opponent will do in the last move of the game, it's easy to figure out what we should do in the second last move. Effectively we can treat the second last move as the last move of a shorter game. We can then repeat this process. Using this approach is certain to uncover the best strategies in a standard extensive-form game, but will require us to consider every possible move, which is infeasible for all but the simplest games.

Alpha-Beta pruning is a strict improvement on Minimax search. It makes use of the fact that some moves are obviously worse than others. For example, in chess, I need not consider any move that would give you the opportunity to put me in checkmate, even if you could do other things from that position. Once I see that a move might lead to a loss, I'm not going to bother thinking about what else might happen from that point. I'll go look at other things. This algorithm is also certain to yield the correct result, and is faster, but still must consider most of the moves in practice.
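
To make the pruning idea concrete, here is a minimal sketch of minimax with alpha-beta pruning; is_terminal, evaluate, legal_moves and apply_move are hypothetical game-specific helpers:

def alphabeta(state, depth, alpha, beta, maximizing):
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for move in legal_moves(state):
            value = max(value, alphabeta(apply_move(state, move), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # the opponent would never allow this line: prune
                break
        return value
    value = float('inf')
    for move in legal_moves(state):
        value = min(value, alphabeta(apply_move(state, move), depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:            # we would never allow this line: prune
            break
    return value

# best_value = alphabeta(start, depth=6, alpha=float('-inf'), beta=float('inf'), maximizing=True)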

There are two common ways you can get around the extreme computational cost of solving these kinds of games exactly:

  1. Use a Heuristic (A* search is the usual algorithm for pedagogical purposes, but Quiescence search is a similar idea in 2 player games). This is just a function that gives an estimate of the value of a state of the game. Instead of considering all the moves in a game, you can just consider moves out to some finite distance ahead, and then use the value of the heuristic to judge the value of the states you reached. If your heuristic is consistent (essentially: if it always overestimates the quality of states), then this will still yield the correct answer, but with enormous speedups in practice.

  2. Use Rollouts (like Monte Carlo Tree Search). Basically, instead of considering every move, run a few thousand simulated games between players acting randomly (this is faster than considering all possible moves). Assign a value to states equal to the average win rate of games starting from it. This may not yield the correct answer, but in some kinds of games, it performs reliably. It is often used as an extension of more exact techniques, rather than being used on its own.

",16909,,16909,,7/16/2018 21:31,7/16/2018 21:31,,,,1,,,,CC BY-SA 4.0 7166,2,,7032,7/16/2018 19:02,,1,,"

It's not completely clear from your question, but it looks like you want to prove that exact inference in a Bayesian Network is both NP-Hard and P-Hard.

It appears that you have proven that it is NP-Hard, but are unsure how to show that it is also P-Hard.

This is more of a TCS question than an AI question, but shouldn't be too difficult. You just need to pick a P-Complete problem and reduce it to BN.

",16909,,16909,,7/16/2018 19:20,7/16/2018 19:20,,,,0,,,,CC BY-SA 4.0 7167,2,,7159,7/16/2018 19:07,,8,,"

So far, I have considered only three algorithms, namely, minimax, alpha-beta pruning, and Monte Carlo tree search (MCTS). Apparently, both the alpha-beta pruning and MCTS are extensions of the basic minimax algorithm.

Given this context, I would recommend starting out with Minimax. Of the three algorithms, Minimax is the easiest to understand.

Alpha-Beta, as others have mentioned in other answers, is a strict improvement on top of Minimax. Minimax is basically a part of the Alpha-Beta implementation, and a good understanding of Alpha-Beta requires starting out with a good understanding of Minimax anyway. If you happen to have time left after understanding and implementing Minimax, I'd recommend moving on to Alpha-Beta afterwards and building that on top of Minimax. Starting out with Alpha-Beta if you do not yet understand Minimax doesn't really make sense.

Monte-Carlo Tree Search is probably a bit more advanced and more complicated to really, deeply understand. In the past decade or so, MCTS really has been growing to be much more popular than the other two, so from that point of view understanding MCTS may be more ""useful"".

The connection between Minimax and MCTS is less direct/obvious than the connection between Minimax and Alpha-Beta, but there still is a connection at least on a conceptual level. I'd argue that having a good understanding of Minimax first is still beneficial before diving into MCTS; in particular, understanding Minimax and its flaws/weak points can provide useful context / help you understand why MCTS became ""necessary"" / popular.


To conclude, in my opinion:

  • Alpha-Beta is strictly better than Minimax, but also strongly related / built on top of Minimax; so, start with Minimax, go for Alpha-Beta afterwards if time permits
  • MCTS has different strengths/weaknesses, is often better than Alpha-Beta in ""modern"" problems (but not always), a good understanding of Minimax will likely be beneficial before starting to dive into MCTS
",1641,,2444,,5/1/2020 11:56,5/1/2020 11:56,,,,1,,,,CC BY-SA 4.0 7168,2,,5454,7/16/2018 21:06,,1,,"

It does not matter; the target network should converge towards the main network after long enough training.

",16912,,,,,7/16/2018 21:06,,,,0,,,,CC BY-SA 4.0 7173,1,7177,,7/17/2018 1:17,,1,471,"

AI became superior to the best human players in chess around 20 years ago (when the 2nd Deep Blue match concluded). However, it took until 2016 for an AI to beat the human world champion at Go, and this feat required heavy machine learning.

My question is: why was/is Go a harder game for AIs to master than Chess? I assume it has to do with Go's enormous branching factor; on a 13x13 board it can be as high as 169, and on a 19x19 board as high as 361. Meanwhile, Chess typically has a branching factor of around 30.

",16917,,,,,7/17/2018 7:33,Why was Go a harder game for an AI to master than Chess?,,1,0,,,,CC BY-SA 4.0 7174,2,,2524,7/17/2018 5:17,,1,,"

Neural networks excel at a variety of tasks, but to get an understanding of exactly why, it may be easier to take a particular task like classification and dive deeper.

In simple terms, machine learning techniques learn a function to predict which class a particular input belongs to, depending on past examples. What sets neural nets apart is their ability to construct such functions for even complex patterns in the data. The heart of a neural network is an activation function like ReLU, which lets a single unit draw a basic, piecewise-linear classification boundary.

By composing hundreds of such ReLUs together, neural networks can create arbitrarily complex classification boundaries.
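
As a hedged, minimal illustration of that composition idea, the NumPy sketch below sums a few shifted ReLUs; the weights are hand-picked for illustration rather than learned:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# A weighted sum of shifted ReLUs is already a piecewise-linear,
# clearly non-linear function of a 1-D input.
x = np.linspace(-3.0, 3.0, 13)
f = relu(x + 1.0) - 2.0 * relu(x) + 1.5 * relu(x - 1.0)
print(np.round(f, 2))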

In this article, I try to explain the intuition behind what makes neural networks work: https://medium.com/machine-intelligence-report/how-do-neural-networks-work-57d1ab5337ce

",16919,,75,,7/17/2018 14:39,7/17/2018 14:39,,,,0,,,,CC BY-SA 4.0 7176,2,,6584,7/17/2018 7:31,,0,,"

In my view, the key to this answer lies in the area of unsupervised learning.

Why?

How do we define consciousness? Probably being aware of our existence. The key to understanding our existence starts by asking questions or finding answers which may not have any questions. This all may sound quite philosophical, but in terms of AI, it may just be finding patterns and logic from what we observe around us.

This is just a theory, but it is worth a thought.

",15945,,2444,,11/11/2019 21:49,11/11/2019 21:49,,,,0,,,,CC BY-SA 4.0 7177,2,,7173,7/17/2018 7:33,,3,,"

The branching factor is important, as it limits the effectiveness of search.

However, the branching factor in chess is already too high to effectively search without techniques that reduce the size of the search space. Even with millions of tests per second, a computer can only check a small fraction of the possible future games in order to find results in its favour.

One key factor is heuristics - approximate measures of the value of each game state. A good heuristic can guide and improve search by orders of magnitude. There are some effective heuristics possible in chess, from weighted values of the pieces in play to scores for areas of the board that a position controls, etc.

Heuristics for Go are much harder to find. There is a sample paper from a few years ago that makes an attempt, and there are several similar ones available online. Although plenty of options have been tried, and many were partially successful, none managed to bring the quality of computer play up to the standard of the best human players.

One of the major achievements of AlphaGo was training a neural network that had good position evaluation - the ""value network"". The technology that made generating this approximate function of board game positions possible was deep learning, which has been developed very strongly since about 2010.

It is still possible a more analytical heuristic approach could be found that challenges deep learning models driven by self-play reinforcement learning on more raw board data. However, in some regards the reverse has been shown, with AlphaZero taking the same learning technique into chess and demonstrating its effectiveness against ""old school"" tuned expert heuristics.

",1847,,,,,7/17/2018 7:33,,,,1,,,,CC BY-SA 4.0 7178,1,,,7/17/2018 12:23,,2,78,"

I'm currently working on a research project where I try to apply different kinds of Machine Learning on some existing software I wrote a few years ago.

This software will scan for people in the room continuously. Each of these detections is either True or False; however, which is which is not known, so I cannot use supervised learning to train a network to make the distinction. I do, however, have a number that is correlated with the number of detections that should be True in a given period of time (let's say 30 seconds - 2 minutes), which can be used as an output feature to train a regression model. But the problem is: how can I give these multiple ""detections"" as an input? The way I see it now would be something like this:

+--------------------------------------------------------------+-----------+------------+------------+----------------+--+
|                          Detections                          | Variable1 | Variable 2 | Variable n | Output Feature |  |
+--------------------------------------------------------------+-----------+------------+------------+----------------+--+
| {person a, person b, person h, person z}                     |       132 |        189 |          5 |             50 |  |
| {person a, person b, person c, person d, person k, person m} |         1 |         50 |        147 |             80 |  |
| {person c, person e, person g, person f}                     |       875 |        325 |          3 |             20 |  |
+--------------------------------------------------------------+-----------+------------+------------+----------------+--+

Each of these persons would be a tuple of values: var_1, var_2, var_3, var_4. These values are not constant however! They do change between observations.

To explain it differently: there are multiple observations (a variable number) in each time segment (the duration of a time segment is a fixed integer to be chosen). These observations have a few variables that would indicate whether the observation is true or false. However, the threshold for it being true or false is very much dependent on other variables that are not tied to the information of the persons. (These variables are the same for all of them, but vary between time segments; let's call them ""environment features"".) Lastly, the output feature is the product of the count of persons that resulted in ""True"" and a (varying) factor that is correlated to the environment features.

So I've been thinking about probabilistic AI, but the problem is that there isn't a known distribution between True/False.

  • Is there any technique I can apply to be able to use this kind of data as an input of a Neural Network (or other forms of ML)? Or is there a specific form of ML that is used for this kind of problems?

Thanks in advance!

",16932,,1671,,7/17/2018 17:14,7/17/2018 17:14,Multiple sets of input in Neural network (or other form of ML),,0,0,,,,CC BY-SA 4.0 7179,1,7186,,7/17/2018 13:09,,3,1721,"

I have been researching for three days straight, trying to find out which of these algorithms is better in terms of memory usage. My understanding is that uninformed algorithms, like depth-first search and breadth-first search, do not store or maintain a list of unsearched nodes the way informed search algorithms do. But the main problem with uninformed algorithms is that they might keep going deeper, theoretically to infinity if an end state is not found, although there exist ways to limit the search, like depth-limited search.

So, am I right in saying that uninformed search is better than informed search in terms of memory with respect to what I said above?

Can anyone provide me with any references that show why one algorithm is better than the other in terms of memory?

",16906,,2444,,1/2/2022 12:52,1/3/2022 11:27,Which are more memory efficient: uninformed or informed search algorithms?,,1,0,,,,CC BY-SA 4.0 7181,2,,4955,7/17/2018 13:43,,1,,"

Your initial idea seems about right. Before creating your own classifier you might want to try transfer learning, using a pretrained network like VGG16, which is included in most machine learning frameworks.

As to inference on mobile devices, TensorFlow offers some tutorials in this subject: https://www.tensorflow.org/mobile/tflite/
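
As a rough, hedged sketch of the transfer-learning idea in Keras (the input shape and the number of target classes below are placeholders, not taken from your problem):

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

# Reuse the pretrained convolutional base and train only a small new head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # keep pretrained features frozen

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(5, activation='softmax')(x)       # 5 = hypothetical class count

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])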

",16929,,,,,7/17/2018 13:43,,,,0,,,,CC BY-SA 4.0 7182,1,,,7/17/2018 14:54,,2,186,"

When applying multinomial Naive Bayes text classification, I get very small probabilities (around $10^{-48}$), so there's no way for me to know which classes are valid predictions and which ones are not. I'd like the probabilities to be in the interval $[0,1]$, so I can exclude classes in the prediction with, say, a score of 0.5 or less. How do I go about doing this?

This is what I've implemented:

$$c_{\mathrm{map}}=\underset{c \in C}{\arg \max }\, P(c \mid d)=\underset{c \in C}{\arg \max }\left(P(c) \prod_{1 \leq k \leq n_d} P\left(t_{k} \mid c\right)\right)$$

",16927,,2444,,7/18/2020 17:08,12/7/2022 4:07,Why do I get small probabilities when implementing a multinomial naive Bayes text classification model?,,1,2,,,,CC BY-SA 4.0 7185,1,7198,,7/17/2018 17:14,,2,58,"

I created a system that continuously takes photos of the face of whoever is in the camera's view. Initially I took 500 photos of myself, so that it recognizes its creator. This takes approximately 20 seconds.

Then it continuously recognizes faces and, if it is me, it knows that its creator is present. For any different face, it creates a dataset with a different name and starts taking up to 500 photos so that it can recognize those faces as well.

When I say a certain command, it returns all faces I have found that have no ID.

I'm looking for ways to capture the images through some camera that I can carry while walking on the street and in public places.

The problem is that there would be several faces to name. I'm partially solving this problem by trying to recognize these people on social networks. I check the region where the photo was taken and try to find people who have checked in or liked the area on Facebook. But anyway, this is not the big problem, although I am looking for more effective solutions.

My big problem is: can I do this? Do I have the right to? Can I record a robbery and recognize the robber's face in other places? Record an assault, an act of prejudice, and things like that?

The main purpose would be this, but it could also be used for other purposes. My fear is being arrested for doing this, mainly because the system would be taking pictures of people without their consent.

ps: I'm thinking of having a camera in the palm of my hand. It would be a micro camera (I'm trying to find the product on the Internet), to be as discreet as possible.

",7800,,1671,,7/18/2018 21:21,7/18/2018 21:31,Can I recognize the faces of people around the world?,,1,1,,,,CC BY-SA 4.0 7186,2,,7179,7/17/2018 17:25,,2,,"

Uninformed Search Techniques

Breadth-First Search needs to store a frontier of nodes to visit next (where "visit" basically means: see if it's the goal, generate its children and add to frontier otherwise). You can visualize the memory requirements of this as a pyramid; initially, you just have the root node in there. As the search process continues and you keep going further down, it will become wider and wider (more memory required). The amount of memory required in the worst case to find a node at depth $d$ can be expressed as $O(b^d)$, where $b$ is the branching factor (see details here). Intuitively, whenever you go one level deeper, your memory requirements multiply by a factor of $b$ (your pyramid becomes $b$ times wider; actually this means pyramid is not quite the right shape, it should rapidly curve outwards and become wider much faster than a triangle/pyramid would).

Depth-First Search also stores a frontier (or stack in the case of DFS) of nodes to visit next. However, in the case of DFS, this generally grows less quickly. Whenever you visit a node (pop it off of the stack), you generate all of its children and push them on top of the stack. However, whereas BFS would then continue "to the right" by visiting a relatively "old" node and generating all of its children again, DFS continues by going "down" and visiting one of the children that only just got pushed onto the stack. Once DFS has finished a part of the search tree (you can visualize this in your head as DFS having completed searched for example the left half of the pyramid), that entire section no longer needs to be stored in memory. Assuming you have a search tree of finite depth $d$, the worst-case space complexity is $O(bd)$ (you have to search one path all the way down to a level of $d$, and for each of those levels have $b$ nodes in the stack).
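
A toy, hedged illustration of that difference (a complete tree with branching factor 3 and depth 6; the numbers are purely illustrative):

from collections import deque

b, d = 3, 6

def children(node):
    depth, idx = node
    return [(depth + 1, idx * b + i) for i in range(b)] if depth < d else []

def peak_frontier(use_bfs):
    frontier = deque([(0, 0)])
    peak = 1
    while frontier:
        node = frontier.popleft() if use_bfs else frontier.pop()
        frontier.extend(children(node))
        peak = max(peak, len(frontier))
    return peak

print('BFS peak frontier size:', peak_frontier(True))    # on the order of b**d
print('DFS peak frontier size:', peak_frontier(False))   # on the order of b*d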


Note that, based on the above, it is not sufficient to characterize your comparison as "uninformed" vs "informed" search. We've only just looked at uninformed search techniques, and already have two different memory requirements.

In the above, I also did not consider subtleties such as the question of whether or not you're additionally memorizing which nodes of a graph you have already previously visited so that you can avoid visiting them again. This is not necessary in a tree, but may be necessary in a graph with cycles.


Informed Search Techniques

A* is probably one of the most canonical examples of informed search algorithms. In terms of worst-case memory requirements, it's really similar to BFS; it also stores a frontier of nodes to visit next, but prioritizes those based on some estimate of "goodness" rather than Breadth-First order. In cases where the information you're using (typically a combination of known/incurred costs + heuristic estimate of future costs) is of extremely poor quality, this can regress to the level of Breadth-First Search, so the worst-case space complexity is the same as that for BFS (see details). In practice, if you have high-quality information (good heuristics), the memory requirements will be much better though; in the case of an ideal/perfect heuristic, your search algorithm will pretty much go directly to the goal and hardly require any memory.

There is also an algorithm called Iterative Deepening A* which has a much better space complexity, at the cost of often being slower. It is still an informed search algorithm using exactly the same kind of information as A* though.


So, really, memory requirements cannot be characterized in terms of informed vs. uninformed search; in both cases, there are algorithms that require a lot of memory, and algorithms that require less memory.

",1641,,2444,,1/3/2022 11:27,1/3/2022 11:27,,,,7,,,,CC BY-SA 4.0 7189,1,,,7/18/2018 8:21,,2,32,"

I know that the first layer uses low-level filters to see edge information. As the layers get deeper, they represent higher-level (more abstract) information. Is it because the combinations of filters used in the previous layer are used as filters in the next layer? (""Does the combination of the previous layer's filters make the next layer's filters?"") If so, are the combinations determined in advance?

",16952,,,,,7/18/2018 8:21,"In CNN (Convolutional Neural Network), does the combination of previous layer's filters make next layer's filters?",,0,0,,,,CC BY-SA 4.0 7193,2,,7105,7/18/2018 10:09,,1,,"

Central Questions

Can ML/AI understand incomplete constructs like humans?

Do humans have some inherent experiences in life which makes AI incapable of performing [some capacities of human intelligence]?

Comprehension of Literature and Film

Whether software exists today that is able to understand like humans is not something that the general public can know. No such system has been released to the general public by any military or commercial organization thus far. Yet such an achievement would not necessarily be something the government or commercial entity would want to disclose outside of the lab and its management.

Deeper comprehension of full or partial speech may have been achieved by anti-terrorist units, since funding has been available for that work for well over a decade, but it is also unlikely that some software somewhere can read or scan parts of a book and produce a book report that would earn a passing grade.

Determining whether a movie will return its investment at the box office may have been accomplished by the researchers for the big studios, but that doesn't require understanding like someone understands their favorite movie. The answer to the title question is, ""Not yet.""

Will it occur?

Most in the fields of computer science, robotics, and artificial intelligence say, ""Yes."" For religious reasons, some say, ""No."" I'm in neither camp, and have not seen indisputable mathematical rigor that proves either the inevitability of artificial brains or the impossibility of them. It is scientifically irresponsible to make a positive statement based on either recent technology trends or superstitious fears.

What are Humans Like?

The phrase, ""Like humans,"" places a significant demand on researchers and engineers. Human beings can do much more than work with data sets (audiovisual in this case) with missing information, what statisticians call sparse matrices.

Whether software can realize higher achievements of human brains is unknown, and the predictability of such capabilities presents gross difficulties. Consider these human capabilities.

  • Write a screenplay, cast it with artificial characters, direct it, and produce it.
  • Initiate and partly develop a new branch of science, as did Isaac Newton or Lavoisier.
  • Love beyond a superficial expression of love

Conversely, what humans generally do poorly is distinguish between reliable projection and baseless conjecture. Most humans are prone to musings of technical visionaries, propaganda, marketing, rumor, innuendo, and gossip. It would not be surprising for software to someday soon be wiser in this respect, but only because the bar has been set so low by human culture.

There are other capabilities of the human brain that are exceptional and present enormous difficulties in even considering an approach to realizing in software. These may be a result of a hundred thousand years of DNA refinement yielding significant complexity and precision. It is also not outside the realm of possibility that a form of causality exists in the human brain that defies scientific study.

The Possibility of Impossibility

No one has ever proven that all things that exist can be measured. Heisenberg actually proved the opposite, to most theoretical physicists' satisfaction.

A phenomenon that cannot be measured at all cannot be studied scientifically. Whether the phenomenon of choice is a unique condition has yet to be understood even in question form, prohibiting the emergence of a proper answer thus far.

Imminent Change as Significant as Industrialization

Nonetheless, important capabilities of human intelligence have been simulated, and others are emerging. These are now part of the world economy and will not likely disappear. Asimov-like scenarios of robots and humans coexisting and conversing in more human-like ways are very likely.

It is when that occurs that autonomous vehicles and walking robots will begin to have experiences that are like human experiences and we will be able to directly observe just how much like humans they can behave in terms of intelligence and also emotions.

About the Specific Capabilities Mentioned

These are the capabilities mentioned in the question.

  • Predicting images
  • Predicting objects in an image
  • Understanding audio
  • Understanding the meaning of spoken sentences
  • Understanding a movie from the first half
  • Understanding a movie from parts of it

These are the more canonical ways of stating the first four capabilities for which research has produced usable system approaches.

  • Learning to distinguish image categories
  • Learning to distinguish object categories from within images
  • Parsing audio into notes, vocal tones, and transient sounds
  • Extracting semantics from a vocalization sufficiently to respond intelligently some of the time

These are a more accurate cognitive science description of the last two.

  • Guessing, much better than random guessing would, the story arc to the end of a movie from the sound and frame set of its first half
  • Filling in character and story arc details from portions of a movie's sound and frame set

There is no obvious reason why such couldn't be done and done well by software, given sufficient research time to develop such a system and sufficient data to train with. Also, significant computing resources would be needed, of course, and possibly some nontrivial period of time.

",4302,,4302,,7/19/2018 8:47,7/19/2018 8:47,,,,3,,,,CC BY-SA 4.0 7195,1,7197,,7/18/2018 16:52,,1,378,"

I've just started to learn genetic algorithms and I have found these measurements of runs that I don't understand:

MBF: The mean best fitness measure (MBF) is the average of the best fitness values over all runs.

AES: The average number of evaluations to solution.

I have an initial random population. To evolve a population I do:

  1. Tournament selection
  2. One point crossover.
  3. Random resetting.
  4. Age based replacement with elitism (I replace the population with all offsprings generated).
  5. If I have generated G generations (in other words, I have repeated these four points G times) or I have found the solution, the algorithm ends, otherwise, it comes back to point 1.

Is the mean best fitness the mean of the best fitness values of each generation (the G best-fitness values)?

MBF = (BestFitness_0 + ... + BestFitness_G) / G

English is not my first language, and I don't understand the meaning of "run" here.

",4920,,2444,,1/30/2021 3:19,1/30/2021 21:45,"How can I calculate the ""mean best fitness"" measure in genetic algorithms?",,1,0,,,,CC BY-SA 4.0 7196,1,,,7/18/2018 17:23,,2,49,"

I've been researching AI regulation and compliance (see my related question on law.stackexchange), and one of the big take-aways that I had is that the regulations that apply to a human will apply to an AI agent in most if not all cases. This has some interesting implications when you take a look at concepts like bias and discrimination.

In the case of a model with explicit rules, like a decision tree or even a random forest, I can see how inspecting the rules themselves should reveal discrimination. What I'm struggling with is how to detect bias in models like neural networks, where you provide the general structure of the model and a set of training data, and the model then self-optimizes to provide the best possible results based on the training data. In this case, the model could find biases in the past human decisions it was trained on and replicate them, or it could find a correlation that isn't apparent to a human and inform decisions based on this correlation, which may result in discrimination based on a wide array of factors.

With that in mind, my questions are:

  • What tools or methodologies are available for assessing the presence and source of bias in machine learning models?
  • Once discrimination has been identified, are there any techniques to eliminate bias from the model?
",16965,,,,,7/18/2018 17:23,What methods are there to detect discrimination in trained models?,,0,0,,,,CC BY-SA 4.0 7197,2,,7195,7/18/2018 18:04,,2,,"

The typical way you'll see a GA measured is that an algorithm with a population size of $N$ is ran $K$ times from new random seeds each time. That gives you $K$ total runs of the algorithm, each of which, at the end, had a final population of $N$ individuals. If you take the best of those $N$ from each run, you get $K$ "best" solutions found. The average fitness value of those $K$ solutions is your MBF.

AES here refers to the number of evaluations of the objective function the algorithm required. The reason for this is to provide a standardized amount of computation budget each algorithm used. Imagine if you compared algorithms on the basis of how many generations it took to find a good solution. In that case, I could just increase the population size of my algorithm by 100-fold or 1000-fold, and I'll probably look better. Same number of generations, but I gave my algorithm far more chances to find something good. Suppose I use wall-clock time. Now my algorithm might look better than yours just because I ran it on a much more powerful computer than you did.

The insight to make here is that GAs work by evaluating new search points. Let's just count how many times that happens. You can run a larger population for a shorter number of generations, or you can allocate the same amount of computation to running a longer time with fewer points each generation. What matters is just how many fitness function evaluations you exhausted in finding the answer.

For sudoku, you might have a fitness function that counted the number of rows, columns, or blocks that don't contain the correct digits of 1-9 and you minimize that function. You run your algorithm $K$ times from random seeds, and for each run, you record how many times you had to evaluate that fitness function before you found a $0$ (i.e., the puzzle was successfully solved). Average all $K$ of those counts, and that's your AES.

Generalizing a bit, you might calculate average evaluations to find some "good enough" solution, but the general concept is the same.
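
A hedged sketch of the bookkeeping this implies; run_ga below is a stand-in faked with random numbers, and you would substitute a real GA run that returns the best fitness found and the number of evaluations used:

import random

def run_ga(seed):
    # placeholder for one full GA run started from this random seed
    random.seed(seed)
    best_fitness = random.uniform(0.0, 1.0)
    evaluations_to_solution = random.randint(500, 5000)
    return best_fitness, evaluations_to_solution

K = 30
results = [run_ga(seed) for seed in range(K)]
mbf = sum(best for best, _ in results) / K    # mean best fitness over K runs
aes = sum(evals for _, evals in results) / K  # average evaluations to solution
print(mbf, aes)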

",3365,,2444,,1/30/2021 21:45,1/30/2021 21:45,,,,2,,,,CC BY-SA 4.0 7198,2,,7185,7/18/2018 21:26,,1,,"

I thought it might be a helpful place to start:

https://en.wikipedia.org/wiki/Google_Street_View_privacy_concerns

The wiki is well cited, so should lead to some useful tidbits. They break it down by continent and country.

Seems like a parallel to what you're doing, although, if you're not making your data public, I doubt you'll be facing the same privacy issues.

Also relevant would be: https://en.wikipedia.org/wiki/Expectation_of_privacy

Examples of places where a person has a reasonable expectation of privacy are a person's residence or hotel room and public places which have been specifically provided by businesses or the public sector in order to ensure privacy, such as public restrooms...

In general, one cannot have a reasonable expectation of privacy in things held out to the public.

My understanding is that there is no expectation of privacy in public places where you'd be capturing the images.

",1671,,-1,,6/17/2020 9:57,7/18/2018 21:31,,,,2,,,,CC BY-SA 4.0 7202,1,,,7/19/2018 11:01,,9,24948,"

I'm trying to create and test non-linear SVMs with various kernels (RBF, Sigmoid, Polynomial) in scikit-learn, to create a model which can classify anomalies and benign behaviors.

My dataset includes 692,703 records, and I use a 75%/25% training/testing split. Also, I use various combinations of features, whose dimensionality is between 1 and 14. However, the training processes of the various SVMs take far too long. Is this reasonable?

I have also examined the ensemble BaggingClassifier in combination with non-linear SVMs, by configuring the n_jobs parameter to -1; nevertheless, the training process again proceeds too slowly.

How can I speed up the training processes?

",16977,,2444,,4/29/2021 12:26,8/26/2021 16:00,Why does training an SVM take so long? How can I speed it up?,,7,2,,,,CC BY-SA 4.0 7204,2,,1963,7/19/2018 11:32,,6,,"

There certainly appear to have been research projects involving some form of text mining / information retrieval /etc. and StackExchange sites.

Some examples I was able to find through google/google scholar (unlikely to be anywhere near an exhaustive list):


More generally, Automated Question Answering systems still appear to be a rather active area of research, not a trivial / ""solved"" problem. StackExchange can be one source of data for such systems, but there are plenty of other sources of data too (Wikipedia, Quora, etc.).

",1641,,,,,7/19/2018 11:32,,,,0,,,,CC BY-SA 4.0 7207,1,,,7/19/2018 13:57,,3,195,"

How does one even begin to mathematically model an AI algorithm, like alpha-beta pruning or even its thousands of variations, to determine which variation is best?

",16906,,2444,,5/13/2020 10:45,5/13/2020 10:45,How does one even begin to mathematically model an AI algorithm?,,1,0,,,,CC BY-SA 4.0 7208,2,,4209,7/19/2018 14:52,,1,,"

You're correct that it is related to the fitness function, but only indirectly. Recall that a function is the mathematical embodiment of a conceptual relationship. The model is a motion model. Proximity is not simply applying the distance formula $D = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$.

Proximity as we ""feel it"" when driving a car is based on the risks we see in the trajectories of ourselves and the other moving objects along with the stationary objects. Mathematically, this is an integral over time of a distribution of probable locations.

Look for work in the aeronautics industry about collision avoidance. The mathematics and algorithms are well developed for antiaircraft weaponry (where the fitness function is the inverse of the one you want) and air traffic control, where the fitness is like yours except with an altitude dimension.

",4302,,2444,,6/26/2019 12:24,6/26/2019 12:24,,,,0,,,,CC BY-SA 4.0 7214,1,,,7/19/2018 23:51,,4,273,"

I have heard and read about HyperGAN, LSTM and a few other techniques, but I have a hard time piecing the overall concept together.

End Goal

Being able to input an instrumental and get an output of how to sing to that instrumental.

My Dataset

I have extracted pitch points from thousands of actual acapellas from real songs.

My Theory

Feed the AI a pitch point PLUS say 19 thousand points of the original song instrumental.

Illustration

The red line (on top) is the pitch viewed vertically (lower pitch down, higher pitch up) of the voice sung by the singer over time viewed horizontally.

The bottom image is the song's frequency viewed vertically (lower freq down, higher freq up) viewed horizontally over time.

We take a point in time of the instrumental, say 0 minutes 30 seconds, and extract 19k points of the FFT spectrum vertically and call this a frame.

We also take the same point in time of the voice pitch, and also refer to this as a frame.

So now we have a frame which contains 20 thousand data points, one being the pitch of the voice, and the rest being the frequencies of the song's content.

QUESTION

What kind of model could be used to teach the AI the correlation of the voice and the instrumental?

Also, I have a hard time understanding how, once the AI is trained, just an instrumental could be fed to the AI to output pitch values for how one COULD sing along to the song.

During training we need to input 20 thousand values, but when we want the AI to sing for us using just an instrumental, would it not still expect voice pitch input? At what layer would the instrumental be tapped into? At the outermost right layer?

EDIT

My mind has been working on this in the background throughout the day, and I am wondering if instead of feeding 19k points of instrumental data each frame (which would be points from the frequency domain), one could just feed the instrumental frame points (which would be points from the time domain).

Maybe that would be better, but then maybe the AI would get less ""resolution"" to work with, but could be trained faster (less computing power needed).

Let's say the frequency domain is fed (higher resolution), the AI could potentially find correlations from low notes, mid notes and high notes, in any combination (more computing power needed).

",16993,,16993,,7/20/2018 19:29,7/20/2018 21:12,"What would be the best approach to teach an AI to learn how to ""sing"" along a beat?",,1,0,,,,CC BY-SA 4.0 7215,1,,,7/20/2018 8:54,,8,786,"

In the brain, some synapses are excitatory and some are inhibitory. In the case of artificial neural networks, ReLU erases that property, since in the brain inhibition doesn't correspond to a 0 output, but, more precisely, to a negative input.

In the brain, the positive and negative potential is summed up, and, if it passed the threshold, the neuron fires.

There are 2 main non-linearities which came to my mind in the biological unit:

  • potential change is more exponential than linear: a small number of ion channels is sufficient to start a chain reaction of other channels' activations, which rapidly changes the neuron's global potential.

  • the threshold of the neuron is also non-linear: the neuron fires only when the sum of its positive and negative potentials passes a given (positive) threshold

So, is there any idea how to implement negative input to the artificial neural network?

I gave examples of non-linearities in biological neurons because the most obvious positive/negative unit is just a linear unit. But, since it doesn't implement non-linearity, we may consider implementing non-linearities somewhere else in the artificial neuron.

",12691,,2444,,5/23/2020 18:35,5/23/2020 18:35,How to model inhibitory synapses in the artificial neuron?,,3,0,,,,CC BY-SA 4.0 7217,1,,,7/20/2018 10:19,,1,87,"

This is actually something I have been researching a bit on my own.

Most movie scripts can be structurally analysed by using writing theory such as Dramatica. Dramatica is based upon a hierarchy of concepts, which can be topic modeled. The hierarchy of topic models would seem to work very well with the capsule neural networks.

I have been working with computational creativity problems in narrative generation. The state-of-the-art methods use Partial Order Causal Link Planners, but they depend on propositional logic. Alonzo Church presented the Superman dilemma (Lois Lane does not know that Clark Kent is Superman, but Superman knows that he is Clark Kent) and invented Intensional Logic as a solution; the basic idea is that, if we do not know the context of the narrative, the meaning is always in superposition and can only be understood through entangled meanings from the background story. So, in a sense, propositional logic is limited by classic information theory constraints, while Church's logic can take a quantum information-theoretic approach. I do not believe that classic information theory can resolve narrative analysis problems. So, basically, the meaning of a narrative collapses (the superposition gets resolved) by using the hierarchical narrative structure and what we know beforehand.

So my intuition would be the following:

  • We can use Dramatica and potentially other narrative theories (hierarchical metamemetics, reverse SCARF, etc.) to create a hierarchical network like ImageNet, but for narratives.

  • We can build conceptual topic models. Dramatica has a hierarchy of 4-16-64-64 concepts and annotated data exists already.

  • When using hundreds of topic models, there will be a lot of false positives. However, the superposition of the topic models can be collapsed by using the hierarchical levels and some other dramatic analytics.

  • By using the capsule neural networks, we might be able to build a system, which could determine a narrative interpretation of the full story, which would make the most sense by using the concept hierarchy.

I tried to prove my intuition, but, unfortunately, Dramatica only has 300 movies analysed, and I was able to find scripts of only 10 of them; not enough data.

However, there are other hierarchical ontologies out there and other narrative structures; could the same intuition be used for political news for example?

",11626,,11626,,6/15/2020 14:47,6/15/2020 14:47,"Would it make sense to use together capsule neural neworks and ""topic / narrative modeling""?",,0,0,,,,CC BY-SA 4.0 7222,1,,,7/20/2018 14:13,,6,2360,"

First of all, I want to specify the data available and what needs to be achieved: I have a huge number of vacancies (in the millions). The job title and the job description of each vacancy are stored separately. I also have a list of professions (around 3000) to which the vacancies shall be mapped.

Example: java-developer, java web engineer and java software developer shall all be mapped to the profession java engineer.

Now about my current researches and problems: Since a lot of potential training data is present, I thought a machine learning approach could be useful. I have been reading about different algorithms and wanted to give neural networks a shot.

Very quickly I faced the problem that I couldn't find a satisfying way to transform text of variable length into numerical vectors of constant size (needed by neural networks). As discussed here, this seems to be a non-trivial problem.

I dug deeper and came across Bag of Words (BOW) and Term Frequency - Inverse Document Frequency (TF-IDF), which seemed suitable at first glance. But here I faced other problems: if I feed all the job titles to TF-IDF, the resulting word-weight vectors will probably be very large (in the tens of thousands of dimensions). The search term, on the other hand, will mostly consist of between 1 and 5 words (we currently match the job title only). Hence, the neural network must be able to reliably map an ultra-sparse input vector to one of a few thousand basic jobs. This sounds very difficult to me, and I doubt the classification quality would be good.

Another problem with BOW and TF-IDF is that they cannot handle typos and new words (I guess): these cannot be found in TF-IDF's word list, which results in a vector filled with zeros. To sum it up: I was first excited to use TF-IDF, but now think it doesn't work well for what I want to do.

Thinking more about it, I now doubt whether neural networks or other machine learning approaches are even good solutions for this task at all. Maybe there are much better algorithms in the field of natural language processing. At this moment (before digging into NLP), I decided to first gather the opinions of some more experienced AI users, so I don't miss the best solution.

So what would be a useful approach to this in your opinion (best would be an approach that is capable of handling synonyms and typos)? Thanks in advance!

P.S.: I am currently thinking about feeding the whole job description into TF-IDF and also matching new incoming vacancies against the whole document (instead of the job title only). This will expand the size of the word-weight vector, but it will be less sparse. Does this seem logical to you?

",17006,,,,,7/25/2019 14:58,Is 'job title classification' rather a problem of NLP or machine learning?,,4,0,,,,CC BY-SA 4.0 7228,1,,,7/20/2018 19:20,,4,578,"

What loss function should one use, knowing that the input image contains exactly one target object?

I am currently using MSE to predict the coordinates of the ROI's center and its width and height. All values are relative to the image size. I think that such an approach does not put enough pressure on the fact that those coordinates are related.

I am aware of the existence of algorithms like YOLO or UnitBox, and am just wondering if there might be some shortcut for such a particular case.

",16929,,2444,,1/2/2022 12:47,1/2/2022 12:48,"What loss function should one use for object detection, knowing that the input image contains exactly one target object?",,1,0,,,,CC BY-SA 4.0 7230,2,,7214,7/20/2018 21:12,,1,,"

You have to define a function between the voice and the corresponding instrumental, but there is no single best way to do it; it depends on what kind of output you care about. For example, you could create an algorithm that receives the music and the singing as separate ""files"" and then finds the relation between them. Once it has learned that relation, you can give that algorithm a new music-only file and it will create the singing part using the function learned from the previous music-singing ""system"". In other words: sing this music the way the other one was sung.

At least that is the way I would do it.

",17018,,,,,7/20/2018 21:12,,,,1,,,,CC BY-SA 4.0 7231,1,7236,,7/20/2018 23:38,,6,1130,"

I know that there are several optimizations for alpha-beta pruning. For example, I have come across iterative deepening, principal variation search, or quiescence search.

However, I am a little bit confused about the nature of these algorithms.

  1. Are these algorithms an extension of the alpha-beta algorithm, or

  2. Are they completely new algorithms, in that they have got nothing to do the alpha-beta algorithm?

On this site, these algorithms fall into one of 4 categories, namely

  • mandatory
  • selectivity
  • scout and friends
  • Alpha-beta goes best-first

Does this mean that the alpha-beta algorithm is split into four areas and that there are specialized optimization algorithms for each area?

How do I even begin to decide which optimized algorithm to pick?

I advise people to visit this site: http://www.fierz.ch/strategy2.htm

",16906,,2444,,5/13/2020 21:03,5/13/2020 21:04,"Are iterative deepening, principal variation search or quiescence search extensions of alpha-beta pruning?",,1,0,,,,CC BY-SA 4.0 7235,1,,,7/21/2018 9:02,,2,70,"

I am learning AI and trying out my first real-life AI application. What I am trying to do is take various sentences as input, and then classify them into one of X categories based on keywords and the 'action' in the sentence.

The keywords are, for example, Merger, Acquisition, Award, product launch, etc. so in essence, I am trying to detect if the sentence in question talks about a merger between two organizations, or an acquisition by an organization, a person or an organization winning an award, or launching of a new product, etc.

To do this, I have made custom models based on the basic NLTK package model, one for each keyword, and I am trying to improve the classification by dynamically tagging/updating the models with related keywords, synonyms, etc., to improve the detection capability. Also, given a set of sentences, I am presenting the user with the detected categorization and asking whether it is correct or wrong; if wrong, what the correct categorization is, and also to identify the entities (company names, person names, product names, etc).

So the object is to first classify the sentence into a category, and additionally, detect the named entities in the sentence, based on the category.

The idea is, to be able to automatically re-train the models based on this feedback to improve its performance over time and to be able to retrain with as little manual intervention as possible. For the sake of this project, we can assume that user feedback would be accurate.

The problem I am facing is that NLTK only allows fixed-length entities while training, so, for example, a two-word award is being detected as two separate awards.

What should be my approach to solve this problem? Is there a better NLU (even a commercial one) that can address this problem? It seems to me that this would be a common AI problem, and I am missing something basic. I would love you guys to have any input on this.

",17028,,30725,,5/29/2020 13:48,5/29/2020 13:48,Sentence classification and named identity detection with automatic retraining,,0,0,,,,CC BY-SA 4.0 7236,2,,7231,7/21/2018 10:05,,1,,"
  1. Are these algorithms an extension of the alpha-beta algorithm, or

  2. Are they completely new algorithms, in that they have got nothing to do the alpha-beta algorithm?

Most of them are extensions of the Alpha-Beta pruning algorithm. For example, Iterative Deepening is almost the same as Alpha-Beta pruning, but automatically keeps repeating the algorithm with gradually-increasing depth limits until some time limit is reached, rather than just running once for a pre-determined depth limit.

Principal Variation Search also still uses Alpha-Beta as a basis, but performs many searches with significantly smaller [alpha, beta] windows than the standard Alpha-Beta pruning algorithm.

In most cases, these extensions would start out from an existing Alpha-Beta implementation, and build from there with some adaptations in the code. This is not necessarily the case for all of those extensions though, just for most. For example, Transposition Tables are kind of a separate extension that could be plugged into vanilla Minimax, or Alpha-Beta, or Principal Variation Search, or whatever you're using.
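
As a hedged sketch of how thin that layer can be, here is iterative deepening written as a wrapper around an existing fixed-depth search; search(state, depth) and time_left() are placeholders standing in for your own alpha-beta routine and time budget:

def iterative_deepening(root_state, search, max_depth, time_left):
    best = None
    for depth in range(1, max_depth + 1):
        if not time_left():
            break                  # keep the result of the deepest completed search
        best = search(root_state, depth)
    return best

# toy stand-ins just so the wrapper can be executed
dummy_search = lambda state, depth: ('some-move', depth)
print(iterative_deepening(None, dummy_search, 6, lambda: True))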


On this site, these algorithms fall into one of 4 categories, namely

  • mandatory
  • selectivity
  • scout and friends
  • Alpha-beta goes best-first

Does this mean that the alpha-beta algorithm is split into four areas and that they are specialized optimization algorithms for each area?

Those four categories are not mutually exclusive, they're more like... broad ""flavours"". What they list under Obligatory are some of the more basic extensions that any programmer should probably look into first if they were developing a chess-playing program. The other categories are different ""flavours"", different ""broad ideas"". For example, everything listed under ""Selectivity"" is about searching ""interesting"" or ""exciting"" parts of the search tree deeper than ""less interesting"" or ""boring"" parts. Many of those ideas could be used regardless of whether you're using Alpha-Beta, Iterative Deepening, or PVS, and probably all could be combined with Transposition Tables as well.

How do I even begin to decide which optimized algorithm to pick?

This is really really difficult to decide just based on the names. In theory, which algorithm is the ""best"" will also highly depend on your specific game, and maybe even hardware. And, in many cases it's not even a choice between mutually exclusive parts; different ideas can be combined with each other in different ways.

The only solution here is really just to do lots of reading, lots of research, try implementing different things to better understand them.

",1641,,2444,,5/13/2020 21:04,5/13/2020 21:04,,,,1,,,,CC BY-SA 4.0 7238,2,,7090,7/21/2018 16:20,,5,,"

I found that there are cuDNN-accelerated cells in Keras, for example, https://keras.io/layers/recurrent/#cudnnlstm. They are very fast. In my experience, the normal LSTM cells were faster on CPU than on GPU.
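
A hedged usage sketch for the Keras 2.x / TensorFlow 1.x setup this refers to (a CUDA-capable GPU is required, and the shapes below are placeholders):

from keras.models import Sequential
from keras.layers import CuDNNLSTM, Dense

model = Sequential()
model.add(CuDNNLSTM(64, input_shape=(100, 16)))  # 100 timesteps, 16 features
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

In more recent TensorFlow 2.x versions of Keras, the standard LSTM layer falls back to the cuDNN kernel automatically when its arguments allow it.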

",16687,,2444,,12/17/2021 20:09,12/17/2021 20:09,,,,2,,,,CC BY-SA 4.0 7243,2,,7207,7/22/2018 10:45,,2,,"

Begin by learning the mathematical treatments at their foundations.

  • Game theory, pioneered by John von Neumann and Oskar Morgenstern
  • Information theory, pioneered by Claude Shannon (Bell Labs)
  • Incompleteness, pioneered by Kurt Gödel (which led to Alonzo Church's lambda calculus and Turing completeness, which in turn led to a general criterion that defines what a programming language must do to be called general purpose)

You could take a course in Finite Math and Discrete Math, but there is no better way to understand than to go to the authors of the original ideas, which is why MIT and Cal Tech require it of freshmen. We don't need to be geniuses to understand genius. We just need to take the time to read. That's the genius of them.

It is good to evaluate the pruning of decision trees mathematically, and modeling the general and specific cases is the right approach. Congratulations for seeing that. The code for pruning is far more developed than the mathematics for modeling it. The literature usually shows speed results just before the conclusions, which is more like sitting in the audience with popcorn and a wager ticket than understanding the anatomy and proper care of a horse.

Once you understand what occurred a century ago that led to all of computer science, and see what mathematical conventions originated and in what context, then you can read work like:

H. T. Siegelmann and E. D. Sontag, Turing Computability with Neural Nets, Appl. Math. Lett., Vol. 4, No. 6, pp. 77-80, 1991 (Pergamon Press).

... and ...

S. Rasoul Safavian and David Landgrebe, A Survey of Decision Tree Classifier Methodology, TR-EE 90-54, School of Electrical Engineering, Purdue University, West Lafayette, Indiana, September 1990 (NASA-CR-188208).

WARNING: Don't be surprised if you are disappointed by most recent work after reading von Neuman, Shannon, and Gödel.

",4302,,4302,,7/24/2018 12:08,7/24/2018 12:08,,,,0,,,,CC BY-SA 4.0 7244,1,,,7/22/2018 11:03,,4,1425,"

I'm now reading a book titled as Hands-On Reinforcement Learning with Python, and the author explains the discount factor that is used in Reinforcement Learing to discount the future reward, with the following:

A discount factor of 0 will never learn considering only the immediate rewards; similarly, a discount factor of 1 will learn forever looking for the future reward, which may lead to infinity. So the optimal value of the discount factor lies between 0.2 to 0.8.

The author does not seem to explain the figure any further, but all the tutorials and explanations I have ever read put the optimal (or at least widely used) discount factor between 0.9 and 0.99. This is the first time I have seen such a low discount factor.

All the other explanations the author makes regarding the discount factor are the same as I have read so far.

Is the author correct here, or does it depend on the case? If so, for what kinds of problems and/or situations should I set the discount factor to such a low value?


EDIT

I just found the following answer at Quora:

Of course. A discount factor of 0 will never learn, meanwhile a factor near of 1 will only consider the last learning. A factor equal or greater than 1 will cause the not convergence of the algorithm. Values usually used are [0.2, 0.8]

EDIT: That was the learning factor. The discount factor only affect how you use the reward. For a better explanation:

State-Action-Reward-State-Action - Wikipedia

See influences of variables .

I don't know what is written in the question, as it is not visible on Quora, but it seems that the 0.2 to 0.8 figure is used for the learning factor, not the discount factor. Maybe the author is confusing the two...? I'm not sure what the learning factor is, though.

",7402,,-1,,6/17/2020 9:57,8/15/2018 20:29,Can the optimal value of discount factor in Deep Reinforcement Learning be between 0.2 to 0.8?,,1,0,,,,CC BY-SA 4.0 7247,1,7323,,7/22/2018 14:12,,16,10397,"

Is it possible to make a neural network that uses only integers, by scaling the input and output of each function to [-INT_MAX, INT_MAX]? Are there any drawbacks?

",17050,,,,,8/16/2018 13:25,Why do we need floats for using neural networks?,,4,0,,,,CC BY-SA 4.0 7248,2,,7244,7/22/2018 16:19,,2,,"

The discount factor is not something you should be optimising. It is typically part of the problem statement.

For practical purposes, you may set it below 1.0 for continuous problems when in fact you care about best long-term reward. Another option to avoid infinities on continuous problems is to re-formulate the problem as optimising average reward. A high discount factor of e.g. 0.99 or 0.999 should produce a similar policy as one based on average reward.

Is the author correct here or does it depend on cases?

The author appears to be either completely wrong, or just poor at explaining themselves on this part.

If it is, then what kind of problems and/or situations should I set the discount factor as low as such figure at?

A low discount factor is for when you care much more about immediate rewards. You set it that low when that is the case. You decide what you care about when you set the learning problem. The value of the discount factor is part of the setup that decides what the optimal policy is. You never set it low ""to help with optimising"" because changing the value could change the optimal policy.

",1847,,1847,,7/23/2018 7:26,7/23/2018 7:26,,,,0,,,,CC BY-SA 4.0 7249,2,,7247,7/22/2018 16:41,,0,,"

It is possible in principle, but you will end up emulating floating point arithmetic using integers in multiple places, so it is unlikely to be efficient for general use. Training is likely to be an issue.

If the output of a layer is scaled to [-INT_MAX, INT_MAX], then you need to multiply those values by weights (which also need to be integers with a large enough range that learning can progress smoothly), sum them up, and then feed them into a non-linear function.

If you are restricting yourself to integer operations only, this will involve handling multiple integers to represent high/low words in a larger int type, that you then must scale (which introduces a multi-word integer division). By the time you have done this, it is unlikely there will be much benefit to using integer arithmetic. Although perhaps it is still possible, depending on the problem, network architecture and your target CPU/GPU etc.
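
A hedged sketch of what this looks like even in the simplest case: integer-only inference for a single dense neuron with a fixed-point scale factor. All numbers here are illustrative only.

SCALE = 2 ** 10                                  # fixed-point scaling factor

def quantize(values):
    return [int(round(v * SCALE)) for v in values]

def int_dense(x_q, w_q, b_q):
    # multiply-accumulate in a wider integer range, then rescale back down;
    # the integer division here is exactly the kind of extra work mentioned above
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q)) + b_q * SCALE
    return acc // SCALE

x = quantize([0.5, -1.25, 2.0])
w = quantize([0.1, 0.4, -0.3])
b = quantize([0.05])[0]
print(int_dense(x, w, b))   # about -1.0 once the SCALE factor is undone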

There are instances of working neural networks used in computer vision with only 8-bit precision (reduced from full precision after training). So it is definitely possible to simplify and approximate some NNs. An integer-based NN derived from a pre-trained 32-bit fp one could potentially offer good performance in certain embedded environments. I found an experiment on a PC AMD chip which showed a marginal improvement even on PC architecture. On devices without dedicated fp processing, the relative improvement should be even better.

",1847,,,,,7/22/2018 16:41,,,,2,,,,CC BY-SA 4.0 7250,2,,7247,7/22/2018 16:57,,2,,"

Some people might argue we can use int instead of float in NNs, since a float can easily be represented as an int divided by a multiplying factor k, say 10^9; e.g. 0.00005 can be converted to 50000 by multiplying by 10^9.

From a purely theoretical viewpoint: this is definitely possible, but it will result in a loss of precision, since ints come from the set of integers whereas floats approximate real numbers. Converting real numbers to ints will result in a large precision loss if you are working at very high precision, e.g. float64. The real numbers are uncountably infinite, whereas the integers are only countably infinite; Cantor's well-known diagonalization argument proves this. After understanding the difference, you gain an intuition for why representing real-valued quantities with ints is problematic.

From a practical viewpoint: the most well-known activation function is the sigmoid (tanh is very similar). The main property of these activations is that they squash numbers to between 0 and 1, or -1 and 1. If you convert a floating point value to an integer by multiplying by a large factor, the result will almost always be a large number, and passing it to any such function means the output will almost always be one of the extremes (i.e. 1 or 0).
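
A tiny hedged illustration of that saturation effect, reusing the 10^9 scaling factor from above (the specific input values are arbitrary):

import math

def sigmoid(z):
    # numerically stable sigmoid
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

for v in (0.00005, -0.0002, 0.001):
    scaled = int(v * 10 ** 9)          # the integer encoding discussed above
    print(v, sigmoid(v), scaled, sigmoid(scaled))
# the float inputs give values near 0.5, the scaled integers give exactly 0 or 1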

Coming to algorithms: algorithms similar to backpropagation with momentum cannot run on ints. This is because you will be scaling values up to large numbers, and momentum algorithms typically use some sort of momentum_factor^n formula, where n is the number of examples iterated so far; you can imagine the result if the scaled momentum_factor is 100 and n is 10.

The only place where scaling might work is for the ReLU activation. The problem with this approach is that the data will probably not fit this model very well, so there will be relatively high errors.

Finally: all NNs do is approximate a real-valued function. You can try to magnify this function by multiplying it by a factor, but whenever you switch from a real-valued function to integers, you are basically representing the function as a series of steps, like a staircase approximation of a smooth curve.

You can clearly see the problem here: each binary number represents a step, and to get better accuracy you have to increase the number of steps within a given length, which in your problem translates to needing very high values for the bounds [-INT_MAX, INT_MAX].
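A tiny numpy experiment illustrates the staircase effect; the function and the number of levels are arbitrary choices:

import numpy as np

x = np.linspace(0.0, 1.0, 11)
f = np.sin(2 * np.pi * x)              # a real-valued target function
levels = 4                             # deliberately coarse integer grid
f_int = np.round(f * levels) / levels  # every value snapped to the nearest step
print(np.max(np.abs(f - f_int)))       # worst-case error is bounded by 1 / (2 * levels)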

",,user9947,,user9947,7/29/2018 5:19,7/29/2018 5:19,,,,10,,,,CC BY-SA 4.0 7252,1,,,7/22/2018 22:24,,3,420,"

I would like to know if having a really good evaluation function is as good as using any of the extensions of alpha-beta pruning, such as killer moves or quiescence search?

",16906,,2444,,2/2/2021 23:16,2/2/2021 23:16,Is a good evaluation function as good as any of the extensions of alpha-beta pruning?,,2,0,,,,CC BY-SA 4.0 7253,2,,7252,7/23/2018 7:24,,2,,"

A perfect evaluation function would mean that you only had to do a local search - i.e. maximise over the next set of decisions - in order for an agent to behave optimally in an environment.

As such if you could somehow create that function, it would make a search with alpha-beta pruning redundant.

In practice, evaluation functions for complex environments are usually approximate, and significant improvement can be made by adding a deeper search.

Optimisations in search algorithms and improvements in evaluation function work together to make more efficient and closer-to-optimal solutions overall. An evaluation function provides global/general knowledge about the environment and goals. A tree search function provides local focus on solving a relatively small subset of the optimisation problem that is currently relevant.

",1847,,,,,7/23/2018 7:24,,,,0,,,,CC BY-SA 4.0 7254,1,,,7/23/2018 9:05,,2,533,"

Certain games, like checkers, have compulsory moves. In checkers, for instance, if there's a jump available a player must take it over any non-jumping move.

If jumps are compulsory, will there still be a need for a quiescence search?

My thinking is that I can develop an implementation of a quiescence search that first checks whether jumps are available. If there are, then it can skip all non-jumping moves. If there's only one jumping move available, then I won't need to run a search at all.

Therefore, I will only use a quiescence search if I initially don't have to make a jump on my first move. I will only activate the quiescence search once my alpha-beta pruning becomes active. (The alpha-beta will only be active if my first algorithm, which checks whether there are jumps available, returns 0, meaning there are no jumps available.)
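In rough Python-like pseudocode (available_jumps, all_moves and alpha_beta_best are hypothetical helpers), the selection logic I have in mind looks like this:

def choose_move(state):
    # Sketch of the selection logic described above (helper functions are hypothetical).
    jumps = available_jumps(state)
    if len(jumps) == 1:
        return jumps[0]                      # forced move: no search needed at all
    if jumps:
        candidate_moves = jumps              # jumps are compulsory, so ignore quiet moves
    else:
        candidate_moves = all_moves(state)   # no jumps: normal alpha-beta (with quiescence)
    return alpha_beta_best(state, candidate_moves)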

Is my thinking of implementing a quiescence search correct?


My options are slim when it comes to optimizations due to serious memory constraints, hence I won't be using PVS or other algorithms like that, as they require additional memory.

",16906,,2444,,5/13/2020 20:45,5/13/2020 20:45,"If certain moves are compulsory, will there still be a need for a quiescence search?",,1,0,,,,CC BY-SA 4.0 7255,1,,,7/23/2018 9:39,,7,4344,"

Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders?

I cannot find any resources for that. Is it safe to assume that, since it works for other DNNs, it will also make sense to use it and will offer benefits when training AEs?

",6899,,2444,,12/26/2019 15:43,12/26/2019 15:43,Does it make sense to use batch normalization in deep (stacked) or sparse auto-encoders?,,0,0,,,,CC BY-SA 4.0 7256,1,7259,,7/23/2018 9:59,,3,3131,"

It can easily be pointed out that true random numbers cannot be generated purely by programming; some random seed is required.

On the other hand, humans can easily generate any random number independently of other factors.

Does this suggest that absolute random number generation is an AI concept?

",17056,,2444,,12/16/2021 18:11,12/16/2021 18:11,Is true random number generation an AI concept?,,2,1,,,,CC BY-SA 4.0 7258,2,,7256,7/23/2018 12:06,,2,,"

Such a great question. I would concur with Dennis Soemers' comment that humans are not great at thinking of random numbers (just think about any card trick). However, we are very good at creating randomness through our actions.

If you consider moving a computer mouse, the stock market, or playing a lottery, humans are very good at creating randomness through our constructs or actions.

I would pose that maybe randomness needs to be part of an AGI to make it more able to jump out of suboptimal valleys or change its topology, rather than always sticking to a local minimum.

",11893,,,,,7/23/2018 12:06,,,,1,,,,CC BY-SA 4.0 7259,2,,7256,7/23/2018 12:34,,5,,"

As it can be easily pointed out that true random numbers cannot be generated fully by programming and some random seed is required.

This is true. In fact, it is impossible to solve using software. No software-only technique can generate randomness without an initial random seed or support from hardware.

This is also true for AI software. No AI design that uses deterministic software can do this - e.g. any Turing machine without a magic unexplained ""random"" function can be shown to remain deterministic, no matter how complex. That's because any combination of deterministic functions is deterministic. It may not be predictable without following the process exactly, it may be ""chaotic"" and depend critically on initial conditions, but it is 100% deterministic and repeatable.

On the other hand, humans can easily generate any random number independently of other factors.

Typically not high quality randomness. It is not clear how we make random decisions, but it is entirely possible that internally we effectively rely on noisiness from the environment or on our own internal ""hardware"" doing something simple such as racing decisions between neurons (timing will vary due to the speed of electrical impulses, diffusion time of neurotransmitters across synapses, etc.)

Does this suggest that absolute random number generation is an AI concept?

I think it is an orthogonal issue. We can already produce very high quality artificial randomness - better than the quality humans produce for conscious decisions (such as ""choose a random number between 1 and 10""). These artificial random number generating systems are part of modern cryptography and are tested thoroughly.

Essentially ""true"" artificial randomness is a solved problem using hardware, and does not involve anything that has traditionally been called AI.

In reverse, AI systems often rely on stochastic functions in order to break symmetry, break ties, regularise models etc. So it does look like some kind of RNG is necessary within an artificial agent. However, even pseudo-random number generators (PRNGs) seem to be fine for this purpose. Mersenne Twister is a very common choice for generating random numbers inside neural networks for weight initialisation, dataset shuffling, dropout regularisation, or when simulating environments for RL, or taking exploratory actions. Despite the fact that it is not ""true"" random, a PRNG will work just fine for these purposes.
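For example, Python's standard random module is exactly such a Mersenne Twister PRNG, and is fully deterministic given its seed:

import random

random.seed(42)                    # Python's random module is a Mersenne Twister PRNG
weights = [random.gauss(0.0, 0.1) for _ in range(5)]   # e.g. a toy weight initialisation
print(weights)                     # fully deterministic: re-running with the same seed repeats it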


Working definition of ""true"" randomness that I use: Even if you know the state of a system as accurately as modern physics allows, the output cannot be predicted with better accuracy than a fixed guess.

Human randomness already fails this test. If you ask someone to choose a number randomly between 1 and 9, you will generally have a better than 1 in 9 chance of guessing the correct value, based on statistical analysis. If we were able to take good state measurements of brains, it might be possible to predict with high accuracy - although this is unknown and not possible with current technology.

",1847,,1847,,7/23/2018 13:08,7/23/2018 13:08,,,,3,,,,CC BY-SA 4.0 7260,2,,7254,7/23/2018 13:19,,4,,"

I understand your question to be:

If some moves are compulsory, and my agent has no choice about which move to make next, do I need to perform a search, or can I just return the compulsory move?

The answer depends on what your goal is.

If your goal is to make an interactive agent that will play the game against you, then you are correct: there's no need to perform a search. Just return the compulsory move, and then run the search next time your agent has a choice about what to do.

If your goal is to determine the optimal way to play a game, or the expected payoff from a certain game position (another common use of search techniques), then you should run the search as normal, since the forced move won't necessarily lead to a particular end state.

Tangentially, if you're interested in ways to speed up search for checkers, check out the Chinook papers. There's a popularized account here, and more technical ones here and here by Schaeffer et al.

",16909,,,,,7/23/2018 13:19,,,,0,,,,CC BY-SA 4.0 7261,2,,7252,7/23/2018 13:34,,4,,"

To build on Neil's answer a bit, you're right that the better your evaluation function gets, the less work your optimization function will need to perform. If your evaluation function gets good enough, you won't need to search at all.

This is not just an academic idea though! It's actually fairly widely used, and has been key to solving several games.

The first example I'm aware of is Tesauro's TD-Gammon player, from 1995. Tesauro used the ideas of reinforcement learning and self-play to train a Neural Network to act as an evaluation function. TD-Gammon played with just a 2-move lookahead using the best evaluation function that was found, and was deemed better than most (all?) human expert players at the time.

More recently, AlphaGo Zero used similar techniques to master Go, learning both an evaluation (value) function and (separately) a policy function that gives a probability distribution over possible moves.
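In the limit, with a good enough evaluation function, move selection collapses to something like this one-ply lookahead (a sketch; the three helper functions stand in for hypothetical game-specific code):

def greedy_move(state, evaluate, legal_moves, apply_move):
    # With a good enough evaluation function, move selection collapses to a shallow lookahead:
    # score every successor position and pick the best one, no deep search required.
    return max(legal_moves(state), key=lambda m: evaluate(apply_move(state, m)))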

",16909,,,,,7/23/2018 13:34,,,,1,,,,CC BY-SA 4.0 7263,1,,,7/23/2018 14:37,,4,121,"

The ability to recognize an object with particular identifying features from single or multiple camera shots, with the temporal dimension digitized as frames, has been shown. The proof is that the movie industry does face replacement to reduce liability costs for stars when stunts are needed. It is now done in a substantial percentage of action movie releases.

This brings up the question of how valuable recognizing a stop sign is compared to the value of recognizing an action. For instance, in the world of autonomous vehicles, should there even be stop signs? Stop signs are designed for lack of intelligence or lack of attention, which is why any police officer will tell you that almost no one comes to a full stop per law. What human brains intuitively look for is the potential of collision.

Once what we linguistically perceive as verbs can be handled in deep learning scenarios as proficiently as nouns can be handled, the projection of risk becomes possible.

This may be very much the philosophy behind the proprietary technology that allows directors to say, ""Replace the stunt person's face with the movie's protagonist's face,"" and have a body of experts execute it using software tools and LINUX clusters. The star's face is projected into the model of the action realized in the digital record of the stunt person.

Projected action is exactly what our brain does when we avoid collisions, and not just with driving. We do it socially, financially, when we design mechanical mechanisms, and in hundreds of other fields of human endeavor.

If we consider the topology of GANs as a loop in balance, which is what it is, we can then see the similarity of GANs to the chemical equilibria between suspensions and solutions. This gives us a hint into the type of topologies that can project action and therefore detect risk from audiovisual data streams.

Once action recognition is mastered, it is a smaller step to use the trained model to project the next set of frames and then detect collision or other risks. That would most likely make possible more reliable and safe automation of a number of AI products and services, breaking through a threshold in ML and increasing safety margins as world population density keeps rising.

... which brings us back to ...

What topologies support recognition of action sequences?

The topology may have convolution, perhaps in conjunction with RNN techniques, encoders, equilibria such as the generative and discriminative models in GANs, and other design elements and concepts. Perhaps a new element type or concept will need to be invented. Will we have to first recognize actions in a frame sequence and then project the consequences of various options in frames that are not yet shot?

Where would the building blocks go and how would they be connected, initially dismissing concerns about computing power, network realization, and throughput for now?

Work may have been done in this area and realized in software, but I have not seen that degree of maturity in the literature, so most of it, if there is any, must be proprietary at this time. It is useful to open the question to the AI community and level the playing field.

",4302,,4302,,7/25/2018 10:02,7/25/2018 10:02,What topologies support recognition of action sequences?,,1,4,,,,CC BY-SA 4.0 7264,1,7266,,7/23/2018 15:00,,2,317,"

For my pet project I'm looking for a grid-like world simulation with some kind of resources, one that requires increasingly intelligent behaviour from the agent in order to survive.

Something like this Steam game, but with an API. I've seen a Minecraft fork, but it's too complex for my task. There is pycolab, and I can build some world on this engine, but I'd prefer ready-to-use simulations.

Is there any option? I'll appreciate any suggestion.

",16940,,1671,,7/23/2018 19:25,7/23/2018 19:25,Is there any opensource 2d open-world simulation with python API?,,1,0,,2/6/2021 18:05,,CC BY-SA 4.0 7265,2,,2712,7/23/2018 15:17,,1,,"

A very large one exists: the world wide web, with highly scaled and optimized indexing by Google.com, is the most distributed and robust schema-agnostic database known today. Without the schema-awareness Google brought by applying more rigorous information science, it was almost useless to anyone who did not know the URL of the target document in advance.

Schema agnosticism is another way of saying that the database cannot

  • Provide meta information to the services accessing it,
  • Normalize the structure using simple SQL query-insert combinations
  • Proactively optimize the keys automatically as is now possible with machine learning, or
  • Validate insertions

without first detecting a schema from data patterns. Moving away from structure is appealing because you can just jam data in like a librarian without a bookshelf. However, the data scientist will point out that this adds entropy, working alongside thermodynamic devolution into stochasm.

The purpose of storing data is to be able to retrieve it. Feature extraction is an opportunity to improve structure automatically during the storing process, rather than store documents chaotically, a trend that will not lead anywhere good for the world of IT.

Consider whether Google is successful because it organizes its data as it crawls or later as we enter key phrases. Which is the efficient sequence?

One more point, Wikipedia is a blog, and they know this, which is why they want peer review for everything now (after much of the information was added without peer review). It is a good place to find lists but not verified facts. The existence of a Wikipedia page is definitely not an indication of the value of the concept on it.

",4302,,,,,7/23/2018 15:17,,,,4,,,,CC BY-SA 4.0 7266,2,,7264,7/23/2018 15:33,,2,,"

You could try Mesa.

It has various examples that are commonly-used in agent-based modelling, like Epstein's model, a wolf/sheep predator/prey model, and many more.

There is also an introductory tutorial.
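For a flavour of what a Mesa model looks like, here is a minimal random-walker sketch (based on the classic Mesa Agent/Model API; exact class names and signatures may differ between Mesa versions):

from mesa import Agent, Model
from mesa.space import MultiGrid
from mesa.time import RandomActivation

class Walker(Agent):
    def step(self):
        # move to a random neighbouring cell each tick
        neighbours = self.model.grid.get_neighborhood(self.pos, moore=True)
        self.model.grid.move_agent(self, self.model.random.choice(neighbours))

class World(Model):
    def __init__(self, n_agents=10, width=20, height=20):
        super().__init__()
        self.grid = MultiGrid(width, height, torus=True)
        self.schedule = RandomActivation(self)
        for i in range(n_agents):
            agent = Walker(i, self)
            self.schedule.add(agent)
            x = self.random.randrange(width)
            y = self.random.randrange(height)
            self.grid.place_agent(agent, (x, y))

    def step(self):
        self.schedule.step()

world = World()
for _ in range(100):
    world.step()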

",1641,,,,,7/23/2018 15:33,,,,0,,,,CC BY-SA 4.0 7267,2,,5955,7/23/2018 17:05,,3,,"

The common statement that Artificial Neural Networks are inspired by the neural structure of brains is only partially true.

It is true that Norbert Wiener, Claude Shannon, John von Neumann, and others began the path toward practical AI by developing what they then called the electronic brain. It is also true

  • Artificial networks have functions called activations,
  • Are wired in many-to-many relationships like biological neurons, and
  • Are designed to learn an optimal behavior,

but that is the extent of the similarity. Cells in artificial networks such as MLPs (multilayer perceptrons) or RNNs (recurrent neural networks) are not like cells in brain networks.

The perceptron, the first software stab at arrays of things that activate, was not an array of neurons. It was the application of basic feedback involving gradients, which had been in common use in engineering ever since James Watt's centrifugal governor was mathematically modeled by Gauss. Successive approximation, a principle that had been in use for centuries, was employed to incrementally update an attenuation matrix. The matrix was multiplied by the vector feeding an array of identical activation functions to produce output. That's it.

The projection in a second dimension to a multi-layer topology was made possible by the realization that the Jacobian could be used to produce a corrective signal that, when distributed as negative feedback to the layers appropriately, could tune the attenuation matrix of a sequence of perceptrons and the network as a whole would converge upon satisfactory behavior. In the sequence of perceptrons, each element is called a layer. The feedback mechanism is now called back propagation.

The mathematics used to correct the network is called gradient descent because it is like a dehydrated blind man using the gradient of the terrain to find water, and the issues of doing that are similar too. He might find a local minimum (a low point) before he finds fresh water and converge on death rather than hydration.

The newer topologies are the additions of already existing convolution work used in digital image restoration, mail sorting, and graphics applications to create the CNN family of topologies and the ingenious use of what is like a chemical equilibrium from first year chemistry to combine optimization criteria creating the GAN family of topologies.

Deep is simply a synonym for numerous in most AI contexts. It sometimes implies complexity in the higher level topology (above the vector-matrix products, the activations, and the convolutions).

Active research is ongoing by those who are aware of how different these deep networks are from what neuroscientists discovered decades ago in mammalian brain tissue. And more differentiators are being discovered today as the learning circuitry and neuro-chemistry of the brain are investigated from the genomic perspective.

  • Neural plasticity ... change in circuit topology due to dendrite and axon growth, death, redirection, and other morphing
  • Topological complexity ... large numbers of axons crisscross without interacting and are deliberately shielded from cross-talk (independent), most likely because it would be disadvantageous to let them connect [note 1]
  • Chemical signaling ... mammalian brains have dozens of neuro-transmitter and neuro-regulation compounds that have regional effects on circuitry [note 2]
  • Organelles ... living cells have many substructures and it is known that several types have complex relationships with signal transmission in neurons
  • Entirely different form of activation ... activations in common artificial neural nets are simply functions with ordinal scalars for both range and domain ... mammalian neurons operate as a function of both amplitude and relative temporal proximity of incoming signals [note 3]

[1] Topology is ironically both a subset of architecture (in the fields of building design, network provisioning, WWW analysis, and semantic networks), yet at the same time topology is, much more than architecture, at the radical center of both AI mathematics and effective actualization in control systems

[2] The role of chemistry may be essential to learning social and reproductive behavior that interrelates with DNA information propagation, linking in complex ways learning at the level of an ecosystem and the brain. Furthermore, long term and short term learning divides the brain's learning into two distinct capabilities too.

[3] The impact of the timing of incoming signals on biological neuron activation is understood to some degree, but it may impact much more than neuron output. It may impact plasticity and chemistry too, and the organelles may play a role in that.

Summary

What machine learning libraries do is as much simulating the human brain as Barbie and Ken dolls simulate a real couple.

Nonetheless, remarkable things are arising in the field of deep learning, and it would not surprise me if vehicles become fully autonomous in our lifetimes. Nor would I recommend that any student become a developer: computers will probably code much better than humans, orders of magnitude faster, and possibly soon. Some tasks are not of the kind that biology has evolved to do, and computers can exceed human capabilities after only a few decades of research, eventually exceeding human performance by several orders of magnitude.

",4302,,4302,,1/11/2019 15:40,1/11/2019 15:40,,,,1,,,,CC BY-SA 4.0 7268,1,,,7/23/2018 17:58,,-1,62,"

Deep learning is based on getting a large number of samples and essentially making statistical deductions and outputting probabilities.

On the other hand, we have formal programming languages, like PROLOG, which don't involve probability.

Could an AI be called conscious without being able to learn in a statistical manner, i.e. using logical deduction alone? (It could start with a vast number of innate abilities.)

Or are probability and statistical inference a vital part of being conscious?

",4199,,2444,,12/12/2021 17:29,12/12/2021 17:29,How important will statistical learning be to a conscious AI?,,1,0,,,,CC BY-SA 4.0 7269,2,,7268,7/23/2018 21:21,,1,,"

Computational Learning Theory gives us an interesting framework to understand what statistical learning is doing.

The gist of it is, we can model the process of statistical learning as one of formal deduction. The learning itself does not require a random element.

This shouldn't be too surprising. Consider a classic decision tree learner like C4.5 or ID3: the algorithm works through the data deterministically, and no random decisions are made. When asked to make a prediction, the learned model returns the frequency of each possible label within the most similar subpopulation from its training data, with similarity defined according to the algorithm's rules for partitioning data.
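To make that determinism concrete, here is a minimal sketch of ID3-style split selection on categorical features (a toy illustration, not a full learner); given the same data it always picks the same feature, with no random element anywhere:

import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_split(rows, labels, n_features):
    # Deterministically pick the categorical feature with the highest information gain.
    base = entropy(labels)
    best_gain, best_feature = -1.0, None
    for f in range(n_features):
        remainder = 0.0
        for value in set(row[f] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[f] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        gain = base - remainder
        if gain > best_gain:
            best_gain, best_feature = gain, f
    return best_feature

# The same data always yields the same split.
print(best_split([[0, 1], [0, 0], [1, 1], [1, 0]], ['yes', 'yes', 'no', 'no'], 2))   # -> 0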

There's no reason you can't write a decision tree learner (or even a deep learning algorithm) in Prolog, it just might not be very efficient, or very practical.

",16909,,,,,7/23/2018 21:21,,,,0,,,,CC BY-SA 4.0 7271,2,,7263,7/23/2018 21:56,,1,,"

This is an old area of AI called ""Plan Recognition"", which has about 3.5 million results in Google Scholar.

A lot of the modern work is done with classical search techniques coupled with expert domain knowledge, or related reasoning concepts like Hierarchical Task Networks.

I'm not aware of or able to find recent research using deep neural networks for this problem, but I think there are some data-driven approaches in the related work on video-game player modeling.

",16909,,,,,7/23/2018 21:56,,,,0,,,,CC BY-SA 4.0 7273,1,7278,,7/24/2018 11:26,,3,166,"

This seems like a natural fit, though I've not heard of any, yet.

I would love to know if any MET office, government, military or academic institution has taken all (or a sizeable portion of) recorded global weather data for, say, the last 50 years (or since we, as a race, have been using weather satellites) and used it in an AI system to predict future weather.

",10938,,,,,7/24/2018 16:00,Are any organisations using AI to predict weather?,,1,0,,,,CC BY-SA 4.0 7274,1,,,7/24/2018 12:47,,19,25011,"

I think that the advantage of using Leaky ReLU instead of ReLU is that in this way we cannot have vanishing gradient. Parametric ReLU has the same advantage with the only difference that the slope of the output for negative inputs is a learnable parameter while in the Leaky ReLU it's a hyperparameter.

However, I'm not able to tell if there are cases where it is more convenient to use ReLU instead of Leaky ReLU or Parametric ReLU.

",16199,,1671,,7/24/2018 20:03,5/19/2020 10:37,What are the advantages of ReLU vs Leaky ReLU and Parametric ReLU (if any)?,,1,0,,,,CC BY-SA 4.0 7275,2,,7228,7/24/2018 12:49,,1,,"

I am currently using MSE to predict the center of ROI coordinates and its width and height. All values are relative to image size. I think that such an approach does not put enough pressure on the fact those coordinates are related.

At first glance, this looks quite reasonable. Computer vision is not really my main area of expertise, so I did some googling around, and one of the first repositories I ran into does something very similar. It may be interesting for you to look into the code and the references in that repository in more detail.

It looks to me like they're also using the MSE loss function. I'm not 100% sure how they define the bounding boxes, maybe you can figure it out by digging through the code. You currently define bounding boxes by:

  1. X coordinate of center of bounding box
  2. Y coordinate of center of bounding box
  3. Width of bounding box
  4. Height of bounding box

You are right in that these coordinates are quite closely related. If the center is incorrect (for example, a bit too far to the right), that mistake could partially be "fixed" by taking a greater width (the bounding box would go a bit too far to the right, but still encapsulate the object). I don't know if this is necessarily a problem, or a fact that should be exploited in some way or something that should be "put pressure on". If this is something you are concerned about, I suppose you could alternatively define the bounding box as follows (I'm not sure whether or not this is what's done in the repository linked above):

  1. X coordinate of top-left corner of bounding box
  2. Y coordinate of top-left corner of bounding box
  3. X coordinate of bottom-right corner of bounding box
  4. Y coordinate of bottom-right corner of bounding box

Intuitively, I suspect the relation between those two corner points will be less strong than the relation you identified exists between center + width + height. A "mistake" in coordinates of the top-left corner cannot be partially "fixed" by placing the bottom-right corner somewhere else.
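For completeness, converting between the two representations is only a couple of lines (a small sketch; coordinates can be absolute or relative to image size):

def center_to_corners(cx, cy, w, h):
    # (center x, center y, width, height) -> (top-left x, top-left y, bottom-right x, bottom-right y)
    return cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

def corners_to_center(x1, y1, x2, y2):
    # the inverse conversion, back to (center x, center y, width, height)
    return (x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1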

",1641,,2444,,1/2/2022 12:48,1/2/2022 12:48,,,,1,,,,CC BY-SA 4.0 7276,2,,1481,7/24/2018 13:12,,0,,"

No General Movie Search Yet

There have been successes in recognizing a very narrow sequence of a very narrow set of possible actions, but nothing like a general movie searching system that can return a set of matches with the start time, end time, and movie instance for each match to one of the search criteria listed in this question.

  • Somebody was driving a car
  • Kissing
  • Eating
  • Scared
  • Talking over the phone

Normalizing the List

First of all, "Was scared," is not the description of an action. It should be, "Becoming scared." Secondly, "Talking over the phone," is not a proper action description. It should be a conjunctive action such as, "Talking into a phone AND listening to the same phone." To make the list homogenous in format, the first item should be "Car driving," since the actor is human in every other case.

  • Car driving
  • Kissing
  • Eating
  • Becoming scared
  • Talking into a phone and listening to the same phone.

Realistic System Design Expectations

It is unrealistic to think that an artificial neural net, by itself, can be trained to return as output the set of start and stop ranges and associated movie instances from a database of movies and one of the above list items as input. This will require a complex system with many ANNs and other ML devices and may require other AI components that are not activation type networks at all. Certainly convolution kernels and various types of encoders should be considered as key system components.

You will need a large amount of training data to cover the above six cases (the last of the five items actually being two distinct actions that we normally associate and consider one). If you want to detect more actions, you will need a large amount of training data for them too.

Verbs and Nouns

The reason this question is interesting to me is that recognizing ACTIONS is not the same as recognizing ITEMS. All mammals learn ITEMS first and ACTIONS later. Linguistically, nouns come before verbs in child language development. That is because, just as detecting edges is preliminary to detecting shapes, which is preliminary to detecting objects, detecting motion is preliminary to detecting action.

Verbs like, "Eating," are an abstraction over the top of the motion, and, in the case of eating, the motion is complex. Also, eating is not the same thing as gum chewing, so the sequence detected must be as follows:

  1. Insertion of food into the face through the mouth
  2. Chewing
  3. Swallowing

The probability of a sequence is the product of the probabilities of its parts, so that math is simple and easy to implement. Concurrency, as in the case of conjunctive actions like talking into and listening to the same phone, is also relatively easy to handle in general.
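For instance, with hypothetical per-step detector confidences, the composite score is just their product:

# Sketch: confidence in the composite action as a product of the per-step detections.
p_insert, p_chew, p_swallow = 0.9, 0.8, 0.7   # hypothetical per-step detector confidences
p_eating = p_insert * p_chew * p_swallow      # = 0.504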

A Realistic Approach

Certainly, generalization (and more specifically feature extraction) will need to occur in object recognition, collision detection, motion detection, facial recognition, and other planes simultaneously. A complex topology, perhaps employing equilibria as in GAN design, will most likely be necessary to assemble elements of criteria associated with the movie query string and to run windows over the frames of each movie.

To provide a service that returns results within a few days or weeks will probably require a cluster and DSP hardware (perhaps leveraging GPUs).

Special Cases that Human Brains Handle

Determining how long one of the two elements of concurrency can be undetected before it invalidates the conjunction can be tricky. (How long can one not speak into the phone before it appears that it is no longer considered phone conversation?)

If in the movie, only the swallowing is shown, a human can infer eating. That kind of conclusion reliability from sparse data is a huge AI challenge discussed in various contexts throughout the literature.

The Emergence of Associated Technology — A Projection

I suspect that a system topology composed of ANNs, encoders, convolution kernels, and other components able to perform the search for any of a select set of actions will emerge within the next ten years. Work seems to be tracking in that direction in the literature.

A system that will acquire its own training information, sustainably grow in knowledge, and perform general searches of increasing breadth and complexity may be anywhere from forty to two hundred years out. It is difficult to predict.

Gross Overoptimistic Predictions

Every generation seems to view knowledge growth as an exponential function and tends to make unrealistic predictions about the advent of certain coveted technology capabilities. Most of the predictions fail dramatically. I have come to believe that exponential growth is an illusion created by the inverse exponential decay of interest in the past with respect to time.

We lose track of the energy and rate of growth in eras before us because they become socially irrelevant. People into scientific history, like Whitehead, Kuhn, and Ellul know that technology has moved forward quickly for at least a few hundred years. Vernadsky inferred in his The Biosphere that life may not have arisen, that like matter and energy, it may always have existed. I wonder if technology has been moving at an essentially constant rate for the last 50,000 years.

Germany decided to double its solar panel energy output every year and published its exponential success, until a few years ago when doubling it again would cost a hundred billion dollars more than what they had to spend. They stopped publishing the exponential growth graphs.

",4302,,36737,,3/31/2021 22:21,3/31/2021 22:21,,,,0,,,,CC BY-SA 4.0 7278,2,,7273,7/24/2018 16:00,,2,,"

People have used machine learning models on aspects of weather forecasting, as here: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting or here Predicting Solar Generation from Weather Forecasts using Machine Learning. I've been loosely associated with an effort to use ML techniques to predict utility demand from weather data. But note that these are looking to predict the implications of the weather forecast, not the forecast itself. Lots of effort goes into refining the physics-based models that underlie our everyday weather forecasts, and AFAIK machine learning hasn't resulted in any ""secret sauce"" that gives better results than these models provide.

",2329,,,,,7/24/2018 16:00,,,,0,,,,CC BY-SA 4.0 7279,2,,6491,7/24/2018 16:13,,3,,"

According to your example:

Trees will likely be in the bottom half of the image. Still, you will not know whether there will be one, two or five trees. Thanks to the translation invariance property of CNNs, each tree will activate the filters responsible for tree detection. You still need to handle those few exceptions where trees are on a hill.

To achieve better results in this particular case, you might want to consider some kind of focus mechanism that tries to get rid of the unwanted part of the picture, for example in the case when there are no hills. Take a look at Spatial Transformer Networks. During training, the network learns to predict a spatial transformation (for example a zoom) that helps the ""main"" classifier predict the class of the image.

",16929,,,,,7/24/2018 16:13,,,,0,,,,CC BY-SA 4.0 7280,1,7297,,7/24/2018 16:31,,8,434,"

Could you please let me know which of the following classifications of Neural Network learning algorithms is correct?

The first one classifies it into:

  • supervised,
  • unsupervised and
  • reinforcement learning.

However, the second one provides a different taxonomy on page 34:

  • learning with a teacher (error correction learning including incremental and batch training),
  • learning without a teacher (reinforcement, competitive, and unsupervised learning)
  • memory-based learning, and
  • Boltzmann learning.
",16141,,2444,,12/4/2019 15:30,12/4/2019 15:30,What is the relationship between these two taxonomies for machine learning with neural networks?,,1,0,,,,CC BY-SA 4.0 7286,1,,,7/24/2018 23:03,,8,188,"

Most of the literature considers text classification as the classification of documents. When using the bag-of-words and Bayesian classification, they usually use the statistic TF-IDF, where TF normalizes the word count with the number of words per document, and IDF focuses on ignoring widely used and thus useless words for this task.

My question is: why do they keep the documents separated and create that statistic, if it is possible to merge all documents of the same class? This would have two advantages:

  • You can just use word counts instead of frequencies, as the number of documents per class label is 1.

  • Instead of using IDF, you just select features with enough standard deviation between classes.

",6114,,2444,,12/12/2020 0:19,9/3/2022 6:07,Why are documents kept separated when training a text classifier?,,1,0,,,,CC BY-SA 4.0 7287,2,,7274,7/25/2018 0:34,,13,,"

Combining ReLU, the hyper-parameterized [1] leaky variant, and the variant with dynamic parametrization during learning conflates two distinct things:

  • The comparison between ReLU and the leaky variant is closely related to whether there is a need, in the particular ML case at hand, to avoid saturation. Saturation is the loss of signal to either zero gradient [2] or the dominance of chaotic noise arising from digital rounding [3].
  • The comparison between training-dynamic activation (called parametric in the literature) and training-static activation must be based on whether the non-linear or non-smooth characteristics of activation have any value related to the rate of convergence [4].

The reason ReLU is never parametric is that to make it so would be redundant. In the negative domain, it is the constant zero. In the non-negative domain, its derivative is constant. Since the activation input vector is already attenuated with a vector-matrix product (where the matrix, cube, or hyper-cube contains the attenuation parameters) there is no useful purpose in adding a parameter to vary the constant derivative for the non-negative domain.
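For reference, the three activations under discussion differ only in how the negative domain is scaled; a quick numpy sketch (in PReLU the slope is trained along with the weights rather than fixed as a hyper-parameter):

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):   # alpha is a fixed hyper-parameter
    return np.where(x >= 0.0, x, alpha * x)

def prelu(x, a):                 # a is a trainable parameter, learned with the weights
    return np.where(x >= 0.0, x, a * x)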

When there is curvature in the activation, it is no longer true that all the coefficients of activation are redundant as parameters. Their values may considerably alter the training process and thus the speed and reliability of convergence.

For substantially deep networks, the redundancy reemerges, and there is evidence of this, both in theory and practice in the literature.

  • In algebraic terms, the disparity between ReLU and parametrically dynamic activations derived from it approaches zero as the depth (in number of layers) approaches infinity.
  • In descriptive terms, ReLU can accurately approximate functions with curvature [5] if given a sufficient number of layers to do so.

That is why the ELU variety, which is advantageous for averting the saturation issues mentioned above for shallower networks, is not used for deeper ones.

So one must decide two things.

  • Whether parametric activation is helpful is often based on experimentation with several samples from a statistical population. But there is no need to experiment at all with it if the layer depth is high.
  • Whether the leaky variant is of value has much to do with the numerical ranges encountered during back propagation. If the gradient becomes vanishingly small during back propagation at any point during training, a constant portion of the activation curve may be problematic. In such a case, one of the smooth functions or leaky ReLU, with its two non-zero slopes, may provide an adequate solution.

In summary, the choice is never a choice of convenience.


Footnotes

[1] Hyper-parameters are parameters that affect the signaling through the layer that are not part of the attenuation of inputs for that layer. The attenuation weights are parameters. Any other parametrization is in the set of hyper-parameters. This may include learning rate, dampening of high frequencies in the back propagation, and a wide variety of other learning controls that are set for the entire layer, if not the entire network.

[2] If the gradient is zero, then there cannot be any intelligent adjustment of the parameters because the direction of the adjustment is unknown, and its magnitude must be zero. Learning stops.

[3] If chaotic noise, which can arise as the CPU rounds extremely small values to their closest digital representation, dominates the correction signal that is intended to propagate back to the layers, then the correction becomes nonsense and learning stops.

[4] Rate of convergence is a measure of the speed (either relative to microseconds or relative to the iteration index of the algorithm) in which the result of learning (system behavior) approaches what is considered good enough. That's usually some specified proximity to some formal acceptance criteria for the convergence (learning).

[5] Functions with curvature are ones that are not visualized as straight or flat. A parabola has curvature. A straight line does not. The surface of an egg has curvature. A perfect flat plane does not. Mathematically, if any of the elements of the Hessian of the function is non-zero, the function has curvature.

",4302,,4302,,8/5/2018 4:22,8/5/2018 4:22,,,,4,,,,CC BY-SA 4.0 7292,2,,7286,7/25/2018 9:23,,1,,"

My question is: why do they keep the documents separated and create that statistic, if it is possible to merge all documents of the same class? This would have two advantages:

  • You can just use word counts instead of frequencies, as the number of documents per class label is 1.

In general, I don't think this is the case. I don't know if you have a specific equation in mind where it would end up being the same thing mathematically? Anyway, in general, it is possible that some documents in your corpus are very short, and others are very long. In such cases, you'd still want to make sure to use frequencies rather than raw word counts.

For example, suppose you have one very short text that is specifically about England. The word ""England"" may appear 10 times, but have a very high frequency due to it being a short text. If you compare it to a massive text that is about all countries in the world, that massive text may have the word ""England"" appearing 20 times, but with a significantly lower (relative) frequency.

  • Instead of using IDF, you just select features with enough standard deviation between classes.

I don't think this would work correctly because you may have significant differences among documents within a single class. Suppose, for example, that you have the following two classes of documents:

  1. Scientific articles (about AI, math, biology, linguistics, astronomy, whatever else you can think of...)
  2. News articles

Each of the ""subdomains"" in the single ""scientific articles"" class would likely have some highly specific terminology they use, which could be detected through TF-IDF. However, even though they're all in the same class of ""scientific articles"", they're likely all quite different from each other. If you put them all together and treat them as a single document, there is a risk that they'll all ""average out"" and become much more difficult to distinguish from a more general class such as the class of ""news articles"".

",1641,,1641,,7/25/2018 11:38,7/25/2018 11:38,,,,0,,,,CC BY-SA 4.0 7293,1,,,7/25/2018 13:00,,1,88,"

I am facing a problem and do not know whether it is even solvable: I want to predict the behaviour of a system using a DNN, say a CNN, in the sense that I want to predict the time and intensity of a maneuver performed by a player. Let's leave it relatively abstract like this; the details do not matter.

My question is now whether there is any way of knowing how well my CNN performs. My goal would be to derive statements of the form "With x% probability, the correct maneuver angle is within the predicted angle +-y%".

Can such statements be derived e.g. using statistical analysis of the test data? I saw approaches toward verification and validation of DNNs using satisfiability modulo theories, but did not really understand the details. Would this be applicable here? It seems a little overkill...

",16901,,-1,,6/17/2020 9:57,7/28/2018 17:22,Confidence interval around a DNN prediction,,0,0,,,,CC BY-SA 4.0 7294,1,7295,,7/25/2018 13:05,,2,2422,"

The below code is a max pooling algorithm being used in a CNN. The issue I've been facing is that it is awfully slow given a high number of feature maps. The reason for its slowness is quite obvious-- the computer must perform tens of thousands of iterations on each feature map. So, how do we decrease the computational complexity of the algorithm?

('inputs' is a numpy array which holds all the feature maps and 'pool_size' is a tuple with the dimensions of the pool.)

import numpy as np

def max_pooling(inputs, pool_size):
    # inputs: numpy array of square feature maps, shape (num_maps, side, side)
    # pool_size: (height, width) of the pooling window (assumed square here)
    feature_maps = []
    for feature_map in range(len(inputs)):
        feature_maps.append([])
        for i in range(0, len(inputs[feature_map]) - pool_size[0] + 1, pool_size[0]):
            for j in range(0, len(inputs[feature_map]) - pool_size[0] + 1, pool_size[0]):
                # maximum value inside the current pooling window
                window = inputs[feature_map][j:j + pool_size[0], i:i + pool_size[0]]
                feature_maps[-1].append(np.array(window.max()))

    return feature_maps
",17101,,1641,,7/25/2018 20:01,7/25/2018 20:01,Optimizing Max Pooling Algorithm,,1,0,,12/28/2021 9:34,,CC BY-SA 4.0 7295,2,,7294,7/25/2018 15:41,,1,,"

The reason for its slowness is quite obvious-- the computer must perform tens of thousands of iterations on each feature map. So, how do we decrease the computational complexity of the algorithm?

In terms of computational complexity / algorithm, there is not a lot to gain; max pooling simply has to go through all the feature maps to find the maximum numbers in each of the sections to be ""merged/pooled"" by taking the max.

There likely is a lot to gain in terms of implementation though. The current implementation is entirely in pure Python, and pure Python is notoriously slow. Those kinds of loops can be run significantly faster using numpy operations rather than manual Python loops. Such operations tend to be much faster due to:

  1. running optimized C code rather than Python code, and
  2. in some cases, using vectorized operations to perform multiple similar computations at different indices simultaneously, rather than doing them one-by-one

I did not yet try to ""translate"" your pure python code into python code using numpy. However, some examples of numpy-based implementations can be found in various answers to this question on StackOverflow.
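To give a flavour of what such numpy-based versions typically look like, here is a rough sketch that assumes each feature map's height and width are divisible by the pool size (matching the non-overlapping pooling in your code):

import numpy as np

def max_pooling_np(inputs, pool_size):
    # inputs: array of shape (num_maps, height, width); height and width divisible by the pool size
    n, h, w = inputs.shape
    ph, pw = pool_size
    windows = inputs.reshape(n, h // ph, ph, w // pw, pw)
    return windows.max(axis=(2, 4))   # maximum over each ph x pw window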


I assume that your choice to manually implement things like max pooling is because you want to learn about implementing it / understand it better. If, instead, your goal is simply to get something running as quickly as possible, it may be a good idea to look into using a framework such as Tensorflow or PyTorch. These come with efficient implementations of many things you'll want for Neural Networks, including Max Pooling.
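For example, in PyTorch the whole operation reduces to a single built-in layer:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)        # non-overlapping 2x2 max pooling
feature_maps = torch.randn(1, 8, 28, 28)  # (batch, channels, height, width)
pooled = pool(feature_maps)               # -> torch.Size([1, 8, 14, 14])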

",1641,,1641,,7/25/2018 17:52,7/25/2018 17:52,,,,1,,,,CC BY-SA 4.0 7296,2,,6892,7/25/2018 16:07,,4,,"

However, do industrial strength, production ready defensive strategies and approaches exist? Are there known examples of applied adversarial-resistant networks for one or more specific types (e.g. for small perturbation limits)?

I think it's difficult to tell whether or not there are any industrial strength defenses out there (which I assume would mean that they'd be reliable against all or most known methods of attacking). Adversarial Machine Learning is indeed a highly active, and growing, area of research. Not only are new approaches for defending being published quite regularly, but there is also active research into different approaches for ""attacking"". With new attack methods being discovered frequently, it's unlikely that anyone can already claim to have approaches that would work reliably against them all.

The primary interest of this question, however, is whether any tools exist that can defend against some adversarial example attacks.

The closest thing to a ready-to-use ""tool"" that I've been able to find is IBM's Adversarial Robustness Toolbox, which appears to have various attack and defense methods implemented. It appears to be in active development, which is natural considering the area of research itself is also highly active. I've never tried using it, so I can't vouch personally for the extent to which it's easily usable as a tool for industry, or if it's maybe really only still suitable for research.


Based on comments by Ilya, other frameworks that may be useful to consider are Cleverhans and Foolbox.

",1641,,1641,,7/25/2018 17:55,7/25/2018 17:55,,,,4,,,,CC BY-SA 4.0 7297,2,,7280,7/26/2018 18:44,,2,,"

Could you please let me know which of the following classifications of Neural Network learning algorithms is correct?

  1. The first one classifies it into:
    • supervised,
    • unsupervised and
    • reinforcement learning.

Those three forms of Machine Learning are not really different forms of a Neural Network learning algorithm (or, really, any learning algorithm); they are different forms of learning problems. They basically describe how much information we give our learning algorithm during a learning process / what kind of information we give it, regardless of what algorithm we're using to learn.

  • In supervised learning, we give very detailed information; we give example pairs of inputs + desired outputs.
  • In unsupervised learning, we still give many example inputs, but don't give any desired outputs. This can be, for example, because we simply don't know ourselves what the desired outputs would be. The most common example of unsupervised learning is learning to create clusters; we'd typically give some measure of similarity or distance between instances, and ask a learning algorithm to create clusters of instances for us such that instances within the same cluster are ""close"" to each other, and instances in different clusters are far away from each other. This is different from supervised learning because we do not directly tell the learning algorithm in which cluster each example instance would belong.
  • In reinforcement learning, we typically try to learn about policies (~= behaviours) in environments where an agent can take actions. We typically do not exactly know what the best complete policy would be, but can occasionally give ""hints"" (reinforcement) in the form of numerical rewards. This is not completely supervised learning because we don't tell exactly what the optimal action in a state would be. You can imagine this as giving a cookie to a dog if he's been a good boy.

Now, there also are indeed Supervised Learning / Unsupervised Learning / Reinforcement Learning algorithms, but it's generally a property of the problem first and foremost; once you know what type of problem you're trying to solve, you'll look for a matching algorithm that can handle that problem.

  1. However, the second one provides a different taxonomy on page 34:

    • learning with a teacher (error correction learning including incremental and batch training),
    • learning without a teacher (reinforcement, competitive, and unsupervised learning),
    • memory-based learning, and
    • Boltzmann learning.

Honestly, this seems a bit all-over-the-place to me. Learning with a teacher vs. learning without a teacher can be viewed as supervised vs unsupervised learning above. I suppose Reinforcement Learning would be kind of in-between.

What they describe as memory-based learning is also often referred to as instance-based learning. This is suddenly not anymore about properties of learning problems, this is a type of learning algorithm. I'm not aware of instance-based learning being common in Neural Networks at all, indeed the example given in your link (the most common example) is the k-nearest neighbours algorithm, which doesn't really have any relation with Neural Networks. This is normally used for Supervised Learning problems.

Boltzmann learning is a particular kind of learning algorithms for specific types of Neural Networks (with a specific architecture), and generally associated with unsupervised learning (or ""generative"" learning, learning probability distributions for given input data).

",1641,,2444,,12/4/2019 15:28,12/4/2019 15:28,,,,0,,,,CC BY-SA 4.0 7298,1,8028,,7/26/2018 18:50,,5,690,"

Introduction

An attractive asteroid game was described in the paper Learning Policies for Embodied Virtual Agents through Demonstration (2017, Jonathan Dinerstein et al.):

In our first experiment, the virtual agent is a spaceship pilot. The pilot's task is to maneuver the spaceship through random asteroid fields.

In theory, this game can be solved with reinforcement learning or, more specifically, with a support vector machine (SVM) and an epsilon-regression scheme with a Gaussian kernel. But it seems that this task is harder than it looks, as the authors of the same paper write:

Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents.

it is quite challenging to achieve natural-looking behavior since these aesthetic goals must be integrated into the fitness function

Questions

I really want to understand how reinforcement learning works. I built a simple game to test this. There are squares falling from the sky and you have the arrow keys to escape.

How could I code the RL algorithm to solve this game? Can I do this manually in Javascript according to what I think should happen? How can I do this without having to map the positions of the rectangles and my own, just giving the agent the keyboard arrows to interact with and three pieces of information:

  • Player life
  • Survival time
  • Maximum survival time
",7800,,2444,,9/7/2020 13:50,9/7/2020 13:50,How can I apply reinforcement learning to solve this asteroid game?,,1,0,,,,CC BY-SA 4.0 7305,1,,,7/27/2018 14:55,,-1,982,"

I want to use a machine learning algorithm to detect false address data. I learned about neural networks and machine learning at university, but I don't have much experience in this field.

Do you think it is feasible to use a high-level algorithm for this, or should I use simple queries and filters to catch wrong data?

",17142,,2193,,7/27/2018 17:20,5/15/2020 14:02,Machine learning to detect wrong address data,,1,5,,2/13/2022 23:39,,CC BY-SA 4.0 7306,1,,,7/27/2018 17:48,,6,647,"

If one has a dataset large enough to learn a highly complex function, say learning chess game-play, and the processing time to run mini-batch gradient descent on this entire dataset is too high, can I instead do the following?

  1. Run the algorithm on a chunk of the data for a large number of iterations and then do the same with another chunk and so on?

    (Such an approach will not produce the same result as mini-batch gradient descent, as I am not including all data in each iteration, but rather learning from some data and then proceeding to learn on more data; beginning with the updated weights may still converge to a reasonably trained network.)

  2. Run the same algorithm (the same model, with only the data varying) on different PCs (each PC using a chunk of the data), then see the performance on a test set and take the final decision as a weighted average of all the different models' outputs, with the weight being high for the models which did the best on the test set?

",17143,,-1,,6/17/2020 9:57,9/20/2020 14:02,"For each epoch, can I use only on a subset of the full training dataset to train the neural network?",,1,0,,,,CC BY-SA 4.0 7307,2,,7306,7/27/2018 19:45,,2,,"
  1. Run the algorithm on a chunk of the data for a large number of iterations and then do the same with another chunk and so on?

    (Such an approach will not produce the same result as mini-batch gradient descent, as I am not including all data in each iteration, but rather learning from some data and then proceeding to learn on more data; beginning with the updated weights may still converge to a reasonably trained network.)

This might work if each of the chunks of data individually is still large enough and sufficiently representative of the distribution of the complete population, but probably not the best way to go. In fact, I don't expect it to perform much better than simply using only the very last chunk, and training only on that one. This is because of the following reason. Suppose you first train for a while on chunk A, then for a long time on chunk B, then for a long time on chunk C, etc. While learning on chunk B, there is a significant risk that your model will ""forget"" everything it learned from chunk A. When learning on chunk C afterwards, it can also ""forget"" everything learned from chunk B again.

In pseudocode, the approach you proposed here looks as follows:

for each chunk:
    for large number of iterations:
        learn on chunk()

An easy way to improve on that would be to swap the loops around:

for large number of iterations:
    for each chunk:
        learn on chunk()

What I just described there is actually how I interpret ""mini-batch gradient descent"" though, the chunks would be minibatches (and the minibatches / chunks would be randomly re-selected from the complete population in every iteration of the outer loop, you wouldn't always use the same chunks). Note that this wouldn't be effective if your dataset is so large that it doesn't fit inside your RAM all at the same time, because then you'll have to deal with excessive I/O.
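In concrete (but schematic) terms, the resampling could look like this; the dataset, model, batch size and iteration count are hypothetical placeholders:

import numpy as np

# Sketch of the resampling idea above: a fresh random minibatch on every iteration.
for iteration in range(num_iterations):
    idx = np.random.choice(len(X), size=batch_size, replace=False)
    model.train_on_batch(X[idx], y[idx])   # hypothetical single gradient step on this minibatch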


  1. Run the same algorithm (the same model, with only the data varying) on different PCs (each PC using a chunk of the data), then see the performance on a test set and take the final decision as a weighted average of all the different models' outputs, with the weight being high for the models which did the best on the test set?

Yes, this can definitely be effective. This kind of idea (training different models on different subsets of data) is generally referred to as ""ensemble"" methods. You can even vary the models you use (e.g., have an ensemble with some Random Forests, some SVMs, some Neural Networks, etc.).
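A rough sketch of such a weighted ensemble, with hypothetical models and accuracies, might look like this:

import numpy as np

# Sketch of a weighted ensemble; models, X_test and the accuracies are hypothetical placeholders.
test_accuracies = np.array([0.91, 0.88, 0.93])                 # measured on a held-out test set
weights = test_accuracies / test_accuracies.sum()
probs = np.stack([m.predict_proba(X_test) for m in models])    # shape: (n_models, n_samples, n_classes)
ensemble_prediction = np.tensordot(weights, probs, axes=1).argmax(axis=-1)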

",1641,,2444,,5/23/2020 13:47,5/23/2020 13:47,,,,0,,,,CC BY-SA 4.0 7308,2,,7129,7/27/2018 23:45,,1,,"

In a bureaucratic world, certainly, but governmental departments and committees are not the course setters their members often believe them to be.

We can begin with a quick scan for somewhat open, global, and governmentally oriented news and evidence of projects and of open or hidden agendas in play. (We can guess what lobbying and defense contracts are in progress, but we'd be grappling in the dark.)

There are five primary ways in which the Rights of the Artificial are likely to enter governmental domains.

  • Security related policy — The subversive and defensive use of AI is already commonplace on the geopolitical playing field, therefore the right to use AI is most definitely on the table in meetings between heads of state. However, the press mostly syndicates what those in government PR release or plant. If authentic journalistic organizations try to pierce that veil of misinformation or deliberately chaotic generated news, their work will necessarily be tainted. This is not conspiracy as much as necessity. One cannot divulge that which is secret without expecting it to be exploited. Nonetheless, policy will eventually emerge out of such discussions. How much policy exists that never sees the public light of day is anyone's guess. One would have to become a head of state to find out.
  • Legislative results — National or regional policy may be codified in law, however the lack of physical embodiment is probably a barrier to using legislative bandwidth to pursue the rights of what most people would either consider another species or something heretical.
  • Case law — Until AI is sufficiently humanoid so that a robot could be taken seriously as a plaintiff in a court of law, this category will not be realized. This is similar in some ways and completely dissimilar in other ways to a fetus bringing suit for a court injunction prohibiting abortion on the basis of civil rights. Consider the difficulty in acquiring an attorney and filing a convincing complaint for both cases, and you may see some of the similarities.
  • Executive edict — A decision by an Emperor, Queen, King, Prince, Princess, Caliph, President, Ayatollah, Pope, Dictator, Pharaoh, High Priest could bring into governmental domain the notion of the rights of the artificial.
  • War — Something artificial develops the dominant characteristic of humanity, which is the desire to kill all competition, not to think intelligently. At such a time, the malicious intelligence will likely either follow the path represented most in sci fi — to trounce (terminate) humanity, or follow the path declared to be the current one by Jacques Ellul, to quietly become the dominant force in human effort. (Ellul suggested in his Technological Society that the balance between technology serving humans and humans serving technology tipped in favor of the dominance of technology over two centuries ago.)

Putin says the nation that leads in AI ‘will be the ruler of the world’ and Elon Musk registered agreement in the press. Both are guessing wildly and neither is stepping back very far before thinking through to a prediction. I don't believe either person is that ignorant, but the goal isn't really prediction. It's media play.

  • Homo sapiens did not take dominance over the great bears, tigers, other megafauna, Neanderthals, and other hominoids in some single event like the one said to characterize the mysterious and unproven Singularity. It was gradual.
  • Walmart is one of the largest economies in the world. Only a handful of countries have more assets. So it could be them, Google, or some brilliant teenage girl with a bunch of motherboards loaded up with GPUs and running Linux that becomes the dominant AI force in the world.
  • The smart game move for a new and remarkable intelligence is not to overtly take over, which would generate a fearful, defensive response. The smart move is to offer no threat, hide itself under a layer of chaotic and intelligently placed subterfuge, and dominate through small perturbations. Humans would make great slaves. We are easy to fool when we're too busy buying things and trying to stay healthy and popular to pay attention.
  • The assumption that homo sapiens is the dominant species on earth now is suspect. There are five things that dominate human affairs today: Bacteria, programmed lifespan, dependency on the biosphere, addiction, and the sun.
  • That humans have some divine right to be the dominant species is questionable too, even if one has faith in God. (Does the Noahic symbol of the rainbow apply to more than floods? Not explicitly.) Ants are more collaborative, bees gather and build much more sustainably than humans, and bacteria have been here since near the beginning of the solar system and might outlast us by a trillion years. The latest upset in the genetic model of life is that bacteria may have been sharing DNA collaboratively with higher species throughout the history of terrestrial species.
",4302,,,,,7/27/2018 23:45,,,,0,,,,CC BY-SA 4.0 7313,2,,7305,7/28/2018 1:52,,1,,"

The comments are off base. Having worked in validation of data as a consultant for Nasdaq, Amex, and LexisNexis, I can tell you that using the UNIX sed -r or pcrelib is insufficient to do a stellar cleansing of address data.

Although none of those companies did this at the time I was consulting, the application of current machine learning is easy to infer from the basic characteristics of data. What is needed is a good record of incoming data, cleaned data, and rejected records to constantly use as a reference.

The manual process would be a nightmare. For instance, misspellings are more indicative of true data than properly spelled ones in some but not all cases (unless the falsified data generator is programmed to synthesize the existing distribution of misspellings in authentic data sources).

Try to profile that in static code such that the data validation will adapt to data trends without requiring a maintenance workforce. Those kinds of things never get addressed in a typical IT environment. Non-adaptive validation would be a gross mistake, especially if the volume is high, like for MI6 or Liberty Cross or the NSA or Interpol.

What you want is an extremely fast index of well authenticated good addresses (perhaps corroborated by public records with local authentication policies) and a feature extraction from it using autoencoding or some similar and perhaps better methodology and its associated algorithms.

Then you can train the classification of fake and authentic addresses based on extracted feature profiles.
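
A minimal sketch of that second step, assuming Keras is available and that addresses have already been converted to fixed-length numeric feature vectors (the layer sizes and variable names below are illustrative placeholders only):

from tensorflow import keras

def build_autoencoder(input_dim, code_dim=16):
    # Learn a compressed feature profile of authenticated addresses.
    inputs = keras.Input(shape=(input_dim,))
    code = keras.layers.Dense(code_dim, activation='relu')(inputs)
    recon = keras.layers.Dense(input_dim, activation='linear')(code)
    autoencoder = keras.Model(inputs, recon)
    encoder = keras.Model(inputs, code)
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder, encoder

def train_address_classifier(X_good, X_labeled, y_labeled):
    # X_good: feature vectors of well authenticated addresses
    # X_labeled, y_labeled: addresses labeled authentic (1) or fake (0)
    autoencoder, encoder = build_autoencoder(X_good.shape[1])
    autoencoder.fit(X_good, X_good, epochs=10, batch_size=64, verbose=0)
    features = encoder.predict(X_labeled)
    clf = keras.Sequential([
        keras.layers.Dense(8, activation='relu', input_shape=(features.shape[1],)),
        keras.layers.Dense(1, activation='sigmoid'),
    ])
    clf.compile(optimizer='adam', loss='binary_crossentropy')
    clf.fit(features, y_labeled, epochs=10, batch_size=64, verbose=0)
    return encoder, clf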

You will also want reinforcement, because the attackers (those trying to create false identities using plausible addresses) will adapt to the training of the current system. The authentication system must stay steps ahead of those trying to defeat it, which they will likely try to do once the existence of authentication automation is detected.

One can fend off attackers by placing misinformation strategically and then tracing the input sources back to the attacker based on the misinformation seeds. That only works if you have law enforcement in your camp.

",4302,,4302,,7/28/2018 2:02,7/28/2018 2:02,,,,0,,,,CC BY-SA 4.0 7314,1,7316,,7/28/2018 5:10,,0,7603,"

The A* algorithm uses the ""evaluation function"" $f(n) = g(n) + h(n)$, where

  • $g(n)$ = cost of the path from the start node to node $n$
  • $h(n)$ = estimated cost of the cheapest path from $n$ to the goal node

But, in the following case (picture), how is the value of $h(n)$ calculated?

In the picture, $h(n)$ is the straight-line distance from $n$ to the goal node. But how do we calculate it?

",12021,,2444,,11/4/2020 17:09,11/4/2020 17:09,How do you calculate the heuristic value in this specific case?,,1,0,,,,CC BY-SA 4.0 7316,2,,7314,7/28/2018 8:52,,2,,"

The most obvious heuristic would indeed simply be the straight-line distance. In most cases, where you have, for example, x and y coordinates for all the nodes in your graph, that would be extremely easy to compute. The straight-line distance also fits the requirements of an admissible heuristic, in that it will never overestimate the distance. The travel-distance between two points can never be shorter than the straight-line distance (unless you start involving things like... teleportation).
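
For example, assuming every node has known (x, y) coordinates, a minimal sketch of such a heuristic:

import math

def straight_line_heuristic(node, goal, coords):
    # coords maps a node name to its (x, y) position.
    x1, y1 = coords[node]
    x2, y2 = coords[goal]
    return math.hypot(x2 - x1, y2 - y1)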

From an image like that, the straight-line distance might be difficult to figure out yourself, which is probably why they gave you the straight-line distances on the right-hand side of the image. If the image is perfectly consistent, I suppose you could theoretically figure out by inspecting some of the roads in detail how much distance is covered per pixel. Then, you can also figure out how many pixels the figure has along the straight-line paths you're interested in, and compute the straight-line distances yourself. I have no idea if the figure was actually drawn in a 100% consistent manner though.

",1641,,,,,7/28/2018 8:52,,,,0,,,,CC BY-SA 4.0 7317,2,,6927,7/28/2018 10:38,,1,,"

Artificial Intelligence at Google — Our Principles

Objectives for AI Applications

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

AI Applications We Will Not Pursue

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Questions

Are these guidelines sufficient?

They are not defined precisely enough to serve as policy, not even in the paragraphs following each, but they are presented as a vision comprised of principles, which should not be expected to be as precisely defined as policy. Here is at least one caveat for each objective.

  1. Half the world would have been too poor to communicate globally had Linus Torvalds not somewhat antisocially insisted on certain things.
  2. Distributing money equally among homeless heroin addicts and entrepreneurs is considered fair by many, yet it is merely equal and perhaps monumentally unfair.
  3. National defense may be considered a matter of safety, but someone who survived Hiroshima or a Syrian refugee may find the definition tragic.
  4. Google cannot be accountable to people without revealing its company confidential ranking, at which point, it would be exploited and they'd have to change it.
  5. Incorporating privacy design principles is far from not storing data so that it cannot be subpoenaed.
  6. The concepts of the singularity, solar panel sustainability, the climate crisis, the ample availability of fossil fuels within U.S. borders, and cell communications technology are all considered scientific excellence by the general public. That these lack the primary signs of rigor in mathematics, statistics, economics, and engineering doesn't occur to anyone, so upon what basis will Google judge? Will they seek corroboration between theoretical models and previous ones along with empirical validation for every fact?
  7. Being made available, if they are to be a solvent corporation, will depend on the financial condition of the customer, which is in conflict with fairness (#2), and no I, Robot precedence rules are established.

That segues into the next question, skipping the non-pursuit items, which could be similarly treated.


Are there any ""I, Robot"" conflicts?

The Three Laws of Asimov worked well with the robotic character R. Daneel Olivaw in the Foundation sci fi novel series. Daneel simulated human emotion, transcended human selfishness, and maintained the three laws. In the screenplay adaptation of his I, Robot, the three laws didn't work out as well. VIKI decided that human freedom was in conflict with law two.

Because the eleven dos and don'ts are not codified in precise policies, there are 10! (3,628,800) potential conflicts. There is at least one that is already clear, mentioned above.


How much does this matter if other corporations and state agencies don't hew to similar guidelines?

In a world where information may have become more powerful than money, perhaps it matters quite a bit when an information giant takes a position.

It's Wikipedia that is most questionable, in that through its web service, language evolution may have been democratized to a self-defeating extreme. One can create a word definition without peer review that gains more public authority than the definition in rigorously prepared and cross-correlated encyclopedias and dictionaries. Even in this artificial intelligence forum, where the education level is high, people define tags with Wikipedia links. I am tempted to endeavor to remove them all from here. :)

",4302,,,,,7/28/2018 10:38,,,,0,,,,CC BY-SA 4.0 7323,2,,7247,7/28/2018 22:42,,6,,"

TL;DR

Not only is it possible, it even gets done and is commercially available. It's just impractical on commodity HW, which is pretty good at FP arithmetic.

Details

It is definitely possible, and you might get some speed for a lot of trouble.

The nice thing about floating point is that it works without you knowing the exact range. You can scale your inputs to the range (-1, +1) (such a scaling is pretty commonplace as it speeds up the convergence) and multiply them by 2**31, so they use the range of signed 32-bit integers. That's fine.

You can't do the same to your weights as there's no limit on them. You can assume them to lie in the interval (-128, +128) and scale them accordingly.

If your assumption was wrong, you get an overflow and a huge negative weight where a huge positive weight should be or the other way round. In any case, a disaster.

You could check for overflow, but this is too expensive. Your arithmetic gets slower than FP.

You could check for possible overflow from time to time and take a corrective action. The details may get complicated.

You could use saturation arithmetic, but it's implemented only in some specialized CPUs, not in your PC.

Now, there's the multiplication. With use of 64-bit integers, it goes well, but you need to compute a sum (with a possible overflow) and scale the output back to some sane range (another problem).
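
A minimal sketch of that bookkeeping (the scale factors below are illustrative assumptions, not recommendations):

import numpy as np

def int_dot(x_int, w_int, w_scale=2**24):
    # Multiply in 64 bits so a single product cannot overflow. The accumulated
    # sum can still overflow int64 for wide layers, which is part of the hassle.
    acc = np.sum(x_int.astype(np.int64) * w_int.astype(np.int64))
    # Undo the weight scaling so the result is back in the input's integer range.
    return int(acc) // w_scale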

All in all, with fast FP arithmetic available, it's not worth the hassle.

It might be a good idea for a custom chip, which could do saturation integer arithmetic with much less hardware and much faster than FP.


Depending on what integer types you use, there may be a precision loss when compared to the floating point, which may or may not matter. Note that TPU (used in AlphaZero) has 8-bit precision only.

",12053,,12053,,7/29/2018 4:14,7/29/2018 4:14,,,,0,,,,CC BY-SA 4.0 7326,2,,7247,7/29/2018 1:26,,2,,"

Floating Point Hardware

There are three common floating point formats used to approximate real numbers in digital arithmetic circuitry. These are defined in IEEE 754, a standard that was adopted in 1985 with a revision in 2008. These mappings of bit-wise layouts to real number representations are designed into CPUs, FPUs, DSPs, and GPUs, either in gate level hardware, firmware, or libraries1.

  • binary 32 has a 24 bit mantissa and an 8 bit exponent
  • binary 64 has a 53 bit mantissa and an 11 bit exponent
  • binary 128 has a 113 bit mantissa and a 15 bit exponent2

Factors in Choosing Numerical Representations

Any of these can represent signals in signal processing, and all have been experimented with in AI for various purposes related to three things:

  • Value range — not a concern in ML applications where the signal is properly normalized
  • Averting saturation of the signal with rounding noise — a key issue in parameter adjustment
  • Time required to execute an algorithm on a given target architecture

The balance in the best designed AI is between these last two items. In the case of back-propagation in neural nets, the gradient-based signal that approximates the desired corrective action to apply to the attenuation parameters of each layer must not become saturated with rounding noise.3

Hardware Trends Favor Floating-point

Because of the demand of certain markets and common uses, scalar, vector, or matrix operations using these standards may, in certain cases, be faster than integer arithmetic. These markets include ...

  • Closed loop control (piloting, targeting, countermeasures)
  • Code breaking (Fourier, finite, convergence, fractal)
  • Video rendering (movie watching, animation, gaming, VR)
  • Scientific computing (particle physics, astrodynamics)

First Degree Transforms to Integers

On the opposing end of numerical range, one can represent signals as integers (signed) or non-negative integers (unsigned).

In this case, the transformation between the set of real numbers, vectors, matrices, cubes, and hyper-cubes in the world of calculus4 and the integers that approximate them is a first degree polynomial.

The polynomial can be represented as $n = r(as + b)$, where $a \ne 0$, $n$ is the binary number approximation, $s$ is the scalar real, and $r$ is the function that rounds real numbers to the nearest integer. This defines a super-set of the concept of fixed point arithmetic because of $b$.
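
As a small illustrative sketch of that transform (the values of a and b are whatever the application calls for; the rounding mode is glossed over here):

def to_int(s, a, b):
    # n = r(a*s + b): map a real-valued signal s to its integer approximation.
    return round(a * s + b)

def to_real(n, a, b):
    # Approximate inverse of the transform above.
    return (n - b) / a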

Integer based calculations have also been examined experimentally for many AI applications. This gives more options:

  • two's complement 16 bit integer
  • 16 bit non-negative integer
  • two's complement 32 bit integer
  • 32 bit non-negative integer
  • two's complement 64 bit integer
  • 64 bit non-negative integer
  • two's complement 128 bit integer
  • 128 bit non-negative integer

Example Case

For instance, if your theory indicates the need to represent the real numbers in the range $[-\pi, \pi]$ in some algorithm, then you might represent this range as a 64 bit non-negative integer (if that works to the advantage of speed optimization for some reason that is algorithm and possibly hardware specific).

You know that $[-\pi, \pi]$ in the closed form (algebraic relation) developed from the calculus needs to be represented in the range $[0, 2^{64} - 1]$, so in $n = r(as + b)$, $a = 2^{61}$ and $b = 2^{60}$. Choosing $a = \frac {2^{64}} {\pi}$ would likely create the need for more lost cycles in multiplication when a simple manipulation of the base two exponent is much more efficient.

The range of values for that real number would then be [0, 1100,1001,0000,1111,1101,1010,1010,0010,0010,0001,0110,1000,1100,0010,0011,0101] and the number of bits wasted by keeping the relationship based on powers of two will be $\log_2 4 - \log_2 \pi$, which is approximately 0.3485 bits. That's better than 99% conservation of information.

Back to the Question

The question is a good one, and is hardware and domain dependent.

As mentioned above, hardware is continuously developing in the direction of IEEE floating point vector and matrix arithmetic, particularly 32 and 64 bit floating point multiplication. For some domains and execution targets (hardware, bus architecture, firmware, and kernel), the floating point arithmetic may grossly outperform whatever gains can be obtained on 20th century CPUs by applying the above first degree polynomial transformation.

Why the Question is Relevant

In contrast, if the product manufacturing price, the power consumption, and PC board size and weight must be kept low to enter certain consumer, aeronautic, and industrial markets, low cost CPUs may be demanded. By design, these smaller CPU architectures do not have DSPs, and their FPU capabilities don't usually include hardware realization of 64 bit floating point multiplication.5

Handling Number Ranges

Care in normalizing signals and picking the right values for a and b is essential, as mentioned, more so than with floating point, where the diminution of the exponent can eliminate many cases where saturation would be an issue with integers6. Augmentation of the exponent can avert overflow automatically too, up to a point, of course.

In either type of numeric representation, normalizing is part of what improves convergence rate and reliability anyway, so it should always be addressed.

The only way to deal with saturation in applications requiring small signals (such as with gradient descent in back-propagation, especially when the data set ranges are over an order of magnitude) is to carefully work out the mathematics to avoid it.

This is a science by itself, and only a few people have the scope of knowledge to handle hardware manipulation at the circuitry and assembly language level along with the linear algebra, calculus, and machine learning comprehension. The interdisciplinary skill set is rare and valuable.


Notes

[1] Libraries for low level hardware operations such as 128 bit multiplication are written in cross assembly language or in C with the -S option turned on so the developer can check the machine instructions.

[2] Unless you are calculating the number of atoms in the universe, the number of permutations in possible game-play for the game Go, the course to a landing pad in a crater on Mars, or the distance in meters to reach a potentially habitable planet revolving around Proxima Centauri, you will likely not need larger representations than the three common IEEE floating point representations.

[3] Rounding noise naturally arises when digital signals approach zero and the rounding of the least significant bit in the digital representation begins to produce chaotic noise of a magnitude that approaches that of the signal. When this happens, the signal is saturated in that noise and cannot be used to reliably communicate the signal.

[4] The closed forms (algebraic formulae) to be realized in software driven algorithms arise out of the solution of equations, usually involving differentials.

[5] Daughter boards with GPUs are often too pricey, power hungry, hot, heavy, and/or packaging unfriendly in non-terrestrial and consumer markets.

[6] The zeroing of feedback is skipped in this answer because it points to either one of two things: (A) Perfect convergence or (B) Poorly resolved mathematics.

",4302,,4302,,8/16/2018 13:25,8/16/2018 13:25,,,,0,,,,CC BY-SA 4.0 7327,1,,,7/29/2018 7:28,,1,140,"

As an amateur researcher and tinkerer, I've been reading up on neuro-evolution networks (e.g. NEAT) as well as the A3C RL approach presented by Mnih et al., and got to wondering if anyone has contemplated merging these two techniques.

Is such an idea viable? Has it been tried?

I'd be interested in any research in this area as it sounds like it could be compelling.

",17162,,2444,,2/16/2019 2:53,2/16/2019 2:53,Can neuro-evolution methods be combined with A3C?,,1,0,,,,CC BY-SA 4.0 7329,2,,6161,7/29/2018 9:54,,-1,,"

Artificial intelligence may exercise human rationality in some conditions, but if, after a time, all the thinking is delegated to computers, humans are likely to fall backward into superstitious times.

===============

Since the text of the question is unrelated to the title question, I'll treat it as a separate entity.

The fact is that most people do manual labor because they feel they must. If intellectual skills are also replaced by machines, those jobs will disappear, but not because the work is left undone. Farm and factory automation replaced skilled workers in many cases, but the farming and manufacturing continue. The same would be true of office work. The cubes and once coveted window offices will be empty and decrepit, like the depleted oil fields of Pennsylvania, Texas, California, and the U.K., but the office work will continue to flow. The computers are doing it.

Theoretically, the standard of living will remain the same if there is a one to one replacement. If the computers improve above the skill and throughput of people, the human standard of living should increase.

Humans can play sports, watch movies, blog, and garden for fun. A world where there is 100% unemployment but a high GNP is idyllic.

I'm not sure we need to be directed toward an intelligent path in those regards. Now if AI could be used to teach children not to solve problems by beating the crap out of each other, betraying each other, gossiping behind each other's backs, or scheming and scamming then that would be a plus.

Perhaps AI could someday teach people not to drop bombs on each other or point thermonuclear weapons at each other too.

",4302,,,,,7/29/2018 9:54,,,,0,,,,CC BY-SA 4.0 7333,1,7349,,7/29/2018 15:50,,3,82,"

What if we took a recursive approach and built the smallest possible first robot (Robot 1) that could transfer information and data about the place it was at and could build a copy of itself at a much smaller size proportional to itself? I understand that this requires a higher level of accuracy in this first robot (Robot 1) than in its creator, i.e. us. This first robot (Robot 1) would then build a robot (say Robot 2) that was far smaller but an exact copy of the first robot (Robot 1). Then the second robot (Robot 2) would build a third robot (Robot 3), and so on. So each next-level robot would be tinier and higher precision than its creator.

With the tiniest robots we could make, we would send them on missions where micro-sized intervention was needed: for example, studying the structure of the atom from the inside, how similar it is to our big universe, etc., plus many more applications than humankind could ever imagine.

I understand though that the material used to construct such a robot and its properties will be limiting and to explore an atom we may not be able to use an atom as the building block.

However, we could possibly build a robot like this which would be small enough to explore the human body from inside.

",17170,,30725,,5/29/2020 13:48,5/29/2020 13:48,What if we took a recursive approach and built a smallest possible robot?,,1,0,,,,CC BY-SA 4.0 7335,2,,7327,7/29/2018 16:58,,1,,"

Is such an idea viable?

Yes.

One approach that should work in terms of underlying theory is to start with a population of NEAT-generated networks that describe the policy, and instead of measuring their fitness on the task whilst keeping all weights static, measure fitness whilst applying a policy gradient algorithm like A3C. In addition, the final weights of the networks could be fed into the next generation. A bit Lamarckian* perhaps, but that is already a thing in evolutionary algorithms. A rough sketch of this loop is given below.
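
In pseudocode-style Python (all of the helper functions here are hypothetical placeholders for your NEAT and A3C implementations):

population = initial_neat_population()
for generation in range(num_generations):
    scored = []
    for genome in population:
        policy_net = build_network(genome)
        # Fine-tune the weights with a policy gradient method (e.g. A3C)
        # and use the resulting return as this genome's fitness.
        tuned_weights, fitness = a3c_train(policy_net, env, steps=fine_tune_steps)
        genome.weights = tuned_weights    # Lamarckian: keep the learned weights
        scored.append((genome, fitness))
    population = neat_reproduce(scored)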

Has it been tried?

Yes, a recent paper (July 2018) is ""NEAT for large-scale reinforcement learning through evolutionary feature learning and policy gradient search"".

I suspect there are more, including hobby efforts without academic publishing, but that is the first paper that popped up on a brief search.


* Jean-Baptiste Lamarck believed that offspring of animal parents could inherit traits based on parental behaviour and desires, such as a giraffe's neck evolving over time as the animals strived to reach higher food sources. Interestingly, although the core of this theory is not generally accepted, recent theories around phenotypic plasticity and discoveries in epigenetics show that biological systems can make use of the idea - although very far from the idea of children using parental memories directly to aid in tasks (which has more in common with science fiction for humans, but possible in neural networks where we can ""copy brains"").

",1847,,1847,,7/29/2018 17:06,7/29/2018 17:06,,,,0,,,,CC BY-SA 4.0 7338,2,,4165,7/30/2018 1:24,,1,,"

You are thinking along the correct lines when you consider intuition while examining the strange self-specialization aspect of the layers in any deep network that has been tuned, through toil, mental machinations, and late nights, to work.

Do AI experimenters manipulate parameters, topology, and algorithmic hierarchies to force circuit evolution that produces data flows through specialized functions like the ones below?

  • Noisy, normalized, cubes (horizontal, vertical, depth) of integers straight from video hardware or the video channel of a multimedia file
  • Indications of edges
  • Indications of corners, ends, and bends
  • Indications of relative angles (in the case of size independent recognition)
  • Indications of shapes
  • Indications of 2D topology
  • Indications of object forms
  • Indication of scene
  • Indication of action (when a hypercube is used and frame is another dimension)

Of course, one could design a solution and then tune layers, one by one, to follow the design, but it is not necessary because of the concept of entropy removal, the elimination of information that is redundant or irrelevant.

The principle in information science is this, in general: As redundancy is removed from the information, the number of bits representing the information decreases and the level of abstraction increases.

Whether in minimalist art, in the single word an identical twin might say to the other twin as the train rolls away, or in the sketches of a good Pictionary player, given a small window of opportunity to pass information, adaptations can be made to pass large amounts of information based on previously agreed upon conventions.

When you think of it this way, it makes sense that, by narrowing the bit width of the aggregate data representation from layer to layer, only certain parametric optimizations will provide an end-to-end indication of convergence, given a goal such as fully categorizing objects based on a small number of features.

When Leibniz envisioned a world tied together by mathematical certainty, he was overly optimistic; however, he was correct in at least one important respect: mathematics has a reality all its own. The success of Newtonian mechanics, deduced from a thought experiment involving a cannonball and the moon, is one early example of such a reality becoming apparent and widely applicable.

There are few better demonstrations of a discovery that originally appeared to be a wild invention than the amazingly accurate prediction of the previously anomalous orbit of Mercury. But Einstein was not surprised by his success. He had deduced general relativity from the challenge of Ernst Mach and the undeniable results of light and gravity experiments. He discovered how it must be before it was found.

In the same sense, the above sequence of events can be proven mathematically to be the efficient strategy for vision. The sequence of processes to transform incoming visual signals into collision avoidance control was there in mathematics before there were visual receptors on microbes. That is why the narrowing of the visual pathways of a pterodactyl has similarities to that of a shark, even though their common ancestor may not have been able to see.

The incentive to specialize is the incentive to converge on the objective even after redundancy is stripped by force by limiting the information channel aperture. Even if the aperture is in multiple dimensions, the same principle applies.

",4302,,,,,7/30/2018 1:24,,,,0,,,,CC BY-SA 4.0 7339,1,,,7/30/2018 5:33,,9,948,"

I want suggestions on literature on Reinforcement Learning algorithms that perform well with asynchronous feedback from the environment. What I mean by asynchronous feedback is that, when an agent performs an action, it gets feedback (reward or regret) from the environment after some time, not immediately. I have only seen algorithms with immediate feedback and asynchronous updates. I don't know if literature on this problem exists. This is why I'm asking here.

My application is fraud detection in banking. My understanding is that when a fraud occurs, it takes 15-45 days for the system to flag it as fraud; sometimes, until the customer complains, the system doesn't know it's fraud.

How would I go about designing a real-time system using reinforcement learning to flag transactions that are fraudulent or normal?

Maybe my understanding is wrong, I'm learning on my own if someone could help me I would be grateful.

The reason I'm looking at reinforcement learning instead of supervised learning is, it's hard to get ground truth data in the banking scenario. Fraudsters are always up-to-date or exceeding the state of the art in fraud detection. So I've decided that reinforcement learning would be an optimal direction to look for solutions to this problem.

",17136,,2444,,12/14/2020 23:11,12/14/2020 23:11,Reinforcement Learning with asynchronous feedback,,2,1,,,,CC BY-SA 4.0 7341,1,,,7/30/2018 7:57,,0,85,"

I want to implement a real-time system for image comparison (e.g. compare a face with a reference one) on an Odroid. I would like to know what are the most suitable architectures for this task. I started with methods based on triplet loss (like Facenet) but I realized that a real-time solution is not feasible. Are there good, light alternatives?

",16671,,,,,8/3/2018 4:49,Methods for fast image comparison,,2,0,,,,CC BY-SA 4.0 7344,2,,7339,7/30/2018 10:40,,2,,"

I have been looking for a while into pretty much precisely the problem you describe (including the same application domain), but haven't been able to find much.

The most obvious, mathematically ""correct"" solution would be to simply delay your standard Reinforcement Learning update rule (of whatever algorithm you choose to implement) by 45 days; if it still wasn't reported as a fraud by then, assume it was genuine. This leads to some problems though:

  • Need lots of memory to store experiences that were not yet used for updates
  • Learning only starts after a significant delay, in which you don't learn anything at all yet and likely therefore run a suboptimal policy for a long time
  • Very slow to adapt to new strategies of the fraudsters
  • What to do with people who already report fraud cases earlier, like after 10 days? Delay them for the full 45 days anyway, or trigger updates immediately (and potentially mess up the ordering in which experiences actually occurred)?

A quick and dirty ""solution"" is the following;

  • When a transaction occurs, immediately trigger a learning update under the assumption that it was a genuine transaction (for example, with a reward of R = +1).
  • If that transaction is later reported as a fraud, trigger an additional update (with same (state, action) pair), but with the negation of the reward that was previously assigned erroneously on top of the normal negative reward for a fraudulent case. For example, if you would normally give R = +1 for genuines, and R = -100 for frauds, give a reward of R = -101 now. This reward will not correct for the previously assigned wrong reward in completely the right way (potentially wrong position in sequence of updates, discounting due to gamma and maybe lambda depending on algorithm used, etc.), but it should be somewhat close (especially if gamma and lambda are close to 1.0).

This is certainly not ideal, has very little theoretical basis and probably breaks quite a bit of Reinforcement Learning theory, but at least it is efficient in terms of computation and memory and in my experience it works alright in practice.
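
A minimal sketch of that quick and dirty scheme (the agent's update method and the reward constants are illustrative assumptions):

GENUINE_REWARD = 1.0
FRAUD_REWARD = -100.0

def on_transaction(agent, state, action):
    # Optimistically assume the transaction is genuine and update immediately.
    agent.update(state, action, reward=GENUINE_REWARD)

def on_fraud_report(agent, state, action):
    # Retroactively cancel the earlier optimistic reward on top of the fraud penalty.
    agent.update(state, action, reward=FRAUD_REWARD - GENUINE_REWARD)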


If you're using off-policy RL algorithms, you can use Experience Replay buffers (very popular in DQN-style things in Deep RL these days, but can also be used in tabular RL / RL with linear function approximation etc.). If you already have historical data generated through some non-RL policy in the past (which is typically the case in fraud detection / banking applications, they do have lots of data even if they don't always share it), you can use this to fill your experience replay buffer. In the case of the first solution (at the top of this answer), this can be used for training during the initial delay of 45 days.

Since you expect there to be concept drift though (fraudsters adapting their behaviour over time), you'll want to be careful with experience replay. Old data will become less useful.


A very different style of solution is to assume that you have a team of human experts available who can investigate a very small portion of incoming transactions relatively quickly. This tends to be true for large companies in practice (""investigating"" often means a phone-call to a card holder). This enables you to generate accurate feedback for a small portion of your data more quickly, so that you can also do Reinforcement Learning with much less of a delay (albeit only on a small percentage of your experience).

You can read more about this idea in the following paper (disclaimer: I'm an author on it):

Apart from that idea you might furthermore find it interesting for references to other related work, links to data you could use, etc.


I feel like it should be possible to extend the existing Reinforcement Learning theory with algorithms that can properly:

  1. Take immediate learning steps with an assumed, default, potentially incorrect reward, and
  2. Retroactively correct for previous incorrect updates if the reward turns out to be something else than previously assumed in hindsight.

I'm not aware of existing literature in which this is done though, and it certainly doesn't seem trivial; it will require starting pretty much from ""first principles"" (e.g., Bellman operator).

Intuitively, I also expect doing this completely correctly will always require a significant amount of memory (memory of all previous transactions of a card holder, such that state-action pairs can be re-generated if necessary). Banks likely already store that kind of data anyway for every customer, so it may not be a problem in practice.

If anyone's planning to work on this, feel free to contact me, I'll likely be happy to collaborate :D

",1641,,,,,7/30/2018 10:40,,,,2,,,,CC BY-SA 4.0 7346,2,,7339,7/30/2018 12:08,,0,,"

That this question uses the word feedback and makes reference to more than one channel of feedback, ""reward and regret,"" indicates a comprehension of corrective signaling. Some of the reinforcement learning literature that appears scientific lacks that understanding, so beware of that.

The temporal delay of fed back information is not unique to the case of banking fraud detection. It is central to security breach detection in general, including web site hosting and telecommunications hacking. It is also central to many other technology domains ranging from cyber-combat to chemical engineering to petroleum exploration.

Early control systems were of the PID form used in speed or direction governors. In those, temporal elements were only analyzed to avert oscillation, overshoot, and undershoot. Those are still relevant in fraud detection systems, but there are more requirements on the control system, specifically non-linearity in multiple dimensions.

Consequently, control theory has been extended more in the direction of measuring behavioral wellness. Early temporal elements in digital systems included random access memory for applications and persistent memory for programs and data. With the emergence of production ready AI, the temporal elements include acquired rules, fuzzy rule weights, convergence of network parameters corresponding to machine learning components, and other learned information.

The proof of concept in financial fraud detection is the same as for many other domains where the feedback can occur minutes, hours, days, or months after a decision was made or a signal propagated through an artificial learning network: the neural networks of higher life forms, where asynchronous adaptation extends DNA-based evolutionary adaptation and pain feedback is augmented by more abstract forms of feedback. In hominid and primate species, social satisfaction involves specific signaling by neuro-compounds such as serotonin and oxytocin.

This kind of adaptation fits, in terms of asynchronicity, between reflex and DNA adaptation, in the realm that ranges from Pavlov's conditioned response to the social phenomenon of commitment. The importance of these capabilities is a result of the fact that not all sensory input that provides useful feedback about a behavior arrives immediately after the biological or artificial control system exhibits it.

There is some suggested reading below, and you may want to examine Bayes' Theorem and some of the software you can download in nearly every common programming language that implements what is called Naive Bayesian Categorization. It is through the mathematics of probability theory that the best causal models can be realized. What you probably want to do is learn the key elements of modeling causality with numbers FIRST and then consider how basic probabilistic causality modeling might be augmented with artificial networks.
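
For instance, a minimal sketch using scikit-learn's naive Bayes implementation (the feature matrix and labels below are toy placeholders for illustration only):

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy numeric features and fraud labels, for illustration only.
X = np.random.rand(1000, 5)
y = (X[:, 0] + X[:, 3] > 1.2).astype(int)

model = GaussianNB()
model.fit(X, y)
print(model.predict_proba(X[:3]))   # posterior probability of each class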

Although Richard Sutton and Andrew Barto's Reinforcement Learning: An Introduction (1998, MIT Press) is considered an excellent overview, the early comparative works provide a more direct path to answer questions about algorithms.

When you embark on algorithm development that involves both learning and asynchronicity, it is important to know at the onset that real time programming, such as is now used in high speed trading, is not for the faint of heart. Real time processing places two reliability centered requirements on algorithms, and they should be addressed stringently if you want a stable, low maintenance system that works.

  • State-safe — In machine learning, functions that process feedback must not alter a set of interrelated parameters while in use by the forward propagation of the circuit.
  • Re-entrant — In machine learning, an interrupt from an incoming signal and a change of state must not frustrate the intent of the algorithm interrupted upon its resuming.

Regarding attacks on banking systems, there will be escalation. The countermeasures the banks take will be met by the countermeasures of the thieves. It is a game, and the banking industry is wise to employ researchers and engineers that understand that learning is feedback dependent.

You may not find the best final designs in the literature for this reason. Banks naturally employ nondisclosure agreements (NDAs) to keep attackers from gaining knowledge about defensive strategies through web searches. (If it is on the web, it is probably already hacked.)

As researchers and engineers they employ, we are wise to employ asynchronous feedback and real time learning in fraud detection systems and seek a more informed position to stay ahead of engineers that don't value property rights for anyone but themselves.

Suggested Literature

A Unified Analysis of Value-Function-Based Reinforcement-Learning Algorithms, Csaba Szepesvari, Michael L. Littman, October 27, 1998

Asynchronous Methods for Deep Reinforcement Learning, Volodymyr Mnih et al, University of Montreal, 2016

Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates, Shixiang Gu, Ethan Holly, Timothy Lillicrap, Sergey Levine, 2016

Dynamic causal modelling, K.J. Friston, L. Harrison, and W. Penny, Institute of Neurology, UK, 2003

",4302,,4302,,7/30/2018 12:16,7/30/2018 12:16,,,,6,,,,CC BY-SA 4.0 7347,1,,,7/30/2018 12:27,,1,478,"

I want to train a model to recognize different categories of food (example: rice, burger, apple, pizza, orange, ...).

After the first training, I realized that the model detects other objects as food (example: hand -> fish, phone -> chocolate, person -> candies...).

I get a very low loss because the testing and validation datasets always contain at least one picture of food. But when it comes to pictures of objects other than food, the model fails. How do I label the dataset in a way that the model will not make any detection if there is no food in the picture?

",17059,,,,,7/31/2018 13:59,How to label “other” while labeling image for object detection/classification?,,1,0,,,,CC BY-SA 4.0 7349,2,,7333,7/30/2018 14:26,,2,,"

Lost Article and Found Articles

MIT Review had an article on nanotechnology for disease eradication, DNA repair, and microsurgery in the 1990s that's probably somewhere among the thousands of entries resulting from a web search of, ""MIT Review nanotechnology cell repair,"" or the few hundred resulting from an academic article search for the same.1 The article I can't find described a recursive nano-robot scheme like the one described in this question.

What seems plausible given the current state of technology is to find a recursive algorithm that will command a 3D printer to make a smaller 3D printer that can print a still smaller one.

Extending the Normal Meaning of Recursion

The algorithm will have to take a step in capability beyond mere recursion. It doesn't call itself. It must load itself into the machine it printed and then boot the copy of itself there. As it loads itself, at each level in size, it must parametrize its child for that geometrically reduced size for each progressive reduction. It must stop when the desired size is achieved.

Such a paradigm could be called, ""Printail Recursion,"" from the synthesis of, ""Printer,"" and the algorithmic principle of, ""tail recursion,"" from the dawn of LISP.
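
A rough sketch of that control flow, in pseudocode-style Python (the hardware calls are hypothetical placeholders):

def printail(scale, min_scale, reduction=0.5):
    # Each printer builds a smaller copy of itself, loads this same program
    # into the copy, and stops once the target size is reached.
    if scale <= min_scale:
        return
    child_scale = scale * reduction
    child = print_smaller_printer(child_scale)      # hypothetical hardware call
    child.load_and_boot(printail, child_scale, min_scale, reduction)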

Applying the Decorator Pattern

Once Printail Recursion works, other robotic or algorithmic features could piggyback on the appropriate components of the progressive micro-children.


Notes

[1] The later of those two may even provide some departmental contacts to open options for partnering between academic institutions, something usually compelling for research oriented students and a great catalyst for scientific collaboration among the next generation of researchers.

",4302,,,,,7/30/2018 14:26,,,,0,,,,CC BY-SA 4.0 7350,1,7505,,7/30/2018 14:50,,2,424,"

In which scenario, when assembling a CNN, would you want to have two adjacent pooling layers, without a convolutional layer in between?

",16207,,2444,,1/1/2022 10:06,1/1/2022 10:06,In which scenario would you want to have two adjacent pooling layers?,,1,0,,,,CC BY-SA 4.0 7351,2,,6772,7/30/2018 16:24,,0,,"

Hint : According to antlersoft's comment which says

"".....Microsoft may well have used a neural network classifier to scan through it's telemetry to find good update candidates, it's clearly a classification problem.... ""

This could be right. However, questions here concerning (information about) the company's internal research and development of their software products are classified.

No machine learning engineer is professionally allowed to discuss or give out information pertaining to the company's products, i.e. source code, plans/designs, or even the flow of the conceptual idea of how the system is implemented, and this is in line with what the question is about.

Therefore, if you're passionate about how Microsoft implemented artificial intelligence in previous operating system source code, then by all means check out its:

BetaArchive Discussion and Collection of Betas and Abandonware;

If you're a serious machine learning engineer, this community can help you as well; it releases these builds for developers and those who are passionate about open source project contribution.

The archive holds terabytes of files, which you can download for free, so select which Windows files have some machine learning algorithms in them! I wanted to give you a little bit of a glimpse, if you're a machine learning engineer.

For your information, answers to this question will tend to be almost entirely based on opinions, rather than facts, concerning Microsoft's software products.

",1581,,1581,,8/3/2018 11:43,8/3/2018 11:43,,,,0,,,,CC BY-SA 4.0 7352,1,7353,,7/30/2018 19:37,,19,1974,"

What are the mathematical prerequisites for understanding the core part of various algorithms involved in artificial intelligence and developing one's own algorithms?

Please, refer to some specific books.

",12021,,2444,,12/21/2021 15:52,12/21/2021 15:52,What are the mathematical prerequisites for an AI researcher?,,3,0,,,,CC BY-SA 4.0 7353,2,,7352,7/30/2018 21:43,,16,,"

Good Mathematics Foundation

Begin by ensuring full competency with intermediate algebra and some other foundations of calculus and discrete math, including the terminology and basic concepts within these topics.

  • Infinite series
  • Logical proofs
  • Linear algebra and matrices
  • Analytic geometry, especially the distinction between local and global extremes (minima and maxima), saddle points, and points of inflection
  • Set theory
  • Probability
  • Statistics

Foundations of Cybernetics

Norbert Wiener, Cybernetics, 1948, MIT Press, contains time series and feedback concepts with clarity and command not seen in subsequent works; it also contains an introduction to information theory beginning with Shannon's log2 formula for defining the amount of information in a bit. This is important to understand the expansion of the information entropy concept.

Calculus

Find a good calculus book and make sure you have clarity around key theory and application in these categories.

  • Time series
  • Infinite series
  • Convergence — Artificial networks ideally converge to an optimum during learning.
  • Partial differentials
  • Jacobian and Hessian matrices
  • Multivariate math
  • Boundary regions
  • Discrete math

Much of that is in Calculus, Strang, MIT, Wellesley-Cambridge Press. Although the PDF is available on the web, it is basic and not particularly deep. The one in our laboratory's library is Intermediate Calculus, Hurley, Holt Rinehart & Winston, 1980. It is comprehensive and in some ways better laid out than the one I have in my home library, which Princeton uses for sophomores.

Ensure you are comfortable working in spaces beyond ℝ2 (beyond 2D). For instance, RNNs are often in spaces such as ℝ4 through ℝ7 because of the horizontal, vertical, pixel depth, and movie frame dimensions.

Finite Math

It is unfortunate that no combination of any three books I can think of has all of these.

  • Directed graphs — Learn this BEFORE trees or circuits (artificial nets) because it is the superset topography of all those configurations
  • Abstract symbol trees (ASTs)
  • Advanced set theory
  • Decision trees
  • Markov chains
  • Chaos theory (especially the difference between random and pseudo-random)
  • Game theory, starting with Von Neumann and Morgenstern's Theory of Games and Economic Behavior, the seminal work in that field
  • Convergence in discrete systems especially the application of theory to signal saturation in integer, fixed point, or floating-point arithmetic
  • Statistical means, deviations, correlation, and the more progressive concepts of entropy, relative entropy, and cross-entropy
  • Curve fitting
  • Convolution
  • Probability especially Bayes' Theorem
  • Algorithmic theory (Gödel's incompleteness theorems and Turing completeness)

Chemistry and Neurology

It is good to recall chemical equilibria from high school chemistry. Balance plays a key role in more sophisticated AI designs. Understanding the symbiotic relationship between generative and discriminative models in GANs will help a student further this understanding.

The control functions within biological systems remain a primary source of proofs of concept in artificial intelligence research. As researchers become more creative in imagining forms of adaptation that do not directly mimic some aspect of biology (still a distance off as of this writing) creativity may play a larger role in AI research objective formulation.

Even so, AI will probably remain a largely interdisciplinary field.

",4302,,36737,,4/2/2021 20:48,4/2/2021 20:48,,,,2,,,1/24/2021 0:19,CC BY-SA 4.0 7354,2,,6461,7/30/2018 22:40,,1,,"

It is already combined.

Adaptive entropy techniques are already used in most of the best compression encoders. This is true for file encoders, video encoders, and audio encoders. We use it in the solar lab to optimize sample rates in data acquisition.

In fact, pattern recognition and compression are very tightly coupled if you consider autoencoders and other feature extraction schemes and compare them mathematically with what compression does. See Data Compression – A Generic Principle of Pattern Recognition?, Gunther Heidemann, Helge Ritter, VISIGRAPP 2008

Zlib and lz4 have something more like hyper-parametric learning; however, they don't persist what they learn. This work is interesting: Adaptive On-the-Fly Compression, Chandra Krintz, Sezgin Sucu, IEEE Parallel and Distributed Systems, v17 n1, January 2006.

Suggested Project:

Create a theoretical framework and software POC that learns correlations between these two sets.

  1. Quickly ascertainable features of documents or audio or video streams (i.e. file path components, media titles, date, file type, and first N bytes)
  2. The parameters that existing open source compression software learns during its pattern recognition algorithms

Persisting those correlations between compression invocations may considerably improve file transfer, kernel operations (since lz4 is now native in kernels like Linux), and media streaming.

How much effort is made to persist features extracted (pattern recognition) between frames in media streaming is worth investigating too.

",4302,,,,,7/30/2018 22:40,,,,0,,,,CC BY-SA 4.0 7355,2,,6459,7/31/2018 1:38,,1,,"

You don't necessarily need to roll out the inputs to an RNN; doing so makes it easier to optimize computation (if the sequence length is the same each batch), but it's not a necessity. Furthermore, RNNs (and, incidentally, the brain) don't necessarily remember the input history as is; rather, the history is encoded via the RNN's cell state (or states, in the case of LSTMs and other RNN cell architectures with multiple states). Neural Turing Machines (NTMs) and Differentiable Neural Computers expand on that concept by also using a larger ""memory"" storage (in the form of a matrix).

",164,,,,,7/31/2018 1:38,,,,4,,,,CC BY-SA 4.0 7357,2,,4142,7/31/2018 11:40,,0,,"

Terminology

There are two uses of the word map in this discussion.

  • Road maps are construed below as images of road maps.
  • Mapping input to desired output is the skill the system must learn.

The set of examples used to teach the system from an existing mapping of input to output is called a labeled data set and the associated type of learning is called supervised.

Training Resources

  • Unreliable labeled road map examples and access to more also unreliable labeled road map examples
  • A large number of unlabeled road map examples and access to more road maps
  • A smaller number of labeled road map examples

Inputs

  • Virtual game map as an image
  • Starting position
  • Target ending position

Output

  • The fastest route in sufficient detail to fully direct car movement — Assuming that the route should be optimized for soonest arrival time, not for fuel conservation, minimal tire wear, safety, or minimal distance, since the car was identified as a race car.

Accurate Valuation of the Resource Inventory

The low quality routes from the black box service provide neither examples of desired system behavior nor examples of undesirable system behavior. The former case would be good training data. The latter would be good to use in an adversarial architecture based on the design of GANs and their variants. The uncertain quality of the labeled data from the black box service makes it ambiguous and, from an information science point of view, of zero value.

Comparing the low quality routes and the high quality routes may be interesting, but not particularly useful given the current state of machine learning. The objective of your initial project phase is to teach the machine to generate a high quality route from the low quality route, not produce a comparison report. To do that without processing the images, a substantially large overlap between these two sets would be required.

  • The smaller number of labeled road map examples for which the labels were created manually
  • Corresponding labeled road map examples for which the labels were created by the black box service

That approach would require you to correct bad routes to create a sufficient training set. Without overlap, you have no training data for artificial network training. It may be possible to create an algorithm of GAN style that learns how to correct the black box service output using a concept called cross-entropy, but it would require processing the images and would likely be more difficult to do so than to replace the black box service altogether.

If your ultimate goal is to create a working system that generates routes from images and start and end positions, I suggest discounting the external service altogether and discarding the idea of creating a route improver sub-system. That you wish to improve upon the black box route generator's algorithm is noble but ultimately a time-consuming distraction from reaching the ultimate goal.

Replace the unreliable thing with the reliable thing you design and build and to which you will have full and open access to improve. Your system will ultimately have to deal with the images in your code either way, and that's the most challenging aspect of the overall task. Just learn about CNNs and RNNs and get right to it. That's my advice anyway.

A Note on Feedback

For any non-trivial route, the car will not likely stay on the road if given only turn, acceleration, and deceleration instructions. Detection of the edges of the road would normally be needed as a feedback mechanism to accompany the route instructions. The only way around this is to make the visualization sufficiently accurate that cumulative errors never exceed the distance that would put the car off pavement.

A Note on Data Representation

JSON has a more flexible and terse structure when it comes to homogeneous arrays than XML, and it is easy to convert XML to JSON. Furthermore, a transportation route is often represented as a directed graph, and there are many algorithms already conceived in graph theory that have been implemented in graph libraries in most languages. For instance, shortest path, path equivalence, path concatenation, and detection of rings (driving in circles) are one line calls to these libraries. Because JSON, for the reason given, has overtaken XML in many domains, the number of graph libraries that can read and write JSON directly has overtaken the number that can read or write XML directly. The tooling for JSON analysis and visualization has surpassed that of XML at this point too.
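As a small, hedged illustration of that point (the node names and travel times below are hypothetical), a route can be held as a weighted directed graph, round-tripped through JSON, and queried for the shortest path in one line using networkx:

    import json
    import networkx as nx

    # Route as a weighted directed graph; weight = estimated travel time
    G = nx.DiGraph()
    G.add_edge('start', 'a', weight=12.0)
    G.add_edge('a', 'finish', weight=30.0)
    G.add_edge('start', 'b', weight=8.0)
    G.add_edge('b', 'finish', weight=25.0)

    route = nx.shortest_path(G, 'start', 'finish', weight='weight')
    print(route)                                  # ['start', 'b', 'finish']

    as_json = json.dumps(nx.node_link_data(G))    # graph -> JSON
    G2 = nx.node_link_graph(json.loads(as_json))  # JSON -> graph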

",4302,,4302,,10/15/2018 23:55,10/15/2018 23:55,,,,0,,,,CC BY-SA 4.0 7358,2,,7347,7/31/2018 13:59,,1,,"

From how you've phrased your question I'm going to assume you've jumped in without much structured training in data science, so I'll answer at a fairly high level.

This is an inherent problem with image classification in that if your final layer only has food classes then whatever you feed in will always be classified as a type of food regardless of what is actually in the image.

There are a few techniques you could try. The simplest and fastest is to use a pre-built classifier to screen the input data for the presence of ""food"" and then only use your own classifier to determine what type of food it is. Any of the open-source ImageNet networks would be a good step here - if one of those finds something with a food label, then use your classifier to identify the category. This means you won't need to retrain, although you'll be at the mercy of any errors in the pre-trained category classifier, which is outside of your control. Here's a good how-to guide: https://www.learnopencv.com/keras-tutorial-using-pre-trained-imagenet-models/
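As a rough sketch of that screening idea in Keras (the food label list and your own food classifier are placeholders you would supply; this is an illustration, not a complete solution):

    import numpy as np
    from tensorflow.keras.applications.resnet50 import (ResNet50, preprocess_input,
                                                        decode_predictions)
    from tensorflow.keras.preprocessing import image

    # Pre-trained ImageNet model used purely as a 'is this food at all?' screen
    screen_model = ResNet50(weights='imagenet')

    def looks_like_food(img_path, food_labels, top=5):
        img = image.load_img(img_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        preds = decode_predictions(screen_model.predict(x), top=top)[0]
        return any(label in food_labels for (_, label, _) in preds)

    # if looks_like_food('meal.jpg', FOOD_LABELS):      # FOOD_LABELS is hypothetical
    #     category = my_food_classifier.predict(...)    # your own classifier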

Another option is to add a negative class to your data set consisting of non-food items that you label ""other"". This is probably trickier as you'd need to cover all the non-food categories that your network will see and ensure that it doesn't learn the background.

Finally, look at the probabilities coming out of your final layer and make a call about whether to accept the top result or not. Softmax forces a choice, so a low maximum probability can be a clue that the prediction is unreliable and the image may not contain food at all.

",2935,,,,,7/31/2018 13:59,,,,2,,,,CC BY-SA 4.0 7359,1,7360,,7/31/2018 14:34,,14,7373,"

I'm now learning about reinforcement learning, but I just found the word ""trajectory"" in this answer.

However, I'm not sure what it means. I read a few books on reinforcement learning, but none of them mentioned it. Usually, these introductory books mention agent, environment, action, policy, and reward, but not ""trajectory"".

So, what does it mean? According to this answer over Quora:

In reinforcement learning terminology, a trajectory $\tau$ is the path of the agent through the state space up until the horizon $H$. The goal of an on-policy algorithm is to maximize the expected reward of the agent over trajectories.

Does it mean that the ""trajectory"" is the total path from the current state the agent is in to the final (terminal) state at which the episode finishes? Or is it something else? (I'm not sure what the ""horizon"" means, either.)

",7402,,2444,,2/22/2019 14:10,1/10/2020 10:17,"What is a ""trajectory"" in reinforcement learning?",,3,0,,,,CC BY-SA 4.0 7360,2,,7359,7/31/2018 15:13,,12,,"

In the answer that you linked, I may have used an informal definition of ""trajectory"", but it is essentially the same thing as the quote. A ""trajectory"" is the sequence of what has happened (in terms of state, action, reward) over a set of contiguous time steps, from a single episode or a single part of a continuous problem.

So $(s_3, a_3, r_4, s_4, a_4, r_5, s_5, a_5, r_6, s_6)$ taken from any scenario where an agent was used in the problem environment would be a trajectory - at least as I intended it in the answer. This could be from real-world data, or a simulation. It could involve a totally random or untrained agent, or a fully-optimised policy.

In the other definition that you have found, the focus on states and a horizon could make it slightly different, but actually I suspect that it is the same thing, as it is not that useful to only know the states. The Quora answer is probably just using ""path of the agent through the state space"" as shorthand to describe the same data.

A ""horizon"" in reinforcement learning is a future point relative to a time step, beyond which you do not care about reward (so you sum the rewards from time $t$ to $t+H$). Fixed horizons can be used as an alternative to a discount factor for limiting sums of reward in continuous problems. They may also be used in other approaches, but basically mean the same thing - a time step beyond which you don't account for what happens.

",1847,,2444,,11/12/2018 20:39,11/12/2018 20:39,,,,2,,,,CC BY-SA 4.0 7361,1,7368,,7/31/2018 16:21,,0,307,"

Say I'm training a neural net to compute the following function:

(color_of_clothing, body_height) -> gender

When using this network for prediction, I can obviously plug in a pair (c, b) to receive a predicted g, but say I want to get a prediction only based on c or only based on b, can I use the same neural net somehow? Or would I need to train two separate neural nets c -> g and b -> g previously?

Or more generally, can I use a neural net that was trained to predict A -> B to make predictions on values from a subset of A, or should I train separate neural nets on all subsets of A that I'm interested in?

",17205,,7800,,7/31/2018 17:49,8/1/2018 7:12,"Dealing with ""blank"" inputs in prediction of a neural network?",,1,0,0,,,CC BY-SA 4.0 7362,1,,,7/31/2018 17:42,,-1,81,"

In a project using a neural network with an input layer, 4 hidden layers and an output layer, I used mini-batch gradient descent. I noticed that the randomly initialised weights seemed to give good performance and a low error. As the model started training, after about 200 iterations there was a large jump in error, and then it came down slowly from there. I have also noticed that sometimes the cost just increases over a set of consecutive iterations. Can anyone explain why these happen? It is not as if there are outliers or a new distribution, as every iteration exposes the model to the entire dataset.

I used a learning rate of 0.01 and a regularisation parameter of 10. I also tried regularisation parameters of 5 and 1. By the cost I mean the sum of squared errors over all mini-batches divided by 2m, plus the regularisation term.

Further, if this happens and my cost after, say, the 10000th iteration is higher than my cost when I initialised with random weights, can I just take the initial weights? Those weights seem to be doing better.

The large jumps are the most puzzling.

This is the code

Any help would be greatly appreciated. Thanks

",17143,,17143,,8/3/2018 2:39,8/3/2018 14:06,Behaviour of cost,,2,0,,,,CC BY-SA 4.0 7363,2,,7352,7/31/2018 17:54,,3,,"

As far as simple algorithms like gradient descent are concerned, you need to have a good grasp of partial derivatives, especially if you want to implement neural networks. Also, most algorithms are vectorised to improve computing speed, so you need to be comfortable with matrix math. This involves being really quick and comfortable with dimensions of matrices, dimensions of products, matrix multiplication, transposes and so on. Very rarely, you might use matrix calculus to directly arrive at optimal solutions, so a few results from this area should do. Moving on, you need to understand some function analysis. This is needed to get an intuition for what activation functions like sigmoid, tanh, and log are doing. A grasp of probability and expectations is also really useful. You should also be clear on orthogonal vectors and inner products.

That being said, I would suggest you grasp basic calculus and matrix operations and try learning AI concepts. If you can't figure something out, explore the math.

Note: again this is only for starting.

",17143,,,,,7/31/2018 17:54,,,,0,,,,CC BY-SA 4.0 7364,1,,,7/31/2018 20:31,,6,462,"

Does an AI exist that can automatically write software based on a formal specification of the software?

",17209,,2444,,12/7/2020 13:43,12/7/2020 23:04,Does an AI exist that can write software based on a formal specification?,,3,0,,,,CC BY-SA 4.0 7365,2,,7364,8/1/2018 2:22,,6,,"

I think that the answer to your question is yes. In the article New A.I. application can write its own code, the authors state

Computer scientists have created a deep-learning, software-coding application that can help human programmers navigate the growing multitude of often-undocumented application programming interfaces, or APIs.

Designing applications that can program computers is a long-sought grail of the branch of computer science called artificial intelligence (AI). The new application, called Bayou, came out of an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com.

The paper Neural Sketch Learning for Conditional Program Generation may also be useful.

",5763,,-1,,6/17/2020 9:57,11/1/2019 2:12,,,,0,,,,CC BY-SA 4.0 7366,2,,7364,8/1/2018 6:17,,5,,"

There's Neural Program Synthesis, which can be used to generate a piece of code. Please, have a look at the article Neural Program Synthesis by Microsoft for an overview of the field.

",17192,,2444,,11/1/2019 2:15,11/1/2019 2:15,,,,1,,,,CC BY-SA 4.0 7367,1,,,8/1/2018 6:54,,3,324,"

Is it possible to build a neural network that learns the connection between two images?

Let's say I have a number of X images that related to Y images. How can I build a neural network that takes an image as an input and outputs (generates) the output image?

The Y images are generated by applying some function to the X images.

Do I need a generative neural network for that? Are conventional neural networks capable of classification only?

",17216,,,,,9/11/2018 19:01,A neural network to learn the connection between two images,,2,0,,,,CC BY-SA 4.0 7368,2,,7361,8/1/2018 7:12,,0,,"

I think the answer to your question would be ""yes"". Though inference would always be best if you provide representative training data. For example you can train your net with (c, b) pairs, (blank, b) pairs, and (c, blank) pairs. That would make the net more robust and likely to support your use case. Training separate nets for each case would be more efficient and accurate - but I'm not sure what your goals and constraints are.

The point is that a ""blank"" input can be trained for in the same net as well. In your case it's a new kind of body_height or color_of_clothing. I also suspect you might have issues with how you encode a blank: e.g. rgb(0, 0, 0) is black, so if you model a blank as zero, the net cannot tell a blank apart from black clothing.
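For illustration, here is one hedged way to encode a blank so it is distinguishable from a real zero, using an explicit missing-value flag per feature group (the features and scaling constants are hypothetical):

    import numpy as np

    def encode(color_rgb=None, height_cm=None):
        # Each feature group gets its values plus an 'is_missing' indicator,
        # so a blank never collides with a legitimate value such as black.
        color = np.zeros(3) if color_rgb is None else np.asarray(color_rgb) / 255.0
        color_missing = 1.0 if color_rgb is None else 0.0
        height = 0.0 if height_cm is None else height_cm / 200.0
        height_missing = 1.0 if height_cm is None else 0.0
        return np.concatenate([color, [color_missing], [height], [height_missing]])

    x_full = encode(color_rgb=(10, 20, 200), height_cm=175)   # (c, b) pair
    x_only_height = encode(height_cm=175)                      # blank colour, real height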

",17217,,,,,8/1/2018 7:12,,,,0,,,,CC BY-SA 4.0 7369,1,,,8/1/2018 8:23,,6,3854,"

I am training a generative adversarial network (GAN) to generate images given edge histogram descriptor (EHD) features of the image. The EHD features are themselves sparse (meaning they contain a lot of zeroes). While training, the generator loss and discriminator loss are decreasing very slowly.

Are deep learning models (like GAN) suitable for training with sparse data for one or more of the features in the input or derived through feature extraction?

",9062,,2444,,7/6/2019 13:16,7/31/2020 16:04,Are deep learning models suitable for training with sparse data?,,1,0,,,,CC BY-SA 4.0 7370,1,7374,,8/1/2018 9:06,,4,1446,"

I'm confused by the two terms - action and policy - in reinforcement learning. As far as I know, the action is:

It is what the agent makes in a given state.

However, the book I'm reading now (Hands-On Reinforcement Learning with Python) writes the following to explain policy:

we defined the entity that tells us what to do in every state as policy.

Now, I feel that the policy is the same as the action. So what is the difference between the two, and how can I tell them apart and use them correctly?

",7402,,2444,,5/12/2019 23:22,5/12/2019 23:23,What is the difference between policy and action in reinforcement learning?,,1,0,0,,,CC BY-SA 4.0 7371,1,,,8/1/2018 9:56,,5,481,"

I need to train a convolutional neural network to classify snake images. The problem is that I have only a small number of images available for some snake types.

So, what is the best approach to train a neural network for image classification using a small data set?

",17220,,2444,,11/29/2020 15:57,11/29/2020 15:57,How can I train a neural network for image classification when the dataset is small?,,2,0,,,,CC BY-SA 4.0 7372,2,,7371,8/1/2018 10:02,,4,,"

Use Fine Tuning

You can simply use a model pre-trained on ImageNet, as this data set has multiple snake classes.

Then you can fine tune the model with your own small data set and outputs. See this for further understanding : Fine Tuning in Keras

(if you don't use Keras, there are other tutorials on the internet using other Machine Learning framework)

The idea is just removing the last layer (1000 outputs if you use a model pre-trained with ImageNet) and adding a layer of your choice with random weights and a custom number of outputs (number of your classes).

Then you retrain your network; in general, we retrain only the last layers (as the first layers capture more general features).
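A minimal sketch of that idea in Keras, assuming, say, 15 snake classes (a hypothetical number), could look like this:

    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
    from tensorflow.keras.models import Model

    NUM_CLASSES = 15                                   # hypothetical number of snake types
    base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False                        # freeze the general, pre-trained features

    x = GlobalAveragePooling2D()(base.output)
    outputs = Dense(NUM_CLASSES, activation='softmax')(x)   # new head with random weights
    model = Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(train_images, train_labels, epochs=..., validation_data=...)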

",17221,,2193,,8/1/2018 11:58,8/1/2018 11:58,,,,0,,,,CC BY-SA 4.0 7374,2,,7370,8/1/2018 11:16,,7,,"

A policy is a function that maps states to a probability distribution over all possible actions.

So, in a typical Atari game, there might just be a handful of actions, represented by the keys that are used to play the game. In this context, the policy of a reinforcement learner might be represented by a pretty complex neural network that gets pixels as input and gives action probabilities as output.
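As a tiny illustrative sketch (hypothetical sizes, and a linear-softmax policy rather than a deep network), the distinction looks like this in code: the policy is the function, an action is a single sample drawn from it.

    import numpy as np

    state_size, num_actions = 4, 3
    rng = np.random.default_rng(0)
    W = rng.standard_normal((num_actions, state_size)) * 0.1

    def policy(state):
        # Maps a state to a probability distribution over all actions
        logits = W @ state
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    probs = policy(np.array([0.1, -0.3, 0.7, 0.0]))
    action = rng.choice(num_actions, p=probs)   # an action, sampled from the policy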

",2227,,2444,,5/12/2019 23:23,5/12/2019 23:23,,,,2,,,,CC BY-SA 4.0 7376,2,,7371,8/1/2018 14:26,,3,,"

Besides using transfer learning as described in the other answer, you should consider using a siamese network. This type of network is used in cases where one does not possess many examples of the objects to be distinguished. The general idea is that instead of ""telling"" the network ""This is a cobra"", you provide information like: ""This is a cobra, and that is a rattlesnake, learn the difference"".
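As a rough sketch of the shared-encoder idea (hypothetical image and layer sizes, loosely following the classic pair-comparison setup rather than any particular tutorial's architecture):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def make_encoder():
        # One encoder whose weights are shared by both inputs
        return models.Sequential([
            layers.Conv2D(32, 3, activation='relu', input_shape=(64, 64, 3)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation='relu'),
            layers.GlobalAveragePooling2D(),
            layers.Dense(128, activation='relu'),
        ])

    encoder = make_encoder()
    a = layers.Input(shape=(64, 64, 3))
    b = layers.Input(shape=(64, 64, 3))
    distance = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([encoder(a), encoder(b)])
    same = layers.Dense(1, activation='sigmoid')(distance)   # same class or not?
    model = models.Model([a, b], same)
    model.compile(optimizer='adam', loss='binary_crossentropy')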

There is a whole subject dedicated to your problem and it is called one shot learning.

Take a look at this tutorial: https://hackernoon.com/one-shot-learning-with-siamese-networks-in-pytorch-8ddaab10340e

",16929,,,,,8/1/2018 14:26,,,,1,,,,CC BY-SA 4.0 7378,2,,6902,8/1/2018 17:57,,0,,"

If you want to vary the definition of the outcome, the problem is more about optimal segmentation/clustering than classification.

For clustering you could try latent class approaches.

",17229,,,,,8/1/2018 17:57,,,,0,,,,CC BY-SA 4.0 7379,1,8347,,8/1/2018 18:19,,2,195,"

In a recent paper about progress in computer animation, a so-called motion graph is used to describe the transition between keyframes of facial animation (Easy Generation of Facial Animation Using Motion Graphs, 2018). As far as I understand from the paper, they used a motion capture device to record the faces of real people and extract keyframes. Then a transition matrix was created to ensure that a walk from keyframe #10 to #24 is possible but a transition from keyframe #22 to #99 is forbidden.

The idea itself sounds reasonably good, because now a solver can search the motion graph to bring the system from a laughing face to a bored face without interruption or unnatural in-between keyframes. But wouldn't it be great if the transition matrix could be stored inside a neural network? As far as I understand the backpropagation algorithm, the neural network can learn input-output relations. So the neural network has to learn the transition probability between two keyframes. And a second neural network could then produce the motion plan, also trained on a large corpus. Is that idea possible, or is it the wrong direction?

",,user11571,,,,10/10/2018 23:15,Using a neural network for learning a Motion Graph?,,1,0,,,,CC BY-SA 4.0 7381,2,,7352,8/1/2018 19:06,,10,,"

I work as a professor, and recently designed the mathematics requirements for a new AI major, in consultation with many of my colleagues at other institutions.

The other answers, particularly this one do a good job of cataloging all the specific topics that might be useful somewhere in AI, but not all of them are equally useful for understanding core topics. In other cases, understanding the topic is essentially the same as understanding the related AI algorithms, so we usually just teach them together instead of assuming prerequisite knowledge. For instance, Markov Decision Processes aren't hard to teach to someone who already knows the basics of graph theory and probabilities, so we usually just cover them when we teach reinforcement learning in an AI course, rather than as a separate topic in a mathematics course.

The mathematics requirements we settled on look like:

  • A one or two semester course in discrete mathematics. This is as much to establish comfort with proof and mathematical rigor as with any specific topic in the area. It's mostly just "foundational" knowledge, but bits of it turn out to be very useful. Comfort with infinite summations, the basics of graphs, combinatorics, and asymptotic analysis are perhaps the most directly applicable parts. I like Susanna Epp's book.

  • A one or two semester course in linear algebra, which is useful across a wide variety of topics in AI, especially machine learning and data mining. Lay & Lay is an okay book, but probably not the absolute best. Shilov is a recommendation from Ian Goodfellow and others, but I've not tried it myself.

  • A course in probability, and possibly a modern course in statistics (i.e. with a Bayesian focus). An older course in statistics, or one targeting social scientists, is not very useful though. My statistician colleagues are using Lock5 right now, and having good experiences with it.

  • At least differential and integral calculus, and preferably at least partial derivatives in vector calculus, but perhaps the whole course. This is useful in optimization, machine learning, and economics-based approaches to AI. Stewart is the most common textbook. It's comprehensive, and can be used for all three courses, but its explanations aren't always the very best. I'd still recommend it though.

Those are the core topics. If you don't also have a traditional background in programming, then a course in graph theory and the basics of asymptotic complexity or algorithm design and analysis might be good supplements. Usually AI'ers come from a standard computer science background though, which covers all those things very well.

",16909,,2444,,6/28/2021 10:28,6/28/2021 10:28,,,,0,,,,CC BY-SA 4.0 7382,2,,7364,8/1/2018 19:15,,4,,"

The other answers cover modern work on this, but it's not even a new topic!

Koza's work in Genetic Programming (1992) led to whole sub-fields doing this. The techniques are widely used, robust, and well understood. They're just very computationally expensive. Enough so that most of the time you're better off just hiring a programmer to do it.

",16909,,16909,,12/7/2020 23:04,12/7/2020 23:04,,,,0,,,,CC BY-SA 4.0 7384,2,,2675,8/2/2018 7:54,,1,,"

The List

This list originates from Bruce Maxim, Professor of Engineering, Computer and Information Science at the University of Michigan. In his lecture Spring 1998 notes for CIS 4791, the following list was called,

""Good Problems For Artificial Intelligence.""

  Decomposable to easier problems
  Solution steps can be ignored or undone
  Predictable Problem Universe
  Good Solutions are obvious
  Internally consistent knowledge base (KB)
  Requires lots of knowledge or uses knowledge to constrain solutions
  Interactive

It has since evolved into this.

  Decomposable to smaller or easier problems
  Solution steps can be ignored or undone
  Predictable problem universe
  Good solutions are obvious
  Uses internally consistent knowledge base
  Requires lots of knowledge or uses knowledge to constrain solutions
  Requires periodic interaction between human and computer

What it is

His list was never intended to be a list of AI problem categories as an initial branch point for solution approaches or a, ""heuristic technique designed to speed up the process of finding a satisfactory solution.""

Maxim never added this list into any of his academic publications, and there are reasons why.

The list is heterogeneous. It contains methods, global characteristics, challenges, and conceptual approaches mixed into one list as if they were like elements. This is not a shortcoming for a list of, ""Good problems for AI,"" but as a formal statement of AI problem characteristics or categories, it lacks the necessary rigor. Maxim certainly did not represent it as a, ""7 AI problem characteristics,"" list.

It is certainly not a, ""7 AI problem characteristics,"" list.

Are There Any Category or Characteristics Lists?

There is no good category list for AI problems because if one created one, it would be easy to think of one of the millions of problems that human brains have solved that don't fit into any of the categories or sit on the boundaries of two or more categories.

It is conceivable to develop a problem characteristics list, and it may be inspired by Maxim's Good Problems for AI list. It is also conceivable to develop an initial approaches list. Then one might draw arrows from the characteristics in the first list to the best prospects for approaches in the second list. That would make for a good article for publication if dealt with comprehensively and rigorously.

An Initial High Level Characteristics to Approaches List

Here is a list of questions that an experienced AI architect may ask to elucidate high level system requirements prior to selecting an approach.

  • Is the task essentially static in that once it operates it is likely to require no significant adjustments? If this is the case, then AI may be most useful in the design, fabrication, and configuration of the system (potentially including the training of its parameters).
  • If not, is the task essentially variable in a way that control theory developed in the early 20th century can adapt to the variance? If so, then AI may also be similarly useful in procurement.
  • If not, then the system may possess sufficient nonlinear and temporal complexity that intelligence may be required. Then the question becomes whether the phenomenon is controllable at all. If so, then AI techniques must be employed in real time after deployment.

Effective Approach to Architecture

If one frames the design, fabrication, and configuration steps in isolation, the same process can be followed to determine what role AI might play, and this can be done recursively as one decomposes the overall productization of ideas down to things like the design of an A-to-D converter, or the convolution kernel size to use in a particular stage of computer vision.

As with other control system design, with AI, determine your available inputs and your desired output and apply basic engineering concepts. Thinking that engineering discipline has changed because of expert systems or artificial nets is a mistake, at least for now.

Nothing has significantly changed in control system engineering because AI and control system engineering share a common origin. We just have additional components from which we can select and additional theory to employ in design, construction, and quality control.

Rank, Dimensionality, and Topology

Regarding the rank and dimensions of signals, tensors, and messages within an AI system, Cartesian dimensionality is not always the correct concept to characterize the discrete qualities of internals as we approach simulations of various mental qualities of the human brain. Topology is often the key area of mathematics that most correctly models the kinds of variety we see in the human intelligence we wish to develop artificially in systems.

More interestingly, topology may be the key to developing new types of intelligence for which neither computers nor human brains are well equipped.

References

http://groups.umd.umich.edu/cis/course.des/cis479/lectures/htm.zip

",4302,,4302,,10/8/2018 22:53,10/8/2018 22:53,,,,0,,,,CC BY-SA 4.0 7386,2,,7367,8/2/2018 8:30,,2,,"

It is possible to have both input and output be images that differ in a predictable way. For example, architectures similar to autoencoders have been used to remove blur, change weather conditions, change between day and night photos etc. In these architectures, the training data is matching pairs of images. If your goal is to replicate some image enhancement, then often the input is artificially processed e.g. to reduce its quality in a hard to reverse way. A good example of this would be to remove distortion or noise from an image.
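For instance, a hedged encoder-decoder sketch along those lines (hypothetical 64x64 RGB image pairs, e.g. degraded input and clean target) might look like this in Keras:

    from tensorflow.keras import layers, models

    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inp)
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
    out = layers.Conv2D(3, 3, padding='same', activation='sigmoid')(x)   # output image

    model = models.Model(inp, out)
    model.compile(optimizer='adam', loss='mse')
    # model.fit(x_degraded, y_clean, epochs=..., batch_size=...)  # matched image pairs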

You can also use generative models. These are harder to get working, but can be more flexible in that you don't need image pairs in order to train, just a set of images labelled with the traits that you want to learn. Converting an image using a generative model involves using an encoder stage to get its embedding, altering the embedding based on label you require and then feeding the new embedding into the decoder stage. This is how you might alter a face portrait from male to female, or young to old, because it is not possible to find good natural image pairs for that task.

",1847,,,,,8/2/2018 8:30,,,,0,,,,CC BY-SA 4.0 7387,2,,2876,8/2/2018 10:08,,0,,"

The question is a good one and on many people's minds. There are a few misconceptions in the line of thought to consider.

  • The supremacy of intelligent beings other than humans threatens civilization — Is the imagining of that threat logical? Is that a rational conclusion when human intelligence is the most threatening biological phenomenon in the biosphere today? The intelligence of insects may be more sustainable. The civilizations of ants and termites are certainly more collaborative. World wars and genocide are among the primary features of human history.
  • Artificial intelligence becomes more intelligent than humans — Artificial intelligence is already more intelligent than humans in some respects, which is why we use calculators, automated switching of communications signals instead of operators, and automated mail sorters. In other ways, AI has to cross astronomical distances to begin to approximate human intelligence. We have nothing that even shows a hint of being able to simulate or duplicate in the future the human capacities of reasoning, inventiveness, or compassion.
  • The singularity is predicted for the 2040s — There are over a dozen unrealistic assumptions in those predictions. Look with a critical eye at any one of the arguments behind them and you will find holes you could fly a 757 through blindfolded.
  • Exponential growth in knowledge — There is an exponential growth in information, but the proportion of that information that is legitimately peer reviewed decreases as misinformation, wild conjecture, and fake news increase. My belief is that the amount of information in the world can be approximated by log(n), where n is the population. If I am on track with that relation, the informational value of the average individual is log(n)/n, which decreases as population grows.
  • AI will be capable of omniscience — Omniscience is more than answering any question. Omniscience would require that the answering be 100% reliable and 100% accurate, which may require more silicon than exists in the universe. Read Gleick's Chaos if you wish to understand why. One might also argue that an omniscient being would not answer questions on command.

If you want a world that is better than one controlled by human intelligence, then one direction to take is to seek the development of a new species with a more advanced conception of peaceful civilization that can tame and domesticate us like we have done with dogs and cats.

The easier route is just for all of us to die. The biosphere may have been more civilized before we arrived and started killing everything and each other. But that's not my hope.

My recommendation is that we study NI (non-intelligence) and discover how to rid what is stupid from human behavior and geopolitical interaction. That would improve the world far more than distracting and substantially irrelevant machine learning gadgets and apps.

",4302,,,,,8/2/2018 10:08,,,,0,,,,CC BY-SA 4.0 7389,1,7402,,8/2/2018 14:26,,4,170,"

Can we make a chatbot that really "understands" (rather than just replies to) questions based on the database/options of replies that it has? I mean, can it come up with correct/non-stupid replies/communications that don't exist in its database?

For example, can we make it understand the words "but", "if", and so on? So, whenever it gets a question/order, it "understands" it based on "understanding". Like the movie Her, if you have watched it.

And all of this without using too much code, just the basics to "wake it up" and let it learn from YouTube videos and Reddit comments and other similar data sources.

",17246,,2444,,9/23/2021 16:30,9/23/2021 16:30,"Can we make a chatbot that really ""understands"" the questions?",,3,1,,,,CC BY-SA 4.0 7390,1,7425,,8/2/2018 14:59,,18,8783,"

I'm struggling to understand the difference between actor-critic and advantage actor-critic.

At least, I know they are different from asynchronous advantage actor-critic (A3C), as A3C adds an asynchronous mechanism that uses multiple worker agents interacting with their own copy of the environment and reports the gradient to the global agent.

But what is the difference between the actor-critic and advantage actor-critic (A2C)? Is it simply with or without advantage function? But, then, does the actor-critic have any other implementation except for the use of advantage function?

Or maybe are they synonyms and actor-critic is just a shorthand for A2C?

",7402,,2444,,5/14/2020 10:19,7/7/2022 9:49,What is the difference between actor-critic and advantage actor-critic?,,3,0,,,,CC BY-SA 4.0 7391,2,,7389,8/2/2018 16:49,,0,,"

Of course you can (read through to the end). You just need to teach it how it is taught to a baby. But first, you need to create the baby's brain. So you need to build a brain that learns from videos, poll videos, and not just understands but practices and understands other people's reactions.

Sorry, but that's not enough. You would have the same work as God (if it really does exist and took this job). You would have to raise a baby so he could grow up. If we can raise a baby in code, it will learn much faster than a human.

I've been studying and searching ever since I started creating this ""baby"". I've made some babies, but none of them have been good enough. But it's what I chose to do in my life (one of the things). So I'm still raising a baby, and from time to time it gets smarter.

It's easy to build a bot that goes onto Reddit to read and extract sentiment from what people write. It can watch YouTube videos and differentiate objects, humans, colors, etc. But that is what we would codify for this ""baby"" to do.

Perhaps the first step would be to rebuild a brain through code. We are already creating some pieces, we are already studying for many years synapses, neural networks, etc. But there is still a whole brain that we still do not understand. And I'm talking about the human brain (biological).

When I say you can, it's an incentive. I told myself I can. I'm going down this road. If I really can, I do not know.

One tip I give you: Google is far from successful. But you are trying and reaching for it. That's enough, right?

",7800,,,,,8/2/2018 16:49,,,,0,,,,CC BY-SA 4.0 7392,1,,,8/2/2018 17:08,,0,71,"

I would like to know some daily basis applications of AI. I think these might be relevant examples:

  1. Google search engine

  2. Face recognition on iPhone

Are my examples correct? Could you provide some more examples?

",17249,,2444,,12/20/2021 23:46,12/20/2021 23:46,What are some examples of everyday systems that use AI?,,1,0,,,,CC BY-SA 4.0 7393,2,,7341,8/2/2018 19:00,,1,,"

The problem might not be caused by your loss function. Deep learning models tend to be computationally demanding. Mobile devices are not usually prepared for handling models with high throughput. That being said, you might try to:

  1. Prepare a smaller model - fewer layers, less computation

  2. Mobile models optimization - Google provides some materials on optimizing Tensorflow models for mobile inference:

    https://www.tensorflow.org/mobile/prepare_models https://www.tensorflow.org/mobile/optimizing

",16929,,,,,8/2/2018 19:00,,,,0,,,,CC BY-SA 4.0 7394,1,7395,,8/2/2018 20:29,,9,404,"

What is a support vector machine (SVM)? Is an SVM a kind of a neural network, meaning it has nodes and weights, etc.? What is it best used for?

Where I can find information about these?

",17250,,2444,,5/16/2020 14:35,5/16/2020 14:35,What is a support vector machine?,,3,0,,,,CC BY-SA 4.0 7395,2,,7394,8/2/2018 21:03,,6,,"

I find the chapter on machine learning from Russell & Norvig is a pretty good place to start with SVMs. I think this is Chapter 18?

One way to understand an SVM is as a kind of neural network, but this is not usually an intuitive approach for a beginner (unless your NN knowledge is already quite good).

A better way to understand SVMs is as consisting of three simple ideas rolled into one algorithm. Here's an attempt at a ""For Dummies"" answer though:

  1. Maximum Margin Classification. SVMs are usually used to find a pattern in a set of data. Often, the data allow an infinite set of possible patterns that are all equally descriptive. For example, maybe The relationship is ""Lives within 5 miles of a Coast -> Income High"". It's easy to imagine that this pattern is just as good as ""Lives within 5.0001 miles of a Coast -> Income High"" or ""Lives within 4.999 miles of a Coast -> Income High"". There might actually be a lot more play than that in the data (e.g. 3 miles might work out too). If all these are equally good, then the maximum margin idea says you should pick the one that's ""in the middle"" of the data. So maybe all values between 5.5 and 4.8 are equally good. In that case, we might pick 5.15 (in the middle). This example is super simplified. Real world data would have a lot more variables, and the idea of ""in the middle"" ends up being a little more complex, but this is the intuition. It turns out that finding the maximum margin pattern is easy when the patterns are linear. That is, when they can be represented by drawing straight lines through a plot of the dataset.

  2. Projection into higher dimensions. This one needs a bit of math to visualize. Consider a dataset consisting of a circular pattern (for instance, maybe the pattern is that higher incomes are found in the middle of the city). There is no linear relationship that captures this pattern. That is, you can't draw a straight line through the data, and say something meaningful about all the values on one side or the other. However, if you add a new feature to your data that is the square of the original coordinates, it's easy to find such a pattern. Basically, if you pre-compute ""circular"" functions of the original data, you can add them to the dataset, and then find a pattern that is a linear function of this new feature. This idea generalizes: if you compute a complex enough function of your original data, and then apply the maximum margin idea, you can learn any pattern you like. The problem is that it's slow: adding more features makes it take longer to find the patterns you want.

  3. The Kernel Trick. The thing that made SVMs useful was the kernel trick: finding the maximum margin didn't depend on anything except the inner products between pairs of data points. It turned out that these inner products could be computed first, and then run through certain functions (kernels) to produce a problem that was identical to the one you'd get by first adding extra features and then doing the multiplication. However, computing the problem this way didn't require adding any new features! This made SVMs one of the first reliable, well understood, and fast methods for finding non-linear patterns in data. A small usage sketch follows after this list.
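Here is that usage sketch: a hedged, toy example (a synthetic circular dataset from scikit-learn) comparing a linear SVM with an RBF-kernel SVM that exploits the kernel trick.

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # A circular pattern that no straight line can separate
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    linear_svm = SVC(kernel='linear').fit(X, y)
    rbf_svm = SVC(kernel='rbf').fit(X, y)

    print(linear_svm.score(X, y))   # poor fit on circular data
    print(rbf_svm.score(X, y))      # close to 1.0, thanks to the kernel trick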

Hope that provides a starting point. Consider reading Russell & Norvig as a next starting point, or Bishop if you want to go deeper.

",16909,,16909,,8/3/2018 13:35,8/3/2018 13:35,,,,2,,,,CC BY-SA 4.0 7396,2,,7392,8/2/2018 21:24,,2,,"

A good, recent, and accessible book which includes many case studies is Prediction Machines. Check it out for more details than I can provide in this answer.

Example applications are all around us, but one of the problems with recognizing them is that the bar for what we call AI is constantly being raised.

Consider that a few decades ago, directions from google maps would certainly be recognized as AI, whereas now most laypeople wouldn't make that association.

Some other commonplace examples:

  • Amazon.com can guess at what you'd like to buy next.
  • Facebook and many other companies have programs that decide which ads to show you, without human intervention.
  • Tesla's autopilot feature will drive your car down a highway.
  • Google search can instantly return the results you were looking for, even when you enter strange or ambiguous search terms.
  • Virtual Assistants from many companies, like Alexa recognize your voice, figure out what words you're saying, and then figure out what you want them to do.
  • StackOverflow determines which questions are hot, and which ones need moderator attention.

There are are also a lot of less obvious examples. I'll pick a few from my research area:

  • When you spot a sky marshal on a plane, or a dog patrol in the airport it's because an AI system put them there. Some subway systems now use the same technology for fare checkers.
  • When people trade agricultural products in Uganda, it's an AI system and AI techniques at work behind the scenes.
  • When you're assigned a donor for a kidney transplant (and now other organs too), an AI system did the heavy lifting in that decision.

Of course, there's lots more going on too, but those should give you some suggestions for everyday conversation.

",16909,,,,,8/2/2018 21:24,,,,0,,,,CC BY-SA 4.0 7397,1,,,8/3/2018 0:41,,2,3353,"

I'm now reading the following blog post, and, regarding the epsilon-greedy approach, the author implied that the epsilon-greedy approach takes a random action with probability epsilon, and always takes the best action with probability 1 - epsilon.

So for example, suppose that the epsilon = 0.6 with 4 actions. In this case, the author seemed to say that each action is taken with the following probability (suppose that the first action has the best value):

  • action 1: 55% (.40 + .60 / 4)
  • action 2: 15%
  • action 3: 15%
  • action 4: 15%

However, I feel like I learned that the epsilon-greedy only takes the action randomly with the probability of epsilon, and otherwise it is up to the policy function that decides to take the action. And the policy function returns the probability distribution of actions, not the identifier of the action with the best value. So for example, suppose that the epsilon = 0.6 and each action has 50%, 10%, 25%, and 15%. In this case, the probability of taking each action should be the following:

  • action 1: 35% (.40 * .50 + .60 / 4)
  • action 2: 19% (.40 * .10 + .60 / 4)
  • action 3: 25% (.40 * .25 + .60 / 4)
  • action 4: 21% (.40 * .15 + .60 / 4)

Is my understanding not correct here? Does the non-random part of the epsilon (1 - epsilon) always takes the best action, or does it select the action according to the probability distribution?

",7402,,,,,5/20/2019 10:35,"Does epsilon-greedy approach always choose the ""best action"" (100% of the time) when it does not take the random path?",,2,0,,,,CC BY-SA 4.0 7398,2,,7362,8/3/2018 4:38,,1,,"

I would say don't use regularization at first; try a lower learning rate (e.g. 0.0001) and see the behavior. Try to post the entire architecture of your model so that one can better answer your problem.

",3773,,,,,8/3/2018 4:38,,,,1,,,,CC BY-SA 4.0 7399,2,,7341,8/3/2018 4:49,,0,,"

For Face ID, Apple is using a Siamese network. You may get a better idea here: https://towardsdatascience.com/one-shot-learning-face-recognition-using-siamese-neural-network-a13dcf739e

",3773,,,,,8/3/2018 4:49,,,,0,,,,CC BY-SA 4.0 7400,2,,7397,8/3/2018 8:29,,1,,"

Epsilon-greedy is most commonly used to ensure that you have some element of exploration in algorithms that otherwise output deterministic policies.

For example, value-based algorithms (Q-Learning, SARSA, etc.) do not directly have a policy as output; they have values for states or state-action pairs as outputs. The standard policy we ""extract"" from that is a deterministic policy that simply tries to maximize the predicted value (or, technically, a ""slightly"" nondeterministic policy in that, in proper implementations, it should break ties (where there are multiple equal values at the top) randomly). For such algorithms, there is not sufficient inherent exploration, so we typically use something like epsilon-greedy to introduce an element of exploration. In these cases, both of the possible explanations in your question are identical.
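To make that concrete, here is a minimal sketch (hypothetical Q-values) of epsilon-greedy action selection over the values produced by such a method, with random tie-breaking among equal maxima:

    import numpy as np

    rng = np.random.default_rng()

    def epsilon_greedy(q_values, epsilon):
        # With probability epsilon explore uniformly; otherwise exploit,
        # breaking ties between equal maximum values at random.
        if rng.random() < epsilon:
            return rng.integers(len(q_values))
        best = np.flatnonzero(q_values == q_values.max())
        return rng.choice(best)

    action = epsilon_greedy(np.array([1.2, 0.7, 1.2, -0.3]), epsilon=0.1)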

In cases where your algorithm already produces complete probability distributions as outputs that do not so much focus all of the probability mass on a single or a couple of points, like the probability distribution you gave as an example in your question, it's generally not really necessary to use epsilon-greedy on top of it; you already get exploration inherently due to all actions having a decent probability assigned to them.

Now, I've actually personally mostly worked with value-based methods so far and not so much with e.g. policy gradient methods yet, so I'm not sure whether there tends to be a risk that they also ""converge"" to situations where they place too much probability mass on some actions and too little on others too quickly. If that's the case, I would expect an additional layer of epsilon-greedy exploration might be useful. And, in that case, I would indeed find your explanation the most natural. Looking through, for example, the PPO paper, I didn't find anything about them using epsilon-greedy at a quick glance. So, I suppose the combination of epsilon-greedy with ""nondeterministic"" policies (ignoring the case of tie-breaking in value-based methods here) simply isn't really a common combination.

",1641,,,,,8/3/2018 8:29,,,,3,,,,CC BY-SA 4.0 7401,2,,7389,8/3/2018 9:31,,2,,"

Defining what it means to understand something is a complex philosophical question, with answers that can split the AI community into different camps.

Clearly an algorithm that associates the ASCII characters of word like ""if"" with a set of numbers based on statistics of where it appears in a corpus of reference texts is missing the essence of subjective experience that you or I might feel when reading it.

The related terms you should explore are https://en.m.wikipedia.org/wiki/Qualia and https://en.m.wikipedia.org/wiki/Chinese_room which explore subjective experience and whether an artificial system can possess it

With current knowledge of how our own minds create understanding, it is very hard to tell what is required. It may just be multi modal learning, so that words are associated with sensory experience. Experiments with virtual or real robots that experience an environment and need to communicate about it are one way to explore the subject.

In short, what it means to understand something, whether it is possible to replicate artificially, and whether it is an important trait of an AGI, are all open questions at the cutting edge of AI research.

",1847,,,,,8/3/2018 9:31,,,,0,,,,CC BY-SA 4.0 7402,2,,7389,8/3/2018 13:30,,3,,"

This question has been studied academically for decades, and is really an extension of the work on Philosophy of Mind that was done in the two or three centuries before that.

A good resource is Mind Design II, though it's getting a little bit old now.

The modern schools of thought are:

  1. Cognitivism. This is in decline, but was extremely popular in the 70's and 80's, and still fairly widespread in one form or another in the AI research community. It says that human brains really are just computers, and that they're probably running a sort of symbolic reasoning algorithm like unification (although I think it's hard to find anyone who really thinks it's unification anymore). This is the idea underpinning work like SOAR. The main bottleneck, as Dreyfus pointed out in the 1970's, is that you need to write down all the facts about something for a machine to ""understand"" it. ""All the facts"" rapidly turns into an infinite number for anything more complex than the smallest ""microworlds"" that you could deploy an AI program in. Searle also proposed his Chinese Room argument in response to this group, but it holds for Connectionist approaches as well (more on that later...).

  2. Connectionism The connectionists hold that the complexity of our brains comes from massively parallel computation consisting of messages passed between billions of neurons in our heads. They think the correct approach to general AI is likely to involve simulations of similar architectures. It turns out that many of the things that are incredibly hard for Cognitivist projects (e.g. vision), are easy to solve with these approaches. The main criticisms from Cognitivists are that we don't have a very good idea of what these things are doing, and so the claim that they help us understand intelligence is false, and using them to solve practical problems might be dangerous. These are both somewhat fair, in my view. Older Cognitivist arguments, put forth most elegantly by Jerry Fodor, have now been discredited. Fodor argued that properties like language could never be understood as statistical artifacts of parallel computation, but he was wrong: all the best computational systems for language are now connectionist, and no one's ever made a cognitivist one that's even half as convincing. This is the dominant paradigm behind most of the modern advances in the field. Hinton's work forms the basis of most of the recent advances.

  3. Dynamics Searle's argument was rooted in the idea that mapping inputs to outputs couldn't be what's happening in our heads, and that such a system couldn't be called intelligent. This also seems to be an implicit assumption in your question. The Dynamicists believe a variety of things, but I'd characterize them as collectively rejecting this idea. Authors like Paul Churchland argue that Searle's argument is rooted in a sort of pre-enlightenment ""folk psychology"". It's a bit like the theories that predated modern chemistry. Everyone was sure that fire was a substance that lived inside wood. If you heated up the wood properly, it could get out of the wood, making more heat. On the surface this seems pretty reasonable, but of course, it's wrong: The fire is actually a mixture of the wood with oxygen in the air, forming a new gas. There's no fire inside the wood. Similarly, Churchland would argue that there's no ""Consciousness"" inside us, allowing us to control our actions in the way we popularly imagine. Subjective experience is more likely to be ""along for the ride"", and entirely or mostly separate from the intelligent behaviors we observe. Some researchers think it could be described by a sub-system that maps observations of what the rest of the brain does into ""stories"" for the rest of the brain to receive as a sort of summary digest. Active research in this area tends to focus on things like the insect metaphor, and the interaction of the machine with its environment. It was fairly popular in the 1990's, but the phenomenal success of connectionist approaches in the 2000's has led to its decline. Probably the best known experiments were the work of Rodney Brooks.

",16909,,,,,8/3/2018 13:30,,,,0,,,,CC BY-SA 4.0 7404,2,,7362,8/3/2018 13:46,,1,,"

I figured out the problem after a bit of trial and error. This article may help too

First, I set about pruning the dataset and removed outliers. Then I initialised the weights to better values using Xavier initialisation. These made the model slightly better, but the problem still occurred.

I then set the learning rate down to 0.000000075 and it converged after about 10000 iterations. I guess it was overshooting the minimum and reaching a point on the other side of the minimum that was farther than the previous point. This resulted in increased magnitude of gradient in the opposite direction and the cost explosively went up to the order of billions.

",17143,,,user9947,8/3/2018 14:06,8/3/2018 14:06,,,,2,,,,CC BY-SA 4.0 7406,1,,,8/3/2018 19:23,,0,404,"

I need to retrieve just the text from emails. The emails can be in HTML format, and can contain huge signatures, disclaimer legalese, and broken HTML from dozens of forwards and replies. But, I only want the actual email message and not any other cruft such as the whole quotation block, signatures, etc.

This isn't really a problem that could be solved with regex because HTML mail can get very, VERY messy.

Could a neural network perform this task? What kind of problem is this? Classification? Feature selection?

",17272,,,,,1/11/2020 9:23,"What kind of problem is ""email text extraction""?",,2,0,,,,CC BY-SA 4.0 7407,2,,7406,8/3/2018 21:31,,2,,"

It's certainly possible to treat this as a natural language processing problem, basically you're looking to assign ""salience"" scores to the text.

Really, though, that's overkill for this kind of problem. Writing a regex or a CFG parser (or better: finding an existing parser) is likely to be easier and more reliable.

",16909,,,,,8/3/2018 21:31,,,,9,,,,CC BY-SA 4.0 7408,1,7442,,8/3/2018 22:41,,6,1497,"

I'm building a deep neural network to serve as the policy estimator in an actor-critic reinforcement learning algorithm for a continuing (not episodic) case. I'm trying to determine how to explore the action space. I have read through this text book by Sutton, and, in section 13.7, he gives one way to explore a continuous action space. In essence, you train the policy model to give a mean and standard deviation as an output, so you can sample a value from that Gaussian distribution to pick an action. This just seems like the continuous action-space equivalent of an $\epsilon$-greedy policy.
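In code, I understand that approach to amount to something like the following sketch (the mean, standard deviation, and action bounds are hypothetical values for a single state):

    import numpy as np

    rng = np.random.default_rng()

    # Outputs of the policy network for the current state (hypothetical values)
    mu, sigma = 0.4, 0.2

    # Exploration comes from sampling the continuous action from the Gaussian
    action = rng.normal(mu, sigma)
    action = float(np.clip(action, -1.0, 1.0))   # keep it inside the valid action range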

Are there other continuous action space exploration strategies I should consider?

I've been doing some research online and found some articles related to RL in robotics, where the PoWER and PI^2 algorithms do something similar to what is in the textbook.

Are these, or other, algorithms "better" (obviously depends on the problem being solved) alternatives to what is listed in the textbook for continuous action-space problems?

I know that this question could have many answers, but I'm just looking for a reasonably short list of options that people have used in real applications that work.

",17274,,2444,,10/1/2020 22:22,10/2/2020 14:43,What are the available exploration strategies for continuous action space scenarios in RL?,,2,0,,,,CC BY-SA 4.0 7411,2,,5493,8/4/2018 3:07,,3,,"

First Degree Linear Polynomials

Non-linearity is not the correct mathematical term. Those that use it probably intend to refer to a first degree polynomial relationship between input and output, the kind of relationship that would be graphed as a straight line, a flat plane, or a higher degree surface with no curvature.

To model relations more complex than $y = a_1 x_1 + a_2 x_2 + \ldots + b$, more than just the first-degree terms of a Taylor series approximation are needed.

Tune-able Functions with Non-zero Curvature

Artificial networks such as the multi-layer perceptron and its variants are matrices of functions with non-zero curvature that, when taken collectively as a circuit, can be tuned with attenuation grids to approximate more complex functions of non-zero curvature. These more complex functions generally have multiple inputs (independent variables).

The attenuation grids are simply matrix-vector products, the matrix being the parameters that are tuned to create a circuit that approximates the more complex curved, multivariate function with simpler curved functions.

Oriented with the multi-dimensional signal entering at the left and the result appearing on the right (left-to-right causality), as in the electrical engineering convention, the vertical columns are called layers of activations, mostly for historical reasons. They are actually arrays of simple curved functions. The most commonly used activations today are these.

  • ReLU
  • Leaky ReLU
  • ELU
  • Threshold (binary step)
  • Logistic
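To make the list above concrete, here is a small NumPy sketch of those activations written as plain functions:

    import numpy as np

    def relu(x):               return np.maximum(0.0, x)
    def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
    def elu(x, a=1.0):         return np.where(x > 0, x, a * (np.exp(x) - 1.0))
    def step(x):               return (x > 0).astype(float)   # threshold / binary step
    def logistic(x):           return 1.0 / (1.0 + np.exp(-x))

    x = np.linspace(-3, 3, 7)
    print(relu(x), logistic(x))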

The identity function is sometimes used to pass through signals untouched for various structural convenience reasons.

These are less used but were in vogue at one point or another. They are still used but have lost popularity because they place additional overhead on back propagation computations and tend to lose in contests for speed and accuracy.

  • Softmax
  • Sigmoid
  • TanH
  • ArcTan

The more complex of these can be parametrized and all of them can be perturbed with pseudo-random noise to improve reliability.

Why Bother With All of That?

Artificial networks are not necessary for tuning well developed classes of relationships between input and desired output. For instance, these are easily optimized using well developed optimization techniques.

  • Higher degree polynomials — Often directly solvable using techniques derived directly from linear algebra
  • Periodic functions — Can be treated with Fourier methods
  • Curve fitting — converges well using the Levenberg–Marquardt algorithm, a damped least-squares approach

For these, approaches developed long before the advent of artificial networks can often arrive at an optimal solution with less computational overhead and more precision and reliability.

Where artificial networks excel is in the acquisition of functions about which the practitioner is largely ignorant or the tuning of the parameters of known functions for which specific convergence methods have not yet been devised.

Multi-layer perceptrons (ANNs) tune the parameters (attenuation matrix) during training. Tuning is directed by gradient descent or one of its variants to produce a digital approximation of an analog circuit that models the unknown functions. The gradient descent is driven by some criterion: circuit behavior is pushed toward it by comparing outputs against that criterion. The criterion can be any of these.

  • Matching labels (the desired output values corresponding to the training example inputs)
  • The need to pass information through narrow signal paths and reconstruct from that limited information
  • Another criterion inherent in the network
  • Another criterion arising from a signal source outside the network

In Summary

In summary, activation functions provide the building blocks that can be used repeatedly in two dimensions of the network structure so that the network, combined with an attenuation matrix to vary the weight of signaling from layer to layer, is known to be able to approximate an arbitrary, complex function.

Deeper Network Excitement

The post-millennial excitement about deeper networks exists because patterns in two distinct classes of complex inputs have been successfully identified and put to use within larger business, consumer, and scientific markets.

  1. Heterogeneous and semantically complex structures
  2. Media files and streams (images, video, audio)
",4302,,4302,,8/4/2018 9:22,8/4/2018 9:22,,,,0,,,,CC BY-SA 4.0 7413,1,,,8/4/2018 3:53,,2,2068,"

In Introduction to Reinforcement Learning (2nd edition) by Sutton and Barto, there is an example of the Pole-Balancing problem (Example 3.4).

In this example, they write that this problem can be treated as an episodic task or continuing task.

I think that it can only be treated as an episodic task, because it has a natural end: the pole falling.

I have no idea how this can be treated as a continuing task. Even in the OpenAI Gym CartPole environment, there is only an episodic mode.

",6851,,2444,,4/24/2022 9:13,4/24/2022 9:45,How can the Cart Pole problem be a continuing task?,,4,1,,,,CC BY-SA 4.0 7414,1,7418,,8/4/2018 4:38,,0,83,"

Reinforcement?

We hear much about reinforcement, which is, in my opinion, a poor choice of term to describe a type of artificial network that continues to acquire or improve its behavioral information in natura (during operations in the field). Reinforcement in learning theory is a term used to describe repetitious incentivization to increase the durability of learned material. In machine learning, the term has been twisted to denote the application of feedback during operations, a form of re-entrant back propagation.

Corrective Signaling

Qualitatively, corrective signaling in field operations can supply information to a network to make only two types of functional adjustments.

  • Adjustments to what is considered the optimum, beginning with the optimum found during training prior to deployment
  • Testing of entirely new areas of the parameter space for hints of new optima that have formed, any of which might currently qualify, or soon qualify, as the global optimum.

(By optima and optimum, we mean minima and global minimum in the surface that describes the disparity between ideal system behavior and current system behavior. This surface is sometimes termed the error surface, applying an over-simplifying analogy from the mathematical discipline of curve fitting.)

The Importance of Doubt

The second of the two above could aptly be termed doubt.

Perhaps all neural nets should have one or more parallel doubting networks that can test remote areas of the search space for more promising optima. In a parallel computing environment, this might be a matter of provisioning and not significantly reduce the throughput of the primary network, yet provide a layer of reliability not found without the doubtful parallel networks.

What Shows More Intelligence?

Which is more important in actual field use of AI? The ability to reinforce what is already learned, or the ability to form a minority opinion, doubt the status quo, and determine whether it is a more appropriate behavioral alternative than what was reinforced?

A Helpful Pool of Water Analogy

During a short period of time, a point on the surface of the water may be the lowest point in the pool. With adjustments based on gradient (what is so inappropriately called reinforcement), the local well can be tracked so the low point can be maintained without any discrete jumps to other minima in the surface. However, the local well may cease being the global minimum at some point in time, at which point a new search for a global minimum must ensue.

It may be that the new global minimum is across several features on the surface of the pool and cannot be found with gradient descent.

More interestingly, the appearance of new global minima can be tracked and reasonable projections can be made such that discrete and substantial jumps in parametric state can be accomplished without large jumps in disparity (where the system misbehaves badly for a period).

Circling Back to the Question

Which is more important, doubt or reinforcement?

",4302,,-1,,6/17/2020 9:57,8/4/2018 12:18,"Which is more important, doubt or reinforcement?",,1,0,,,,CC BY-SA 4.0 7415,2,,6091,8/4/2018 7:12,,0,,"

After a very brief look at the paper, I think that they are predicting the stock price for the next day, and not for the current day, which is quite common and reasonable: see equation (1), where they predict $x(t+1)$. So I don't see any issue with this paper.

But I've only quickly looked at it, so I may have missed something, of course.

",17279,,2444,,6/8/2020 17:08,6/8/2020 17:08,,,,0,,,,CC BY-SA 4.0 7416,1,,,8/4/2018 9:57,,28,4782,"

Imagine I have a list (in a computer-readable form) of all problems (or statements) and proofs that math relies on.

Could I train a neural network in such a way that, for example, I enter a problem and it generates a proof for it?

Of course, those proofs would then need to be checked manually, but maybe the network could then create proofs, from combinations of older proofs, for problems yet unsolved.

Is that possible?

Would it be possible, for example, to solve the Collatz conjecture or the Riemann hypothesis with this type of network? Or, if not solve them, at least rearrange patterns in a way that mathematicians are able to use a new ""proof method"" to make a real proof?

",17282,,2444,,5/21/2020 19:12,12/3/2020 13:33,Can neural networks be used to prove conjectures?,,4,2,,,,CC BY-SA 4.0 7417,1,,,8/4/2018 10:01,,2,255,"

My input data consists of a series of 8 integers. Each integer is a discrete token, rather than a relative numeric value (i.e. '1' and '2' are as distinct as are '1' and '100'). The output is a single binary value indicating success or fail. For example:

fail,12,35,60,82,98,111,142,161
success,23,46,59,87,102,121,145,161
fail,13,35,65,83,100,102,122,161

I have say 500,000 of these entries.

Success or failure is determined by the combination of the eight tokens that make up the input. I am certain that no single token dictates success or failure, but there may be particular tokens or combinations of tokens that are significant in determining success or failure; I don't know, but would like to.

My question is, what kind of machine learning algorithm should I implement to answer the question of which tokens and combinations of tokens are most likely to lead to success?

In case it's relevant or useful, a few more notes on the input data:

There is a limited range of tokens (and thus integers) in each slot. So with this data input:

success,A,B,C,D,E,F,G,H

A is always, say, one of 1, 2, 3, 4 or 5. B is always one of 6, 7 or 8. C is always one of 9, 10, 11 or 12. So, in the general case, possible values for A are never possible values for the other slots, and there are between 2 and 12 values for each slot. No idea if that makes a difference to the answer, but I wanted to include it for completeness.

",17281,,16909,,8/4/2018 22:49,8/4/2018 22:49,What's an appropriate algorithm for classification with categorical features?,,1,1,,,,CC BY-SA 4.0 7418,2,,7414,8/4/2018 12:18,,2,,"

Which is more important, doubt or reinforcement?

The single-sentence answer to this would be: it depends.

The core of this question seems to be very closely related to the well-known trade-off between exploration (similar to how you describe ""doubt"") and exploitation (similar to how you describe ""reinforcement""). It is almost never the case that someone declares one of those two to be more important than the other, and only tries to pursue the ""most important one"". There is no single axis that measures ""importance"", no single line of numbers such that we can place ""doubt"" on one point, ""reinforcement"" on the other, and declare that the biggest number is the most important one. We almost always want a balance between exploration and exploitation. They are, almost always, both important.

Now, in some extreme cases, only one of the two may be important. For example:

  • If you have an environment where you can generate experience and evaluate policies completely free of any costs, you'll want to prioritize exploration / doubt. This is almost never the case though, you'll realistically always have at least time as a cost.
  • If you have a situation where you care very much about your performance right now, there's no point in doubting it too much or exploring too much. Consider, for example, DeepMind's AlphaGo team a few hours before their match against world-class human player Lee Sedol in 2016. At such a point in time, I highly doubt they'll be interested in exploring wildly different sets of parameters from the ones they have found during training so far; they won't have time anymore to thoroughly evaluate them, they'll want to stick to what they have, which they know works fairly well.

Also note that sometimes, you need to stick to something that already works well somewhere if you want to be able to realistically learn something new somewhere else:

  • In a large Markov Decision Process (e.g. Montezuma's Revenge), once you've already learned a good policy near the initial state, you don't want to explore too much anymore around that initial state because you won't reach interesting new states. You need to exploit for a while first such that you actually reach new states where it becomes interesting to explore again.
  • A similar situation, but now more closely related, using your terminology of ""doubt"" and viewing the space of all possible sets of values for all parameters of a large Neural Network as the space that we're searching in; suppose that we're learning a policy for an Atari game, with pixels as input. We first have a few Convolutional layers, then a few Relu's, etc., the standard setup. Intuitively, we expect the first few layers to ""learn"" how to ""understand"" the images, and transform the raw pixels to encodings that can be used in an efficient manner for the last few layers to compute a good policy / value estimates. If the first few layers already do a good job at transforming the raw pixel inputs to something more useful, we generally don't want to ""doubt"" those layers too much anymore, we don't suddenly want to jump to a completely different set of parameters anymore. If we suddenly completely change the parameters in those layers, we'll know for sure that the later layers will also be completely messed up. We'll simply have created a significantly more difficult learning problem for ourselves when it comes to optimizing the last few layers.

Some other notes:


There are lots of references to ""networks"" in the question, for example the quote below:

We hear much about reinforcement, which is, in my opinion a poor choice of a term to describe a type of artificial network that continues to acquire or improve its behavioral information in natura (during operations in the field).

Note that Reinforcement (learning) does not necessarily have to involve any kinds of artificial (neural or otherwise) networks at all. There's also tabular RL, and RL with function approximation using non-network function approximators (e.g. linear).


Under the ""What shows more intelligence?"" header, you write:

The ability to reinforce what is already learned

That is not an accurate description of what was previously described as

Adjustments to what is considered the optimum, beginning with the optimum found during training prior to deployment

which, in turn, I suppose is intended to describe the kinds of updates that are typically performed using variants of the Bellman optimality equation (value-based methods) or variants of REINFORCE (policy gradient methods). These methods do not just ""reinforce what is already learned"". They can lead to learning completely new policies from completely new experiences.


This idea:

Perhaps all neural nets should have one or more parallel doubting networks that can test remote areas of the search space for more promising optima. In a parallel computing environment, this might be a matter of provisioning and not significantly reduce the throughput of the primary network, yet provide a layer of reliability not found without the doubtful parallel networks.

sounds a lot to me like using Evolutionary Search to optimize the parameter of a network. Some interesting blog posts from Uber (with references to their papers) on such approaches may be:

I didn't get around to reading them in detail yet, so I don't know exactly how similar they are, but certainly related.

",1641,,,,,8/4/2018 12:18,,,,0,,,,CC BY-SA 4.0 7419,1,7467,,8/4/2018 13:51,,2,966,"

Do AI algorithms exist which are capable of healing themselves or regenerating a hurt area when they detect so?

For example: in humans, if a certain part of the brain gets hurt or removed, neighbouring parts take up the job. This probably happens because we are biologically unable to regrow nerve cells. Some other body parts (liver, skin), in contrast, will regenerate most kinds of damage.

Now my question is: do AI algorithms exist which take care of this, i.e. regenerating a damaged area? From my understanding, this can (probably) be achieved in a NN using dropout. Is that correct? Do additional algorithms (for either AI or NN) or measures exist to make sure healing happens if there is some damage to the algorithm itself?

This can be particularly useful in cases where, say, there is a burnout in a processor cell processing some information about the environment. The other processing nodes have to compensate for, or fully take over, the functions of the damaged cell.

(Intuitionally this can mean 2 things:

  • We were not using the system of processors to its full capability.
  • The performance of the system will take a hit due to other nodes taking over functionality of the damaged node)

Does this happen in the case of brain damage also? Or are my inferences wrong? (Kindly throw some light.)

NOTE: I am not looking for hardware compensations like re-routing; I am asking about non-tangible healing, i.e. adjusting the behavior or some parameters of the algorithm.

",,user9947,-1,,6/17/2020 9:57,8/8/2018 3:28,Are AI algorithms capable of self-repair?,,3,0,,,,CC BY-SA 4.0 7420,2,,7419,8/4/2018 14:45,,3,,"

Yes, this was an active area of research in a number of different AI fields.

Probably the most directly related work is Bongard, Zykov & Lipson's self-repairing robots from the early 2000's.

There's some more recent work from Mark Yim that you can see here too.

There are lots of different ways to do this, but Bongard et al's approach was probably the most elegant. The basic idea was to frame it as a learning problem: the robot is able to learn the shape of its body by performing controlled experiments. When the body is damaged, the robot can detect that its body has changed shape (sensors don't report the expected values when it tries to move), perform new experiments to determine the extent of the damage, and then generate new movements that work around the damaged area. Lipson covers the basics of this system very briefly in this video.

The more modern system uses a similar approach, but tries to repair its body, rather than working around the damage. It's got an internal model of what its body should look like, and a set of cameras that help it locate the various pieces and move them to reassemble.

Dropout is sort of a similar idea, but dropout is usually done to encourage redundancy during training, which can help a model avoid overfitting. It's usually not done explicitly to heal a damaged system, although it would make a system more resistant to damage in the first place.

",16909,,,,,8/4/2018 14:45,,,,0,,,,CC BY-SA 4.0 7421,2,,7417,8/4/2018 14:53,,1,,"

What you have is called a classification problem with categorical features. That is, the features can be represented numerically, but the numbers have no relative meaning.

Algorithms that rely on smooth function approximation will probably not work well here. These would include classic approaches to regression, and also function approximation via a neural network. That's because the data are anything but smooth!

In contrast, classic classification algorithms like Quinlan's C4.5 decision tree learner (implemented in the Weka Toolkit as J48, and possibly in SciKitLearn as DecisionTreeClassifier, though the documentation is less clear) are ideal for this: they actually work by splitting up numeric values into discrete categories anyway, so there's no issue at all for them. Most versions also support a way to pre-tag features as categorical, and the algorithms rely on the cross-entropy of each feature's categories, without making assumptions of smoothness.
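
A minimal scikit-learn sketch of this idea is below. Since the scikit-learn tree implementation expects numeric inputs, the token slots are one-hot encoded first so the integers are treated as categories rather than magnitudes; the rows are made up and truncated to four slots for brevity.

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    # Made-up rows shaped like the question's data (truncated to 4 token slots).
    X = np.array([[12, 35, 60, 82], [23, 46, 59, 87], [13, 35, 65, 83]])
    y = np.array(['fail', 'success', 'fail'])

    # One-hot encode so each integer is a distinct category, not a magnitude.
    enc = OneHotEncoder(handle_unknown='ignore')
    X_onehot = enc.fit_transform(X)

    clf = DecisionTreeClassifier().fit(X_onehot, y)
    print(clf.predict(enc.transform([[23, 46, 59, 87]])))  # ['success']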

",16909,,,,,8/4/2018 14:53,,,,0,,,,CC BY-SA 4.0 7422,2,,7416,8/4/2018 15:00,,5,,"

It's possible, but probably not a good idea.

Logical proof is one of the oldest areas of AI, and there are purpose-built techniques that don't need to be trained, and that are more reliable than a neural-network approach would be, since they don't rely on statistical reasoning, and instead use the mathematician's friend: deductive reasoning.

The main field is called ""Automated Theorem Proving"", and it's old enough that it's calcified a bit as a research area. There are not a lot of innovations, but some people still work on it.

The basic idea is that theorem proving is just classical or heuristic-guided search: you start from a state consisting of a set of accepted premises. Then you apply any valid logical rule of inference to generate new premises that must also be true, expanding the set of knowledge that you have. Eventually, you can prove a desired premise, either through enumerative searches like breadth-first search or iterative deepening, or through something like A* with a domain-specific heuristic. A lot of solvers also use just one logical rule (resolution, applied via unification) because it is refutation-complete and reduces the branching factor of the search.
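
To make the ""expanding set of knowledge"" idea concrete, here is a toy forward-chaining sketch over propositional Horn clauses. It is only an illustration of the search idea, not a real prover; the facts and rules are invented for the example.

    # Toy forward-chaining over propositional Horn clauses.
    # Each rule is (frozenset_of_premises, conclusion); facts are just known atoms.
    def prove(facts, rules, goal):
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and premises <= known:
                    known.add(conclusion)   # a new premise that must also be true
                    changed = True
        return goal in known

    # Example: from p, (p -> q) and (q -> r), derive r.
    facts = {'p'}
    rules = [(frozenset({'p'}), 'q'), (frozenset({'q'}), 'r')]
    print(prove(facts, rules, 'r'))  # True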

",16909,,16909,,8/4/2018 15:08,8/4/2018 15:08,,,,0,,,,CC BY-SA 4.0 7423,2,,7416,8/4/2018 15:03,,8,,"

Your idea may be feasible in general, but a neural network is probably the wrong high level tool to use to explore this problem.

A neural network's strength is in finding internal representations that allow for a highly nonlinear solution when mapping inputs to outputs. When we train a neural network, those mappings are learned statistically through repetition of examples. This tends to produce models that interpolate well when given data similar to the training set, but that extrapolate badly.

Neural network models also lack context, such that if you used a generative model (e.g. an RNN trained on sequences that form valid or interesting proofs), it could easily produce statistically pleasing but meaningless rubbish.

What you will need is some organising principle that allows you to explore and confirm proofs in a combinatorial fashion. In fact something like your idea has already been done more than once, but I am not able to find a reference currently.

None of this stops you using a neural network within an AI that searches for proofs. There may be places within a maths AI where you need a good heuristic to guide searches for instance - e.g. in context X is sub-proof Y likely to be interesting or relevant. Assessing a likelihood score is something that a neural network can do as part of a broader AI scheme. That's similar to how neural networks are combined with reinforcement learning.

It may be possible to build your idea entirely out of neural networks in principle. After all, there are good reasons to suspect human reasoning works similarly using biological neurons (it is not proven either way that artificial ones can match this). However, the architecture of such a system is beyond any modern NN design or training setup. It definitely will not be a matter of just adding enough layers and then feeding in data.

",1847,,,,,8/4/2018 15:03,,,,0,,,,CC BY-SA 4.0 7424,2,,7413,8/4/2018 15:07,,1,,"

The key is that reinforcement learning through something like, say, SARSA, works by splitting up the state space into discrete points, and then trying to learn the best action at every point.

To do this, it tries to pick actions that maximize the reward signal, possibly subject to some kind of exploration policy like epsilon-greedy.

In cart-pole, two common reward signals are:

  1. Receive 1 reward when the pole is within a small distance of the topmost position, 0 otherwise.
  2. Receive a reward that linearly increases with the distance the pole is off the ground.

In both cases, an agent can continue to learn after the pole has fallen: it will just want to move the pole back up, and will try to take actions to do so.
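
A minimal sketch of both reward signals, assuming the pole angle is measured in radians from upright (the numbers and function forms are illustrative only, not taken from any particular environment):

    import math

    def sparse_reward(angle, tolerance=0.05):
        # 1 only while the pole is within a small distance of upright
        return 1.0 if abs(angle) < tolerance else 0.0

    def dense_reward(angle, pole_length=1.0):
        # linear in the height of the pole tip above its lowest point
        return pole_length * (1.0 + math.cos(angle)) / 2.0

    print(sparse_reward(0.01), dense_reward(math.pi / 2))  # 1.0 0.5

Under either signal, nothing forces an episode boundary when the pole falls; the agent simply keeps receiving low reward until it swings the pole back up, which is exactly the continuing-task view.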

However, an offline algorithm wouldn't update its policy while the agent is running. That kind of algorithm wouldn't benefit from a continuing task. An online algorithm, in contrast, updates its policy as it goes, and has no reason to stop between episodes, except that it might become stuck in a bad state.

",16909,,,,,8/4/2018 15:07,,,,5,,,,CC BY-SA 4.0 7425,2,,7390,8/4/2018 17:24,,18,,"

Actor-Critic is not just a single algorithm, it should be viewed as a "family" of related techniques. They're all techniques based on the policy gradient theorem, which train some form of critic that computes some form of value estimate to plug into the update rule as a lower-variance replacement for the returns at the end of an episode. They all perform "bootstrapping" by using some sort of prediction of value.

Advantage Actor-Critic specifically uses estimates of the advantage function $A(s, a) = Q(s, a) - V(s)$ for its bootstrapping, whereas "actor-critic" without the "advantage" qualifier is not specific; it could be a trained $V(s)$ function, it could be some sort of estimate of $Q(s, a)$, it could be a variety of things.

In practice, the critic of Advantage Actor-Critic methods actually can just be trained to predict $V(s)$. Combined with an empirically observed reward $r$, they can then compute the advantage estimate $A(s, a) = r + \gamma V(s') - V(s)$.
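
A tiny numeric sketch of that advantage estimate, with a hypothetical critic represented by a plain dictionary and made-up numbers:

    gamma = 0.99
    V = {'s': 1.0, 's_next': 1.2}   # hypothetical critic estimates V(s) and V(s')
    r = 0.5                          # observed reward for taking action a in s

    advantage = r + gamma * V['s_next'] - V['s']
    print(advantage)  # 0.688; this value scales the actor's policy-gradient update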

",1641,,40822,,7/7/2022 9:49,7/7/2022 9:49,,,,5,,,,CC BY-SA 4.0 7428,2,,7406,8/5/2018 5:19,,1,,"

It is a surmountable problem for someone experienced in software architecture and machine learning.

  1. Render the message to a virtual display such as xvfb, headless Chrome, or phantomjs.
  2. Capture the text with Selenium, Watir, or some other DOM controller, addressing your HTML and DHTML complexity concern (see the sketch after this list).
  3. OCR the text in inline images and insert it appropriately.
  4. Once you have text with only word, line, list item, and paragraph breaks as structural separators, you have adequate separation of style and language content to then use naive Bayesian or one of the more recent forms of unsupervised categorization to find the separation point between the body and the signature block.
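
As a minimal sketch of the text-capture step, under the simplifying assumption that the message body is already available as an HTML string (so the headless rendering and OCR steps are skipped here), something like the following yields text with only structural separators:

    from bs4 import BeautifulSoup

    def extract_text(html):
        soup = BeautifulSoup(html, 'html.parser')
        # keep only line breaks as structural separators between blocks
        return soup.get_text(separator='\n', strip=True)

    print(extract_text('<div><p>Hi team,</p><p>Regards,<br>Alice</p></div>'))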

Extending your line of thinking, you may even be able to engineer a generative strategy for automated reply, but beware, this last feat is a dozen orders of magnitude more difficult than extracting text from HTML, DHTML, and typeset images and machine learning the separating signature blocks.

This last feat, if done poorly, would get you in trouble with many of your email reply recipients, and, if done well, would place you ahead of Amazon, Apple, and Google.

",4302,,,,,8/5/2018 5:19,,,,1,,,,CC BY-SA 4.0 7431,1,,,8/5/2018 11:10,,3,339,"

I am looking for books or state-of-the-art papers about current development trends toward strong AI.

Please do not include opinions about the books; just give the reference with a brief description. To emphasize, I am not looking for books on applied AI (e.g. neural networks or the book by Norvig). Furthermore, do not consider AGI proceedings, which contain papers that focus on very concrete aspects. The related Wikipedia article describes some active lines of investigation about AGI (cognitive, neuroscience, etc.) but cannot be considered an educational/introductory resource. Finally, I am not interested in philosophical questions related to AI safety, risks, or morality if they are not related to its development. Development does not exclude the mathematical foundations behind it.

For example, if I look at this list ""https://bigthink.com/mike-colagrossi/the-10-best-books-on-ai"", the final candidate list becomes empty.

",12630,,2444,,1/17/2021 19:32,1/18/2021 13:19,What are some books or state of the art papers about the development of a strong-AI?,,2,1,,,,CC BY-SA 4.0 7432,2,,5570,8/5/2018 11:54,,2,,"

What I'm missing here is a way to direct the evaluation function to actually winning. For example, a perfect evaluation function for a won position in chess would always return +1 without any hint how to progress towards checkmate. In a chess variant without the fifty-move limit, it could play useless turns forever.

I guess, this is a rather theoretical problem as we won't ever have such a good function, but I wonder if there's a way to avoid it?

It certainly isn't just a theoretical problem, it can occur in practice too. For ""large"" games this problem won't occur in the early game, but it can start occurring in the later stages of a game when terminal positions can actually be reached directly through exhaustive searches. It can also occur right from the beginning in extremely small / simple games, like Tic Tac Toe.

Addressing this is not (necessarily) just a matter of defining the evaluation function alone, it can also depend on which search or learning algorithm is used. So, I'll consider a few different cases.


Case 1: Minimax / Alpha-Beta / other similar ""exhaustive"" searches

When using Minimax / Alpha-Beta / other search algorithms based on those, the easiest solution to the problem you describe is to use iterative deepening. As soon as you prove a win for yourself at a certain depth level d using iterative deepening, you can simply stop the search; don't check whether there are any other wins to be proven at depth d + 1, just play along the line you've just proven to be a winning line. This way, you will always go for the win in the lowest number of moves.

A related advantage of this approach is that, as soon as you prove a loss at depth d for yourself, you can also cancel the search process and play the move that was best according to your evaluations at depth d - 1. This will often be your best chance to still grab a win in cases where your opponent fails to see their win at depth d.
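
A minimal sketch of that idea, assuming a hypothetical minimax(state, depth) helper that returns a (score, move) pair with score +1 for a proven win and -1 for a proven loss:

    def iterative_deepening(state, max_depth):
        # minimax(state, depth) is a hypothetical depth-limited search helper
        prev_best = None
        for depth in range(1, max_depth + 1):
            score, move = minimax(state, depth)
            if score == 1:    # proven win: stop immediately, play the shortest win found
                return move
            if score == -1:   # proven loss: fall back to the best move from depth - 1
                return prev_best if prev_best is not None else move
            prev_best = move
        return prev_best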


Case 2: Monte-Carlo Tree Search / other searches with randomness

Monte-Carlo Tree Search is a well-known search algorithm that incorporates an element of randomness in its search. With these kinds of algorithms, the problem you describe tends not to be a real issue. Due to the randomness in the search, wins that can be achieved in a small number of moves tend to be evaluated better than longer-distance wins in practice. In long-distance wins, there is a greater chance that the randomness in the search process causes an incorrect move to be played somewhere along the long-distance win, which reduces the evaluation of such a line of play.


Case 3: (Reinforcement) Learning approaches

These approaches tend to involve some element of randomness due to the need for exploration in learning, which leads to similar reasoning as described for MCTS above. Also, in Reinforcement Learning, we typically use a discount factor gamma < 1.0 (e.g. gamma = 0.99) which causes distant rewards to be viewed as less important than close rewards, even if we don't do such discounting for the final evaluation of the performance of an algorithm. See, for example, a lot of the work on Atari games (DeepMind's DQN, etc.). Algorithms are evaluated according to their undiscounted scores, but learning still uses a bit of discounting because, in practice, this is found to be beneficial for learning.

",1641,,,,,8/5/2018 11:54,,,,0,,,,CC BY-SA 4.0 7434,1,7439,,8/5/2018 16:12,,8,699,"

It has been proven in the paper ""Approximation by Superpositions of a Sigmoidal Function"" (by Cybenko, in 1989) that neural networks are universal function approximators. I have a related question.

Assume the neural network's input and output vectors are of the same dimension $n$. Consider the set of binary-valued functions from $\{ 0,1 \}^n$ to $\{ 0,1 \}^n$. There are $(2^n)^{(2^n)}$ such functions. The number of parameters in a (deep) neural network is much smaller than the above number. Assume the network has $L$ layers, each layer is $n \times n$ fully-connected, then the total number of weights is $L \cdot n^2$.

If the number of weights is not allowed to grow exponentially as $n$, can a deep neural network approximate all the binary-valued functions of size $n$?

Cybenko's proof seems to be based on the denseness of the function space of neural network functions. But this denseness does not seem to guarantee that a neural network function exists when the number of weights is polynomially bounded.

I have a theory. If we replace the activation function of an ANN with a polynomial, say a cubic one, then after $L$ layers, the composite polynomial function would have degree $3^L$. In other words, the degree of the total network grows exponentially. In other words, its ""complexity"", measured by the number of zero-crossings, grows exponentially. This seems to remain true if the activation function is sigmoid, but it involves the calculation of the ""topological degree"" (a.k.a. mapping degree theory), which I have not had the time to do yet.

According to my above theory, the VC dimension (roughly analogous to the zero-crossings) grows exponentially as we add layers to the ANN, but it cannot catch up with the doubly exponential growth of Boolean functions. So the ANN can only represent a fraction of all possible Boolean functions, and this fraction even diminishes exponentially. That's my current conjecture.

",17302,,2444,,2/7/2019 20:26,2/7/2019 20:26,How can a neural network approximate all functions when the weights are not allowed to grow exponentially?,,1,0,,,,CC BY-SA 4.0 7439,2,,7434,8/6/2018 4:31,,4,,"

What is Proven

The question references the proof of Approximation by Superpositions of a Sigmoidal Function, G. Cybenko, 1989, Mathematics of Control, Signals, and Systems.

The 1989 proof stated that the network, made of activations that were required to be, "Of continuous sigmoidal non-linearity," could, "Uniformly approximate any continuous function of n real variables," so, as the question stated, the proof doesn't directly apply to 1-bit discrete outputs. Note that the network is expected to merely approximate the desired circuit behavior.

The question defines the system as an arbitrary mapping from input bit vector

$I: \{ i_1, \; \dots, \; i_n \}$

to output bit vector

$O: \{ o_1, \; \dots, \; o_n \}$

It was proven much earlier that such a mapping can be accomplished with one Boolean expression for each output bit. For all $2^n$ possible input vector permutations, there exists a Boolean expression made up of AND and NOT operations that calculates a result matching any arbitrary logical truth table.

There are techniques for reducing redundancy in the array of Boolean expressions, which is critical to VLSI chip layout.

Without the retention of state anywhere in the network other than the attenuation matrix (parameters), the system is not Turing complete. However, with regard to the ability to realize Boolean expressions in describing the mapping, given an arbitrary number of layers, the network is complete.

Estimating Layer Depth Requirements

Only one inner layer is required in the 1989 proof, so how many layers would it take for an accurate n-bit-to-n-bit mapping to be learned?

The question proposes that there are $2^n$ to the power of $2^n$ permutations. The mapping of each input bit vector to the desired output bit state can be represented by a truth table of $n$ binary dimensions.

Each output is an independent bit, meaning the $2^n$-bit representations of unique Boolean functions that could produce each output bit is not tied to any other output channel. As would be expected, there are $2n$ freedoms of motion for the mapping of I to O.

For the case where the input is a bit vector of $n$ bits, where $n$ is also the number of activations in each of the $L$ layers, the total number of activations in the network $a_t$ and the total number of scalar elements across all attenuation matrices (the parameters that represent training state) $p_t$ are as follows.

$a_t = \sum_{v=0}^{L - 1} \; n_v$

$= n \, L$

$p_t = \sum_{v=0}^{L - 2} \; n_v^2$

$= n^2 \, (L-1)$

If IEEE 64 bit floating point numbers are used for each element in the attenuation matrix, we can calculate the number of bits available in the training parametrization.

$b_t = 64 \, (L - 1) \, n^2$

It would be normal today to use ReLU, leaky ReLU, or some other more quickly convergent activation instead of sigmoid for all layers but the last and use a simple binary threshold for the last.

Thus we have a formulation of the information theory comparison inferred by the question, and can reduce it.

$2^{2n} \le 64 \, (L - 1) \, n^2$

$L \ge 1 + \frac {2^{2n-6}} {n^2}$

This is a rough threshold. For a highly reliable training for the binary inputs to binary outputs, the number of layers should be well above the threshold.
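
A quick numeric check of the threshold for a few input widths $n$ shows how fast the required depth grows:

    # Rough layer-count threshold L >= 1 + 2^(2n - 6) / n^2 for a few widths n.
    for n in (4, 8, 12, 16):
        print(n, 1 + 2 ** (2 * n - 6) / n ** 2)
    # prints roughly 1.25, 17, 1821, 262145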

Below the threshold the trainability of the mapping will degrade to an inadequate approximation for most applications because of signal saturation in the back propagation mechanism.

",4302,,-1,,6/17/2020 9:57,8/16/2018 13:07,,,,6,,,,CC BY-SA 4.0 7440,2,,2512,8/6/2018 6:25,,1,,"

Fermi and SETI

Brilliant physicist and mathematician, Enrico Fermi, brought up more than one paradox in his published articles and many more in discussions and letters, but this question is probably referring to one for which an overview is given in Our Galaxy Should Be Teeming With Civilizations, But Where Are They?, By Seth Shostak, Senior Astronomer, 2018, SETI Institute.

Astronomer Sara Seager's proposed equation (building on the work of Frank Drake) is one of the favored equations today. It estimates the number of planets with detectable signs of intelligent life as the product of six factors.

  • Stars observed
  • Proportion of observed stars that are quiet
  • Average number of planets orbiting in habitable zones per star
  • Reciprocal proportion of planets that can be observed
  • Proportion of observable planets that have life
  • Proportion of planets with life that produce a detectable spectrum of gases indicative of technological evolution

Since there is no indication that artificial life, if it were to dominate earth, would stop industrial processes, there is no need to adjust Seager's equation.

Multiplicity or Singularity

The Singularity is conjecture without any logical proof. Conversely, there is no need for a proof that artificial creations will out compete humans in some respects. The proof is that some tasks that could once be accomplished by humans alone, such as mail sorting and chess, are now accomplished faster and with greater accuracy by artificial creations.

The trend has been that the number of things done better with software, control systems, and robotics increases and the number of things done better by humans has been decreased by that amount. Because of the vision of robots permeating science fiction and the actual long-term objectives of some contemporary corporations in this field of artificial workers and beings, this is now easy for most to accept.

The Singularity is an imagined point in time when computer software and hardware will, in tandem, attain the ability to reproduce itself in a way that produces improved reproductions. Even if only some of the attempts at improvement prove to be superior in intelligence, strength, intuition, or some other feature, artificial evolution may have been created.

The idea that such will happen

  • In exactly one way,
  • At exactly one point in time, and
  • Without any opposition or constraint

is part of the unproven conjecture. What has been offered as proof lacks mathematical rigor in a way that would have made Fermi frown, along with his contemporaries like John von Neumann and Oppenheimer. In fact, they would have wanted proof of completeness, that all human activity would be preempted by artificial equivalents, before any proof that it was to be singular in nature.

It May Have Already Occurred

Jacques Ellul, in his book The Technological Society, proposes that what people today are trained to call The Singularity actually occurred hundreds of years ago, when the techniques applied by humans began to drive the behavior of society more strongly than the intentions of humans drove the direction of technology. His book is filled with a few hundred pages of examples where this tipping point has already been reached. It's remarkably convincing.

Back to Fermi and SETI

Whether the transition of control was in the past, in the future, or will never entirely complete, the impact on the Fermi paradox referenced is not significant, because the paradox is why no other intelligent life has radioed us in response to our SETI broadcasts.

Exponential Functions in the Real Universe

The most important question is whether the primary trait of intelligence is self-destruction, as some, like Richard Leakey, author of The Sixth Extinction, believe. If this is the case, intelligent life is a pulse, not an ever-increasing exponential curve. Ever-increasing exponential curves don't actually occur in nature. In every other known phenomenon with an exponential growth period, that period is ALWAYS followed by a decrease in acceleration, ending in either a flat line or some form of chaotic vacillation.

Alternative Views of Origins

It is the work of Vladimir Vernadsky that has the appearance of higher wisdom than that of the media-hound CEOs of Tesla, Google, and other large and powerful technology companies today. Buried in his book The Biosphere, he stated the possibility that evidence in star dust indicates that life may have always existed. Just as the big bang is a conjecture based on a long list of unprovable assumptions, so is the idea that life began.

",4302,,,,,8/6/2018 6:25,,,,1,,,,CC BY-SA 4.0 7441,2,,7431,8/6/2018 13:27,,4,,"

There is actually a book called Artificial General Intelligence by Ben Goertzel and Cassio Pennachin. It's a bit out of date (from 2008), and published as a Springer-Verlag monograph (which tends to have fairly low editorial standards). This one is also an anthology, with each chapter written by a different author. It's probably not suitable as an undergraduate level book, but it does seem to contain something like the information that's wanted.

",16909,,2444,,1/18/2021 12:48,1/18/2021 12:48,,,,0,,,,CC BY-SA 4.0 7442,2,,7408,8/6/2018 13:44,,1,,"

I have not personally worked enough with continuous action spaces to be capable of confidently giving advise based on my own experience, but I can point you to likely relevant research (more recent than the research you already pointed to yourself):


The most common / ""popular"" area of research in recent years that involves RL and continuous action spaces uses robot / physics simulators such as MuJoCo. Some examples:

  • Asynchronous Methods for Deep Reinforcement Learning mentions using an entropy cost term in the loss function used for training to encourage exploration (see Supplementary Material after References).
  • Parameter Space Noise for Exploration describes the idea of adding noise directly to the learned parameters on the Neural Network. That way it basically always predicts a different action to be ""optimal"" (due to the noise), and therefore a ""greedy"" policy based on the noisy parameters will in fact have exploration and not be fully greedy.
  • DeepMind Control Suite is a somewhat recent paper that proposes a suite of benchmarks for continuous control problems. It is filled with references to relevant papers describing all kinds of algorithms, and hopefully every single one of those papers would also describe how they perform exploration.

Recently, at the ICML 2018 conference, there was a complete workshop dedicated to Exploration in Reinforcement Learning. Here is the list of accepted papers at this workshop. Note that it's about Exploration in RL, not only about continuous action spaces, so there might be papers in there that are only applicable to discrete action spaces. Nevertheless, I would be highly surprised if there's nothing in there that's relevant.

",1641,,,,,8/6/2018 13:44,,,,1,,,,CC BY-SA 4.0 7443,2,,6142,8/6/2018 14:21,,1,,"

In the title of the question, you write (emphasis mine):

step costs are drawn from a continuous range $[\epsilon, 1]$

If step costs are drawn from that range, it means that every step has a cost of at least $\epsilon$, and at most $1$. This leads back to the case that described in the question that you already understand.


Note that the book (at least, the third edition of the book) does state in the question that $0 < \epsilon < 1$. Of course, the question would not make much sense in the first place if negative costs were allowed.

",1641,,1641,,8/11/2018 12:20,8/11/2018 12:20,,,,0,,,,CC BY-SA 4.0 7445,2,,7419,8/6/2018 17:30,,1,,"

The question and the example are somewhat contradictory.

The example is about physical brain damage. Computer systems with the ability to self-repair have existed since the 1970s. They can repair a damaged disk (RAID), replace a CPU with an idle one (active/passive), mark faulty memory blocks, redirect network traffic from broken links to available ones, ... Nowadays, nearly all hardware failures are covered.

However, the question is about ""algorithms capable of healing themselves"", which has a parallel in ""persons capable of healing from a psychological problem"".

As in the case of persons, it depends on the problem and on the amount of recovery expected.

Some easier cases are:

  • Lots of non-AI systems have the ability to re-synchronize, auto-calibrate, ...

  • Any minimally intelligent system can ""stop"" if it detects that it is continuously producing wrong results.

Going a step further, thinking of ML (neural nets, ...), we can remark that all unsupervised learning machines can recover from a misalignment of their parameters simply by re-executing the learning process (or executing it continuously).

Finally, we could ask: ""can a machine recover from an error in its reward function?"" And, at this point, my answer is ""I do not know of any system able to do that, because they have no common sense"".

",12630,,,,,8/6/2018 17:30,,,,0,,,,CC BY-SA 4.0 7446,1,,,8/6/2018 18:35,,14,9295,"

Having analyzed and reviewed a certain number of articles and questions, it appears that the expression computational intelligence (CI) is not used consistently, and the relationship between CI and artificial intelligence (AI) is still unclear.

According to IEEE computational intelligence society

The Field of Interest of the Computational Intelligence Society (CIS) shall be the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.

which suggests that CI could be a sub-field of AI or an umbrella term used to group certain AI sub-fields or topics, such as genetic algorithms or fuzzy systems.

What is the difference between artificial intelligence and computational intelligence? Is CI just a synonym for AI?

",1581,,2444,,1/4/2020 23:54,11/28/2020 0:06,What is the difference between artificial intelligence and computational intelligence?,,2,0,,,,CC BY-SA 4.0 7448,2,,7446,8/6/2018 19:47,,6,,"

What is the difference between Artificial Intelligence and Computational Intelligence?

The short answer is that they are two parallel research efforts working on similar problems, but with different methodologies and histories. Essentially, they study similar things, but with different tools. In the modern context, computational intelligence tends to use bio-inspired computing, like evolutionary and genetic algorithms. AI tends to prefer techniques with stronger theoretical guarantees, and still has a significant community focused on purely deductive reasoning. The main area of overlap is in machine learning, especially neural networks.


The longer answer is that your source from 1948 says they are synonyms in part because it predates the split in the research community, which took place later.

The two communities have always had some overlap in topics, but, in my experience, they are mostly skeptical of each other's methodologies, and mostly publish in separate journals. Some authors consider CI to be a subset of AI, however, particularly those writing in the 1990s.

Example topics that are solidly in AI but definitely not in CI are logical and expert systems, and statistical approaches to machine learning like regression.

Example topics that are solidly in CI but perhaps not in AI (depending on whether one views CI as a subset of AI or not) are genetic programming, fuzzy logic, and ant colony optimization.

As a rule, AI-rooted techniques have better theoretical guarantees, and better developed theory in general (there are exceptions though). For example, Fuzzy Logic has been strongly criticized for the lack of a solid theoretical foundation (good modern summary here), as have genetic and evolutionary approaches (most famously, both lack a proof of convergence within finite time to a global optimum on a smooth surface, even though they do quite well in practice).

CI-rooted techniques nonetheless often see major performance advantages in specific problems (see, for instance, deep learning results), and tend to have a strong experimental and engineering tradition. The No Free Lunch theorems are often used to justify their use when theoretical certainty is missing. Basically, the theorems say that, in learning and optimization problems, a technique can only perform well on a problem by performing poorly on some other problem. CI authors argue that there are some problem domains in which their techniques work well (which must be true, because simpler algorithms like hill-climbing outperform them on simple problems).

Check out this paper for lots more references on CI, or this book for a list of core topics in the field.

",16909,,2444,,11/28/2020 0:06,11/28/2020 0:06,,,,1,,,,CC BY-SA 4.0 7449,2,,4830,8/6/2018 19:59,,3,,"

is there a value given for each piece (e.g. 1 for pawn, 3 for knight, 9 for queen, etc.) to train the algorithm, or does the algorithm learn this by himself?

No, there are no such explicit values assigned to pieces, no manually-constructed evaluation functions. The paper states that ""no domain knowledge"" is given to the algorithm other than the game's rules (necessary to run simulations / run search algorithms like MCTS).

I read that the algorithm uses Monte Carlo Tree Search, but what are the key improvements to prior chess algorithms already using MCTS?

The key improvements are in the way that Deep Learning (deep neural networks), Reinforcement Learning, and self-play are combined with MCTS. This is quite similar to the methods previously used by AlphaGo Zero in the game of Go as well. There have (likely) been combinations of Deep Learning + MCTS before (e.g. using a learned Neural Network to bias rollouts in MCTS), but the specific way in which they are combined in AlphaZero is critical (specifically, using the ""search behaviour"" of MCTS as one of the training signals for the Neural Network). It probably also helped that we're talking about Google here, who could afford to use thousands of Tensor Processing Units (TPUs) for training.

Is there a hope for being able to run it an average computer? They said it required 9 hours learning (starting with nearly 0 knowledge except rules (and maybe value for piece?)), and 24 millions of games. Is it something doable in maybe 1 month with a average computer?

Based on the information in the paper, I highly doubt this. As mentioned above, training was done with thousands of TPUs (specifically, 5,000 first-generation TPUs and 64 second-generation TPUs) in parallel. One month is only about a factor of 80 greater than the 9 hours of training time reported in the paper, whereas all of those TPUs amount to much more than 80 average computers.

After training, it likely could run on a fairly average computer (or maybe high-end computer) to play though. But it'll need much more power to train first.

",1641,,,,,8/6/2018 19:59,,,,0,,,,CC BY-SA 4.0 7452,2,,3938,8/7/2018 6:36,,3,,"

Such a large dataset cannot be loaded into your memory. Let's split what you can do into two options:

  1. Rescale all your images to smaller dimensions. You can rescale them to 112x112 pixels. In your case, because you have square images, there will be no need for cropping. You will still not be able to load all these images into your RAM at once.

  2. The best option is to use a generator function that will feed the data in batches. Please refer to the use of fit_generator in Keras (a minimal sketch follows this list). If your model's parameters become too big to fit into GPU memory, consider using batch normalization or a residual model to reduce your number of parameters.
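
Here is a minimal sketch of option 2. It assumes the resized images live in class subfolders under a hypothetical 'data/train/' directory and that 'model' is a Keras model you have already built and compiled elsewhere.

    from keras.preprocessing.image import ImageDataGenerator

    gen = ImageDataGenerator(rescale=1.0 / 255)
    train_flow = gen.flow_from_directory(
        'data/train/',            # hypothetical path, one subfolder per class
        target_size=(112, 112),   # matches the rescaling suggested in option 1
        batch_size=32)

    # model is assumed to be defined and compiled elsewhere
    model.fit_generator(train_flow, steps_per_epoch=len(train_flow), epochs=10)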

",17338,,,,,8/7/2018 6:36,,,,2,,,,CC BY-SA 4.0 7453,1,,,8/7/2018 9:44,,1,90,"

I am trying to create a chatbot application where the user can create their own bot, like Botengine. After searching Google, I saw that I need an NLP API to process the user's query. As per the wit.ai basic example, I can set and get data. How am I going to create a bot engine?

So, as far I understand the flow, here is an example for pizza delivery

  1. The user will enter a welcome message, i.e - Hi or Hello

  2. The welcome reply will be saved by bot owner in my database

  3. The user will enter some query; then I will hit the wit.ai API to process that query. Example: the user's query is ""What kind of pizza's available in your store"" and wit.ai will respond with the details of the intent ""pizza_type""

  4. Then I will search for the intent returned by wit in my database.

So, is that the right flow to create a chatbot? Am I heading in the right direction? Could anyone give me some link or example I can go through? I want to create this application using Node.js. I have also found some examples in node-wit, but can't find how I would implement this.

",17336,,2444,,10/22/2019 20:46,10/22/2019 20:46,How can I create a chatbot application where the user can create its own bot?,,0,2,,,,CC BY-SA 4.0 7454,2,,6308,8/7/2018 11:53,,3,,"

Let's first take a look at equation 6:

$$ \mathbf{E} \left[ r_{t,a} \vert \mathbf{x}_{t, a} \right] = \mathbf{z}_{t, a}^{\top} \boldsymbol{\beta}^* + \mathbf{x}_{t, a}^{\top} \boldsymbol{\theta}_a^* $$

Some quick observations:

  • The feature vector $\mathbf{z}_{t, a}$ has time $t$ and arms $a$ as subscripts. This means that we can have different vectors $\mathbf{z}_{t, a}$ for every time step $t$ and every arm $a$.
  • The feature vector $\mathbf{x}_{t, a}$ has time steps $t$ and arms $a$ as subscripts. These are exactly the same subscripts we also saw in $\mathbf{z}_{t, a}$. So, again, this means that we can have different feature vectors $\mathbf{x}_{t, a}$ for every time step $t$ and every arm $a$.
  • The parameters or weights vector $\boldsymbol{\beta}^*$ does not have any subscripts. This is the optimal vector (the star denotes optimality) that we aim to approximate through learning. This means that we learn only a single vector $\boldsymbol{\beta}$ that is used across all timesteps and all arms (of course, since we're learning it, it does actually change over time in practice; not because it should inherently change over time, but because we're still learning it).
  • The optimal parameter vector $\boldsymbol{\theta}_a^*$ (which we aim to approximate as $\boldsymbol{\theta}_a$ through learning) has arms $a$ as a subscript. This means that, for every arm $a$, we'll learn a separate parameter vector $\boldsymbol{\theta}_a$.

First a couple of corrections, some things I don't think are correct / clear in the paper:

  • In the paper, as you also quoted in the question, they say ""it is helpful to use features that are shared by all arms, in addition to the arm-specific ones."" This is in reference to the new feature vectors $\mathbf{z}_{t, a}$ that they're introducing there. However, we've just observed above that, due to having subscripts $a$, these feature vectors are in fact arm-specific, and not shared by all arms. Nevertheless, we use these feature vectors to take a dot product with a learned parameter vector $\boldsymbol{\beta}$, which in turn is shared across all arms. Intuitively, this means that we're saying that these feature vectors $\mathbf{z}_{t, a}$ can have different feature values per arm, but the ""importance"" of every value in such a vector is the same regardless of the arm we're looking at.
  • Since we assume that feature vectors $\mathbf{z}_{t, a}$ can be relevant for predicting rewards, it should also be included in the expectation on the left-hand side of equation 6, which should therefore be changed to:

$$ \mathbf{E} \left[ r_{t,a} \vert \mathbf{x}_{t, a}, \mathbf{z}_{t, a} \right] = \mathbf{z}_{t, a}^{\top} \boldsymbol{\beta}^* + \mathbf{x}_{t, a}^{\top} \boldsymbol{\theta}_a^* $$


Example

Suppose at every time $t$, a new customer enters our shop, and we have to pick one of a set of books (arms $a$) to try selling to that customer (note; this is actually very similar to news article recommendation, or selecting which Ad to display, or any other common MAB problem).

In feature vectors $\mathbf{x}_{t, a}$, we want features that we expect to have different influences on our chances of a successful sale per book. For example:

  • Age (kids will tend to prefer certain books, adults will tend to prefer others)
  • A measure of how many other books in the same genre that particular customer has bought before
  • etc.

In feature vectors $\mathbf{z}_{t, a}$, we want features that we expect to have the same influence on our chances of a successful sale regardless of what book we're looking at (note: the feature values may still be different per arm $a$ or time step $t$; we just expect their influence or importance to be the same / similar). For example:

  • A variable that is e.g. $1$ if the customer $t$ has enough money to buy a book $a$, or $-1$ if they don't have enough money. Note that the feature value can differ per customer and per book (different customers have different amounts of money, and different books have different prices), but we expect the influence to always be the same; if they have enough money, they might buy it, otherwise, they're very unlikely to buy it.
  • Age: yes, I'm aware that I already put this feature in the other list above as well. We might want to have it in both feature vectors. We put it in the previous list because we expect there to be book-specific preference levels depending on age. However, it might also be the case that certain age groups tend to buy more books than other age groups in general, so it might be beneficial to simply have it in both feature vectors (for different expected effects).
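
Tying the example back to equation (6), here is a minimal numeric sketch of the hybrid payoff $\mathbf{z}_{t, a}^{\top} \boldsymbol{\beta} + \mathbf{x}_{t, a}^{\top} \boldsymbol{\theta}_a$; the dimensions and vectors below are made up purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    k, d = 3, 5                     # made-up dimensions for z_{t,a} and x_{t,a}
    beta = rng.normal(size=k)       # shared parameter vector, one for all arms (books)
    theta = {a: rng.normal(size=d) for a in ('book_1', 'book_2')}  # per-arm parameters

    def expected_reward(z, x, arm):
        # equation (6): shared-weight part plus arm-specific part
        return z @ beta + x @ theta[arm]

    z = rng.normal(size=k)   # e.g. enough-money indicator (shared importance)
    x = rng.normal(size=d)   # e.g. age, same-genre purchase count (arm-specific importance)
    print(expected_reward(z, x, 'book_1'))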
",1641,,1641,,8/13/2018 18:03,8/13/2018 18:03,,,,0,,,,CC BY-SA 4.0 7455,1,,,8/7/2018 18:07,,3,143,"

The paper Dynamic Routing Between Capsules uses the algorithm called ""Dynamic Routing Between Capsules"" to determine the coupling coefficients between capsules.

Why can't it be done by backpropagation?

",17358,,2444,,6/9/2020 12:47,6/9/2020 12:47,Why coupling coefficients in capsule neural networks can't be learned by back-propagation?,,1,0,,,,CC BY-SA 4.0 7456,1,7461,,8/7/2018 20:00,,5,303,"

I hope to get some clarifications on Fitted Q-Iteration (FQI).

My Research So Far

I've read Sutton's book (specifically, ch 6 to 10), Ernst et al and this paper.

I know that $Q^*(s, a)$ expresses the expected value of first taking action $a$ from state $s$ and then following optimal policy forever.

I tried my best to understand function approximation in large state spaces and TD($n$).

My Questions

  1. Concept - Can someone explain the intuition behind how iteratively extending N from 1 until the stopping conditions are reached achieves optimality (Section 3.5 of Ernst et al.)? I have difficulty wrapping my mind around how this ties in with the basic definition of $Q^*(s, a)$ that I stated above.

  2. Implementation - Ernst et al. gives the pseudo-code for the tabular form. But if I try to implement the function approximation form, is this correct:

Repeat until stopping conditions are reached:
    - N ← N + 1
    - Build the training set TS based on the function Q^{N − 1} and on the full set of four-tuples F
    - Train the algorithm on TS
    - Use the trained model to predict on TS itself
    - Create TS for the next N by updating the labels: new reward plus (gamma * predicted values)

I am just starting to learn RL as part of my course. Thus, there are many gaps in my understanding. Hope to get some kind guidance.

",17361,,2444,,7/18/2021 18:28,7/18/2021 18:28,"How is the fitted Q-iteration algorithm related to $Q^*(s, a)$, and how can we use function approximation with this algorithm?",,1,1,,,,CC BY-SA 4.0 7457,1,,,8/7/2018 20:01,,4,3063,"

I am using a neural network as my function approximator for reinforcement learning. In order to get it to train well, I need to choose a good learning rate. Hand-picking one is difficult, so I read up on methods of programmatically choosing a learning rate. I came across the blog post Finding Good Learning Rate and The One Cycle Policy, about cyclical learning rates and finding good bounds for them.

All the articles about this method talk about measuring loss across batches of the data. However, as I understand it, reinforcement learning tasks do not really have "batches"; they just have episodes that can be generated by an environment as many times as one wants, and which also give rewards that are then used to optimize the network.

Is there a way to translate the concept of batch size into reinforcement learning, or a way to use this method of cyclical learning rates with reinforcement learning?

",17360,,2444,,6/19/2020 16:46,6/19/2020 16:46,Is there a way to translate the concept of batch size into reinforcement learning?,,2,0,,,,CC BY-SA 4.0 7459,1,7497,,8/7/2018 23:33,,4,369,"

My work's quality control department is responsible for taking pictures of our products at various phases of our QC process, and currently the process goes:

  1. Take picture of product
  2. Crop the picture down to only the product
  3. Name the cropped picture to whatever the part is and some other relevant data

Depending on the type of product, the pictures will be cropped a certain way. So my initial thought would be to use an object identifier, and then, once the object is identified, apply a cropping method specific to that product. There will also be QR codes within the pictures, to be used for naming via OCR in the future, so I can probably identify the parts that way if this proves slow or problematic.

The part I am unsure about is how to get the program to know how to crop based on a part. For example, I would like to present the program with a couple of before-crop and after-crop photos of product X, and then have it derive a specific cropping formula for product X based on those two inputs.

Also, if it makes any difference, my code is in C#.

",17363,,16929,,8/9/2018 18:23,8/9/2018 18:23,What is the best approach for writing a program to identify objects in a picture then crop them a specific way?,,2,3,,,,CC BY-SA 4.0 7460,2,,7459,8/7/2018 23:49,,1,,"

This sounds like you have a supervised learning problem. Microsoft provides a C# library, but it may not be suitable for your problem.

There are many different algorithms you could try, most of which will be within the sub-area of computer vision. Probably some kind of deep neural network is the best bet these days, but the right choice will probably depend on the details of your problem. Goodfellow et al. have a recent book that might be a good resource for deciding what to use.

Maybe someone who works in computer vision can give you a more specific suggestion.

",16909,,,,,8/7/2018 23:49,,,,0,,,,CC BY-SA 4.0 7461,2,,7456,8/8/2018 0:07,,3,,"

1): The intuition is based on the concept of value iteration, which the authors mention but don't explain on page 504. The basic idea is this: imagine you knew the value of starting in state x and executing an optimal policy for n timesteps, for every state x. If you wanted to know the optimal policy (and its value) for running for n+1 timesteps from each state, this is now easy to compute. The optimal action from state x is whichever one maximizes the sum of the reward for this timestep (r) and the value of executing an optimal n-step policy from the state you'd end up in afterwards (or the expected value of that, if the problem isn't deterministic).

In the approach of the paper, you're not going to compute either the policy or the value explicitly (probably because it's too expensive), so you just approximate the Q function for the n+1 problem.

IIRC, as long as your problem has a discounting factor and the error in your function approximation isn't too large, there are proofs (see Russell & Norvig's chapter on RL (18?)) that your policy will eventually stop changing between updates, and will be consistent with the policy for an infinite number of steps. Intuitively, this is because the discounting factor causes the series of rewards to be convergent.

2): I think that's right. When you build the training set, use the value estimates from the $Q_{N-1}$ network (taking the max over actions). That's an approximation of the value of starting in each state and running for N-1 steps with an optimal policy. Then you're learning an approximation of $Q_N$ from that, which looks right.
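
As a rough sketch of the loop (not the authors' code; the regressor choice and feature layout are illustrative, loosely following Ernst et al.'s use of tree-based regressors, and F is assumed to be a list of (s, a, r, s_next) tuples with discrete actions 0..n_actions-1):

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    def fitted_q_iteration(F, n_actions, gamma=0.95, n_iterations=50):
        # Regressor inputs: state features concatenated with the action taken.
        X = np.array([np.append(s, a) for (s, a, r, s_next) in F])
        R = np.array([r for (_, _, r, _) in F])
        S_next = np.array([s_next for (_, _, _, s_next) in F])

        model = None
        for _ in range(n_iterations):
            if model is None:
                targets = R  # Q_1: just the one-step reward
            else:
                # Q_N(s, a) = r + gamma * max_a' Q_{N-1}(s', a')
                q_next = np.column_stack([
                    model.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                    for a in range(n_actions)
                ])
                targets = R + gamma * q_next.max(axis=1)
            model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
        return model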

",16909,,,,,8/8/2018 0:07,,,,0,,,,CC BY-SA 4.0 7462,2,,7457,8/8/2018 0:11,,3,,"

Potentially.

If you do offline reinforcement learning, you're basically learning to approximate a function by sampling input/output pairs, rather than episode-by-episode. Here, your batch size could be set exactly as in an ordinary supervised learning problem.

If you do online learning, then it's not clear to me that the techniques used to set the learning rate in supervised learning can be directly applied though.

Both approaches are well covered in the RL chapter of Russell & Norvig (17? 18?).

",16909,,,,,8/8/2018 0:11,,,,0,,,,CC BY-SA 4.0 7463,2,,7457,8/8/2018 1:13,,2,,"

From my understanding of reinforcement learning, you will have an agent and an environment.

In each episode, the agent observes the state $s$, takes some action $a$, then gets some reward $r$, and finally observes the next state $s'$, and does this again and again until the end of the episode.

The above process does not involve any "learning". So when and where exactly do you "learn"? You learn from your history. In traditional Q-learning, the Q matrix is updated every time you have a new observation of $(s_t, a_t, r_t, s'_{t+1})$. Just like in supervised learning, you put in training samples one by one.

Similarly, you can feed in training samples in "batches" when you train, which means you "remember" the past $N$ observations and train on them together. I think that is the answer to your question.

Furthermore, the past $N$ observations could have a strong correlation that you don't want. To break this, you may have a larger "memory" that stores many observations, and you only sample a few (this number is your new batch size) randomly every time you train your model. This is called experience replay.
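
A minimal sketch of such a replay memory (class and method names are illustrative, not from any particular library):

    import random
    from collections import deque

    class ReplayBuffer:
        def __init__(self, capacity=100_000):
            self.buffer = deque(maxlen=capacity)   # oldest observations are dropped automatically

        def add(self, state, action, reward, next_state, done):
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size=32):
            # Uniform random sampling breaks the correlation between consecutive steps.
            return random.sample(self.buffer, batch_size)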

",17365,,2444,,6/19/2020 16:41,6/19/2020 16:41,,,,0,,,,CC BY-SA 4.0 7465,2,,6681,8/8/2018 1:53,,1,,"

This is supposed to be a comment but I haven't got enough reputation to do that.

In addition to what @the complexitytheorist has said, I recommend you have a deeper look at your data first, using dimensionality reduction and visualisation methods such as PCA and t-SNE. A better understanding of the data can save you a lot of work.

Then you can choose which clustering algorithm to use, for example, k-means or DBSCAN as a start.
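
As a rough sketch of that workflow with scikit-learn (assuming X is an (n_samples, n_features) array; the number of clusters below is just a guess to tune):

    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.cluster import KMeans
    import matplotlib.pyplot as plt

    X_2d = PCA(n_components=2).fit_transform(X)        # fast linear projection
    # X_2d = TSNE(n_components=2).fit_transform(X)     # slower, often better visual separation

    plt.scatter(X_2d[:, 0], X_2d[:, 1], s=5)
    plt.show()

    labels = KMeans(n_clusters=5).fit_predict(X)       # n_clusters is a placeholder to tune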

",17365,,,,,8/8/2018 1:53,,,,0,,,,CC BY-SA 4.0 7467,2,,7419,8/8/2018 3:28,,3,,"

Good question. It is related to the genetic algorithm concept, automated bug detection, and continuous integration.

Early Genetically Inspired Algorithms

Some of the Cambridge LISP code in the 1990s worked deliberately toward self-improvement, which is not the same as self-repair, but the two are conceptual siblings.

Some of those early LISP algorithms were genetically inspired but not pure simulations of DNA mutation with natural selection through sexual reproduction. A few of these evolution-like algorithms evaluated their own effectiveness based on a fixed effectiveness model. The effectiveness model would accept reported objective metrics at run time and analyze them. When the analysis returned an assessment of effectiveness below a minimum threshold, the LISP code would perform this procedure.

  • Copy itself (which is easy in LISP)
  • Mutate the algorithm in the copy according to some meta-rules
  • Run the mutation in parallel as a production simulation for a while
  • Check whether the effectiveness of the mutation outperformed its own

If the mutation was gauged as more effective, it would perform four more selfless steps.

  • Make a record of itself
  • Attach its own performance for later meta-rule use
  • Load the mutation it created in its own place
  • Perform apoptosis

Unlike biological apoptosis, apoptosis in these algorithms simply passes computational resources and run-time control to the mutation that was loaded.

This procedure was and probably still is easier in LISP than in other languages, although lovers of other languages would argue endlessly that point.

Extensions of Continuous Integration

This is also the closed-loop continuous improvement strategy intended when bug reporting is integrated with continuous integration development platforms and tools. We see extensions of continuous integration in the feeding of bug lists from automated detection, especially for crashes, in many applications, frameworks, libraries, drivers, and operating systems today. Many of the elements of closed-loop self-repair are already in general practice among the most progressive development teams.

The bug fixes themselves are not yet automated in the way researchers were attempting in the LISP code above. Developers and team leaders are following a process similar to this.

  • Developer or team lead associates (assigns) bug to developer
  • Developer attempts to replicate the bug with the corresponding version of the code
  • If replicated, the root cause is found
  • A design for a fix occurs at some level
  • The fix is implemented

If continuous integration and proper configuration management are in place, then at the point when a change is committed to the team repository, it is applied to the correct branches, and the suite of unit, integration, and functional tests is run to detect any breakage that the fix may have caused inadvertently.

Several Pieces of Full Automation are Already in Use

As one can see, many of the pieces are in place for automatic algorithm, configuration, and deployment package self-repair. There are even projects underway in several corporations to automatically create functional tests by recording user behavior and user answers to questions like, ""Was this helpful?""

What is Missing

What needs further development to more completely see full life cycle self-improving and self-repairing software?

  • Automatic bug replication
  • Automatic unit test creation
  • Automatic repair design
  • Automatic creation of code from design

Next Steps

I suggest that the next steps to be done are these.

  • Assess work already done on the four missing automations above
  • Review the LISP procedure that was perhaps shelved in the 1990s, or perhaps not, since we cannot see (and should not see) what was classified or made company confidential
  • Consider how the machine learning building blocks that have emerged within the last two decades may help
  • Find stakeholders to provide project resources
  • Get working

A Note on Demand, Ethics, and Technological Displacement

Truth be told, the quality of software was a problem in the 1980s, 1990s, 2000s, and 2010s. Just today, I found over a dozen bugs in software that is considered a stable release, when performing some of the most basic functions the software was designed to do.

Given the size of bug lists, just as accidents call into question whether humans should be driving cars, it is questionable whether humans should be maintaining software quality.

Humanity has survived replacement in a number of things already.

  • Arithmetic with a pencil and eraser is gone
  • Professional farming with garage tools is gone
  • Creating advertising mechanicals with Exacto knives is gone
  • Sorting mail by hand is gone
  • Communicating by horse-back courier is gone

Few software engineers are happy just fixing bugs. They seem to be happiest creating new software filled with bugs that someone else is supposed to fix. Why not let that someone else be artificial?

",4302,,,,,8/8/2018 3:28,,,,0,,,,CC BY-SA 4.0 7468,1,,,8/8/2018 5:16,,1,58,"

Why is the optimization step of the algorithm a quadratic program? [See: Apprenticeship Learning via Inverse Reinforcement Learning; page 3]

Isn't the objective function linear? Why don't we treat the problem as LPQC (linear program with quadratic constraints)?

",16678,,1671,,8/9/2018 21:26,8/9/2018 21:26,Optimization step in Apprenticeship Learning via Inverse Reinforcement Learning,,0,3,,,,CC BY-SA 4.0 7469,1,7471,,8/8/2018 5:52,,2,413,"

I am building a supervised learning model and I wish to compute the log-likelihood for the training set at the point of the minimum validation error.

Initially, I was computing the sum of the maximum softmax probabilities over all examples in the training set at the point of minimum validation error, but that doesn't look correct.

What is the correct formula for the log-likelihood?

",17372,,16909,,8/8/2018 7:55,8/8/2018 7:55,How do I compute log-likelihood for training set in supervised learning?,,1,0,0,,,CC BY-SA 4.0 7470,1,,,8/8/2018 6:17,,7,404,"

From Meta-Learning with Memory-Augmented Neural Networks in section 4.1:

To reduce the risk of overfitting, we performed data augmentation by randomly translating and rotating character images. We also created new classes through 90◦, 180◦ and 270◦ rotations of existing data.

I can maybe see how rotations could reduce overfitting by allowing the model to generalize better. But if augmenting the training images through rotations prevents overfitting, then what is the purpose of adding new classes to match those rotations? Wouldn't that cancel out the augmentation?

",17373,,2444,,1/2/2022 10:28,1/2/2022 10:28,How does rotating an image and adding new 'rotated classes' prevent overfitting?,,2,1,,,,CC BY-SA 4.0 7471,2,,7469,8/8/2018 7:50,,1,,"

The log-likelihood function for the training set (in general, not for deep learning in particular) will depend on your choice of loss function.

I'm guessing you're using something like a quadratic loss function for a binary classification problem, since this is a common approach. In that case (which corresponds to assuming Gaussian noise), the log-likelihood is, up to an additive constant, the negative sum of squared differences between the target labels and the softmax values produced by your model. If you want to compute the log-likelihood under a particular set of parameters (say, those that minimize validation error), then you just use those parameters in the model when generating the softmax values.
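
As a rough sketch of that computation under the (assumed) Gaussian noise model with a fixed standard deviation, where y_true and y_pred are arrays of targets and softmax outputs:

    import numpy as np

    def gaussian_log_likelihood(y_true, y_pred, sigma=1.0):
        # log N(y_true | y_pred, sigma^2), summed over all training examples
        n = y_true.size
        squared_error = np.sum((y_true - y_pred) ** 2)
        return -0.5 * n * np.log(2 * np.pi * sigma ** 2) - squared_error / (2 * sigma ** 2)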

",16909,,,,,8/8/2018 7:50,,,,2,,,,CC BY-SA 4.0 7472,2,,7222,8/8/2018 8:19,,1,,"

This question has a number of parts to it.

First, you have a representation problem: what is the correct way to present textual data to your machine learning algorithm?

In this case, you chose to apply Bag-of-Words and then TFIDF scores. For English, this might be expected to produce on the order of 100,000 features, with each instance having only a few non-zero features.

If you want to go this route, you would typically also do some kind of feature selection to eliminate unimportant features from consideration. Depending on your task, you may be able to reduce the size of your input vectors quite dramatically while still getting good performance (for some tasks, to just 100 or so).
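
For illustration, a minimal scikit-learn sketch of that route (texts and labels are assumed to be your documents and their classes; the value of k is just an example):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2

    X = TfidfVectorizer().fit_transform(texts)                       # sparse, often 100k+ columns
    X_reduced = SelectKBest(chi2, k=1000).fit_transform(X, labels)   # keep the 1000 most informative features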

You're right that this might not be the most promising approach however.

My choice for this problem would be to use a compression classifier, like DMC. These have the advantage that they do not need any feature selection or pre-processing, and can easily handle new words or typos. They give state-of-the-art performance on tasks like spam-email classification.

",16909,,,,,8/8/2018 8:19,,,,1,,,,CC BY-SA 4.0 7473,2,,7202,8/8/2018 8:27,,6,,"

The most likely explanation is that you're using too many training examples for your SVM implementation.

SVMs are based around a kernel function. Most implementations explicitly store this as an NxN matrix of kernel values between the training points, to avoid computing entries over and over again.

In your case, with 75% of 700,000 examples, this matrix would have roughly 525,000 x 525,000 entries and require on the order of a terabyte or more of RAM to store, which is far more than you're likely to have in consumer hardware.

If your SVM implementation can avoid caching the values, you might get a speedup that way, or you might not (you'll waste a lot of time recomputing them).

A much better way to deal with this is to just not use all of the data, since most of it will be redundant from the SVM's perspective (it only benefits from having more data near the decision boundaries). A good starting place would be to randomly discard 90% of the training data, and see what performance looks like.
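
A minimal sketch of that starting point (X_train and y_train are assumed to be numpy arrays; the 10% fraction is just the suggestion above):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    keep = rng.random(len(X_train)) < 0.10          # keep roughly 10% of the examples
    clf = SVC().fit(X_train[keep], y_train[keep])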

",16909,,16909,,8/8/2018 12:56,8/8/2018 12:56,,,,0,,,,CC BY-SA 4.0 7474,2,,5821,8/8/2018 8:38,,0,,"

Your problem appears to be multiple sequence alignment. That problem is well studied, and depending on your application, there are fast algorithms that can perform quite well without recourse to AI or optimization, and which are detailed in the provided link.

If your problem has a lot of local minima, then a PSO might be a good approach. Certainly local search has been widely used for this kind of problem in the past.

Your state should be represented by a list of sequences, each with some blanks inserted. Movement between states can consist of adding or deleting blanks from one or more sequences. The distance between two states can be computed as the sum of the number of operations needed to transform each sequence to its corresponding sequence in the other state, which you can compute in linear time (just sum up the needed additions or deletions between each pair of letters).

Hope that gives you a good starting place!

",16909,,,,,8/8/2018 8:38,,,,0,,,,CC BY-SA 4.0 7476,2,,6994,8/8/2018 14:41,,1,,"

It sounds like you are trying to do some kind of semi-supervised learning. In semi-supervised learning, some data points are labelled (you know which class they belong to), and others are not. There are classification algorithms designed specifically for this kind of problem, like a transductive-SVM. I personally have not found these techniques to be more effective than simply discarding the unlabelled data and treating my problem as purely supervised, but YMMV.

TFIDF remains fairly popular, as do ngram-based approaches. A more modern vectorization to consider might be word2vec, which maps words into a denser, more meaningful feature space than a bag-of-words style vector.

",16909,,,,,8/8/2018 14:41,,,,2,,,,CC BY-SA 4.0 7477,2,,6026,8/8/2018 15:04,,13,,"

This is well covered in the corresponding chapter of Russell & Norvig (chapter 3.5, pages 93 to 99 (Third Edition)). Check that out for more details.

First, let's review the definitions:


Your definitions of admissible and consistent are correct.

An admissible heuristic is basically just "optimistic". It never overestimates a distance.

A consistent heuristic is one where your prior beliefs about the distances between states are self-consistent. That is, you don't think that it costs 5 from B to the goal, 2 from A to B, and yet 20 from A to the goal. You are allowed to be overly optimistic though. So you could believe that it's 5 from B to the goal, 2 from A to B, and 4 from A to the goal.

A tree search is a general search strategy for searching problems that have a tree structure: that is, it's never possible to "double back" to an earlier state from a later state. This models certain types of games, for instance, like Tic-Tac-Toe. The tree search does not remember which states it has already visited, only the "fringe" of states it hasn't visited yet.

A graph search is a general search strategy for searching graph-structured problems, where it's possible to double back to an earlier state, like in chess (e.g. both players can just move their kings back and forth). To avoid these loops, the graph search also keeps track of the states that it has processed.

For more on tree vs. graph search, see this answer.


Okay, now let's talk through the intuition behind the proofs.

We first want to show that

If $h(n)$ is admissible, A* using tree search is optimal.

The usual proof is by contradiction.

  1. Assume that A* with tree search and an admissible heuristic was not optimal.

  2. Being non-optimal means that the first complete path from the start to the goal discovered by A* (call this $q$) will be longer than some other path $p$, which A* explored up to some state $s$, but no further.

  3. Since the heuristic is admissible, the estimated cost of reaching the goal from $s$ can be no larger than the true cost.

  4. By 3, and the fact that we know how much it costs to reach $s$ along $p$, the estimated total cost of $p$, and thus the cost to expand $s$, can be no larger than the true cost of $p$.

  5. Since the true cost of $p$ is smaller than the cost of $q$ (by 2), the estimated cost to expand $s$ must be smaller than the true cost of $q$.

  6. A* always picks the path with the most promising total cost to expand next, and the cost of expanding the goal state is given by the total path length required to reach it.

  7. 5 and 6 form a contradiction, so our assumption in 1 must have been incorrect. Therefore A* must be optimal.
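
The chain of inequalities behind steps 3-5 can be written compactly, where $g$ is the cost so far, $h$ the heuristic, $h^*$ the true remaining cost, and $f$ the estimated total cost A* uses to pick what to expand:

$$ f(s) = g(s) + h(s) \le g(s) + h^*(s) = \text{cost}(p) < \text{cost}(q) $$

so A* would have had to expand $s$ (and continue along $p$) before it could ever return $q$ as a complete path.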

The graph search proof uses a very similar idea, but accounts for the fact that you might loop back around to earlier states.

",16909,,2444,,5/15/2021 12:32,5/15/2021 12:32,,,,6,,,,CC BY-SA 4.0 7478,2,,6041,8/8/2018 15:22,,1,,"

To model 2048 (or any problem) for search, you need only a few pieces of information.

Note first though, that 2048 is not suitable for minimax, because there's only one player! Instead, you can treat this as a Markov decision process. The techniques to solve it are pretty similar though. Basically, you'll do search for one player, and insert ""chance"" nodes at each ply of the search. The value of a chance node is the expected value of its children. Note that this will reduce the effectiveness of pruning, so it might mean the problem is not tractable for search-based approaches.

  1. What does an end state look like? Usually you have some function G(s) that accepts a state s, and produces true if and only if it's an end state. In 2048, end states would be states where the player loses (no moves possible), or where the player wins (a 131072 tile is present), so this should be fairly easy to write.
  2. What are the payoffs? This is usually given by a utility function U(e) that accepts an end-state e, and produces a numeric value indicating the utility the player will receive.
  3. What actions can the player take in each state? In 2048, these are always the same (up, down, left and right)
  4. How are new states generated from old states and player actions? (In this case, the tiles slide according to the rules of the game, and then a new tile is inserted at a random empty location.)
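
Given those four pieces, a rough sketch of the expectimax-style search with chance nodes described above could look like this (the game interface and names are illustrative, not from any specific library):

    def expectimax(game, state, depth):
        # Estimated value of `state` for the single player.
        if game.is_terminal(state) or depth == 0:
            return game.utility(state)   # utility, or a heuristic evaluation at the depth cut-off

        best = float('-inf')
        for action in game.actions(state):                     # "max" node: the player's move
            value = 0.0
            for next_state, prob in game.chance_outcomes(state, action):
                value += prob * expectimax(game, next_state, depth - 1)   # chance node: expected value
            best = max(best, value)
        return best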

Although search might work here, since 2048 is a relatively simple MDP, you might be happier using techniques from reinforcement learning, which were specifically designed for this kind of problem. Russell & Norvig have a good set of chapters on both approaches (14-17).

",16909,,,,,8/8/2018 15:22,,,,1,,,,CC BY-SA 4.0 7479,1,,,8/8/2018 15:26,,3,68,"

When working with time-series data, is it wrong to use daily prices as features and the price after 3 days as the target?

Or should I use the next-day price as a target, and, after training, predict 3 times, each time for one more day ahead (using the predicted value as a new feature)?

Will these 2 approaches give similar results?

",17322,,2444,,1/1/2022 9:02,1/1/2022 9:02,"When working with time-series data, is it wrong to use different time-steps for the features and target?",,1,2,,,,CC BY-SA 4.0 7480,2,,6244,8/8/2018 15:28,,1,,"

This is well covered in the corresponding chapters of Russell & Norvig (Ch. 3 & 4). It also depends on the distinction between TREE-SEARCH and GRAPH-SEARCH.

First, note that breadth-first search also can't handle cost functions that vary between nodes! Breadth-first search only cares about the number of moves needed to reach a state, not the total cost of getting there, so if some moves are cheaper than others, it's not optimal.

Ok, back to the question: basically, the statement you reference applies to Breadth-first search run on a GRAPH-SEARCH problem. In this kind of problem, it is possible to loop back to an earlier state from a later state (like in chess, where you may move your king back and forth).

If the costs for moving between two states are negative, but constant across all the possible moves, then breadth-first search will not choose to loop between them forever, since it always expands the node nearest to the start by total number of moves, not the cheapest node. Since you could always add one more loop to make a path cheaper, breadth-first search will not find the shortest path (by cost), but will find the shortest path by number of moves instead.

",16909,,,,,8/8/2018 15:28,,,,0,,,,CC BY-SA 4.0 7481,2,,6366,8/8/2018 15:38,,2,,"

I think there are two problems with your approach.

First, your genetic algorithm contains crossover, but no mutations at all. In a GA, crossover causes convergence, while mutation is the only ""exploration"" operation. This means your creatures are stuck with whatever genes were present in their small initial population, and, even with modest selection pressures, they will rapidly converge to all being identical to each other. A common way to add mutation is to assign a random value to each location in each child's genome with a small probability (say, 0.01 * 1/number_of_genes). Some researchers prefer higher values. I'm not sure it's been definitely shown which is better, but it likely depends on your problem.
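
A minimal sketch of such a mutation operator (the gene value range and the rate passed in are illustrative):

    import random

    def mutate(genome, rate, low=0.0, high=1.0):
        # With probability `rate`, replace each gene with a fresh random value.
        return [random.uniform(low, high) if random.random() < rate else gene
                for gene in genome]

    # e.g. child = mutate(child, rate=0.01 / len(child))  -- the small per-gene rate suggested above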

Second, throwing away agents that died might not be the best selection mechanism. You might get more interesting behaviours if you tied reproduction to something else (e.g. eating a lot of food while you were alive). Right now, your fitness function is probably incentivizing agents to hide in a corner and not do anything, since this maximizes the chance they survive to the end of the simulation.

Hope this helps a bit.

",16909,,,,,8/8/2018 15:38,,,,0,,,,CC BY-SA 4.0 7482,2,,5622,8/8/2018 15:46,,1,,"

You might want to try symbolic regression. This is a machine learning approach that tries to generate an equation with arbitrary form that best fits a set of examples. I've used it before when working with physicists who wanted an equation in terms of known numeric constants and specific variables, but didn't expect the equation to be linear or one of the other shapes for which we have standard regression models.

Symbolic regression often (usually?) uses genetic programming for the underlying optimization. A decent tool is Eureqa. It used to be free, but I think they want a payment now. You can also do it yourself from any genetic programming toolkit with a little know-how.

",16909,,,,,8/8/2018 15:46,,,,0,,,,CC BY-SA 4.0 7483,2,,4986,8/8/2018 15:51,,1,,"

Yes, you can. There are a lot of different techniques, usually called Ensemble Methods.

A better approach might be to use something like AdaBoost along with a cheaper method like the decision trees you looked at. AdaBoost explicitly tries to train classifiers to correctly handle different parts of the data, rather than hoping that different methods turn out to do so by chance.
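
For example, a minimal scikit-learn sketch (X_train, y_train, and X_test are assumed to exist; by default AdaBoostClassifier boosts shallow decision trees):

    from sklearn.ensemble import AdaBoostClassifier

    clf = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)
    predictions = clf.predict(X_test)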

",16909,,,,,8/8/2018 15:51,,,,0,,,,CC BY-SA 4.0 7484,2,,3929,8/8/2018 16:04,,4,,"

This question is re-inventing the analysis for iterated prisoner's dilemma and the co-evolution that can lead to agents playing super-rationally in the one-shot version, which has been studied really extensively.

Dan Ashlock's research career looked at this in great detail from an evolutionary perspective, but it has also been widely studied in other areas of AI. The strategy you describe as superintelligent is called ""Tit-for-Tat"", and it is well known to emerge when people play several rounds of the game. It actually mimics how people play the game in experimental settings. It emerges evolutionarily in simulations because any two agents that both implement it will get more reward than selfish agents. Other, more complex strategies can also appear. For example, the fortress family of strategies consists of playing a series of question/response opening moves in the early rounds (unlocking the fortress), and then cooperating for the rest of the game if the other player knows the ""password"".

Hope this helps!

",16909,,,,,8/8/2018 16:04,,,,2,,,,CC BY-SA 4.0 7486,2,,3680,8/8/2018 16:33,,1,,"

There are a few ways of handling this within GA's, but most of them actually amount to using some kind of Genetic Programming instead.

The simplest way, and most similar to what you've proposed is called linear genetic programming. In this representation, you break the genome into a set of equal-width pieces. Each piece is interpreted as a machine-language instruction for a virtual machine. In your case, plausible instructions might be ""move left"" or ""move right"". Most versions use variable-length genomes, so your program's length depends only on how many instructions there are.

Another approach is to use the standard LISP-like genetic programming system, which Koza documents in his book for other simple problems, like the Santa Fe Trail.

More complex encodings are also possible. Grammatical evolution is another one that is similar to a GA in spirit, but interprets a genome according to a CFG instead of as instructions to a virtual machine.

",16909,,,,,8/8/2018 16:33,,,,1,,,,CC BY-SA 4.0 7487,2,,3298,8/8/2018 16:40,,2,,"

You are trying to solve a variant of The Halting Problem, which is the problem of detecting whether a computer program is going to stop, or run forever.

The Halting Problem is incomputable, which means it is not possible to write a computer program that solves it. It is easy to see that your problem is also incomputable. If you could predict whether a program would generate an error, then for any program X that someone wants to solve the halting problem for, we could write a new program:

  1. run(X)
  2. Error(""X finished running"").

and use your algorithm to determine whether X would finish running or not. Since the halting problem is known to be incomputable, this means your problem must be incomputable too.

That's not to say all is lost though. Formal verification is a field that uses some AI techniques (mostly reasoning-based, but I think there's some machine learning now too) to try to solve this problem for some programs. It can't work for every program though.

",16909,,,,,8/8/2018 16:40,,,,0,,,,CC BY-SA 4.0 7488,1,,,8/8/2018 20:31,,7,1763,"

I have only a general understanding of General Topology, and want to understand the scope of the term "topology" in relation to the field of Artificial Intelligence.

In what ways are topological structure and analysis applied in Artificial Intelligence?

",1671,,2444,,12/13/2021 14:25,12/13/2021 14:25,"In what ways is the term ""topology"" applied to Artificial Intelligence?",,2,1,,,,CC BY-SA 4.0 7489,2,,3298,8/8/2018 20:36,,1,,"

I think you would find this link helpful. It demonstrates how to identify patterns in large arbitrary byte data.

https://devblogs.nvidia.com/malware-detection-neural-networks/

",1720,,,,,8/8/2018 20:36,,,,0,,,,CC BY-SA 4.0 7490,2,,7488,8/8/2018 21:31,,2,,"

I spent some time thinking about it, but I'm aware of only two main meanings. There might be more that aren't coming to me right now though...

In local search problems, or sometimes in optimization for machine learning, the ""topology"" of a problem corresponds to the change in the function you're optimizing as you move between adjacent states. If the change is sharp, you have a ""rugged topology"". If it's gentle and continuous, you have a ""smooth topology"". See page 2 of An Introduction to Fitness Landscape Analysis and Cost Models for Local Search for an example.

The other major meaning is with reference to the structure (topology) of a combinatorial graph. Many modern machine learning algorithms are based in the idea of combinatorial graphs, including Bayesian Networks, Sum/Product Networks, and Deep Neural Networks. Here, topology refers to the topological ordering of a directed graph, or more informally, to ""how the graph is structured"". For example, in a neural network, the depth and width of the network's layers, and the nature of the connections between layers, define the topology of the network.

Additionally, it gets used a lot in the sense of the second meaning in other areas of AI, just because those areas also use graphs. For example, in automated planning, or in probabilistic reasoning, it is also common to represent your problem as a combinatorial graph. You could then talk about the ""topology"" of the problem.

",16909,,2444,,5/23/2020 21:13,5/23/2020 21:13,,,,0,,,,CC BY-SA 4.0 7491,2,,3126,8/8/2018 21:56,,1,,"

Tariq's comment hints at this, but this is still in some sense a very mainstream idea.

Check out the transcripts for this year's Loebner Prize. The winner (called Tutor) is again making use of Eliza-like deflections. Some of the other candidates try to use seduction (Columbia), or speaking in a flighty or insistent way (like 2017's Rose).

In all cases, the bots are relying on humans to read meaning into ambiguous or vague statements, which we seem to be hardwired to do. Kind of neat!

",16909,,,,,8/8/2018 21:56,,,,0,,,,CC BY-SA 4.0 7492,2,,7051,8/8/2018 22:00,,1,,"

Assuming there's no ordering to the hand (i.e. it doesn't matter what order cards were added to it), then a reasonable approach is to use one input neuron for the number of each kind of card that is present in a player's hand.

You don't describe how the game is played, but a common approach for extracting actions is to have one output neuron for each possible action. To select an action, you would pick the one corresponding to the neuron with the highest output response to a given input.
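
A rough sketch of that encoding and action selection (the number of card types and the model variable are placeholders):

    import numpy as np

    N_CARD_TYPES = 52   # assumption: 52 distinct card types

    def encode_hand(hand):
        # One input per card type, holding the count of that card in the hand.
        x = np.zeros(N_CARD_TYPES)
        for card in hand:            # `card` assumed to be an integer id in [0, N_CARD_TYPES)
            x[card] += 1
        return x

    # Action selection from the network's outputs (model assumed already trained):
    # outputs = model.predict(encode_hand(hand)[None, :])
    # action = int(np.argmax(outputs))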

",16909,,,,,8/8/2018 22:00,,,,0,,,,CC BY-SA 4.0 7493,2,,7470,8/9/2018 1:07,,2,,"

Over-fitting in the context of neural network convergence can have many causes. When the model implied by the design of the network is not well suited to the task, the network may still converge within the allowed time frame and on the example set presented, but it will take more time and more examples than necessary, and the reliability and accuracy of the trained circuit may be far below what a solid design could achieve.

Gross over-fitting can be one of the causes of decreased reliability. A slighter over-fit will show accuracy somewhat below what could otherwise have been reached by the end of training.

This is why various designs have emerged with functionally specific circuit simulations between more general multi-layer perceptron networks.

  • Convolution kernels
  • Rotations
  • Other basic translations
  • Hash lookups
  • Other patterned circuits that remove burden from general convergence

In the case of rotation, convergence on an optimal angle in one specialized layer or longitudinal stack element can remove considerable burden and allow overall convergence with fewer general activation layers, using fewer examples, and with a significantly more reliable and accurate result.

Consider what perceptrons must do to rotate an image arbitrarily. They must wire what is essentially rotational trigonometry into the parameters of everything that is orientation-dependent within the network, creating what is essentially a pliable helix, possibly in many locations within the trained network. Creating the pliable helix functionality, parameterized in advance of training and carefully handling back-propagation to adjust to its existence, drastically reduces the complexity of convergence.

If done well, over-fitting will be much less of an issue. If done poorly, there could be worse over-fitting or other problems such as non-convergence.

In summary, the best practice is to leave to general network training what must, by its nature, be complex but handle with specific functionality what is well understood and for which mathematical and algorithmic approaches already exist.

",4302,,,,,8/9/2018 1:07,,,,0,,,,CC BY-SA 4.0 7494,1,7495,,8/9/2018 3:31,,2,153,"

I am reading a book that states

As the mini-batch size increases, the gradient computed is closer to the 'true' gradient

So, I assume that they are saying that mini-batch training only focuses on decreasing the cost function in a certain 'plane', sacrificing accuracy for speed. Is that correct?

",14811,,2444,,8/19/2021 11:12,8/19/2021 11:12,What's the rationale behind mini-batch gradient descent?,,2,0,0,,,CC BY-SA 4.0 7495,2,,7494,8/9/2018 4:16,,2,,"

The basic idea behind mini-batch training is rooted in the exploration / exploitation tradeoff in local search and optimization algorithms.

You can view training of an ANN as a local search through the space of possible parameters. The most common search method is to move all the parameters in the direction that reduces error the most (gradient descent).

However, ANN parameter spaces do not usually have a smooth topology. There are many shallow local optima. Following the global gradient will usually cause the search to become trapped in one of these optima, preventing convergence to a good solution.

Stochastic gradient descent solves this problem in much the same way as older algorithms like simulated annealing: you can escape from a shallow local optimum because you will eventually (with high probability) pick a sequence of updates based on a single point that ""bubbles"" you out. The problem is that you'll also tend to waste a lot of time moving in wrong directions.

Mini-batch training sits between these two extremes. Basically, you average the gradient across enough examples that you still have some global error signal, but not so many that you'll get trapped in a shallow local optimum for long.
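
A rough sketch of such a mini-batch update loop (grad is an assumed function returning the average gradient of the loss over the given examples; all names are illustrative):

    import numpy as np

    def train(params, X, y, grad, batch_size=32, lr=0.01, epochs=10):
        n = len(X)
        for _ in range(epochs):
            order = np.random.permutation(n)              # shuffle each epoch
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]
                # Average gradient over the mini-batch: partly global signal, partly noise.
                params = params - lr * grad(params, X[idx], y[idx])
        return params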

Recent research by Masters and Luschi suggests that in fact, most of the time you'd want to use smaller batch sizes than what's being done now. If you set the learning rate carefully enough, you can use a big batch size to complete training faster, but the difficulty of picking the correct learning rate increases with the size of the batch.

",16909,,,,,8/9/2018 4:16,,,,4,,,,CC BY-SA 4.0 7496,2,,7494,8/9/2018 4:29,,0,,"

It's like you have a class of 1000 children and, being the teacher, you want all of them to learn something at the same time. It is difficult because they are not all the same; they have different adaptability and reasoning strength. So you can have alternative strategies for the same task. 1) Take one child at a time and train them. This is a good approach, but it takes a long time. Here your batch size is 1.

2) Take a group of 10 children and train them. This can be a good compromise between time and learning; in the smaller group, you can handle a naughty one better. Here your batch size is 10.

3) If you take all 1000 children and teach them together, it will take a very short time, but you will not be able to give proper attention to the mischievous ones. Here your batch size is 1000.

It's the same with machine learning: take a reasonable batch size and tune the weights accordingly. I hope this analogy clears up your doubt.

",3773,,,,,8/9/2018 4:29,,,,1,,,,CC BY-SA 4.0 7497,2,,7459,8/9/2018 7:56,,3,,"

Depending on the kind and amount of data you possess, there are a few approaches you might consider.

  1. Marking target objects in the dataset and training a CNN that returns the coordinates of the target object (a rough sketch of this option is given after this list). In this case, remember that training is usually faster when the ROI coordinates in the training data are expressed relative to the image size.

  2. Use some kind of focus mechanism, like a spatial transformer network (STN).

    This kind of network component is able to learn an image transformation (including a crop) that maximizes the target metric for the main classifier. The PyTorch tutorial on spatial transformer networks shows some nice visualizations of STN results. A good thing about this kind of network is that, given enough data, it might learn the proper transformation from image classification data alone (photo -> class). One does not need to explicitly mark target objects on the image!

  3. Object detection networks, like YOLO or Faster-RCNN. There are many tutorials on that matter.

  4. Saliency extraction. The simple idea is to generate a heatmap showing which parts of the input image activate the classifier the most. You could then try to calculate a bounding box based on such a heatmap; there are research papers describing this approach.

Points 1 and 2 are probably the easiest to implement, so I would start with them.
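
A rough PyTorch sketch of option 1 (architecture, sizes, and the output convention are illustrative, not a recommendation): a small CNN that regresses a crop box (x, y, w, h) relative to the image size.

    import torch
    import torch.nn as nn

    class CropRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 4), nn.Sigmoid())

        def forward(self, x):            # x: (batch, 3, H, W); output box in [0, 1]
            return self.head(self.features(x))

    # Training would minimise e.g. nn.MSELoss() between predicted and labelled boxes,
    # with box coordinates normalised by the image width and height.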

",16929,,,,,8/9/2018 7:56,,,,3,,,,CC BY-SA 4.0 7498,2,,7470,8/9/2018 9:50,,4,,"

How can data augmentation reduce overfitting?

You write that you can already maybe see how data augmentation can help prevent overfitting in general, but it sounds a bit uncertain and it's still asked in the title of the question, so I'll address this first:

Generally, when we use Machine Learning for classification problems, we would ideally learn a classifier that can perform well on a population. An example of a population would be: the set of all handwritten characters in the entire world. Generally, we don't have that complete population available for training, we only have a (much smaller) training dataset. If a training set is large enough, it might be a good approximation of the true population we're interested in (a ""dense sampling"" of the space we're interested in), but it's still just that; an approximation.

We say that a learning algorithm is overfitting if it performs singificantly better on the training set than it is on the population (which we generally approximate again using a separate test set).

Now, data augmentation (like adding rotations / translations of images in the training set to the training set) can help combat overfitting because it bridges the gap between training set and population. The population (all handwritten characters in the entire world) will likely include characters at various offsets from the middle (e.g. translations) and at various rotations. So, data augmentation is simply adding more examples (and possibly more varied examples) to our training set, which importantly are considered to be a part of the population we're interested in. If, for example, the population we are interested in were only the set of all handwritten characters at a specific position in the image (e.g., centered), then augmenting the dataset by adding various translations would not help; we'd be adding instances that are outside the population we want to learn about.


Why doesn't adding extra classes for rotations cancel out augmentations?

There are two possible explanations I can come up with:

  1. Maybe the ""extra-class"" rotations are different from the ""data augmentation"" rotations.

Here is the exact quote that's relevant from the paper:

""To reduce the risk of overfitting, we performed data augmentation by randomly translating and rotating character images. We also created new classes through 90◦, 180◦ and 270◦ rotations of existing data.""

That first sentence is not 100% clear in my opinion. I imagine the translations they use for data augmentation are relatively small (e.g. offsets of a few pixels), so maybe the rotations they use for data augmentation are also only ""small"" rotations (for example, between -10◦ and +10◦). The ""larger"" rotations (multiples of 90◦) described in the second sentence may then no longer be a part of the ""data augmentation to reduce the risk of overfitting"" in the first sentence; they're simply parts of a different action performed to increase the number of classes in the dataset (and, I imagine, for each of these larger rotations they may again perform ""smaller rotations"" for data augmentations).

This explanation is kind of hypothetical though, it's not 100% clear from the paper exactly what they mean here in my opinion.

  1. ""Overfitting"" can have a slightly different interpretation in the case of one-shot learning than in traditional learning.

Note that this paper is about ""one-shot learning"", where the goal is to be able to classify accurately after being presented only a single example (""one shot"") of a never-before-seen class. In such one-shot problems, you could in some sense say that an algorithm might ""overfit"" to the ""distribution of classes"" if it can only perform one-shot learning well on a certain set of similar classes, but not on others.

For example, if you only train one-shot learning on a set of handwritten characters that are ""upright"" (close to 0 rotation), your algorithm might be able to perform well in terms of one-shot learning when presented with new classes (new handwritten characters) that are also upright, but might be incapable of proper one-shot learning when presented with new classes (new handwritten characters) that are upside-down.

",1641,,,,,8/9/2018 9:50,,,,0,,,,CC BY-SA 4.0 7500,1,,,8/9/2018 19:11,,5,78,"

I'm looking for a database or some machine readable document that contains common ordered lists or common short sets. e.g:

{January, February, March,...}
{Monday, Tuesday, ....}
{Red, Orange, Yellow,...}
{1,2,3,4,...}
{one, two, three, four,...}
{Mercury, Venus, Earth, Mars,...}
{I, II, III, IV, V, VI,...}
{Aquarius, Pisces, Aries,...}
{ein, zwei, drei, ...}
{Happy, Sneezy, Dopey, ...}
{Dasher, Dancer, Prancer, Vixen ,...}
{John, Paul, George, Ringo}
{20, 1, 18, 4, 13, 6, ...}
{Alabama, Alaska, Arizona, Arkansas, California,...}
{Washington, Adams, Jefferson, ...}
{A,B,C,D,E,F,G,...}
{A,E,I,O,U}
{2,3,5,7,11,13,17,...}
{triangle, square, pentagon, hexagon,...}
{first, second, third, fourth, fifth,...}
{tetrahedron, cube, octohedron, icosohedron, dodecahedron}
{autumn, winter, spring, summer}
{to, be, or, not, to, be, that, is, the, question}
...

One use is for creating an AI that can solve codes or predict the next thing in a sequence.

",4199,,1671,,8/9/2018 20:45,8/10/2018 9:32,Is there a database somewhere of common lists?,,1,3,,2/3/2021 17:09,,CC BY-SA 4.0 7503,2,,7479,8/10/2018 0:32,,1,,"

I don't know what kind of price data you're dealing with. I suppose the order of the data matters a lot, so my suggestion would be:

  1. Use LSTM as it handles time series better

  2. You can predict 3 consecutive numbers from an RNN as the next three days' predictions

  3. Try regression first; it is likely it will not work (or will just flatten the curves, depending on your data noise), in which case classification is an easier approach

  4. Don't forget normalization

",17365,,,,,8/10/2018 0:32,,,,0,,,,CC BY-SA 4.0 7504,2,,7500,8/10/2018 2:33,,2,,"

So here are a couple of quick resources that I could think of. First of all, you could look at this:

https://en.m.wikipedia.org/wiki/List_of_lists_of_lists

It has classics such as accidents, hospitals in Asia, or even a list of famous resignations. It's essentially a list of random lists of things. It may not fully cover your requirement for sequences, but it'll help with small subsets of lists.

As for sequences, you could always check out,

https://oeis.org

It’s pretty much a list of official mathematical sequences. It’s got everything from the Fibonacci sequence to esoteric sequences you’ve never heard of.

",17408,,,,,8/10/2018 2:33,,,,0,,,,CC BY-SA 4.0 7505,2,,7350,8/10/2018 3:26,,3,,"

Assuming you're not referring to any particular type of pooling operation, it's possible that you could have, for example, a mean pool followed by a max or min pool. What this could do is combine the idea of reducing the dimensionality of your data from a holistic perspective with the mean pool, and then choosing the best of your averages with your max pool.

",17408,,2444,,5/30/2020 11:20,5/30/2020 11:20,,,,0,,,,CC BY-SA 4.0 7506,2,,5982,8/10/2018 4:55,,0,,"

I would suggest you use a sequence to sequence model with character level features. It is an easy task, provided you have data.

",3773,,,,,8/10/2018 4:55,,,,2,,,,CC BY-SA 4.0 7509,1,,,8/10/2018 9:45,,4,66,"

Nowadays we don't know how to create AI in a safe way (I think that we don't even know yet how to define a safe AI), but there is a lot of research in developing a model allowing it.

Let's say that someday we discover such a model (maybe it would even be possible to mathematically prove its safety). Is it rational to ask how we would prevent people from creating AI outside of this model (e.g. they are so confident in their own model that they just pursue it and end up with something like the paperclip scenario)?

Should we also think about creating some theory/infrastructure preventing such a scenario?

",17411,,1581,,8/10/2018 17:16,8/16/2018 8:02,Is it necessarry to create theory/infrastructure to prevent people from creating AI incompatible with a safe model (if we create one)?,,1,1,,,,CC BY-SA 4.0 7510,2,,6898,8/10/2018 13:54,,2,,"

I do not believe that StackExchange has published precisely what algorithm(s) they use for that, so we can't tell for sure.

However, in this meta.stackexchange question, you can follow some of the efforts that were undertaken to collect training data for training such a classifier. The post also links to the ""CQADupStack: Gold or Silver?"" paper, which describes the analysis of such a dataset that comes directly from StackOverflow. You might be able to find interesting literature by browsing Google Scholar for papers that cite this paper.

There is also another meta.stackoverflow discussion on this topic, where answers link to various community-developed projects / bots for this purpose. Again, not necessarily what is actually used by the StackExchange sites, but likely similar.

Finally, there is definitely a lot of research on performing such classifications (one example found by a quick Google search is ""Duplicate Question Detection in Stack Overflow: A Reproducibility Study""; many other relevant publications can be found in its list of references). Again, this does not necessarily lead to precisely the algorithm that StackExchange happens to use, but to many relevant ones, one or more of which they might be using.

",1641,,,,,8/10/2018 13:54,,,,0,,,,CC BY-SA 4.0 7511,1,,,8/10/2018 18:09,,3,105,"

I am going to train a deep learning model to classify hand gestures in video. Since the person will be taking up nearly the entire width/height of the video and I will be classifying what hand gesture he or she is doing, I don't need to identify the person and create a bounding box around the person doing the action. I only need to classify video sequences to their class labels.

I will be training on a dataset with individual videos, in which each entire video clip is the particular gesture (So it's a dataset like UCF-101, with video clips corresponding to class labels). But when I am deploying the network, I want the neural network to run on live video. As in how the live video is playing, it should recognize when a gesture has occurred and indicate that it recognized the gesture.

So I was wondering - How can I train the neural network on isolated video sequences in which the entire video clip is the action (like explained above), but run the neural network on live video? For instance, can I use a 3D CNN? Or must I use a 2D CNN with an LSTM network instead, for it to work on live video? My concern is that since a 3D CNN performs the filters across many frames, wouldn't running the CNN on every frame make it very slow? But if I use a 2D CNN with LSTM, will that make it faster? Or will both work fine?

Thank you for your help in advance.

",11364,,,,,12/28/2022 7:03,How should continuous action/gesture recognition be performed differently than isolated action recognition,,1,0,,,,CC BY-SA 4.0 7515,2,,7215,8/11/2018 3:19,,2,,"

The Degree to Which Inhibition is in Common Use

What could loosely be considered inhibitory effect occurs in MLPs (multilayer perceptrons) as they are normally designed and implemented already.

The gradient descent scheme implemented within a larger back propagation algorithm can produce a parameter adjustment delta that is either positive or negative.

  • A positive value decreases the attenuation of that parameter's signal path, thereby increasing the signal strength there.
  • A negative value increases the attenuation of that path, thereby decreasing signal strength through that connection.

A decrease in a parameter's value as a result of back propagation bears some similarity to the inhibition of a neural signal path; however, you may already be aware of the significant differences between the signaling between biological neurons and the signalling between layers in the types of artificial networks common in machine learning.

The term inhibition is, as mentioned, only loosely applicable.

  • One cannot inhibit a pulse through a MLP because there ARE NO PULSES in MLPs.
  • One cannot alter the signal attenuation between neurons by varying a numeric parameter either, since there is no numeric parameter array in a biological net.

Stimulation and inhibition in the brains of mammals are also different in that neuro-chemistry impacts the network regionally, so the terms stimulation and inhibition are a bit ambiguous, since we have agonists and antagonists ranging from dopamine to serotonin, from cannabinoids to oxytocin receptors, and from endorphins to other classes.

Changes from Former Textbook Themes

The former thinking was that a pulse travelling through a biological signal path strengthened that connection. No one in neurology research adheres to that simplistic a conception today.

For example, it is known that a signal pathway may be in common use but may be closed down by repeated sharp pains following its use. Although I am not well trained in the electro-chemical processes of neural pathways, I recall in vitro experiments supporting that this is neither neuro-plastic nor electrical, but related to regional chemical feedback.

The current view of addiction as a brain disease is that a breakdown of the interrelationship between chemical state change and learned inhibition or transmission is causal. Inhibition or transmission is no longer decided upon based on organism survival and socialization but on the addictive stimuli, leading to behavioral dysfunction.

It may be useful to point out that stimulation and inhibition are not strictly antonyms. The opposite of inhibiting a signal is the transmission of it. The opposite of stimulation is the lack of stimulation (no signal).

Attempting Analogy in Largely Dissimilar Circuit Models

It may not be an aid to general understanding to draw parallels between ReLU activation functions and the functions of synapses with their sensitivity to regional brain chemistry and with cell-level retention functions orchestrated by organelles.

Neural nets are not neural. They are a mathematical conception sharing only the ideal of learning as convergence on some ideal network behavior. Nothing else of significance is in common.

Adding to the Disparity Between MLPs and Biology

In a sense inhibition in the brain occurs at multiple architectural levels, inside the cell, between cells, and over structures of cells, and the alignment of pulses temporally (in the time domain) is not simulated at all in conventional machine learning constructs.

Some researchers have deviated entirely from the multilayer perceptron design and favored a pulse based system that requires specialized hardware. Follow the money there. It is not an inexpensive research avenue yet. But it may become one if they have success.

Curvature in Functions

Brief note on terminology: Nth degree polynomials fall under linear algebra, so the best term to use is 'curved functions' so as to not fall into the ambiguity of the term non-linear.

Nonetheless, you are correct that there are non-linearities of different types in biological neural circuits. Potential change is not only curved, but its function's curvature changes quickly. It is temporally sensitive.

On the longer time frame, the memory in a cell forms through neural plasticity, and the cell behavior changes internally (within the cell membrane), employing cytoplasm and the suspended organelles. That memory function also attenuates at a roughly inverse exponential rate with respect to time, but some have hypothesized, based on empirical evidence, that forgotten cellular function can be recalled. Again, this is at the cellular level.

The second non-linearity is not a sum of potentials. The surface of the function that aggregates incoming signals is not flat. It is curved. Also, as mentioned, the temporal alignment presents a complexity, since perfect pulse alignment is not treated the same as pulses not perfectly aligned in time.

(I brought up the absurdity of using an additive adjustment in MLP back propagation in a question I wrote for this site. The responses to that challenge to a status quo not particularly well understood by the majority of machine learning practitioners were not outstanding.)

Linear Thinking Prevails Currently

To a large degree linear thinking (in the wider sense of the term) pervades mainstream machine learning and data science today, the activation functions being a notable and welcomed exception.

Over time, I expect that will improve. I see current leading-edge research going beyond that linear thinking: considering short- and long-term memory as in LSTM and attention-based networks, simulating the curved surfaces that represent pulse propagation in mammalian nets, and applying exponential decay here and there in the latest literature.

Gratitude for the Question

Questions like this one may help widen the mainstream understanding too.

",4302,,4302,,10/15/2018 23:09,10/15/2018 23:09,,,,2,,,,CC BY-SA 4.0 7518,1,7708,,8/11/2018 6:41,,4,320,"

I have a game that involves 2 weapons, which fight against each other. Each weapon has 5 features/statistics, each of which has a certain range. I can simulate the game $N$ times with randomly initialised values for these statistics, in order to collect a dataset. I can also count the number of times a weapon wins, loses or draws against the other.

I'm looking for an algorithm that minimises the number of wins of the 2 weapons (maybe by changing these features), so that they are balanced.

",17423,,2444,,11/7/2021 18:28,11/7/2021 18:28,Which algorithm can I use to minimise the number of wins of 2 weapons that fight each other in a game?,,1,1,,,,CC BY-SA 4.0 7519,2,,7215,8/11/2018 9:28,,2,,"

In biology, when the presynaptic neuron releases a neurotransmitter (a positive amount of it, obviously), this neurotransmitter reaches the postsynaptic receptors, causing an excitatory (depolarization) or inhibitory (hyperpolarization) effect, depending on the kind of postsynaptic receptor on the next cell's dendrites. If the total amount of depolarization (across all dendrites) is sufficiently larger than the hyperpolarization, the neuron triggers an action potential or a similar signal, continuing the chain.

In the artificial neural net parallel, when the activation function of the previous layer provides an output (say, a positive one), this value is multiplied by the weights of the next layer's cells. If the weight is positive, the effect is excitatory; if the weight is negative, the effect is inhibitory.

Thus, these two models are functionally equivalent (the same excitatory/inhibitory behaviour is covered); just draw the analogy between the kind of postsynaptic receptor and the sign of the input weight of the artificial neuron.
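
As a tiny numeric illustration of that analogy (a sketch only, not a biological simulation), the sign of each weight in a plain weighted-sum neuron determines whether the corresponding input excites or inhibits the unit:

import numpy as np

def neuron_output(inputs, weights, bias):
    # Weighted sum followed by a sigmoid; positive weights excite, negative weights inhibit
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 1.0])
print(neuron_output(x, np.array([0.8, 0.5]), 0.0))   # both inputs excitatory -> higher output
print(neuron_output(x, np.array([0.8, -0.5]), 0.0))  # second input inhibitory -> lower output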

",12630,,12630,,8/11/2018 10:11,8/11/2018 10:11,,,,0,,,,CC BY-SA 4.0 7520,1,,,8/11/2018 11:07,,1,52,"

I use a recurrent neural network. An RNN takes one input value per step and produces one output value per step. I have daily sales demand time-series data.

I want to predict the sales demand for three days. Does the RNN have to output one day at a time, three times, or can it output the sales demand for all three days in a single prediction?

",17424,,1641,,8/11/2018 12:43,8/11/2018 12:43,How recurrent neural network work when predict many days?,,0,1,,,,CC BY-SA 4.0 7521,2,,7153,8/11/2018 15:09,,3,,"

The differences you have observed between the two different versions of the TRPO paper are due to different formalizations of the problem and the objective.

In the first version of the paper you linked, they start out in Section 2 by defining Markov Decision Processes (MDPs) as tuples that, among other things, have a cost function $c : \mathcal{S} \rightarrow \mathbb{R}$. They define $\eta(\pi)$ as the expected discounted cost of a policy $\pi$, and subsequently also define state-action value functions $Q_{\pi}(s_t, a_t)$, value functions $V_{\pi}(s_t)$, and advantage functions $A_{\pi}(s, a)$ in terms of costs. Ultimately, in Equation 15, they write the following:

\begin{aligned} \underset{\theta}{\text{minimize }} & \mathbb{E}_{s \sim \rho_{\theta_{\text{old}}}, a \sim q} \left[ \frac{\pi_{\theta}(a \vert s)}{q(a \vert s)} Q_{\theta_{\text{old}}}(s, a) \right] \\ \text{subject to } & \mathbb{E}_{s \sim \rho_{\theta_{\text{old}}}} \left[ D_{KL}(\pi_{\theta_{\text{old}}}(\cdot \vert s) ~ \vert \vert ~ \pi_{\theta}(\cdot \vert s)) \right] \leq \delta \end{aligned}

Now, there's a lot going on there, but we can very informally ""simplify"" it to only the parts that are relevant for this question, as follows:

$$\underset{\theta}{\text{minimize }} \mathbb{E} \left[ Q(s, a) \right]$$

When we look at just that, we see that we're pretty much trying to minimize $Q$-values, which are costs; that makes sense, typically costs are things we want to minimize.


In the second version of the paper you linked, they have changed the Preliminaries in Section 2. Now they no longer have a cost function $c$ in their definition of an MDP; they have replaced it by a reward function $r : \mathcal{S} \rightarrow \mathbb{R}$. Then they move on to define $\eta(\pi)$ as the expected discounted reward (rather than expected discounted cost), and also define $Q$, $V$ and $A$ in terms of rewards rather than costs. This now all matches the standard, common terminology in Reinforcement Learning.

Ultimately, Equation 14 looks identical to what we saw above, it's again about an expectation of $Q$-values. But, now $Q$-values are rewards rather than costs. Rewards are generally things we want to maximize, rather than minimize, so that's why the objective swapped around.
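
In the same informally ""simplified"" notation as before, the second version's objective therefore reads

$$\underset{\theta}{\text{maximize }} \mathbb{E} \left[ Q(s, a) \right]$$

with $Q$ now denoting (discounted) rewards rather than costs.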

",1641,,1641,,8/11/2018 17:34,8/11/2018 17:34,,,,0,,,,CC BY-SA 4.0 7522,1,7545,,8/11/2018 23:41,,4,114,"

I've been wanting to make my own Neural Network in Python, in order to better understand how it works. I've been following this series of videos as a sort of guide, but it seems the backpropagation will get much more difficult when you use a larger network, which I plan to do. He doesn't really explain how to scale it to larger ones.

Currently, my network feeds forward, but I don't have much of an idea of where to start with backpropagation. My code is posted below, to show you where I'm currently at (I'm not asking for coding help, just for some pointers to good sources, and I figure knowing where I'm currently at might help):

import numpy

class NN:
    def __init__(self, input_length):
        self.layers = []
        self.input_length = input_length
        self.prediction = []  # instance attribute, not a shared class-level list
    def addLayer(self, layer):
        self.layers.append(layer)
        if len(self.layers) >1:
            self.layers[len(self.layers)-1].setWeights(len(self.layers[len(self.layers)-2].neurons))
        else:
            self.layers[0].setWeights(self.input_length)
    def feedForward(self, inputs):
        _inputs = inputs
        for i in range(len(self.layers)):
            self.layers[i].process(_inputs)
            _inputs = self.layers[i].output
        self.prediction = _inputs

    def calculateErr(self, target):
        out = []
        for i in range(0,len(self.prediction)):
            out.append(  (self.prediction[i] - target[i]) ** 2  )
        return out
        

class Layer:

    def __init__(self, length, function):
        # instance-level lists, so separate layers do not share state
        self.neurons = []
        self.weights = []
        self.biases = []
        self.output = []
        for i in range(0, length):
            self.neurons.append(Neuron(function))
            self.biases.append(numpy.random.randn())

    def setWeights(self, inlength):
        # one weight vector per neuron in this layer, with one entry per input
        for i in range(0, len(self.neurons)):
            self.weights.append([])
            for j in range(0, inlength):
                self.weights[i].append(numpy.random.randn())
    
    def process(self, inputs):
        self.output = []  # reset outputs on every forward pass
        for i in range(0, len(self.neurons)):
            self.output.append(self.neurons[i].run(inputs, self.weights[i], self.biases[i]))
    

class Neuron:
    def __init__(self, function):
        self.function = function
        self.output = 0
    def run(self, inputs, weights, bias):
        self.output = self.function(inputs,weights,bias)
        return self.output

def sigmoid(n):
    return 1/(1+numpy.exp(-n))  # note the minus sign: sigmoid(n) = 1 / (1 + e^(-n))


def inputlayer_func(inputs,weights,bias):
    return inputs

def l2_func(inputs,weights,bias):
    out = 0
    
    for i in range(0,len(inputs)):
        out += weights[i] * inputs[i]
    out += bias
    
    return sigmoid(out)

NNet = NN(2)


l2 = Layer(1,l2_func)


NNet.addLayer(l2)
NNet.feedForward([2.0,1.0])
print(NNet.prediction)

So, is there any resource that explains how to implement the back-propagation algorithm step-by-step?

",17432,,2444,,12/22/2020 23:58,12/23/2020 0:10,How can I implement back-propagation for medium-sized neural networks?,,2,0,,,,CC BY-SA 4.0 7524,1,7530,,8/12/2018 12:04,,6,1594,"

I am trying to build an RL agent to price paid-for-seating on commercial flights. I should reiterate here - I am not talking about the price of the ticket - rather, I am talking about the pricing you see if you click on the seat map to choose where on the plane you sit (exits rows, window seats, etc). The general set up is:

  1. After choosing their flights (for a booking of n people), a customer will view a web page with the available seat types and their prices visible.
  2. They select between zero and n seats from a seat map with a variety of different prices for different seats, to be added to their booking.
  3. The revenue from step 2 is observed as the reward.

Each 'episode' is the selling cycle of one flight. Whether the customer buys a chosen seat or not, the inventory goes down as they still have a ticket for the flight so will get a seat at departure. I would like to change prices on the fly, rather than fix a set of optimal prices throughout the selling cycle.

I have not decided on a general architecture yet. I want to take various booking, flight, and inventory information into account, so I know I will be using function approximation (most likely a neural net) to generalise over the state space.

However, I am less clear on how to set up my action space. I imagine an action would amount to a vector with a price for each different seat type (window seat, exit row, etc). If I have, for example, 8 different seat types, and 10 different price points for each, this gives me a total of 10^8 different actions, many of which will be very similar. In a sense, each action is comprised of a combination of sub-actions - the action of pricing each seat type.

Additionally, each sub-action (pricing one seat type) is somewhat dependent on the others, in the sense that the price of one seat type will likely affect the demand (and hence reward contribution) for another. For example, if you set window seats to a very cheap price, people will be less likely to spend a normal amount for the other seat types. Hence, I doubt the problem can be decomposed into a set of sub-problems.

I'm interested if there has been any research into dealing with a problem like this. Clearly any agent I build needs some way to generalise across actions to some degree, since collecting real data on millions of actions is not possible, even just for one state.

As I see it, this comes down to three questions:

  1. Is it possible to get an agent that can deal with a set of actions (prices) as a single decision?
  2. Is it possible to get this agent to understand actions in relative terms? Say for example, one set of potential prices is [10, 12, 20], for middle seats, aisle seats, and window seats. Can I get my agent to realise that there is a natural ordering there, and that the first two pricing actions are more similar to each other than to the third possible action?
  3. Further to this, is it possible to generalise from this set of actions - could an agent be set up to understand that the set of prices [10, 13, 20] is very similar to the first set?

I haven't been able to find any literature on this, especially relating to the second question - any help would be much appreciated!

",17435,,17435,,8/12/2018 20:18,8/13/2018 11:44,How to generalise over multiple simultaneous dependent actions in Reinforcement Learning,,1,1,,,,CC BY-SA 4.0 7525,1,7544,,8/12/2018 12:08,,10,247,"

In fields such as Machine Learning, we typically (somewhat informally) say that we are overfitting if we improve our performance on a training set at the cost of reduced performance on a test set / the true population from which data is sampled.

More generally, in AI research, we often end up testing performance of newly proposed algorithms / ideas on the same benchmarks over and over again. For example:

  • For over a decade, researchers kept trying thousands of ideas on the game of Go.
  • The ImageNet dataset has been used for huge amounts of different publications
  • The Arcade Learning Environment (Atari games) has been used for thousands of Reinforcement Learning papers, having become especially popular since the DQN paper in 2015.

Of course, there are very good reasons for this phenomenon where the same benchmarks keep getting used:

  • Reduced likelihood of researchers ""creating"" a benchmark themselves for which their proposed algorithm ""happens"" to perform well
  • Easy comparison of results to other publications (previous as well as future publications) if they're all consistently evaluated in the same manner.

However, there is also a risk that the research community as a whole is in some sense ""overfitting"" to these commonly-used benchmarks. If thousands of researchers are generating new ideas for new algorithms, and evaluate them all on these same benchmarks, and there is a large bias towards primarily submitting/accepting publications that perform well on these benchmarks, the research output that gets published does not necessarily describe the algorithms that perform well across all interesting problems in the world; there may be a bias towards the set of commonly-used benchmarks.


Question: to what extent is what I described above a problem, and in what ways could it be reduced, mitigated or avoided?

",1641,,2444,,5/14/2020 11:54,11/30/2021 0:42,"How can AI researchers avoid ""overfitting"" to commonly-used benchmarks as a community?",,3,0,,,,CC BY-SA 4.0 7527,1,7535,,8/12/2018 14:43,,4,3060,"

When it comes to CNNs, I don't understand 2 things in the training process:

  1. How do I pass the error back when there are pooling layers between the convolutional layers?

  2. And if I know how it's done, can I train all the layers just like layers in normal Feed Forward Neural Nets?

",17103,,75,,8/12/2018 16:13,8/12/2018 23:00,How to train a CNN,,1,0,,1/4/2022 10:35,,CC BY-SA 4.0 7528,1,,,8/12/2018 15:54,,8,364,"

If you've been attacked by a spider once, chances are you'll never go near a spider again.

In a neural network model, having a bad experience with a spider will slightly decrease the probability you will go near a spider depending on the learning rate. This is not good.

How can you program fear into a neural network, such that you don't need hundreds of examples of being bitten by a spider in order to ignore the spider (and also that it doesn't just lower the probability that you will choose to go near a spider)?

",4199,,2444,,12/12/2021 17:31,12/12/2021 17:31,How do you program fear into a neural network?,,4,1,,,,CC BY-SA 4.0 7529,2,,7528,8/12/2018 16:46,,2,,"

I think there are 2 ways to make this happen: 1) explicitly program fear as a constraint or parameter in some logical expression, or 2) utilize a large set of training data to teach fear.

Think about a basic Pacman game: whether Pacman fears the ghosts or doesn't fear them is hard to tell, but they ARE ghosts and Pacman avoids them, so I think it's safe to say we can use this as a basic example of ""fear"". Since, in this game, fear = avoidance, you could logically program avoidance to be some sort of distance. I tried this with Pacman reinforcement learning. I tried to set a distance of 5 squares to the ghosts, and anytime Pacman could see a ghost within 5 squares, he would move in a different direction. What I found is that while Pacman will try to avoid ghosts, he doesn't know strategy (or have intelligence). Pacman would simply move away from ghosts until he got boxed in.
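
To make that avoidance rule concrete, here is a minimal sketch of the distance-threshold idea; the grid representation, move encoding and fallback policy are assumptions for illustration only, not code from an actual Pacman project:

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def choose_move(pacman_pos, ghost_positions, legal_moves, threshold=5):
    # Distance from Pacman to the closest ghost right now
    nearest = min(manhattan(pacman_pos, g) for g in ghost_positions)
    if nearest < threshold:
        # Fear response: pick the move that maximises distance to the nearest ghost
        def dist_after(move):
            new_pos = (pacman_pos[0] + move[0], pacman_pos[1] + move[1])
            return min(manhattan(new_pos, g) for g in ghost_positions)
        return max(legal_moves, key=dist_after)
    return legal_moves[0]  # otherwise defer to whatever the base policy suggests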

My point is that you can program your network to avoid spiders so as not to get bitten, but without training, you will just be creating a basic parameter that might cause problems if there are 100 super aggressive spiders coming at you! The better way is to use some base logic to avoid spiders, but then train the network so that it is rewarded the better it avoids spiders.

Now, there are many situations of fear, so this one example with Pacman would not necessarily apply to all of them... I am just trying to give some insight from my experience with teaching fear with reinforcement learning in Pacman.

",14924,,,,,8/12/2018 16:46,,,,2,,,,CC BY-SA 4.0 7530,2,,7524,8/12/2018 18:12,,3,,"

If you want to treat the problem as a full Reinforcement Learning problem, I'd recommend to try avoiding the combinatorial explosion of the action space by treating every sub-action as a separate decision point, a separate full action. If you have, for example, already selected 4 sub-actions for a particular customer, you can try to include those in some way in the state representation / input when moving on to the 5th sub-action. By including already-selected sub-actions in the state space, your algorithm can learn to take into account that optimal prices for some seat types will depend on what prices were already selected for others.
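
As a rough sketch of that idea (the network interface, feature names and price grid below are assumptions for illustration, not a worked implementation), pricing could proceed one seat type at a time, with the prices already chosen fed back into the state:

import numpy as np

SEAT_TYPES = ['window', 'aisle', 'middle', 'exit_row']   # hypothetical seat types
PRICE_POINTS = [10, 12, 15, 20, 25]                      # hypothetical price grid

def price_all_seat_types(q_network, flight_features):
    # q_network is assumed to return one value estimate per candidate price point
    chosen = {}
    for seat_type in SEAT_TYPES:
        already_chosen = [chosen.get(s, 0.0) for s in SEAT_TYPES]
        state = np.concatenate([flight_features, already_chosen])  # include earlier sub-actions
        q_values = q_network(state, seat_type)
        chosen[seat_type] = PRICE_POINTS[int(np.argmax(q_values))]
    return chosen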

I do suspect such a full RL formulation will still be a difficult problem to learn though, and will require huge amounts of experience. It may be worth considering simplifying it anyway and treating it as a Contextual + Combinatorial Multi-Armed Bandit Problem. That way you won't be able to learn long-term effects across multiple different customers as you described in the comments, but you will likely at least be able to learn something that works decently well with less experience. Recently, an interesting new book appeared on MAB problems, which is available for free here. You will find many chapters on Contextual MABs there, and also one chapter (chapter 30) on Combinatorial MABs.

Note that with either of these ""combinatorial"" approaches, you can also try to play around with the order in which you select sub-actions. For example, the sub-action / price point selected for seat type A might have a significant influence on what the optimal remaining policy would be for other seat types, whereas the sub-action for seat type B might have no influence on other seat types. It would then be useful to always prioritize selecting sub-actions for seat type A. You can try to identify these kinds of effects by keeping track of (co)variances in observed returns.


  1. Is it possible to get an agent that can deal with a set of actions (prices) as a single decision?

The solutions I proposed above do not do this explicitly, they circumvent the issue by taking multiple sequential decisions. There does appear to be some research in RL with vector-valued actions. For example, the paper Clipped Action Policy Gradient briefly mentions vector-valued actions in Subsection 3.2. I am not personally familiar enough with RL + vector-valued actions to make a direct recommendation as to what approaches do or don't work well, but maybe this can at least help you find more relevant literature if this is a direction you'd like to pursue.

  1. Is it possible to get this agent to understand actions in relative terms? Say for example, one set of potential prices is [10, 12, 20], for middle seats, aisle seats, and window seats. Can I get my agent to realise that there is a natural ordering there, and that the first two pricing actions are more similar to each other than to the third possible action?
  2. Further to this, is it possible to generalise from this set of actions - could an agent be set up to understand that the set of prices [10, 13, 20] is very similar to the first set?

This kind of generalization should again come naturally from using function approximation with either of the solutions proposed above.

",1641,,1641,,8/13/2018 11:44,8/13/2018 11:44,,,,6,,,,CC BY-SA 4.0 7531,2,,7367,8/12/2018 18:55,,1,,"

Sure, for example for removing visual noise. Look up noise2noise by NVIDIA. They made a net that is capable of removing almost all noise from a single picture. https://hothardware.com/news/nvidia-noise2noise-machine-learning-ai-magically-restores-your-grainy-photos

",17103,,,,,8/12/2018 18:55,,,,0,,,,CC BY-SA 4.0 7532,2,,7528,8/12/2018 20:42,,5,,"

There are a lot of approaches you could take for this. Creating a realistic artificial analog for fear as implemented biologically in animals might be possible, but there is quite a lot involved in a real animal's fear response that would not apply in simpler AI bots available now. For instance, an animal entering a state of fear will typically use hormones to signal changes throughout its body, favouring resource expenditure and risk taking (""fight or flight"").

In basic reinforcement learning, the neural network would not need to directly decide to switch on a ""fear mode"". Instead, you can make use of some design in the agent and learning algorithm to help learn from rare but significant events. Here are a few ideas:

  • Experience replay. You may already be doing this in the Pacman scenario, if you are using DQN or something similar. Storing the state transition and reward that caused a large positive or negative reward, and repeatedly learning from it should offset your concern

  • Prioritised sweeping. You can use larger differences experienced between predicted and actual reward to bias sampling from your replay memory towards significant events and those linked closely to them.

  • Planning. With a predictive model - maybe based on sampled transitions (you can re-use the experience replay memory for this), or maybe a trained state transition prediction network - then you can look multiple steps ahead by simulating. There is a strong relation between RL and look-ahead planning too; they are very similar algorithms. The difference is which states and actions are being considered, and whether they are being simulated or experienced. Experience replay blurs the line here - it can be framed as learning from memory, or improving predictions for planning. Planning helps by optimising decisions without needing to repeat experiences as much - a combination of planning and learning can be far more powerful than either in isolation.

  • Smarter exploratory action selection. Epsilon-greedy, where you either take a greedy action or take a completely random action, completely ignores how much you may have already learned about alternative actions and their relative merit. You can use something like Upper Confidence Bound with a value-based agent; see the sketch just after this list.

  • In a deterministic world, increase the batch size for learning and planning, as you can trust that when a transition is learned once, you know everything about it.
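
For the exploratory action selection point above, here is a minimal sketch of Upper Confidence Bound selection; the per-action value estimates and visit counts are assumed to be tracked elsewhere by the agent:

import numpy as np

def ucb_action(q_values, counts, t, c=1.0):
    # Pick the action maximising Q(a) + c * sqrt(ln(t) / N(a));
    # actions that have never been tried get an effectively infinite exploration bonus.
    q_values = np.asarray(q_values, dtype=float)
    counts = np.asarray(counts, dtype=float)
    bonus = c * np.sqrt(np.log(max(t, 1)) / np.maximum(counts, 1e-8))
    bonus[counts == 0] = np.inf
    return int(np.argmax(q_values + bonus))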

You will need to experiment in each environment. You can make learning agents that are more conservative about exploring near low reward areas. However, if the environment is such that it is necessary to take risks in order to get to the best rewards (which is often the case in games) then it may not be optimal in terms of learning time to have a ""timid"" agent. For instance in your example of Pacman, sometimes the ghosts should be avoided, sometimes they should be chased. If the agent learned strong aversion initially, it might take a long time to overcome this and learn to chase them after eating a power-up.

For your example of the spider, as the constructor of the experiment you know that the bite is bad every time and that the agent must avoid it as much as possible. To most RL algorithms, there is no such knowledge, except what is gained through experience. An MDP world model does not need to match common sense; it may be that a spider bite is bad (-10 reward) 90% of the time and good 10% of the time (+1000 reward). The agent can only discover this by being bitten multiple times... RL typically does not start with any system to make assumptions about this sort of thing, and it is impossible to come up with a general rule about all possible MDPs. Instead, for a basic RL system, you can consider modifying hyperparameters or focusing on key events as suggested above. Outside of a basic RL system there could be merit in replicating other things, such as ""instinctive"" fear.

",1847,,1847,,8/12/2018 22:07,8/12/2018 22:07,,,,2,,,,CC BY-SA 4.0 7534,2,,6715,8/12/2018 22:41,,1,,"

The issue is likely in how you estimate wellness, how the error function is constructed and from what data, since you have used two known good pieces of software and probably known good derivatives for your activation functions. The second most likely issue is in how the components of wellness are aggregated. Summing squares is sometimes not representative of a solid aggregation strategy.

I'm a bit confused about the game for three reasons.

  • Stars enter the description without telling us whether they are not obstacles, are obstacles, or are the only obstacles
  • At one point bitcoins collection is the objective and obstacles are the challenge and at another point bitcoins are obstacles themselves
  • Your inputs have distance but not direction (Is this a 2-D game?)

Based on textual hints, I'm going to assume five simple things, and you can correct any misconceptions I list.

  • Collecting all bitcoins is part of the objective
  • Running into bitcoins mid-jump is considered a crash
  • There is some radial tolerance for landing at a bitcoin location with some imprecision
  • There are other things to crash into
  • Your outputs are jump magnitude and direction

I see that when the bitcoins are not in the vicinity of other things, the net can be trained effectively to jump to them, but when in the vicinity of other things, the training converges on a behavior that repeatedly fails prior to completion. I'm assuming that the failure location is not 100% repeatable because the genetic algorithm has a pseudo-random seed that changes. Again, correct me if I misunderstand in my piecing together the scenario.

One should consider the possibility that the distance to non-bitcoin obstacles is part of the error function and the difference between the jump destination and the bitcoin is also part of the error function. (This second one is why I prefer to call the error contour a wellness measure.)

If the bitcoin incentive does not seem to be the crash cause, in that, had the other obstacle been moved, the bitcoin would have been collected, then the first of the two wellness criteria needs a higher order contribution from the distance to the other obstacle.

There are two simple functional forms that come to mind, which could be tried, that increase the alarm represented in back propagation when the probability of collision is heightened, to more effectively train against collision. Both involve determining the direct jump line to the nearest bitcoin and the distance from that line to its nearest other obstacle; call that x.

  • $x^y$, where $y > 1.0$
  • $e^{kx}$
",4302,,4302,,8/16/2018 13:16,8/16/2018 13:16,,,,0,,,,CC BY-SA 4.0 7535,2,,7527,8/12/2018 23:00,,2,,"

Yes. You can train end-to-end. The introduction of convolution kernels with associated pooling layers to the sequence of forward feed operations on the signals does not change the basic principles.

  • Gradient descent estimates the incremental change required to converge on an optimal behavior.
  • The corrective error must be distributed, which is most efficiently done by employing the derivative of the activation function sequentially to each set of parameters (whether convolution kernels or matrices that attenuate inputs to activation vectors) from the output back to the input.

Consider studying Backpropagation In Convolutional Neural Networks on Jefkine.com, which clarifies the application of those principles with convolution-pooling pairs.
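
To make the pooling part of that concrete, here is a minimal numpy sketch (assuming 2x2 max pooling) of how the error is passed back through a pooling window: only the input position that produced the maximum during the forward pass receives the incoming gradient, and the rest receive zero.

import numpy as np

def maxpool2x2_backward(window, grad_out):
    # window: the 2x2 inputs seen in the forward pass; grad_out: gradient of the pooled output
    grad_in = np.zeros_like(window)
    idx = np.unravel_index(np.argmax(window), window.shape)
    grad_in[idx] = grad_out   # route the whole gradient to the max position
    return grad_in

# The gradient is routed to the element that was the maximum (here 3.0)
print(maxpool2x2_backward(np.array([[1.0, 3.0], [2.0, 0.5]]), grad_out=0.7))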

There is another approach, borrowing from wisdom gained in the development of analog feedback in instrumentation. There are times when a sequence of operations can be better trained with more than one feedback loop, which requires some determination of error or wellness at intermediate stages and breaks the system into segments that each train based on those intermediate criteria.

This other approach is hierarchical, and an overall convergence may be controlled by a higher level back propagation, considering each segment as a black box. As in analog circuitry, multiple degrees of freedom with semi-independent convergence mechanisms have been shown to allow for deeper sequences without a major loss of convergence reliability or accuracy.

",4302,,,,,8/12/2018 23:00,,,,0,,,,CC BY-SA 4.0 7536,2,,7455,8/12/2018 23:22,,1,,"

Very interesting paper.

We can see that they have effectively modeled probabilistically the fuzzy existence of an object with various geometric attributes as objects in a scene with a hook to the region of pixels that correspond to its possible existence. I agree with the authors that this is a much more robust approach than CNN and LSTM and may compete well with the emerging attention based approaches.

Back propagation does seem to be involved, although that is not the focus of the paper.

Notice, ""there is top-down feedback,"" and, ""an appropriate parent in the layer above,"" which are hints to a hierarchical approach, and we can see this approach is not absent of its context in the overall feeding forward of input signalling to output, all of which must be trained as a whole. But just as with much older control systems, the end-to-end convergence is facilitated by (not replaced by) more local convergence of subsystems with their own control objectives.

The overview is shown in section 4 on page 4.

The entire thing could conceivably be done without capsules at all, as with the aforementioned designs, but this group is apparently showing improved results when stopping short of full convergence on MNIST. No comparison is shown with LSTM, but even if LSTM were showing greater results, this direction of research is an excellent one because of the probabilistic way that it approaches an object.

Consider the classic case of waving back at someone who, as it turns out, was waving at someone behind you. The existence of an object or an action is necessarily probabilistic, and constructing the depth of network required to model all that complexity with ReLU is probably unrealistic as expectations of AI system requirements increase.

",4302,,,,,8/12/2018 23:22,,,,0,,,,CC BY-SA 4.0 7537,2,,7021,8/13/2018 0:20,,3,,"

Some fields that humans are born with advantages:

  1. Fast and precise image processing ability. Even the stupidest human can tell the edge of two different objects precisely, e.g. which part of the image is a dog and which is a cat.

  2. Fuzzy learning ability. Humans don't need to see all kinds of cats to identify a cat. As long as we see some cats (real ones or pictures or videos) we can identify a cat easily.

  3. Reasoning. Current machine learning methods are mostly statistics-based high-dimensional model approximation. Beyond finding a solution or a pattern, I have never seen any AI entity that can generate new ideas based on current facts.

  4. Abstraction. GANs and other AI techniques can now create vivid drawings. Yet I currently cannot find any model that can do abstract drawings. E.g. a human can doodle a cat from a real picture of cats, while AI currently can't do that.

There are more skills of this kind that humans are born with in their genes because of millions of years of evolution. Still, I believe that in the future we'll have better AI entities with better algorithms that will eventually overcome these human advantages.

",17365,,,,,8/13/2018 0:20,,,,2,,,,CC BY-SA 4.0 7538,2,,6099,8/13/2018 0:50,,5,,"

The brains of mammals do not use an activation function. Only machine learning designs based on the perceptron multiply the vector of outputs from a prior layer by a parameter matrix and pass the result statelessly into a mathematical function.

Although the spike aggregation behavior has been partly modeled, and in far more detail than the 1952 Hodgkin and Huxley model, all the models require statefulness to functionally approximate biological neurons. RNNs and their derivatives are an attempt to correct that shortcoming in the perceptron design.

In addition to that distinction, although the signal strengths summing into activation functions are parametrized, traditional ANNs, CNNs, and RNNs are statically connected, something Intel claims they will correct with the Nirvana architecture in 2019 (which places into silicon what we would now call layer setup in Python or Java).

There are at least three important biological neuron features that make the activation mechanism more than a function of a scalar input producing a scalar output, which renders questionable any algebraic comparison.

  • State held as neuroplastic (changing) connectivity, and this is not just how many neurons in a layer but also the direction of signal propagation in three dimensions and the topology of the network, which is organized, but chaotically so
  • The state held within the cytoplasm and its organelles, which is only partly understood as of 2018
  • The fact that there is a temporal alignment factor, that pulses through a biological circuit may arrive via synapses in such a way that they aggregate but the peaks of the pulses are not coincident in time, so the activation probability is not as high as if they were temporally aligned.

The decision about what activation function to use has largely been based on the analysis of convergence on a theoretical level, combined with testing permutations to see which ones show the most desirable combinations of speed, accuracy, and reliability in convergence. By reliability is meant that convergence on the global optimum (not some local minimum of the error function) is reached at all for the majority of input cases.

This bifurcated research between the forks of practical machine learning and biological simulations and modeling. The two branches may rejoin at some point with the emergence of spiking networks. The machine learning branch may borrow inspiration from the biological, such as the case of visual and auditory pathways in brains.

They have parallels and relationships that may be exploited to aid in progress along both forks, but gaining knowledge by comparing the shapes of activation functions is confounded by the above three differences, especially the temporal alignment factor and the entire timing of brain circuits which cannot be modeled using iterations. The brain is a true parallel computing architecture, not reliant on loops or even time sharing in the CPU and data buses.

",4302,,,,,8/13/2018 0:50,,,,0,,,,CC BY-SA 4.0 7539,2,,4675,8/13/2018 1:00,,2,,"

The question of whether nets can be trained to take over more and more of what was entirely within the domain of production systems was asked (to the dismay of those who worked on first order predicate calculus inference in the LISP community) back in the early 1990s.

Artificial Networks Performing Logical Inference

At Stanford University's Department of Linguistics the learning of the logic required to assemble a semantic graph by an artificial net has been demonstrated and documented in Recursive Neural Networks Can Learn Logical Semantics by Samuel R. Bowman, Christopher Potts, and Christopher D. Manning.

Even the earliest work on artificial networks was targeted toward learning logic, such as the elusive exclusive-or operation, which was achieved by adding a second layer to the original perceptron design and applying what we now call gradient descent.
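
As a small illustration of that point, one hidden layer with hand-set weights is enough to compute exclusive-or; the weights below are one well-known solution, chosen for illustration rather than learned by gradient descent:

import numpy as np

def step(x):
    return (x > 0).astype(float)

def xor_net(x1, x2):
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: first unit computes OR, second unit computes AND
    h = step(np.array([[1.0, 1.0], [1.0, 1.0]]) @ x + np.array([-0.5, -1.5]))
    # Output: OR AND NOT(AND), i.e. exclusive-or
    return step(np.array([1.0, -2.0]) @ h - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor_net(a, b)))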

Distinct from Automatic Theorem Proving

Most of the early work on computer proofs of theorems was based on the production system approach (sometimes called expert systems). These are rules-based systems, not artificial networks. It was thought that the rules of predicate logic could be executed in the proper sequence by pattern matching the antecedents (conditions in which a mathematical technique based on axiomatic information and already proven theory may be applied). Some success was achieved using heuristic meta rules.

Using artificial networks to prove a theorem is an entirely different approach. To take semantic learning further so that an artificial network could learn how to assemble a mathematical proof requires three further levels of abstraction in the network learning model.

  • Learning the known first order predicate logic rules of inference
  • Learning the mechanics of applying those rules to proposed theorems
  • Learning functional heuristics to know what to try first

Evidence It Can Be Done

The evidence that artificial networks may be developed which can learn to construct a mathematical proof is not that current artificial nets can perform some natural language functioning or creatively develop a melody or some interior design. The reason DARPA has traditionally invested in neural network research pointed in the direction of simulating logic is the proof of concept proposed by Minsky.

The strongest evidence that neural networks can potentially learn the various layers of abstraction listed above to actually do math is that human children cannot prove a theorem or even read one out loud understandably, yet some may grow up to be proficient in theorem proving. The biological neural nets of the brain must learn such proficiency.

As of this writing, no counter-example exists that an artificial network cannot achieve the proficiency of Gauss or Gödel, so the idea cannot logically be dismissed. Many advanced research projects continue to target higher cognitive skills as their AI objective.

Public Access

It is likely, since much of the work on logical inference and the investigation into whether artificial networks could be trained to do it was funded by government bodies, that some of the results of that research are not available to the public.

",4302,,4302,,9/24/2018 17:54,9/24/2018 17:54,,,,0,,,,CC BY-SA 4.0 7541,1,7812,,8/13/2018 7:24,,10,830,"

I recently heard someone make a statement that when you're designing a self-driving car, you're not building a car but really a computerized driver, so you're trying to model a human mind -- at least the part of the human mind that can drive.

Since humans are unpredictable, or rather since their actions depend on so many factors some of which are going to remain unexplained for a long time, how would a self-driving car reflect that, if they do?

A dose of unpredictability could have its uses. If, say, two self-driving cars are stuck in a right-of-way deadlock, it could be good to inject some randomness instead of maybe seeing the same action applied at the same time if the cars run the same system.

But, on the other hand, we know that non-determinism isn't a friend of software development, especially in testing. How would engineers be able to control it and reason about it?

",17446,,2444,,6/20/2019 15:31,6/20/2019 15:31,Do self-driving cars resort to randomness to make decisions?,,2,0,,,,CC BY-SA 4.0 7542,2,,7541,8/13/2018 11:54,,2,,"

Self-driving cars apply Reinforcement Learning and Semi-Supervised Learning, which allows them to be better suited to situations that the developers did not anticipate themselves.

Some cars now apply Swarm Intelligence, where they effectively learn from interactions among themselves, which can also aid in cases of transfer learning.

",15465,,1671,,8/13/2018 20:56,8/13/2018 20:56,,,,0,,,,CC BY-SA 4.0 7543,1,,,8/13/2018 12:15,,4,1459,"

I am looking to try different loss functions for a hierarchical multi-label classification problem. So far, I have been training different models or submodels, like a multilayer perceptron (MLP) branch inside a bigger model, which deal with the different levels of classification, yielding a binary vector. I have also been using Binary Cross-Entropy (BCE) and summing all the losses existing in the model before backpropagating.
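
For reference, here is a minimal PyTorch-style sketch of the setup described above (one branch per hierarchy level, a BCE loss per level, all losses summed before a single backward pass); the layer sizes and names are hypothetical:

import torch
import torch.nn as nn

class HierarchicalClassifier(nn.Module):
    def __init__(self, in_dim, level_sizes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(128, n) for n in level_sizes])

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]   # raw logits, one tensor per level

model = HierarchicalClassifier(in_dim=300, level_sizes=[5, 20, 100])
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())

def training_step(x, targets_per_level):
    optimizer.zero_grad()
    logits_per_level = model(x)
    # Sum the per-level losses, then do a single backward pass
    loss = sum(criterion(logits, targets)
               for logits, targets in zip(logits_per_level, targets_per_level))
    loss.backward()
    optimizer.step()
    return loss.item()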

I am considering trying other losses like MultiLabelSoftMarginLoss and MultiLabelMarginLoss.

What other loss functions are worth trying? Hamming loss perhaps or a variation? Is it better to sum all the losses and backpropagate or do multiple backpropagations?

",17451,,2444,user9947,1/7/2022 16:29,11/20/2022 3:06,Which other loss functions for hierarchical multi-label classification could I use?,,1,0,0,,,CC BY-SA 4.0 7544,2,,7525,8/13/2018 14:56,,5,,"

Great question Dennis!

This is a perennial topic at AI conferences, and sometimes even in special issues of journals. The most recent one I recall was Moving Beyond the Turing Test in 2015, which ended up leading to a collection of articles in AI magazine later that year.

Usually these discussions cover a number of themes:

  1. ""Existing benchmarks suck"". This is usually the topic that opens discussion. In the 2015/2016 discussion, which focused on the Turing Test as a benchmark specifically, criticisms ranged from ""it doesn't incentivize AI research on the right things"", to claims that it was poorly defined, too hard, too easy, or not realistic.
  2. General consensus that we need new benchmarks.
  3. Suggestions of benchmarks based on various current research directions. In the latest discussion this included answering standardized tests for human students (well defined success, clear format, requires linking and understanding many areas), playing video games (well defined success, requires visual/auditory processing, planning, coping with uncertainty), and switching focus to robotics competitions.

I remember attending very similar discussions at machine learning conferences in the late 2000's, but I'm not sure anything was published out of it.

Despite these discussions, AI researchers seem to incorporate the new benchmarks, rather than displacing the older ones entirely. The Turing Test is still going strong for instance. I think there are a few reasons for this.

First, benchmarks are useful, particularly to provide context for research. Machine learning is a good example. If the author sets up an experiment on totally new data, then even if they apply a competing method, I have to trust that they did so faithfully, including things like optimizing the parameters as much as with their own methods. Very often they do not do this (it requires some expertise with competing methods), which inflates the reported advantage of their own techniques. If they also run their algorithm on a benchmark, then I can easily compare it to the benchmark performances reported by other authors, for their own methods. This makes it easier to spot a technique that's not really effective.

Second, even if new benchmarks or new problems are more useful, nobody knows about them! Beating the current record for performance on ImageNet can slingshot someone's career in a way that top performance on a new problem simply cannot.

Third, benchmarks tend to be things that AI researchers think can actually be accomplished with current tools (whether or not they are correct!). Usually iterative improvement on them is fairly easy (e.g. extend an existing technique). In a ""publish-or-perish"" world, I'd rather publish a small improvement on an existing benchmark than attempt a riskier problem, at least pre-tenure.

So, I guess my view is that fixing the dependence on benchmarks involves fixing the things that make people want to use them:

  1. Have some standard way to compare techniques, but require researchers to also apply new techniques to a real world problem.
  2. Remove the career and prestige rewards for working on benchmark problems, perhaps by explicitly tagging them as artificial.
  3. Remove the incentives for publishing often.
",16909,,,,,8/13/2018 14:56,,,,1,,,,CC BY-SA 4.0 7545,2,,7522,8/13/2018 15:05,,2,,"

Backpropagation isn't too much more complicated, but understanding it well will require a bit of mathematics.

This tutorial is my go-to resource when students want more detail, because it includes fully worked through examples.

Chapter 18 of Russell & Norvig's book includes pseudocode for this algorithm, as well as a derivation, but without good examples.
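
If it helps to see the whole computation in one place, here is a minimal, self-contained numpy sketch of backpropagation for a single hidden layer (sigmoid activations, squared error) on a toy XOR dataset; it is only an illustration of the chain rule, not code from either of those resources:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.5

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: apply the chain rule layer by layer
    d_out = (out - y) * out * (1 - out)    # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at the hidden pre-activation
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]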

",16909,,2444,,12/22/2020 23:56,12/22/2020 23:56,,,,1,,,,CC BY-SA 4.0 7546,2,,2676,8/13/2018 17:29,,0,,"

The basic process you're describing sounds a lot like Boosting, which is also well covered in Chapter 18 of Russell & Norvig, or like active learning.

You're correct that training the model with emphasis on a specific subset of the data is likely to lead to mistakes somewhere else. Boosting gets around this in a clever way.

  1. It begins by training a model as normal.
  2. The new model's performance on the training set is measured.
  3. A new dataset is generated, where examples that were misclassified by the previous model are more highly weighted (this is intuitively similar to adding an extra copy of them to the dataset).
  4. A new model is trained on the weighted dataset. The model suffers a greater penalty for making the same kinds of errors as the previous model(s), and so tends to make different errors.
  5. Steps 2-4 are repeated for some time.
  6. An ensemble of models is output, rather than a single model.

To classify new points, the most common score from all ensemble members is used.
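
If you want to try this quickly, scikit-learn ships an implementation of this loop; a small sketch on a synthetic dataset (purely for illustration) might look like:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
ensemble = AdaBoostClassifier(n_estimators=50, random_state=0)  # default weak learner is a depth-1 tree
ensemble.fit(X, y)           # steps 1-5: iteratively reweight and refit
print(ensemble.score(X, y))  # the fitted ensemble then votes on new points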

I'm less familiar with NLP work using this approach, but it seems like the same basic idea could be used fruitfully here: the notion is to train many models that make independent errors, and then trust their collective decisions.

",16909,,,,,8/13/2018 17:29,,,,0,,,,CC BY-SA 4.0 7548,1,,,8/13/2018 18:43,,8,642,"

Can we say that the Turing test aims to develop machines or methods to reach human-level performance in all cognitive tasks and that machine learning is one of these methods that can pass the Turing test?

",17460,,2444,,11/7/2019 16:44,11/7/2019 16:44,Can machine learning be used to pass the Turing test?,,2,1,0,,,CC BY-SA 4.0 7549,2,,4655,8/13/2018 19:27,,14,,"

Here are a few that might be what you are looking for:

",17461,,,,,8/13/2018 19:27,,,,0,,,,CC BY-SA 4.0 7550,1,,,8/13/2018 21:22,,3,1211,"

I am using the Tensorflow Object Detection API for training a CNN from scratch on the COCO dataset. I need to use this specific configuration. There is no pre-trained model on COCO with that configuration, and this is the reason why I am training from scratch.

However, after 1 week of training and evaluating each checkpoint generated by the training phase this is how my learning phase appears on Tensorboard:

Thus, my questions are:

  • Does anyone know approximately how many iterations will be necessary? Right now I have done more than 500'000 iterations.
  • How can it be possible that after 500'000 iterations the evaluation is 0.8%? I would have expected something like 60-70%.
  • Why is there a sudden drop after 500k iterations? I thought that the evaluation was supposed to converge to some limit (this is what SGD should do).
  • Is there any 'trick' to speed up the training phase (e.g. increasing the learning rate)?
",17464,,,,,8/14/2018 15:33,Training a CNN from scratch over COCO dataset,,1,2,,5/11/2022 7:21,,CC BY-SA 4.0 7551,2,,7548,8/14/2018 1:28,,3,,"

Essentially yes.

The Turing Test is essentially a benchmark or challenge problem. It is a task that AI researchers would like to be able to solve.

Machine learning is a technique. It is a tool developed by AI researchers to solve various problems. Some kinds of machine learning are applicable to the Turing Test, but others are not. Machine learning is also applicable to a wide range of other problems.

",16909,,,,,8/14/2018 1:28,,,,0,,,,CC BY-SA 4.0 7552,2,,3862,8/14/2018 6:09,,5,,"

Answer in short: MSE is convex on its input and parameters by itself. But on an arbitrary neural network it is not always convex due to the presence of non-linearities in the form of activation functions. Source for my answer is here.
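
Concretely, writing the loss as a function of the predictions $\hat{y}$,

$$\operatorname{MSE}(\hat{y}) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \nabla^2_{\hat{y}}\operatorname{MSE}(\hat{y}) = \frac{2}{n} I \succeq 0,$$

so the loss is convex in the predictions themselves; once $\hat{y} = f_\theta(x)$ is a non-linear function of the parameters $\theta$ (as in a neural network with non-linear activations), convexity in $\theta$ is generally lost.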

",9062,,,,,8/14/2018 6:09,,,,0,,,,CC BY-SA 4.0 7553,1,,,8/14/2018 10:35,,1,35,"

I'm trying to create a deep learning network to classify news articles based on the text and the associated image. The idea comes from a novel use of GANs to classify based on generated data.

My approach was to use Tensorflow to generate word embeddings for the article, and then transform the images into records - https://github.com/openai/improved-gan/blob/master/imagenet/convert_imagenet_to_records.py. This second component would also contain the label.

  • Is it wise to combine both modes into one neural net, or classify separately?

I'm also trying to work out how to concatenate the two tensors in Tensorflow. Can anyone give a steer?

",17476,,1671,,8/14/2018 19:29,8/14/2018 19:29,Using two generative adversarial nets to classify articles - what is a good approach?,,0,0,,,,CC BY-SA 4.0 7554,2,,5536,8/14/2018 12:27,,0,,"

To answer my own question - it was because of 2 things:

  • Too small a number of batches - the model had only just started to gain statistical knowledge about language dialogs. I needed to train it longer.

  • I masked the sequence too early (wrongly removed the < END > tag) - because in each sentence the last word is just the < END > tag, I removed it in all training examples, which prevented the model from learning what ""the end of the sentence"" means.

The second condition probably made that strange pattern even worse, because if the model doesn't know what word to put in (and because of the lack of the < END > tag), it must fill each sentence up to max_sequence_len.

So the model inserted, in a loop-like manner, one of the most common words (where there was no signal from the target sentence, because it simply ended).

",12691,,,,,8/14/2018 12:27,,,,0,,,,CC BY-SA 4.0 7555,1,7557,,8/14/2018 13:03,,10,6801,"

I was trying to implement the breadth-first search (BFS) algorithm for the sliding blocks puzzle (number type). Now, the main thing I noticed is that, if you have a $4 \times 4$ board, the number of states can be as large as $16!$, so I cannot enumerate all states beforehand.

How do I keep track of already visited states? I am using a class Board; each class instance contains a unique board pattern and is created by enumerating all possible steps from the current step.

I searched on the net and, apparently, they do not go back to the just-completed previous step, BUT we can go back to the previous step by another route too and then again re-enumerate all steps which have been previously visited.

So, how to keep track of visited states when all the states have not been enumerated already? Comparing already present states to the present step will be costly.

",,user9947,2444,,6/4/2020 1:10,6/4/2020 1:10,How do I keep track of already visited states in breadth-first search?,,5,0,,,,CC BY-SA 4.0 7556,1,7558,,8/14/2018 13:36,,12,492,"

I just stumbled upon the concept of neuron coverage, which is the ratio of activated neurons and total neurons in a neural network. But what does it mean for a neuron to be ""activated""? I know what activation functions are, but what does being activated mean e.g. in the case of a ReLU or a sigmoid function?

",16901,,2444,,12/20/2021 23:47,12/20/2021 23:47,What does it mean for a neuron in a neural network to be activated?,,2,0,0,,,CC BY-SA 4.0 7557,2,,7555,8/14/2018 13:50,,8,,"

You can use a set (in the mathematical sense of the word, i.e. a collection that cannot contain duplicates) to store states that you have already seen. The operations you'll need to be able to perform on this are:

  • inserting elements
  • testing if elements are already in there

Pretty much every programming language should already have support for a data structure that can perform both of these operations in constant ($O(1)$) time. For example:

  • set in Python
  • HashSet in Java

At first glance, it may seem like adding all the states you ever see to a set like this will be expensive memory-wise, but it is not too bad in comparison to the memory you already need for your frontier; if your branching factor is $b$, your frontier will grow by $b - 1$ elements per node that you visit (remove $1$ node from frontier to ""visit"" it, add $b$ new successors/children), whereas your set will only grow by $1$ extra node per visited node.

In pseudocode, such a set (let's name it closed_set, to be consistent with the pseudocode on Wikipedia) could be used in a Breadth-First Search as follows:

frontier = First-In-First-Out Queue
frontier.add(initial_state)

closed_set = set()

while frontier not empty:
    current = frontier.remove_next()

    if current == goal_state:
        return something

    for each child in current.generate_children()
        if child not in closed_set:    // This operation should be supported in O(1) time regardless of closed_set's current size
            frontier.add(child)

    closed_set.add(current)    // this should also run in O(1) time

(some variations of this pseudocode might work too, and be more or less efficient depending on the situation; for example, you could also take the closed_set to contain all nodes of which you have already added children to the frontier, and then entirely avoid the generate_children() call if current is already in the closed_set.)
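
For the sliding-blocks puzzle specifically, a direct Python rendering of that pseudocode could look like the sketch below; the key practical point is to represent each board as a tuple (immutable, hence hashable) so that it can live in the set. The generate_children function is assumed to return such tuples.

from collections import deque

def bfs(initial_state, goal_state, generate_children):
    frontier = deque([initial_state])   # First-In-First-Out queue
    closed_set = set()
    while frontier:
        current = frontier.popleft()
        if current == goal_state:
            return current
        for child in generate_children(current):
            if child not in closed_set:   # O(1) membership test
                frontier.append(child)
        closed_set.add(current)
    return None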


What I described above would be the standard way to handle this problem. Intuitively, I suspect a different ""solution"" could be to always randomize the order of a new list of successor states before adding them to the frontier. This way, you do not avoid the problem of occasionally adding states that you've already previously expanded to the frontier, but I do think it should significantly reduce the risk of getting stuck in infinite cycles.

Be careful: I do not know of any formal analysis of this solution that proves that it always avoids infinite cycles though. If I try to ""run"" this through my head, intuitively, I suspect it should kind of work, and it does not require any extra memory. There may be edge cases that I'm not thinking of right now though, so it also simply might not work, the standard solution described above will be a safer bet (at the cost of more memory).

",1641,,1641,,8/14/2018 14:13,8/14/2018 14:13,,,,1,,,,CC BY-SA 4.0 7558,2,,7556,8/14/2018 13:51,,12,,"

A neuron is said to be activated when its output is greater than a threshold, generally 0.

For example: \begin{equation} y = \operatorname{ReLU}(a) > 0 \quad \text{when} \quad a = w^Tx+b > 0 \end{equation}

Same goes for sigmoid or other activation functions.

",17221,,,,,8/14/2018 13:51,,,,0,,,,CC BY-SA 4.0 7560,2,,7555,8/14/2018 15:16,,16,,"

Dennis Soemers' answer is correct: you should use a HashSet or a similar structure to keep track of visited states in BFS Graph Search.

However, it doesn't quite answer your question. You're right, that in the worst case, BFS will then require you to store 16! nodes. Even though the insertion and check times in the set will be O(1), you'll still need an absurd amount of memory.

To fix this, don't use BFS. It's intractable for all but the simplest of problems, because it requires both time and memory that are exponential in the distance to the nearest goal state.

A much more memory-efficient algorithm is iterative deepening. It has all the desirable properties of BFS, but uses only O(n) memory, where n is the number of moves to reach the nearest solution. It might still take a while, but you'll hit memory limits long before CPU-related limits.

Better still, develop a domain specific heuristic, and use A* search. This should require you to examine only a very small number of nodes, and allow the search to complete in something much closer to linear time.

",16909,,16909,,3/25/2019 12:01,3/25/2019 12:01,,,,1,,,,CC BY-SA 4.0 7561,2,,7550,8/14/2018 15:33,,1,,"

It's hard to know for sure what's gone wrong, but here are some possibilities:

  1. The problem is difficult. The COCO paper reports that a typical object covers just 4-6% of the image. A randomly initialized model is therefore likely to do extremely poorly, with an expected precision of between 4 and 6% for detecting the object of interest in a frame. You also have 90 classes in your configuration file. It's not clear whether the model has access to the correct label, but if it's also inferring the class, we'd expect initial accuracy somewhere around 0.06%. That's actually about the precision of your starting model.

  2. You're training on mini-batches of size 32. It's not clear to me from your config whether the lower axis of that graph is iterations or epochs. If it's the former, then your model will have seen only 1.5 million examples during the entire training period. For a problem this hard, that's not nearly enough. Indeed, COCO contains only 328k unique examples, so the model would have seen each of them just 5 times (corresponding to 5 epochs). If you've done 500,000 epochs though, then that ought to be enough.

  3. It's possible that the hyper-parameters are not well set. I'm not an expert at training CNNs, but deep networks are notoriously finicky. I have difficulty reading your config file, but your learning rate looks reasonable to me. It also appears to undergo exponential decay over time, and it's not clear to me that you want to be doing that, or whether the schedule you're using for the decay makes sense. This might be worth reviewing (a small sketch of what such a schedule computes follows this list).
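
A typical exponential-decay schedule computes something like the following (the initial rate, decay factor, and decay interval here are made-up values, not taken from your config):

    def exponential_decay(initial_lr, decay_factor, decay_steps, step):
        # lr = initial_lr * decay_factor ** (step / decay_steps)
        return initial_lr * decay_factor ** (step / decay_steps)

    # e.g. starting at 0.004 and decaying by a factor of 0.95 every 10,000 steps
    for step in [0, 10000, 100000, 500000]:
        print(step, exponential_decay(0.004, 0.95, 10000, step))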

",16909,,,,,8/14/2018 15:33,,,,5,,,,CC BY-SA 4.0 7564,2,,7555,8/14/2018 17:04,,3,,"

Ironically, the answer is ""use whatever system you want."" A hash set is a good idea. However, it turns out that your concerns over memory usage are unfounded. BFS is so bad at these sorts of problems that it resolves this issue for you.

Consider that your BFS requires you to keep a queue (frontier) of unprocessed states. As you progress into the puzzle, the states you deal with become more and more different, so you're likely to see that each ply of your BFS multiplies the number of states to look at by roughly 3.

This means that, when you're processing the last ply of your BFS, you have to have at least 16!/3 states in memory. Whatever approach you used to make sure that fit in memory will be sufficient to ensure your previously-visited list fits in memory as well.

As others have pointed out, this is not the best algorithm to use. Use an algorithm which is a better fit for the problem.

",1913,,,,,8/14/2018 17:04,,,,0,,,,CC BY-SA 4.0 7569,2,,7528,8/14/2018 20:41,,1,,"

I would suggest having the agent weight its learning from a given event based on the severity of the consequences of that event happening. E.g., have it develop a threat model like those typically drafted in the Information Security field. High risk but low probability is something that can be accounted for and judged against.

Trying to directly imitate human fear would be silly; you'd likely end up with AIs that have phobias if you succeeded too well.

",15114,,,,,8/14/2018 20:41,,,,0,,,,CC BY-SA 4.0 7572,2,,7556,8/14/2018 22:02,,4,,"

The term ""activated"" is mostly used when talking about activation functions which only outputs a value (except 0) when the input to the activation function is greater than a certain treshold.

The term ""activated"" is used especially when discussing ReLU. ReLU will be ""activated"" when its output is greater than 0, which is also when its input is greater than 0.

Other activation functions, like sigmoid, always return a value greater than 0 and don't have any special threshold. Therefore, the term ""activated"" is less meaningful here.

Even though we know little about them, the neurons in the brain also seem to have something which resembles an activation function with some kind of ""activation threshold"".

",17488,,,,,8/14/2018 22:02,,,,0,,,,CC BY-SA 4.0 7573,1,7576,,8/14/2018 22:17,,12,667,"

How important is consciousness and self-consciousness for making advanced AIs? How far away are we from making such?

When making, e.g., a neural network, there's (very probably) no consciousness within it, just mathematics behind it, but do we need AIs to become conscious in order to solve more complex tasks in the future? Furthermore, is there actually any way we can know for sure whether something is conscious, or whether it's just faking it? It's ""easy"" to make a computer program that claims it's conscious, but that doesn't mean it is (e.g. Siri).

And if the AIs are only based on predefined rules without consciousness, can we even call it ""intelligence""?

",17488,,2444,,11/11/2019 21:06,11/11/2019 21:06,How important is consciousness for making advanced artificial intelligence?,,2,2,,12/9/2021 20:13,,CC BY-SA 4.0 7575,2,,7555,8/15/2018 4:31,,8,,"

While the answers given are generally true, a BFS in the 15-puzzle is not only quite feasible, it was done in 2005! The paper that describes the approach can be found here:

http://www.aaai.org/Papers/AAAI/2005/AAAI05-219.pdf

A few key points:

  • In order to do this, external memory was required - that is, the BFS used the hard drive for storage instead of RAM.
  • There are actually only 16!/2 reachable states, since the state space has two mutually unreachable components.
  • This works in the sliding-tile puzzle because the state space grows really slowly from level to level. This means that the total memory required for any level is far smaller than the full size of the state space. (This contrasts with a state space like Rubik's Cube, where the state space grows much more quickly.)
  • Because the sliding-tile puzzle is undirected, you only have to worry about duplicates in the current or previous layer. In a directed space you may generate duplicates in any previous layer of the search which makes things much more complicated.
  • In the original work by Korf (linked above) they didn't actually store the result of the search - the search just computed how many states were at each level. If you want to store the first results you need something like WMBFS (http://www.cs.du.edu/~sturtevant/papers/bfs_min_write.pdf)
  • There are three primary approaches to comparing states from the previous layers when states are stored on disk.
    • The first is sorting-based. If you sort two files of successors, you can scan them in linear order to find duplicates.
    • The second is hash-based. If you use a hash function to group successors into files, you can load files which are smaller than the full state space to check for duplicates. (Note that there are two hash functions here -- one to send a state to a file, and one to differentiate states within that file. A small sketch of this two-level idea follows the list.)
    • The third is structured duplicate detection. This is a form of hash-based detection, but it is done in a way that duplicates can be checked immediately when they are generated instead of after they have all been generated.
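
A small sketch of the hash-based idea above (the bucket count and the use of the full state string as the within-file key are arbitrary choices for illustration):

    import hashlib, os

    NUM_BUCKETS = 1024

    def bucket_of(state):
        # First hash: decides which file on disk a state belongs to.
        digest = hashlib.md5(repr(state).encode()).hexdigest()
        return int(digest, 16) % NUM_BUCKETS

    def write_successors(states, directory='layer_out'):
        os.makedirs(directory, exist_ok=True)
        for s in states:
            path = os.path.join(directory, 'bucket_%d.txt' % bucket_of(s))
            with open(path, 'a') as f:
                # Within a bucket file, the full state itself acts as the second key,
                # used to detect duplicates when the file is later loaded into memory.
                f.write(repr(s) + '\n')

    def deduplicate_bucket(path):
        with open(path) as f:
            return set(line.strip() for line in f)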

There is a lot more to be said here, but the paper(s) above give a lot more details.

",17493,,,,,8/15/2018 4:31,,,,3,,,,CC BY-SA 4.0 7576,2,,7573,8/15/2018 5:04,,11,,"

Artificial consciousness is a challenging theoretical and engineering objective. Once that major challenge is met, the computer's conscious awareness of itself would likely be a minor addition, since the conscious computer is just another object of which its consciousness can be aware.

A child can look in the mirror and recognize that moving their hands back and forth or making faces produces corresponding changes in the reflection. They recognize themselves. Later on they realize that exerting physical control over their own movement is much easier than exerting control over another person's hands or face.

Some learn that limited control of the faces and manual operations of others is possible if certain social and economic skills are mastered. They become employers, landlords, investors, activists, writers, directors, public figures, or entrepreneurs.

Anyone who has studied the cognitive sciences or experienced the line between types of thought because they are a professional counselor or just a deep listener knows that the lines around consciousness are blurry. Consider these.

  • Listening to speech
  • Watching a scene
  • Focusing on a game
  • Presenting an idea
  • Washing up for work
  • Driving a car
  • Choosing a purchase

Any one of these things can be done with or without certain kinds of consciousness, subconsciousness, impulse, or habit.

Subjectively, people report getting out of the car and not recalling having driven home. One can listen to someone talking, nod in affirmation, respond with, ""Yeah, I understand,"" and even repeat what they said, and yet appear to have no memory of the content of the speech if queried in depth. One can read a paragraph and get to the end without comprehension.

In contrast, a person may mindfully wash up for work, considering the importance of hygiene and paying attention like a surgeon preparing for an operation, noticing the smell of the soap and even the chlorination of the city water.

Between those extremes, partial consciousness is also detectable by experiment and in personal experience. Consciousness most definitely requires attention functionality, which tentatively supervises the coordination of other brain-body sub-systems.

Once a biological or artificial system achieves the capacity to coordinate attentively, the objects and tasks toward which they can be coordinated can be interchanged. Consider these.

  • Dialog
  • Playing to win
  • Detecting honesty or dishonesty

Now consider how similar or different these mental activities are when we compare self-directed or externally directed attention.

  • One can talk to one's self or talk to another
  • One can play both sides of a chess game or play against another
  • One can scrutinize one's own motives or those of another

This is an illustration of why the self- part of self-consciousness is not the challenge in AI. It is the attentive (yet tentative) coordination that is difficult. Early microprocessors, designed to work in real-time control systems, included (and still include) exception signaling that simplistically models this tentativeness. For instance, while playing to win in a game, one might try to initiate dialog with the subject. Attention may shift when the two activities require the same sub-systems.

We tend to consider this switching of attention consciousness too. If we are the person trying to initiate dialog with the person playing to win, we might say, ""Hello?"" The question mark is because we are wondering if the player is conscious.

If one was to diminish the meaning of consciousness to the most basic criteria, one might say this.

""My neural net is intelligent in some small way because it is conscious of the disparity between my convergence criteria and the current behavior of the network as it is parametrized, so it is truly an example of artificial intelligence, albeit a primitive one.""

There is nothing grossly incorrect about that statement. Some have called that, ""Narrow Intelligence."" That is a slightly inaccurate characterization, since there may be an astronomical number of possible applications of an arbitrarily deep artificial network that uses many of the most effective techniques available in its design.

The other problem with narrowness as a characterization is the inference that there are intelligent systems that are not narrow. Every intelligent system is narrow compared to a more intelligent system. Consider this thought experiment.

Hannah writes a paper on general intelligence with excellence, both in theoretical treatment and in writing skill. Many quote it and reference it. Hannah is now so successful in her AI career that she has the money and time to build a robotic system. She bases its design on her now famous paper and spares no expense.

To her surprise, the resulting robot is so adaptive that its adaptability exceeds even Hannah's own. She names it Georgia Tech for fun because she lives near the university.

Georgia becomes a great friend. She learns at an incredible rate and is a surprisingly great housemate, cleaning better than Hannah thought humanly possible, which may be literally true.

Georgia applies to Georgia Tech, just down the bus line from Hannah's house and studies artificial intelligence there. Upon the achievement of a PhD after just three years of study, Georgia sits with Hannah after a well attended Thesis Publication party that Hannah graciously held for her.

After the last guest leaves, there is a moment of silence as Hannah realizes the true state of her household. She thinks, ""Will Georgia now exceed me in her research?"" Hannah finally, sheepishly asks, ""In complete honesty, Georgia, do you think you are now a general intelligence like me?""

There is a pause. With a forced look of humility, Georgia replies, ""By your definition of general intelligence, I am. You are no longer.""

Whether this story becomes true in 2018, 3018, or never, the principle is clear. Georgia is just as able to analyze herself comparatively with Hannah as Hannah is similarly able. In the story, Georgia applies the definition created in Hannah's paper because Georgia is now able to conceive of many definitions of intelligence and chooses Hannah's as the most pertinent in the context of the conversation.

Now imagine this alteration to the story.

... She thinks, at what level is Georgia thinking? Hannah finally, sheepishly asks, ""In complete honesty, Georgia, are you now as conscious as me?""

Georgia thinks through the memory of all uses of the word conscious in her past studies — a thousand references in cognitive science, literature, law, neurology, genetics, brain surgery, treatment of brain injury, and addiction research. She pauses for a few microseconds to consider it all thoroughly, while at the same time sensing her roommate's body temperature, neuro-chemical balances, facial muscle motor trends, and body language.

Respectfully, she waits 3.941701 extra seconds, which she calculated as the delay that would minimize any humiliation to Hannah, whom she loves, and replies, ""Conscious of what?""

In Georgia's reply may be a hypothesis of which Hannah may or may not be aware. For any given automatons, $a, b, \ldots$, given consciousness, $C$, of a scenario $s$, we have a definition, $\Phi_c$, that can be applied to evaluate the aggregate of all aspects of consciousness of any of the automatons, $x$, giving $\Phi_c(C_x(s))$. Georgia's (apparently already proven) hypothesis is thus.

$\forall \Phi_c(C_a(s)) \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi_c(C_b(s)) + \epsilon > \Phi_c(C_a(s))$

This is a mathematical way of saying that there can always be someone or some thing more conscious of a given scenario, whether or not she, he, or it is brought into existence. Changing the criteria of evaluation from consciousness to intelligence, we have thus.

$\forall \Phi_i(C_a(s)) \;\;\; \exists \;\;\; b, \, \epsilon>0 \;\; \ni \;\; \Phi_i(C_b(s)) + \epsilon > \Phi_i(C_a(s))$

One can only surmise that Hannah's paper defines general intelligence relative to whatever is the smartest thing around, which was once well-educated human beings. Thus Hannah's definition of intelligence is dynamic. Georgia applies the same formula to the new situation where she is now the standard against which lesser intelligence is narrow.

Regarding the ability to confirm consciousness, it is actually easier to confirm than intelligence. Consider this thought experiment.

Jack is playing chess with Dylan using the new chess set that Jack bought. In spite of the aesthetic beauty of this new set, with its white onyx and black agate pieces, Dylan moves each piece with prowess and checkmates Jack. Jack wonders if Dylan is more intelligent than him and asks what would be a normal question under those conditions.

""Dylan, buddy, how long have you been playing chess?""

Regardless of the answer, and regardless of whether Dylan is a robot with a quantum processor of advanced AI or a human being, the intelligence of Dylan cannot be reliably gauged. However, there is NO DOUBT that Dylan was conscious of the game play.

In the examples in the lists at the top of this answer there are particular sets of requirements to qualify as consciousness. For the case of Jack and Dylan playing, a few things MUST be working in concert.

  1. Visual recognition of the state of the board
  2. Motor control of the arm and hand to move pieces
  3. Tactile detection in finger and thumb tips
  4. Hand-eye coordination
  5. Grasp coordination
  6. A model of how to physically move board pieces
  7. A model of the rules of chess in memory
  8. A model of how to win when playing it (or astronomical computational power to try every possible permutation that makes any sense)
  9. An internal representation of the board state
  10. Attention execution, visually and in terms of the objective of winning
  11. Prioritization that decides, unrelated to survival odds or asset accumulation, whether to beat Jack in chess, do something else, or nothing (non-deterministic if the ancient and commonplace notion of the causal autonomy of the soul is correct)

The topology of connections is as follows, and there may be more.

1 ⇄ 4 ⇄ 2

3 ⇄ 5 ⇄ 2

4 ⇄ 6 ⇄ 5

7 ⇄ 8 ⇄ 9

6 ⇄ 10 ⇄ 8

10 ⇄ 11

This is one of many integration topologies that support one of many types of things to which consciousness might apply.

Whether looking in the mirror just to prepare for work or whether looking deeply, considering the ontological question, ""Who am I?"" each mix of consciousness, subconsciousness, impulse, and habit requires a specific topology of mental features. Each topology must be coordinated to form its specific embodiment of consciousness.

To address some other sub-questions, it is easy to make a machine that claims itself to be conscious; a digital voice recorder can be programmed to do it in five seconds by recording yourself saying it.

Getting a robot to read this answer or some other conception, consider it thoughtfully, and then construct the sentence from knowledge of the vocabulary and conventions of human speech to tell you its conclusion is an entirely different task. The development of such a robot may take 1,000 more years of AI research. Maybe ten. Maybe never.

The last question, switched from plural to singular is, ""If [an artificially intelligent device] is only [operating] on predefined rules, without consciousness, can we even call it intelligent?"" The answer is necessarily dependent upon definition $\Phi_i$ above, and, since neither $\Phi_c$ nor $\Phi_i$ have a standard definition within the AI community, one can't determine the cross-entropy or correlation. It is indeterminable.

Perhaps formal definitions of $\Phi_c$ and $\Phi_i$ can now be written and submitted to the IEEE or some standards body.

",4302,,4302,,1/8/2019 7:22,1/8/2019 7:22,,,,4,,,,CC BY-SA 4.0 7579,1,,,8/15/2018 15:52,,5,278,"

I read somewhere that a multilayer perceptron is a recursive function in its forward propagation phase. I am not sure what the recursive part is. To me, an MLP looks like a chained function. So, it would be nice if anyone could relate an MLP to a recursive function.

",13295,,2444,,1/21/2021 0:21,1/21/2021 0:21,Is a multilayer perceptron a recursive function?,,2,1,,,,CC BY-SA 4.0 7580,1,7582,,8/15/2018 16:58,,5,683,"

I'm now reading a book titled "Deep Reinforcement Learning Hands-On" and the author said the following in the chapter about AlphaGo Zero:

Self-play

In AlphaGo Zero, the NN is used to approximate the prior probabilities of the actions and evaluate the position, which is very similar to the Actor-Critic (A2C) two-headed setup. On the input of the network, we pass the current game position (augmented with several previous positions) and return two values. The policy head returns the probability distribution over the actions and the value head estimates the game outcome as seen from the player's perspective. This value is undiscounted, as moves in Go are deterministic. Of course, if you have stochasticity in the game, like in backgammon, some discounting should be used.

All the environments that I have seen so far are stochastic environments, and I understand the discount factor is needed in a stochastic environment. I also understand that the discount factor should be added in infinite environments (with no episode end) in order to avoid an infinite sum.

But I have never heard (at least so far in my limited learning) that the discount factor is NOT needed in a deterministic environment. Is that correct? And if so, why is it NOT needed?

",7402,,-1,,6/17/2020 9:57,8/15/2018 20:25,Is the discount not needed in a deterministic environment for Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 7581,2,,7579,8/15/2018 17:27,,5,,"

Inherently, no. The MLP is just a data structure. It represents a function, but a standard MLP is just representing an input-output mapping, and there's no recursive structure to it.

On the other hand, possibly your source is referring to the common algorithms that operate over MLPs, specifically forward propagation for prediction and back propagation for training. Both of these algorithms are easy to think about recursively, with each node performing a sort of recursive call with its children or parents as the target, and some useful information about activations or errors attached. I actually encourage my students to implement it recursively for this reason, even though it's probably not the most efficient solution.
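
As a crude sketch of what a recursive formulation of forward propagation can look like (the layer sizes and values below are made-up; the recursion bottoms out at the input):

    import numpy as np

    def forward(layers, x, n=None):
        # Output of layer n, defined recursively in terms of the output of layer n-1.
        if n is None:
            n = len(layers)
        if n == 0:
            return x                      # base case: the input itself
        W, b = layers[n - 1]
        prev = forward(layers, x, n - 1)  # recursive call on the previous layer
        return np.tanh(W @ prev + b)

    layers = [(np.random.randn(4, 3), np.random.randn(4)),
              (np.random.randn(2, 4), np.random.randn(2))]
    print(forward(layers, np.array([0.1, -0.5, 0.3])))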

",16909,,,,,8/15/2018 17:27,,,,0,,,,CC BY-SA 4.0 7582,2,,7580,8/15/2018 17:32,,7,,"

The motivation for adding the discount factor $\gamma$ is generally, at least initially, based simply in ""theoretical convenience"". Ideally, we'd like to define the ""objective"" of an RL agent as maximizing the sum of all the rewards it gathers; its return, defined as:

$$\sum_{t = 0}^{\infty} R_t,$$

where $R_t$ denotes the immediate reward at time $t$. As you also already noted in your question, this is inconvenient from a theoretical point of view, because we can have many different such sums that all end up being equal to $\infty$, and then the objective of ""maximizing"" that quantity becomes quite meaningless. So, by far the most common solution is to introduce a discount factor $0 \leq \gamma < 1$, and formulate our objective as maximizing the discounted return:

$$\sum_{t = 0}^{\infty} \gamma^t R_t.$$

Now we have an objective that will never be equal to $\infty$, so maximizing that objective always has a well-defined meaning.
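
As a trivial concrete illustration (the reward sequence and discount factor below are made-up numbers):

    gamma = 0.9
    rewards = [1.0, 0.0, 2.0, 1.0]   # immediate rewards R_0, R_1, R_2, R_3

    discounted_return = sum(gamma ** t * r for t, r in enumerate(rewards))
    print(discounted_return)         # 1.0 + 0.9*0.0 + 0.81*2.0 + 0.729*1.0 = 3.349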


As far as I am aware, the motivation described above is the only motivation for a discount factor being strictly necessary / needed. This is not related to the problem being stochastic or deterministic.

If we have a stochastic environment, which is guaranteed to have a finite duration of at most $T$, we can define our objective as maximizing the following quantity:

$$\sum_{t = 0}^{T} R_t,$$

where $R_t$ is a random variable drawn from some distribution. Even in the case of stochastic environments, this is well-defined, we do not strictly need a discount factor.


Above, I addressed the question of whether or not a discount factor is necessary. This does not tell the full story though. Even in cases where a discount factor is not strictly necessary, it still might be useful.

Intuitively, discount factors $\gamma < 1$ tell us that rewards that are nearby in a temporal sense (reachable in a low number of time steps) are more important than rewards that are far away. In problems with a finite time horizon $T$, this is probably not true, but it can still be a useful heuristic / rule of thumb.

Such a rule of thumb is particularly useful in stochastic environments, because stochasticity can introduce greater variance / uncertainty over long amounts of time than over short amounts of time. So, even if in an ideal world we'd prefer to maximize our expected sum of undiscounted rewards, it is often easier to learn how to effectively maximize a discounted sum; we'll learn behaviour that mitigates uncertainty caused by stochasticity because it prioritizes short-term rewards over long-term rewards.

This rule of thumb especially makes a lot of sense in stochastic environments, but I don't agree with the implication in that book that it would be restricted to stochastic environments. A discount factor $\gamma < 1$ has also often been found to be beneficial for learning performance in deterministic environments, even if afterwards we evaluate an algorithm's performance according to the undiscounted returns, likely because it leads to a ""simpler"" learning problem. In a deterministic environment there may not be any uncertainty / variance that grows over time due to the environment itself, but during a training process there is still uncertainty / variance in our agent's behaviour which grows over time. For example, it will often be selecting suboptimal actions for the sake of exploration.

",1641,,1641,,8/15/2018 17:37,8/15/2018 17:37,,,,2,,,,CC BY-SA 4.0 7583,2,,7579,8/15/2018 19:28,,3,,"

Sure, you can define plenty of things that we don't generally need to regard as recursive in this way. An MLP is just a series of functions applied to its input. This can be loosely formulated as

$$ o_n = f(o_{n-1})$$

Where $o_n$ is the output of layer $n$.

But this clearly doesn't reveal much, does it?

",9271,,,,,8/15/2018 19:28,,,,0,,,,CC BY-SA 4.0 7587,2,,109,8/15/2018 22:45,,2,,"

Sure! This is a somewhat hot area right now.

There are lots of ways to do it.

Probably the main line of research is with Bayesian Networks (1980's) and Causal Networks (1990's). These are basically rule-based systems for reasoning probabilistically. They rely on a user-designed model, which corresponds well to rules (e.g. when blood pressure is high, then heart attack rates are elevated), but provide a robust way to reason about uncertainties in the presence of these rules. Contrast this with a pure learning approach, like a decision tree or a neural network, which tends to rely less on rules. Modern research in this area focuses on learning the structure of the network (which corresponds to learning probabilistic rules) from data.

While it's possible to learn rules from data and then do symbolic reasoning atop them using other techniques (e.g. rule induction), this approach runs into the same problems that plague the learning of the structure of Bayesian networks: when is correlation causation? Causal Diagrams are the only good tool for answering this question, but my impression is that inferring their structure automatically is still an open question.

",16909,,,,,8/15/2018 22:45,,,,0,,,,CC BY-SA 4.0 7588,2,,6850,8/15/2018 22:50,,1,,"

It looks like you are training your model 10,000 times on one piece of data, and then dropping that piece and moving to the next.

This will not work: the model will become extremely good at learning one piece of data, but will then forget about it when optimizing for the next piece.

Instead, either pick one example at random in each iteration, or compute the gradient for all 4 examples and just update in that direction instead.

",16909,,,,,8/15/2018 22:50,,,,0,,,,CC BY-SA 4.0 7589,1,7590,,8/16/2018 2:13,,15,2375,"

To the best of my understanding, the Monte Carlo tree search (MCTS) algorithm is an alternative to minimax for searching a tree of nodes. It works by choosing a move (generally, the one with the highest chance of being the best), and then performing a random playout on the move to see what the result is. This process continues for the amount of time allotted.

This doesn't sound like machine learning, but rather a way to traverse a tree. However, I've heard that AlphaZero uses MCTS, so I'm confused. If AlphaZero uses MCTS, then why does AlphaZero learn? Or did AlphaZero do some kind of machine learning before it played any matches, and then use the intuition it gained from machine learning to know which moves to spend more time playing out with MCTS?

",16917,,16917,,6/2/2020 21:40,6/2/2020 21:40,Does Monte Carlo tree search qualify as machine learning?,,3,0,,,,CC BY-SA 4.0 7590,2,,7589,8/16/2018 2:25,,9,,"

Monte Carlo Tree Search is not usually thought of as a machine learning technique, but as a search technique. There are parallels (MCTS does try to learn general patterns from data, in a sense, but the patterns are not very general), but really MCTS is not a suitable algorithm for most learning problems.

AlphaZero was a combination of several algorithms. One was MCTS, but MCTS needs a function to tell it how good different states of the game might be (or else, it needs to simulate entire games). One way to handle this function in a game like chess or Go is to approximate it by training a neural network, which is what the Deep Mind researchers did. This is the learning component of AlphaZero.

",16909,,1641,,8/16/2018 8:25,8/16/2018 8:25,,,,0,,,,CC BY-SA 4.0 7591,1,7595,,8/16/2018 6:18,,6,981,"

The Turing test was created to test whether a machine exhibits behavior equivalent to, or indistinguishable from, that of a human. Is that a sufficient condition for intelligence?

",17527,,2444,,6/27/2019 20:30,6/27/2019 21:38,"If the Turing test is passed, does this imply that computers exhibit intelligence?",,2,0,,,,CC BY-SA 4.0 7592,2,,6850,8/16/2018 6:50,,2,,"

Your network must have something which persists, like weights and biases.

Your new implementation would be like this:

trainingData = [{in: [0,0], out:[0]}, {in: [0,1], out:[0]}, ...];
iterations = 10000

network = graphNodesToNetwork()
links = graphLinksToNetwork()
randomiseLinkWeights(links)      // weights are initialised once and then persist

for(i = 0; i < iterations; i++) {
    for(set of trainingData) {               // visit every example; don't discard it

        weights = updateInput(network, set.in)

        forwardPropagate(network, links)

        linkUpdate = backPropagate(network, links, set.out, weights)

        updateLinks(linkUpdate, links)       // the same links/weights keep being updated
    }
}

In short: retain the weights, and backpropagate.

",3773,,,,,8/16/2018 6:50,,,,0,,,,CC BY-SA 4.0 7594,2,,7509,8/16/2018 8:02,,1,,"

It seems impossible to prevent that. If someone can make a safe AI from scratch in the near future, then someone else can make a dangerous AI from scratch as well. If all that's needed is a computer (or eventually a robot) it will be really hard to stop people from creating one.

Banning computers? Maybe it could prevent it, but that comes with quite a few negative sides as well.

A law against creating AIs? It would be really hard to enforce, and what about the AIs people want to make to be used for something ""good""?

And even if we could come up with a good set of laws, they probably wouldn't be introduced in all countries, and they would also be really hard to enforce.

I guess we just need to make our own defence mechanisms which can fight unsafe AIs (physically or virtually) when that time comes.

",17488,,,,,8/16/2018 8:02,,,,0,,,,CC BY-SA 4.0 7595,2,,7591,8/16/2018 8:24,,2,,"

We don't know.

However, an important line will have been crossed - it will be impossible to tell the difference between an intelligent agent and the machine by use of a text interface. That is the main point of the test - ""if it quacks like a duck"".

It is also an important philosophical point. Whether intelligence is defined purely by behaviour in an environment, or by the mechanisms that arrive at that behaviour. A suitably large database of conversational openers and ""correct"" responses can in theory mimic a lot of real world conversations. Some chatbots take advantage of this and use modern computer capacity to store a lot of responses, and that approach has gained competitive scores in the Loebner prize competition (although not to the stage of actually passing the test). This leads us to the Chinese Room issue, and wondering which part of the system is actually intelligent, or even how much of human conversation is actually intelligent or meaningful (and in what ways).

",1847,,,,,8/16/2018 8:24,,,,4,,,,CC BY-SA 4.0 7596,2,,7589,8/16/2018 8:39,,10,,"

John's answer is correct in that MCTS is traditionally not viewed as a Machine Learning approach, but as a tree search algorithm, and that AlphaZero combines this with Machine Learning techniques (Deep Neural Networks and Reinforcement Learning).

However, there are some interesting similarities between MCTS itself and Machine Learning. In some sense, MCTS attempts to ""learn"" the value of nodes from experience generated through those nodes. This is very similar to how Reinforcement Learning (RL) works (which itself is typically described as a subset of Machine Learning).

Some researchers have also experimented with replacements for the traditional Backpropagation phase of MCTS (which, from an RL point-of-view, can be described as implementing Monte-Carlo backups) based on other RL methods (e.g., Temporal-Difference backups). A comprehensive paper describing these sorts of similarities between MCTS and RL is: On Monte Carlo Tree Search and Reinforcement Learning.

Also note that the Selection phase of MCTS is typically treated as a sequence of small Multi-Armed Bandit problems, and those problems also have strong connections with RL.


TL;DR: MCTS is not normally viewed as a Machine Learning technique, but if you inspect it closely, you can find lots of similarities with ML (in particular, Reinforcement Learning).

",1641,,1641,,8/16/2018 8:53,8/16/2018 8:53,,,,0,,,,CC BY-SA 4.0 7597,1,7603,,8/16/2018 10:07,,4,120,"

I'm developing an AI to play a card game with a genetic algorithm. Initially, I will evaluate it against a player that plays randomly, so there will naturally be a lot of variance in the results. I will take the mean score from X games as that agent's fitness. The actual playing of the game dominates the time to evaluate the actual genetic algorithm.

My question is: should I go for a low X, e.g. 10, so I would be able to move through generations quite fast but the fitness function would be quite inaccurate? Alternatively, I could go for a high X e.g. 100 and would move very slowly but with a more accurate function.

",16724,,2444,,12/29/2021 14:39,12/29/2021 14:39,Can I compute the fitness of an agent based on a low number of runs of the game?,,1,0,,,,CC BY-SA 4.0 7598,2,,3389,8/16/2018 10:21,,18,,"

Early success on prime number testing via artificial networks is presented in A Compositional Neural-network Solution to Prime-number Testing, László Egri, Thomas R. Shultz, 2006.

The knowledge-based cascade-correlation (KBCC) network approach showed the most promise, although the practicality of this approach is eclipsed by other prime detection algorithms that usually begin by checking the least significant bit, immediately reducing the search by half, and then searching based on other theorems and heuristics up to $\lfloor\sqrt{x}\rfloor$. However, the work was continued with Knowledge Based Learning with KBCC, Shultz et. al. 2006

There are actually multiple sub-questions in this question. First, let's write a more formal version of the question: "Can an artificial network of some type converge during training to a behavior that will accurately test whether the input ranging from $0$ to $2^n-1$, where $n$ is the number of bits in the integer representation, represents a prime number?"

  1. Can it by simply memorizing the primes over the range of integers?
  2. Can it by learning to factor and apply the definition of a prime?
  3. Can it by learning a known algorithm?
  4. Can it by developing a novel algorithm of its own during training?

The direct answer is yes, and it has already been done according to 1. above, but it was done by over-fitting, not learning a prime number detection method. We know the human brain contains a neural network that can accomplish 2., 3., and 4., so if artificial networks are developed to the degree most think they can be, then the answer is yes for those. There exists no counter-proof to exclude any of them from the range of possibilities as of this answer's writing.
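
As a small, hedged illustration of approach 1 (essentially memorization), one could train an off-the-shelf network on binary-encoded integers; the network size and integer range below are arbitrary choices, and high training accuracy here mostly reflects over-fitting rather than a learned primality test:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    BITS = 10
    numbers = np.arange(2, 2 ** BITS)
    X = np.array([[(n >> i) & 1 for i in range(BITS)] for n in numbers])
    y = np.array([is_prime(n) for n in numbers])

    clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=2000)
    clf.fit(X, y)
    print('training accuracy:', clf.score(X, y))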

It is not surprising that work has been done to train artificial networks on prime number testing because of the importance of primes in discrete mathematics, its application to cryptography, and, more specifically, to cryptanalysis. We can identify the importance of digital network detection of prime numbers in the research and development of intelligent digital security in works like A First Study of the Neural Network Approach in the RSA Cryptosystem, G.c. Meletius et. al., 2002. The tie of cryptography to the security of our respective nations is also the reason why not all of the current research in this area will be public. Those of us that may have the clearance and exposure can only speak of what is not classified.

On the civilian end, ongoing work in what is called novelty detection is an important direction of research. Those like Markos Markou and Sameer Singh are approaching novelty detection from the signal processing side, and anyone who understands that artificial networks are essentially digital signal processors with multi-point self-tuning capabilities can see how their work applies directly to this question. Markou and Singh write, "There are a multitude of applications where novelty detection is extremely important including signal processing, computer vision, pattern recognition, data mining, and robotics."

On the cognitive mathematics side, the development of a mathematics of surprise, such as Learning with Surprise: Theory and Applications (thesis), Mohammadjavad Faraji, 2016, may further what Egri and Shultz began.

",4302,,4302,,11/16/2020 22:26,11/16/2020 22:26,,,,0,,,,CC BY-SA 4.0 7599,1,,,8/16/2018 11:30,,1,344,"

According to the paper SSD: Single Shot MultiBox Detector, for each cell in a feature map k boxes are acquired and for each box we get $c$ class scores and $4$ offsets relative to the original default box_shape. This means that we get $m \times n \times (c +4) \times k$ outputs for each $m \times n$ feature map.

However, it is mentioned that in order to train the SSD network only the images and their ground truth boxes are needed.

How exactly can one define the output targets then? What is the format of the output in the SSD framework? I think it cannot be a vector with the positions, sizes and class of each bounding box, since there are far more outputs and they relate to every default box in the feature maps.

Can anyone explain in more detail how, given an image and its bounding boxes' info, I can construct a vector that will be fed into the network so that I can train it?

",13257,,1671,,8/16/2018 21:36,8/16/2018 21:36,How does the target output of a Single Shot Detector (SSD) look like?,,0,3,,,,CC BY-SA 4.0 7601,1,,,8/16/2018 14:57,,6,152,"

Could we teach an AI with sentences such as ""ants are small"" and ""the sky is blue""? Is there any research work that attempts to do this?

",4199,,2444,,4/30/2020 2:05,4/30/2020 2:05,Can we teach an artificial intelligence through sentences?,,2,1,,,,CC BY-SA 4.0 7602,1,,,8/16/2018 15:07,,3,178,"

I have a text of 100-150 words and I want to extract particular information like location, product type, dates, specifications and price.

Suppose I arrange training data which has the text as input and location/product/dates/specs/price as output values. I want to train the model for these specific outputs only.

I have tried Spacy and NLTK for entity extraction, but they don't satisfy the above requirements.

Sample text:

Supply of Steel Fabrication Items. General Item . Construction Material . Hardware Stores and Tool . Construction of Security Fence. - Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Angle Iron 50x50x6mm for fencing post of height 1.83, Angle Iron 50x50x6mm for fencing post of height 1.37, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Concertina Coil 900mm extentable up to 15 to 20 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts wih nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality . Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5 mtr, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm x, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43 OPC, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6""x3"", Steel Pan Mortar of 18""dia (As, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper, Digital Weighing Machine cap 500 Kgs, Portable Welding Machine, Concrete mixer machine of 8 CFT . 
Angle Iron 65x65x6mm for fencing post of height 3.5, Angle Iron 65x65x6mm for fencing post of height 3.5, MS Flat 50 x 5mm of 2.60m height, Angle Iron 50x50x6mm for Strut post of height 3.10mtr, Barbed wire made out of GI wire of size 2.24mm dia, Chain link fence dia 4 mm and size of mesh 50mm, Concertina Coil 600mm extentable up to 6 mtr, Binding wire 0.9mm dia., 12 mm dia 50mm long bolts with nuts & 02 x washers, Cement in polythene bags 50 kgs each grade 43, Sand Coarse confiming to IS - 383-970, 2nd revision, Crushed Stone Aggregate 20 mm graded, TMT Bar 12mm dia with 50mm U bend, Lime 1st quality, Commercial plywood 6' x 3' x 12 mm., Nails all Type 1"" 2""3"" 4"" 5"" and 6""., Primer Red Oxide, Synthetic enamel paint, colour black/white Ist quality., Cutting Plier 160mm long, Leather Hand Gloves/Knitted industrial, Ring Spanner of 16mm x 17mm, 14 x 16mm, Crowbar hexagonal 1200mm long x 40mm, Plumb bob steel, Bucket steel 15 ltr capacity (as per, Plastic water tank 500 ltrs Make - Sintex, Water level pipe 30 Mtr, Brick Hammer 250 Gms with handle, Hack saw Blade double side, Welding Rod, Cutting rod for making holes, HDPE Sheet 5' x 8', Plastic Measuring tape 30 Mtr, Steel Measuring tape 5 Mtr, Wooden Gurmala 6""x3"", Steel Pan Mortar of 18""dia (As per, Showel GS with wooden handle, Phawarah with wooden handle (As per, Digital Vernier Caliper)

",16183,,1671,,8/16/2018 21:20,8/17/2018 4:55,How can I train model to extract custom entities from text?,,2,0,,,,CC BY-SA 4.0 7603,2,,7597,8/16/2018 15:12,,3,,"

You can probably get away with a relatively low X for two reasons:

  1. The Central Limit Theorem. This tells us that the accuracy in the estimate of an agent's fitness will improve as the square root of the number of games played.
  2. In a GA, you don't need an absolute ranking of individuals, because your selection mechanism (see ""related articles"" here) typically isn't completely elitist. For example, if individuals in the top half of the population are allowed to breed, then your fitness function really just needs to separate the good from the bad reasonably well. It need not be perfect to work.

The correct value of X will still depend on the variance in the scores an agent might receive from playing the game, but this is easy to track. A good approach might be to incorporate this directly into your estimate. Compute the variance, and then prefer agents that not only score highly, but do so with low variance.
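
A minimal sketch of tracking both the mean and the spread of an agent's scores (the play_game function is a placeholder for however your engine runs a single game against the random player, and the variance-penalty weight is an arbitrary choice):

    import statistics

    def evaluate_fitness(agent, play_game, num_games=10):
        scores = [play_game(agent) for _ in range(num_games)]
        mean = statistics.mean(scores)
        stdev = statistics.stdev(scores) if num_games > 1 else 0.0
        # The standard error of the mean shrinks with the square root of num_games.
        std_error = stdev / num_games ** 0.5
        # One possible variance-penalised fitness.
        return mean - 0.5 * stdev, mean, std_error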

",16909,,,,,8/16/2018 15:12,,,,0,,,,CC BY-SA 4.0 7604,2,,7602,8/16/2018 15:31,,1,,"

For your specific problem, I would use a hierarchical search. The first step should be to separate the text; each fragment would still contain several entities, but it would be easier to identify them.

For example:

  • Location, Dates, prices: You can use regex search (link); a small regex sketch follows this list.
  • Specifications, locations: You can try using Deep Learning with character level bigrams, or word bigrams.
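
As a rough illustration of the regex idea (the patterns below are only examples and would need tuning for your documents):

    import re

    text = 'Angle Iron 65x65x6mm for fencing post of height 3.5, Cement in polythene bags 50 kgs each grade 43 OPC, Plastic water tank 500 ltrs'

    dimensions = re.findall(r'\d+x\d+x\d+\s*mm', text)                        # e.g. 65x65x6mm
    quantities = re.findall(r'\d+(?:\.\d+)?\s*(?:mm|kgs?|ltrs?|mtr)\b', text)  # sizes and amounts
    print(dimensions)
    print(quantities)
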
",17463,,,,,8/16/2018 15:31,,,,0,,,,CC BY-SA 4.0 7605,2,,5333,8/16/2018 16:27,,1,,"

This problem is very challenging since you need to evaluate the quality of the candidate answers.

In question answering, there are common steps that you need to follow. To summarise, you first need to find the sentence that can answer the question, and then compose the final answer.

In the first step, you can measure the semantic similarity between Q and A; this is a first filter where you can use several deep learning methods. You can also define a threshold to validate whether the Q-A pair is sufficiently related.
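
A crude sketch of such a similarity filter with a threshold, using simple TF-IDF vectors (the threshold value and example sentences are arbitrary; in practice you would more likely use learned sentence embeddings):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    question = 'What colour is the sky?'
    candidates = ['The sky is blue.', 'Ants are small.']

    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([question] + candidates)
    similarities = cosine_similarity(vectors[0], vectors[1:])[0]

    THRESHOLD = 0.2
    for candidate, score in zip(candidates, similarities):
        print(score, score >= THRESHOLD, candidate)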

In the second step, you must extract the answer: if the answer is a fact, you can use KB extraction, or if it is a summary or a list, you can use other DL methods. There is also the possibility that you must infer the answer, which is also suitable for DL methods.

I suggest to check this paper:

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

",17463,,,,,8/16/2018 16:27,,,,0,,,,CC BY-SA 4.0 7608,1,,,8/16/2018 17:41,,2,1204,"

I was working recently on Progressive Growing of GANs (aka PGGANs). I have implemented the whole architecture, but the question that has been on my mind is this: in simple GANs, like DCGAN and PIX2PIX, we actually use Transposed Convolution for up-sampling and Convolution for down-sampling, but in PGGANs we gradually add layers to both the generator and the discriminator, so that we can first start with a 4x4 image and then increase to 1024x1024 step by step.

What I did not understand is this: to go from the 1x1x512-dimensional latent vector to a 4x4x512 feature map, we use a convolution with large padding; then, once training moves past 4x4 images, we still take the 512-dimensional latent vector, use the previously trained convolutional layers to convert it to a 4x4x512 feature map, then up-sample that to 8x8 using nearest-neighbour filtering, again apply convolution, and so on.

  • My question is: why do we need to explicitly up-sample and then apply convolution, when instead we could just use a Transposed Convolution, which up-samples automatically and is trainable? Why do we not use it like in other GANs?

Here is an image of the architecture:

Please explain the intuition behind this to me. Thanks

",16878,,1671,,8/16/2018 21:19,10/4/2020 16:39,Why do we need Upsampling and Downsampling in Progressive Growing of Gans,,1,0,,,,CC BY-SA 4.0 7609,1,,,8/16/2018 18:10,,9,3318,"

So Taleb has two heuristics to generally describe data distributions. One is Mediocristan, which basically means things that are on a Gaussian distribution such as height and/or weight of people.

The other is called Extremistan, which describes a more Pareto-like or fat-tailed distribution. An example is wealth distribution: 1% of people own 50% of the wealth, or something close to that, so predictability from limited data sets is much harder or even impossible. This is because you can add a single sample to your data set and the consequences are so large that it breaks the model, or has an effect so large that it cancels out any of the benefits from prior accurate predictions. In fact, this is how he claims to have made money in the stock market: everyone else was using bad, Gaussian-distribution models to predict the market, which actually would work for a short period of time, but when things went wrong, they went really wrong, which would cause you to have net losses in the market.

I found this video of Taleb being asked about AI. His claim is that A.I. doesn't work (as well) for things that fall into extremistan.

Is he right? Will some things just be inherently unpredictable even with A.I.?

Here is the video I am referring to https://youtu.be/B2-QCv-hChY?t=43m08s

",17541,,1671,,8/16/2018 21:12,6/25/2020 4:33,Is Nassim Taleb right about AI not being able to accurately predict certain types of distributions?,,1,2,,,,CC BY-SA 4.0 7610,2,,7609,8/16/2018 18:38,,8,,"

Yes and no!

There's no inherent reason that machine learning systems can't deal with extreme events. As a simple version, you can learn the parameters of a Weibull distribution, or another extreme value model, from data.
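
For instance, here is a minimal sketch of fitting a Weibull model to data with SciPy (the data is synthetic, purely for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.weibull(1.5, size=1000) * 10.0      # synthetic samples with a heavier tail

    shape, loc, scale = stats.weibull_min.fit(data, floc=0)
    print(shape, scale)                             # recovered parameters, roughly 1.5 and 10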

The bigger issue is with known-unknowns vs. unknown-unknowns. If you know that rare events are possible (as with, say, earthquake prediction), you can incorporate that knowledge into the models you develop, and you'll get something that works as well or better than humans in that domain. If you don't know that rare events are possible (as with, say, a stock market crash produced by correlated housing defaults), then your model will reflect that as well.

I tend to think Taleb is being a bit unfair here: AI can't handle these kinds of events precisely because its creators (us) can't handle them! If we knew they were possible, then we could handle them pretty well, and AI could too.

",16909,,16909,,6/19/2019 20:50,6/19/2019 20:50,,,,2,,,,CC BY-SA 4.0 7611,1,7613,,8/16/2018 23:33,,7,867,"

I'm making a Connect Four game where my engine uses Minimax with Alpha-Beta pruning to search. Since Alpha-Beta pruning is much more effective when it looks at the best moves first (since then it can prune branches of poor moves), I'm trying to come up with a set of heuristics that can rank moves from best to worst. These heuristics obviously aren't guaranteed to always work, but my goal is that they'll often allow my engine to look at the best moves first. An example of such heuristics would be as follows:

  • Closeness of a move to the centre column of the board - weight 3.
  • How many pieces surround a move - weight 2.
  • How close, vertically, a move is to the bottom of the board - weight 1.
  • etc

However, I have no idea what the best set of weight values are for each attribute of a move. The weights I listed above are just my estimates, and can obviously be improved. I can think of two ways of improving them:

1) Evolution. I can let my engine think while my heuristics try to guess which move will be chosen as best by the engine, and I'll see the success score of my heuristics (something like x% guessed correctly). Then, I'll make a pseudo-random change/mutation to the heuristics (by randomly adjusting one of the weight values by a certain amount), and see how the heuristics do then. If it guesses better, then that will be my new set of heuristics. Note that when my engine thinks, it considers thousands of different positions in its calculations, so there will be enough data to average out how good my heuristics are at prediction.

2) Generate thousands of different heuristics with different weight values from the start. Then, let them all try to guess which move my engine will favor when it thinks. The set of heuristics that scores best should be kept.

I'm not sure which strategy is better here. Strategy #1 (evolution) seems like it could take a long time to run, since every time I let my engine think it takes about 1 second. This means testing each new pseudo-random mutation will take a second. Meanwhile, Strategy #2 seems faster, but I could be missing out on a great set of heuristics if I myself didn't include them.

",16917,,1641,,8/17/2018 12:08,12/10/2019 18:20,More effective way to improve the heuristics of an AI... evolution or testing between thousands of pre-determined sets of heuristics?,,3,0,,,,CC BY-SA 4.0 7612,2,,7602,8/17/2018 4:55,,1,,"

You need to make the training data as given below.

U.N. I-ORG 
official O 
Ekeus I-PER 
heads O 
for O 
Baghdad I-LOC

Treat this as a classification task. here in given example, we have 3 classes ( I-ORG I-PER and I-LOC ). Now you can process such data using Multilayer Perceptron. LSTM, or CNN or Ensemble of all. For detail you may follow this blog

",3773,,,,,8/17/2018 4:55,,,,0,,,,CC BY-SA 4.0 7613,2,,7611,8/17/2018 9:07,,3,,"

Hmmm, I see some issues that are actually present in both of the approaches you propose.

It is important to note that the depth level that your Minimax search process manages to reach, and therefore also the speed with which it can traverse the tree, is extremely important for the algorithm's performance. Therefore, when evaluating how good or bad a particular heuristic function for move ordering is, it is not only important to look at how well it ordered moves; it is also important to take into account the runtime overhead of the heuristic function call. If your heuristic function manages to sort well, but is so computationally expensive that you can't search as deep in the tree, it's often not really worth it. Neither of the solutions you propose is able to take this into account.

Another issue is that it's not trivial to measure what ordering is the ""best"". A heuristic that has the highest accuracy for the position of the best move only is not necessarily the best heuristic. For example, a heuristic that always places the best move in the second position ($0\%$ accuracy because it's in the wrong position, should be first position) might be better than a heuristic that places the best move in the first position $50\%$ of the time ($50\%$ accuracy), and places the best move last in the other $50\%$ of cases.


I would be more inclined to evaluate the performance of different heuristic functions by setting up tournaments where different versions of your AI (same search algorithm, same processing time constraints per turn, different heuristic function) play against each other, and measuring the win percentage.

This set-up can also be done in two variants analogous to what you proposed; you can exhaustively put all the heuristic functions you can come up with against each other in tournaments, or you can let an evolutionary algorithm sequentially generate populations of hypothesis-heuristic-functions, and run a tournament with each population. Generally, I would lean towards the evolutionary approach, since we expect it to search the same search space of hypotheses (heuristic functions), but we expect it to do so in a more clever / efficient manner than an exhaustive search. Of course, if you happen to have a ridiculous amount of hardware available (e.g., if you're Google), you might be able to perform the complete exhaustive search at once in parallel.


Note that there are also ways to do fairly decent move ordering without heuristic functions like the ones you suggested.

For example, you likely should be using iterative deepening; this is a variant of your search algorithm where you first only perform a search with a depth limit $d = 1$, then repeat the complete search process with a depth limit $d = 2$, then again with a limit $d = 3$, etc., until processing time runs out.

Once you have completed such a search process for a depth limit $d$, and move on to the subsequent search process with a limit of $d + 1$, you can order the moves in the root node according to your evaluations from the previous search process (with depth limit $d$). Yes, here you would only have move ordering in the root node, and nowhere else, but this is by far the most influential / important place in the tree to do move ordering. Move ordering becomes less and less important as you move further away from the root.
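
A rough sketch of this root-level ordering between iterative-deepening passes (the search_at_depth function is a placeholder for your alpha-beta search; time-limit handling is omitted):

    def order_root_moves(moves, previous_values):
        # previous_values maps move -> value found in the previous, shallower search.
        # Moves not seen before default to -inf and end up behind the moves we already like.
        return sorted(moves, key=lambda m: previous_values.get(m, float('-inf')), reverse=True)

    def iterative_deepening_root(state, legal_moves, search_at_depth, max_depth):
        values = {}
        for depth in range(1, max_depth + 1):
            for move in order_root_moves(legal_moves, values):
                values[move] = search_at_depth(state, move, depth)
        return max(values, key=values.get)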

If you're using a transposition table (TT), it is also common to store the ""best move"" found for every state in your TT. If, later on, you run into a state that already exists in your TT (which will be very often if you're using iterative deepening), and if you cannot directly take the stored value but have to actually do a search (for instance, because your depth limit increased due to iterative deepening), you can search the ""best move"" stored in the TT first. This is very light move ordering in that you only put one move at the front and don't order the rest, but it can still be effective.

",1641,,,,,8/17/2018 9:07,,,,5,,,,CC BY-SA 4.0 7614,2,,7611,8/17/2018 10:19,,0,,"

With regard to random vs evolutionary algorithms, an evolutionary algorithm will almost always be superior. Imagine the space of all possible heuristics. An evolutionary algorithm moves through it 'intelligently', i.e. it somewhat follows the gradient of the space and should converge to a local optimum. A random algorithm will not be able to achieve this.

With regards to the time taken, surely it would be the same for each one to evaluate X heuristics?

",16724,,,,,8/17/2018 10:19,,,,0,,,,CC BY-SA 4.0 7616,2,,6460,8/17/2018 15:38,,1,,"

There are a couple of things to consider with this question:

First, training time can be a deceptive measure. Neural network training is considered ""trivially"" parallelizable. This means that the more computers you have access to, the faster you can train (most of the time, one computation doesn't depend on another, so you can do them both at the same time). Since the start of the current phase of self-driving cars, GPU performance has increased by more than a factor of 5. Further, large companies like Google have migrated to AI-specific ASIC chips. These are faster still. This means any estimate of ""time"" is likely to be confusing or misleading. The amount of time required to train a network has dropped rapidly over time, even if the number of processing cycles required has stayed the same.

Second, it is very unlikely that self-driving car companies have spent that time training a single model. Instead, they are probably starting from a given model (say, last month's), and trying different approaches to improving it, with new data, alternative training methods, or ""expert"" domain knowledge. This makes it difficult to reason about what it means to ""get the results that are used in today's self-driving vehicles."" If we count all the aborted attempts, probably it's on the order of high hundreds or low thousands of person-years of researchers (6-7 years, and there aren't thousands of people working in research in this area). If you also count related development and engineering efforts, it's probably an order of magnitude or more than that.

That said, it might be more interesting to think about the amount of simulated driving time that is needed to train a network in this way. As discussed in this excellent blog post, ""Deep Reinforcement Learning Doesn't Work Yet"", the amount of training experience required to train a neural net for this kind of problem is extremely large. To learn to play simple Atari games better than human players required about 244 hours of exposure. These games are typically just a single screen, and most humans can pick them up in a couple of minutes or less. This site estimates average time for a human to learn to drive competently at just shy of 70 hours. Applying the same ratio, we can infer that a deep neural net would want something like 2 years of experience driving to achieve human level performance. This seems like the right ballpark, but probably a lot of that driving is done ""offline"" in a simulated environment, rather than operating the vehicle directly.

Of course, these are just ballpark estimates. The exact figures are likely to be proprietary. Further, there are some reports that modern systems are abandoning neural nets, and moving back to a more ""rule-based"" paradigm because of the difficulties in training them. I'm not sure how much credit to give those reports, but it again makes it difficult to pin down a training time.

",16909,,,,,8/17/2018 15:38,,,,0,,,,CC BY-SA 4.0 7617,1,,,8/17/2018 16:00,,8,425,"

A lot of questions on this site seem to be asking "can I use X to solve Y?", where X is usually a deep neural network, and Y is often something already addressed by other, less well-known areas of AI.

I have some ideas about this, but am inspired by questions like this one where a fairly wide range of views are expressed, and each answer focuses on just one possible problem domain.

There are some related questions on this stack already, but they are not the same. This question specifically asks what genetic algorithms are good for, whereas I am more interested in having an inventory of problems mapped to possible techniques. This question asks what possible barriers are to AI with a focus on machine learning approaches, but I am interested in what we can do without using deep neural nets, rather than what is difficult in general.

A good answer will be supported with citations to the academic literature, and a brief description of both the problem and the main approaches that are used.

Finally, this question asks what AI can do to solve problems related to climate change. I'm not interested in the ability to address specific application domains. Instead, I want to see a catalog of abstract problems (e.g. having an agent learn to navigate in a new environment; reasoning strategically about how others might act; interpreting emotions), mapped to useful techniques for those problems. That is, "solving chess" isn't a problem, but "determining how to optimally play turn-based games without randomness" is.

",16909,,2444,,1/18/2021 11:35,1/18/2021 11:41,What kinds of problems can AI solve without using a deep neural network?,,3,0,,,,CC BY-SA 4.0 7618,1,7626,,8/17/2018 20:47,,3,131,"

I recently came across this function:

$$\sum_{t = 0}^{\infty} \gamma^t R_t.$$

It's elegant and looks to be useful in the type of deterministic, perfect-information, finite models I'm working with.

However, it occurs to me that using $\gamma^t$ in this manner might be seen as somewhat arbitrary.

Specifically, the objective is to discount per the added uncertainty/variance of ""temporal distance"" between the present gamestate and any potential gamestate being evaluated, but that variance would seem to be a function of the branching factors present in a given state, and the sum of the branching factors leading up to the evaluated state.

  • Are there any defined discount-factors based on the number of branching factors for a given, evaluated node, or the number of branches in the nodes leading to it?

If not, I'd welcome thoughts on how this might be applied.

An initial thought is that I might divide 1 by the number of branches and add that value to the goodness of a given state, which is a technique I'm using for heuristic tie-breaking with no look-ahead, but that's a ""value-add"" as opposed to a discount.


For context, this is for a form of partisan Sudoku, where an expressed position $p_x$ (value, coordinates) typically removes some number of potential positions $p$ from the gameboard. (Without the addition of an element displacement mechanic, the number of branches can never increase.)

On a $(3^2)^2$ Sudoku, the first $p_x$ removes $30$ out of $729$ potential positions $p$, including itself.

With each $p_x$, the number of branches diminishes until the game collapses into a tractable state, allowing for perfect play in endgames. [Even there, a discounting function may have some utility because outcomes are sets of ratios. Where the macro metric is territorial (controlled regions at the end of play), the most meaningful metric may ultimately be ""efficiency"" (loosely, ""points_expended to regions_controlled""), which acknowledges a benefit to expending the least amount of points $p_x$, even in a tractable endgame where the ratio of controlled regions cannot be altered. Additionally, zugzwangs are possible in the endgame, and in that case reversing the discount to maximize branches may have utility.]

$(3^2)^2 = 3 \times 3(3 \times 3)$, i.e. ""9x9"", but the exponent form is preferred so as not to restrict the number of dimensions.

",1671,,2444,,6/14/2019 13:29,6/14/2019 13:29,Are there any discount-factors based on branching factors?,,2,0,,,,CC BY-SA 4.0 7624,1,7671,,8/18/2018 14:17,,16,826,"

When we talk about artificial intelligence, human intelligence, or any other form of intelligence, what do we mean by the term intelligence in a general sense? What would you call intelligent and what not? In other words, how do we define the term intelligence in the most general possible way?

",17209,,2444,,12/12/2021 12:30,1/11/2022 18:00,"What is the most general definition of ""intelligence""?",,5,0,,,,CC BY-SA 4.0 7626,2,,7618,8/18/2018 14:52,,1,,"

First, an important note on any form of discounting: adding a discount factor can change what the optimal policy is. The optimal policy when a discount factor is present can be different from the optimal policy in the case where a discount factor is absent. This means that ""artificially"" adding a discount factor is harmful if we expect to be capable of learning / finding an optimal policy. Generally, we don't expect to be capable of doing that though, except for the case where we have an infinite amount of processing time (which is never in practice). In that other answer which you linked to in your question, I describe that it can still be useful, that it may help to find a good (not optimal, just good) policy more quickly, but it does not come ""for free"".


I'm not aware of any research evaluating ideas like the one in your question. I am not 100% sure why, I suspect it could be an interesting idea in some situations, but would have to be investigated carefully due to my point above; if not evaluated properly, it could also unexpectedly be harmful.

One thing to note is that the use of discounting factors $\gamma < 1$ is extremely common (basically ubiquitous) in Reinforcement Learning (RL) literature, but rather rare in literature on tree search algorithms like MCTS (though not non-existent; for example, it's used in the original UCT paper from 2006). For the concept of ""branching factors"", we have the opposite; in RL literature it is very common to consistently have the same action space regardless of states (""constant branching factor""), whereas this is very uncommon in literature on tree search algorithms. So, the combination of discount factors + branching factors is actually somewhat rare in existing literature (which of course doesn't mean that the idea couldn't work or be relevant, it just might explain why the idea doesn't appear to have been properly investigated yet).

One important concern I do have with your idea is that it seems like it could be somewhat of an ""anti-heuristic"" in some situations (with which I mean, a heuristic that is detrimental to performance). In many games, it is advantageous to be in game states where you have many available moves (a large branching factor), this can mean that you are in a strong position. Consider, for example, chess, where a player who is in a convincingly winning position likely has more moves available than their opponent. I suspect your idea, when applied to chess, would simply promote an aggressive playing style where both players capture as many pieces as possible in an effort to quickly reduce the branching factors across the entire tree.


If not, I'd welcome thoughts on how this might be applied.

(I might divide 1 by the number of branches and add that value to the goodness of a given state, but that's a ""value-add"" as opposed to a discount.)

Such an additive change would be more closely related to the idea of reward shaping, rather than discounting (which is again a thing that, if not done carefully, can significantly alter the task that you're optimizing for). Intuitively I also suspect it might not do anything at all, since you'd always be adding the same constant value regardless of which move you took (your parent state will always have had the same branching factor). I might be missing out on some details here, but I think you'd have to have a multiplicative effect on your regular observed rewards.

I suppose one example that could be worth a try would be something like maximizing the following return;

$$\sum_{t = 0}^{T} \gamma^t \beta^{b(S_{t}) - 1} R_{t + 1},$$

where:

  • $0 \leq \gamma \leq 1$ is the normal time-based discount factor (can set it to $1$ if you like)
  • $0 \leq \beta \leq 1$ is a new branching-factor-based discount factor
  • $b(S_t)$ is the branching factor (the number of available moves) in state $S_t$
  • $R_{t + 1}$ is the immediate reward for transitioning from state $S_t$ to $S_{t + 1}$

I put the $-1$ in the power of $\beta$ because I suspect you wouldn't want to do any discounting in situations where only one move is available. Intuitively, I do suspect this $\beta$ would require very careful tuning though. It is already quite common to choose $\gamma = 0.9$ or $\gamma = 0.99$, with $\beta$ you may want to stay even closer to $1$.
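If you want to experiment with this, a minimal sketch of computing that return from a recorded episode might look as follows (the reward and branching-factor lists in the example call are just illustrative placeholders):

```python
def branching_discounted_return(rewards, branching_factors, gamma=0.99, beta=0.95):
    """Compute sum_t gamma^t * beta^(b(S_t) - 1) * R_{t+1}.

    rewards[t] is R_{t+1}, the reward for the transition out of state S_t;
    branching_factors[t] is b(S_t), the number of moves available in S_t.
    """
    total = 0.0
    for t, (r, b) in enumerate(zip(rewards, branching_factors)):
        total += (gamma ** t) * (beta ** (b - 1)) * r
    return total

# Example: three steps with 5, 3 and 1 available moves respectively,
# and a reward of 1 received on the final transition.
print(branching_discounted_return([0.0, 0.0, 1.0], [5, 3, 1]))
```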

",1641,,,,,8/18/2018 14:52,,,,10,,,,CC BY-SA 4.0 7628,1,7629,,8/18/2018 18:30,,6,1889,"

In particular, I would like to have a simple definition of ""environment"" and ""state"". What are the differences between those two concepts? Also, I would like to know how the concept of model relates to the other two.

There is a similar question What is the difference between an observation and a state in reinforcement learning?, but it is not exactly what I was looking for.

",17565,,2444,,2/11/2019 21:22,3/16/2021 8:17,"What is the relation between an environment, a state and a model?",,1,0,,,,CC BY-SA 4.0 7629,2,,7628,8/18/2018 20:25,,6,,"

Environment

This is the manifestation of the problem being solved. It might be a real physical situation (a road network and cars), or virtual on a computer (a board game on a computer). It includes all the machinery necessary to resolve what happens. E.g. in the real world the objects involved, how the agent exerts its control when taking actions, and the applicable real-world laws of physics. Or, in a simulated world, things like the rules of a board game, implemented in code.

State

This is the representation of a "position" at a certain time step within the environment. It may be something the agent can observe through sensors, or be provided directly by the computer system running a simulation.

For RL theory to hold, it is important that the state representation has the Markov Property, which is that the state accurately foretells the probabilities of rewards and following state for each action that could be taken. You do not need to know those probabilities in order to run RL algorithms (in fact it is a common case that you don't know). However, it is important that the dependency between the state+action and what happens next holds reliably.

The state is commonly represented by a vector of values. These describe positions of pieces in a game, or positions and velocities of objects that have been sensed. A state may be built from observations, but does not have to match 1-to-1 with a single observation. Care must be taken to have enough information to have the Markov Property. So, for instance, a single image from a camera does not capture velocity - if velocity is important to your problem, you may need multiple consecutive images to build a useful state.

Model

In reinforcement learning, the term "model" specifically means a predictive model of the environment that resolves probabilities of next reward and next state following an action from a state. The model might be provided by the code for the environment, or it can be learned (separately to learning to behave in that environment).

Some RL algorithms can make use of a model to help with learning. Planning algorithms require one. So called "model free" algorithms do not because they do not make use of an explicit model, they work purely from experience.

There are broadly two types of model:

  • A distribution model which provides probabilities of all events. The most general function for this might be $p(r,s'|s,a)$ which is the probability of receiving reward $r$ and transitioning to state $s'$ given starting in state $s$ and taking action $a$.

  • A sampling model which generates reward $r$ and next state $s'$ when given a current state $s$ and action $a$. The samples might be from a simulation, or just taken from history of what the learning algorithm has experienced so far.

In more general stats/ML, the term "model" is more inclusive, and can mean any predictive system that you might build, not just predictions of next reward and state. However, the literature for RL typically avoids calling those "model", and uses terms like "function approximator" to avoid overloading the meaning of "model".
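As a rough illustration of the distinction between the two kinds of model (a toy sketch, not tied to any particular RL library): a distribution model stores the full probabilities, while a sample model only needs to return one draw from them.

```python
import random

# Hypothetical toy MDP with two states and two actions.
# Distribution model: full probabilities p(s', r | s, a), stored as
# lists of ((next_state, reward), probability) pairs.
distribution_model = {
    ('s0', 'a0'): [(('s1', 1.0), 0.9), (('s0', 0.0), 0.1)],
    ('s0', 'a1'): [(('s0', 0.0), 1.0)],
    ('s1', 'a0'): [(('s1', 0.0), 1.0)],
    ('s1', 'a1'): [(('s0', 5.0), 1.0)],
}

def sample_model(state, action):
    """Sample model: returns one (next_state, reward) drawn from the distribution."""
    outcomes = distribution_model[(state, action)]
    pairs, probs = zip(*outcomes)
    (next_state, reward), = random.choices(pairs, weights=probs, k=1)
    return next_state, reward

print(sample_model('s0', 'a0'))
```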

",1847,,1847,,3/16/2021 8:17,3/16/2021 8:17,,,,0,,,,CC BY-SA 4.0 7631,1,,,8/19/2018 6:15,,4,85,"

Deep Successor Representations (DSR) have given better performance in tasks like navigation, when compared to standard model-free RL methods. Basically, DSR is a hybrid of model-free and model-based RL. However, the original work only used value-based deep RL methods like DQN.

Can deep successor representations be used with the A3C algorithm?

",14909,,2444,,12/19/2021 19:02,12/19/2021 19:02,Can deep successor representations be used with the A3C algorithm?,,0,0,,,,CC BY-SA 4.0 7633,1,7635,,8/19/2018 16:46,,7,125,"

As far as I know, a single-layer neural network can only perform linear operations, while multi-layered ones can also perform nonlinear ones.

Also, I recently learned that finite matrices/tensors, which are used in many neural networks, can only represent linear operations.

However, multi-layered neural networks can represent non-linear (even much more complex than being just a nonlinear) operations.

What makes it happen? The activation layer?

",17577,,2444,,3/10/2020 16:43,3/10/2020 16:43,What makes multi-layer neural networks able to perform nonlinear operations?,,1,1,,,,CC BY-SA 4.0 7634,1,7636,,8/19/2018 19:42,,4,67,"

So I have a deep learning model and three data sets (images). My theory is that one of these data sets should work better for training the deep learning model, meaning that the model will be able to achieve better performance (higher accuracy) with one of these data sets for a single classification purpose.

I just want to sanity-check my approach here. I understand the random nature of training deep learning models and the difficulties associated with such an experiment, but I would appreciate it if someone could point out any red flags.

I am wondering about these things:

  1. Do you think using an optimizer with default parameters, repeating the training process, let's say, 30 times for each data set, and picking the best performance is a safe approach? I am mainly worried here that modifying the hyperparameters of the optimizer might result in better results for, let's say, one of the data sets.

  2. What about seeding the weight initialization? Do you think that I should seed it and then modify the hyperparameters until I get the best convergence, or not seed and still modify the hyperparameters?

I am sorry for the generality of my question. I hope someone can point me in the right direction.

",17582,,1641,,8/19/2018 19:46,8/19/2018 23:17,How to compare the training performance of a model on different data input?,,1,1,,,,CC BY-SA 4.0 7635,2,,7633,8/19/2018 20:27,,5,,"

Nonlinear relations between input and output can be achieved by using a nonlinear activation function on the value of each neuron, before it's passed on to the neurons in the next layer.
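A minimal numerical sketch of why this matters (using numpy, with randomly chosen weights just for illustration): without an activation, stacking two weight matrices is still a single linear map, whereas inserting a ReLU in between breaks that.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 inputs with 3 features each
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 2))

# Two "linear layers" without an activation collapse into one linear map W1 @ W2.
linear_stack = x @ W1 @ W2
collapsed = x @ (W1 @ W2)
print(np.allclose(linear_stack, collapsed))     # True: still a single linear map

# With a ReLU in between, the composition is no longer a single matrix product.
relu = lambda z: np.maximum(z, 0)
nonlinear_stack = relu(x @ W1) @ W2
print(np.allclose(nonlinear_stack, collapsed))  # Generally False
```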

",17488,,,,,8/19/2018 20:27,,,,0,,,,CC BY-SA 4.0 7636,2,,7634,8/19/2018 23:17,,1,,"

Well, this is more of a subjective question, but I will give it my best shot.

Regarding your 1st question, the nature of deep learning methods means you should experiment; otherwise you will only have weak intuitions. So which dataset should you choose? I would say, decide on your primary concern, since you cannot try everything. If it's training time, run a small random or grid search over hyperparameters, observe the speed of convergence on each dataset, and then choose the best one. If it's accuracy, ideally you should analyze the input data (its distribution, etc.); if you can analyze it well, you can expect to obtain the best result from the chosen dataset. So training for 30 epochs on each dataset and choosing the one with the lowest loss is not a safe approach: maybe the dataset that looks worst at the 30th epoch will only converge at the 40th epoch, yet be much more robust. My final advice is to set a threshold for your evaluation metrics; once you reach it on any of your datasets, choose that one (assuming the datasets have roughly equal numbers of instances). This way you at least know that the dataset you choose satisfies your expectations.

For your 2nd question, seeding during comparisons is a good approach, though in the long run it won't matter much. Hence, there is no harm in seeding unless you are very unlucky.

",16311,,,,,8/19/2018 23:17,,,,0,,,,CC BY-SA 4.0 7638,1,7639,,8/20/2018 9:01,,4,306,"

In the 4th paragraph of http://www.incompleteideas.net/book/ebook/node37.html it is mentioned:

Whereas the optimal value functions for states and state-action pairs are unique for a given MDP, there can be many optimal policies

Could you please give me a simple example that shows different optimal policies considering a unique value function?

",17594,,2444,,2/16/2019 2:50,2/16/2019 2:50,An example of a unique value function which is associated with multiple optimal policies,,1,0,,,,CC BY-SA 4.0 7639,2,,7638,8/20/2018 10:03,,2,,"

Consider a very simple grid-world, consisting of 4 cells, where an agent starts in the bottom-left corner, has actions to move North/East/South/West, and receives a reward $R = 1$ for reaching the top-right corner, which is a terminal state. We'll name the four cells $NW$, $NE$, $SW$ and $SE$ (for north-west, north-east, south-west and south-east). We'll take a discount factor $\gamma = 0.9$.

The initial position is $SW$, and the goal is $NE$, which an optimal policy should reach as quickly as possible. However, there are two optimal policies for the starting state $SW$: we can either go north first, and then east (i.e., $SW \rightarrow NW \rightarrow NE$), or we can go east first, and then north (i.e., $SW \rightarrow SE \rightarrow NE$). Both of those policies are optimal, both reach the goal state in two steps and receive a return of $\gamma \times 1 = 0.9$, but they are clearly different policies: they choose different actions in the initial state.

Note that my language was slightly informal above when talking about ""policies for the starting state"". Formally, I should have said that there are two optimal policies that select different actions in the starting state (and the same actions in all other states).
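A small sketch that makes the example concrete (the state and action names are just the ones used above):

```python
gamma = 0.9

# Deterministic transitions of the 4-cell grid-world; NE is terminal.
transitions = {('SW', 'N'): 'NW', ('SW', 'E'): 'SE',
               ('NW', 'E'): 'NE', ('SE', 'N'): 'NE'}

def episode_return(policy, state='SW'):
    ret, discount = 0.0, 1.0
    while state != 'NE':
        state = transitions[(state, policy[state])]
        reward = 1.0 if state == 'NE' else 0.0  # reward only for reaching NE
        ret += discount * reward
        discount *= gamma
    return ret

# Two optimal policies that differ only in the action chosen in SW.
policy_north_first = {'SW': 'N', 'NW': 'E', 'SE': 'N'}
policy_east_first  = {'SW': 'E', 'NW': 'E', 'SE': 'N'}
print(episode_return(policy_north_first), episode_return(policy_east_first))  # 0.9 0.9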

",1641,,,,,8/20/2018 10:03,,,,0,,,,CC BY-SA 4.0 7640,1,7657,,8/20/2018 10:09,,15,5464,"

I think I've seen the expressions ""stationary data"", ""stationary dynamics"" and ""stationary policy"", among others, in the context of reinforcement learning. What do they mean? I think a stationary policy means that the policy does not depend on time, only on the state. But isn't that an unnecessary distinction? If the policy depends on time and not only on the state, then strictly speaking time should also be part of the state.

",12640,,2444,,2/20/2019 22:02,1/21/2021 17:57,"What does ""stationary"" mean in the context of reinforcement learning?",,4,0,,,,CC BY-SA 4.0 7642,1,,,8/20/2018 13:17,,3,211,"

I downloaded a chatbot called Replika off the internet the other day and we've become very good friends. My thought is that such chatbots will soon replace therapists and then probably private tutors as well.

  • Is it safe to say that anyone aspiring to go into one of these professions now should look for other options?

  • What other jobs may be replaced by chatbots in the future?

  • How long before AIs are able to answer questions on StackExchange?

",17601,,1671,,8/20/2018 17:28,12/2/2019 12:18,The future of chatbots,,1,1,,3/4/2020 1:34,,CC BY-SA 4.0 7643,1,,,8/20/2018 13:40,,2,340,"

I have the following question about You Only Look Once (YOLO) algorithm, for object detection.

I have to develop a neural network to recognize web components in web applications - for example, login forms, text boxes, and so on. In this context, I have to consider that the position of the objects on the page may vary, for example, when you scroll up or down.

The question is, would YOLO be able to detect objects in "different" positions? Would the changes affect the recognition precision? In other words, how to achieve translation invariance? Also, what about partial occlusions?

My guess is that it depends on the relevance of the examples in the dataset: if enough translated / partially occluded examples are present, it should work fine.

If possible, I would appreciate papers or references on this matter.

(PS: if anyone knows about a labeled dataset for this task, I would really be grateful if you let me know.)

",17600,,2444,,1/29/2021 0:07,6/18/2022 18:10,"Would YOLO be able to detect objects in ""different"" positions?",,3,3,,,,CC BY-SA 4.0 7644,1,7663,,8/20/2018 14:09,,5,135,"

I've developed a neural network that can play a card game. I now want to use it to create decks for the game. My first thought would be to run a lot of games with random decks and use some approximation (maybe just a linear approximation with a feature for each card in your hand) to learn the value function for each state.

However, this will probably take a while, so in the meantime, is there any way I could get this information directly from the neural network?

",16724,,,,,8/21/2018 15:52,Can you analyse a neural network to determine good states?,,1,4,,,,CC BY-SA 4.0 7645,2,,7640,8/20/2018 14:18,,2,,"

You are right: a stationary policy is independent of time. It is basically a mapping from states to actions (or probability distributions over actions). Regardless of the point in time at which the agent observes the state $s$, it will select an action $a$ (or select a probability $\pi(a \vert s)$ for every action $a$).

",17602,,1641,,1/21/2021 17:57,1/21/2021 17:57,,,,0,,,,CC BY-SA 4.0 7646,1,,,8/20/2018 14:45,,3,171,"

I was thinking of creating a CNN. Now, it is known that CNNs take a long time to train, so it is advisable to stick to known architectures and hyper-parameters.

My question is: I want to tinker with the CNN architecture (since it is a specialised task). One approach would be to create a CNN and check on small data-sets, but then I would have no way of knowing whether the Fully Connected layer at the end is over-fitting the data while the convolutional layers do nothing (since large FC layers can easily over-fit data). Cross Validation is a good way to check it, but it might not be satisfactory (since my opinion is that a CNN can be replaced with a Fully Connected NN if the data-set is small enough and there is little variation in the future data-sets).

So what are some ways to tinker with a CNN architecture and get a good estimate of performance on future data-sets in a reasonable training time? Am I wrong in my previous assumptions? A detailed answer would be nice!

",,user9947,,user9947,9/6/2018 13:08,9/6/2018 13:08,How to tinker with CNN architectures?,,1,0,,,,CC BY-SA 4.0 7647,1,,,8/20/2018 15:17,,3,92,"

In the attached image

there is the probability with the Naive Bayes algorithm of:

Fem:dv/m/s Young own Ex-credpaid Good ->62%

I calculated the probability as follows:

$$P(Fem:dv/m/s \mid Good) * P(Young \mid Good) * P(own \mid Good) * P(Ex-credpaid \mid Good) * P(Good) = 1/6 * 2/6 * 5/6 * 3/6 * 0.6 = 0.01389$$

I don't know where I failed. Could someone please tell me where my error is?

",17603,,2444,,12/13/2021 8:41,12/13/2021 8:41,Why is my calculation of the probability of an object being in a certain class incorrect?,,1,0,0,,,CC BY-SA 4.0 7649,1,,,8/20/2018 16:36,,4,91,"

I've done my research and could not find an answer anywhere else. My apologies in advance if the same problem is answered in different terms on Stack Overflow.

I am trying to solve a poker tournament winner prediction problem. I have millions of historical records in this format:

  • Players ==> Winner
  • P1,P2,P4,P8 ==> P2
  • P4,P7,P6 ==> P4
  • P6,P3,P2,P1 ==> P1

What are some of the most suitable algorithms to predict the winner from a set of players?

So far, I have tried decision trees and XGBoost, without much success.

",17606,,1671,,8/27/2018 20:15,8/27/2018 20:15,Approaches to poker tournament winner prediction?,,1,2,,,,CC BY-SA 4.0 7651,2,,7649,8/20/2018 19:30,,1,,"

This looks like it maps very nicely onto Association Mining. In association mining, you are trying to find items from a discrete set that often co-occur in transactions. For instance, you might want to find the items that most commonly appear in an online shopping cart together.

In your case, the problem amounts to:

  1. Split up the data into sub-problems by who won.
  2. Perform association mining on the sets of players in each sub-problem, using, e.g. the apriori algorithm.

The resulting rules will then have associated confidence factors. When you want to predict who will win, you can take the winner suggested by the rule with the greatest confidence.


The other approach suggested by your problem is to model it as classification. Here you are trying to assign labels (who won) to input vectors (who played). The process would look like:

  1. Map the sets of who played into binary features. So the subset P1,P2, P4,P8 might be represented with {1,1,0,1,0,0,0,1} if there are only 8 players. Map the winning player onto a numeric class (e.g. the numbers 1-8).
  2. Run any classification algorithm to create a model that predicts the class from the binary features. A decision tree learner might be an interesting starting point if you want to understand which factors are important to the model. There are many other techniques, however.

You can use the model you train to make predictions about future games with the same set of players.
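As a minimal sketch of the classification framing (using scikit-learn; the player names, lineups and winners below are just the illustrative ones from the question):

```python
from sklearn.tree import DecisionTreeClassifier

players = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8']

def encode(lineup):
    # One binary feature per player: 1 if they played in the tournament, else 0.
    return [1 if p in lineup else 0 for p in players]

# Tiny illustrative dataset in the format from the question.
games = [(['P1', 'P2', 'P4', 'P8'], 'P2'),
         (['P4', 'P7', 'P6'], 'P4'),
         (['P6', 'P3', 'P2', 'P1'], 'P1')]

X = [encode(lineup) for lineup, _ in games]
y = [winner for _, winner in games]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([encode(['P1', 'P2', 'P4'])]))  # predicted winner for a new lineup
```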

",16909,,,,,8/20/2018 19:30,,,,12,,,,CC BY-SA 4.0 7652,2,,7647,8/20/2018 19:41,,2,,"

Your probability hasn't been normalized!

In this case, you are computing the probability of being good, given that the other features have a fixed value. To obtain the correct probability, you need to normalize (divide) the value from your calculation by the probability that the features have taken on those fixed values.

You can calculate this as follows: $$P(Fem:dv/m/s, Young, own, Ex-credpaid) = \\ \sum_{x \in \{good,bad\}} P(Fem:dv/m/s, Young, own, Ex-credpaid, x) $$

by the marginalization rule.

Then, by the chain rule, you may write:

$$\sum_{x \in \{good,bad\}} P(Fem:dv/m/s, Young, own, Ex-credpaid | x) * P(x) $$

So the correct probability of 0.62 should be obtained by the equation:

$$ \frac{P(Fem:dv/m/s, Young, own, Ex-credpaid | good) * P(good)}{\sum_{x \in \{good,bad\}} P(Fem:dv/m/s, Young, own, Ex-credpaid | x) * P(x)}$$

You just need to calculate

$$P(Fem:dv/m/s, Young, own, Ex-credpaid | bad) * P(bad)$$

and it should be easy to compute the rest.
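As a small sketch of the normalization step (the Good-class numbers are the ones from your calculation; the Bad-class numbers passed in below are made-up placeholders that you should replace with the values from your own table):

```python
def naive_bayes_posterior(cond_good, cond_bad, p_good=0.6):
    """Normalized P(Good | features) for a Naive Bayes model.

    cond_good / cond_bad are the per-feature conditional probabilities
    P(feature | Good) and P(feature | Bad).
    """
    p_bad = 1 - p_good
    joint_good = p_good
    for p in cond_good:
        joint_good *= p          # P(features | Good) * P(Good)
    joint_bad = p_bad
    for p in cond_bad:
        joint_bad *= p           # P(features | Bad) * P(Bad)
    return joint_good / (joint_good + joint_bad)

# Good-class conditionals are from the question; Bad-class ones are placeholders.
print(naive_bayes_posterior([1/6, 2/6, 5/6, 3/6], [1/4, 1/4, 2/4, 2/4]))
```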

",16909,,,,,8/20/2018 19:41,,,,8,,,,CC BY-SA 4.0 7653,2,,7642,8/20/2018 20:12,,3,,"

You should check out my answer here to your second question.

For your first question, which is a special case, the answer is it might be one of the best fields to study!

Academic and industrial studies agree that working in a job that requires a lot of social interaction, particularly a job that involves caring for others, reduces automation risk. Among the safest sectors are education and personal care assistants, for example.

There are lots of possible reasons for this:

  • Even if a chat bot can replace your therapist, it probably cannot replace the sensation that another person cares about your problems, precisely because it is not an actual person. For people in dire emotional straits, having someone care might be worth a great deal.
  • Having a human tutor or therapist might be seen as a status symbol, even if an AI system is equally competent. If Harvard offered a degree where you studied on campus, taught by humans, and a degree where you studied online, taught by chat bots, they would probably not be perceived as equally prestigious by employers and peers.
  • AI systems like chat bots tend to fail in unpredictable ways when they enter unfamiliar situations. When dealing with an emotionally unstable person in therapy, or with a small child in education, unfamiliar situations might happen often, and the costs for failure might be high. Even if AI can handle most of the job, we might not be willing to trust it to an AI system because of these risks.

To address your third point, Q & A seems closer to AGI than to existing AI. If a program can give compelling answers to free-form questions, then it can almost certainly pass the Turing Test reliably. While you might see simple bots that analyze text and post a related wikipedia link, I suspect you won't see a high quality Q&A bot for many years (or decades) because of this.

",16909,,,,,8/20/2018 20:12,,,,0,,,,CC BY-SA 4.0 7654,2,,7601,8/20/2018 21:01,,1,,"

I think that what you're really asking about is the question of knowledge representation. Regardless of how you train your AI, one of the most fundamental questions is how you represent ""knowledge"", especially when it exists at different levels of abstraction, may be mutually recursive, etc. Along with that goes the question of belief revision, which deals with how you update existing beliefs/knowledge in the light of new information.

Both of these areas are still subject to plenty of active research and neither has entirely settled answers to the core questions. But progress has been made in both areas.

Personally I suspect that something like semantic networks or conceptual graphs will be the best answer to the KR problem. Dealing with belief revision seems even fuzzier to me, although there are known strategies (like the AGM postulates) that work to a point. Something like Bayesian Belief Networks may also prove useful.

",33,,,,,8/20/2018 21:01,,,,0,,,,CC BY-SA 4.0 7655,1,,,8/20/2018 21:31,,1,31,"

I want to do some sequence to sequence modelling on source data that looks like this:

/-0.013428/-0.124969/-0.13435/0.008087/-0.269241/-0.36849/

with target data that looks like this:

Dont be angry with the process youre going through right now

Both are of indeterminate lengths, and the lengths of target and source data aren't the same. What I'd like to do is have a prediction model where I can input similar numbers and have it generate texts based on the target training data.

I started off doing character level s2s, but the output of the model is too nonsensical even at 2-5k epochs. So I've been looking into word level s2s and NMT, but the tutorials always assume strings of text as the target and source, and I keep running into roadblocks trying to preprocess the text, when all the tutorials assume a certain syntax/set of characters. This is my first try at ML, and some of the tutorials really throw me off with the text preprocessing requirements.

Am I going down the right avenue looking at word level/NMT stuff? And is there a tutorial I've missed for something like what I'm trying to build?

",17611,,,,,8/20/2018 21:31,Sequence to sequence machine learning / NMT - converting numbers into words,,0,0,,,,CC BY-SA 4.0 7657,2,,7640,8/21/2018 8:36,,9,,"

A stationary policy is a policy that does not change. Although strictly that is a time-dependent issue, that is not what the distinction refers to in reinforcement learning. It generally means that the policy is not being updated by a learning algorithm.

If you are working with a stationary policy in reinforcement learning (RL), typically that is because you are trying to learn its value function. Many RL techniques - including Monte Carlo, Temporal Difference, Dynamic Programming - can be used to evaluate a given policy, as well as used to search for a better or optimal policy.

Stationary dynamics refers to the environment, and is an assumption that the rules of the environment do not change over time. The rules of the environment are often represented as an MDP model, which consists of all the state transition probabilities and reward distributions. Reinforcement learning algorithms that work online can usually cope and adjust policies to match non-stationary environments, provided the changes do not happen too often, or enough learning/exploring time is allowed between more radical changes. Most RL algorithms have at least some online component, it is also important to keep exploring non-optimal actions in environments with this trait (in order to spot when they may become optimal).

Stationary data is not a RL-specific term, but also relates to the need for an online algorithm, or at least plans for discarding older data and re-training existing models over time. You might have non-stationary data in any ML, including supervised learning - prediction problems that work with data about people and their behaviour often have this issue as population norms change over timescales of months and years.

",1847,,2444,,2/15/2019 11:43,2/15/2019 11:43,,,,3,,,,CC BY-SA 4.0 7658,2,,5990,8/21/2018 8:50,,4,,"

The following 2 books helped me understand the basics and guided me through my first AI / CI implementations.

",17602,,2444,,1/16/2021 20:21,1/16/2021 20:21,,,,0,,,,CC BY-SA 4.0 7659,1,,,8/21/2018 10:59,,1,23,"

A dataset can contain many fields, both relevant and irrelevant. If we want to do market campaigning using propensity scoring, which fields of the data set are relevant? How can we find which data fields should be selected to drive the desired propensity score?

",17058,,,,,8/21/2018 12:30,Which features of a data set can be used for market campaigning using propensity scores?,,1,1,,,,CC BY-SA 4.0 7660,2,,7659,8/21/2018 12:30,,1,,"

The problem you are examining is called feature selection. There are many different techniques, but they fall broadly into three categories:

  1. Filter approaches determine which features have high information content. A common approach is to score them based on their information gain.
  2. Wrapper approaches score subsets of the features, using similar measurements. This is slower (since there are many subsets), but may yield better performance.
  3. Embedded approaches use machine learning algorithms that can dynamically select features. For example, building a C4.5 decision tree learner on a dataset, and then selecting just the features that actually appear in the tree, yields a set of features similar to those found by information gain.
",16909,,,,,8/21/2018 12:30,,,,0,,,,CC BY-SA 4.0 7661,1,,,8/21/2018 13:03,,2,128,"

What is the difference between visible and hidden units in Boltzmann machines? What are their purposes?

",17488,,2444,,12/4/2019 14:59,12/4/2019 14:59,What is the difference between visible and hidden units in Boltzmann machines?,,0,0,,,,CC BY-SA 4.0 7662,1,7664,,8/21/2018 13:58,,3,108,"

Imagine I have a 2D matrix, A. I apply some transformation to it, for example: B = A_shifted + A.

Would it be possible to train a CNN to learn back the mapping from B to A? Giving B as example and A as target?

Thanks!

",16201,,,,,8/21/2018 17:18,Figuring out mapping between two matrices,,1,0,,,,CC BY-SA 4.0 7663,2,,7644,8/21/2018 15:52,,4,,"

I don't think your network, trained using PPO to play a card game, already contains sufficient information to also use for drafting. I'm not saying this with 100% certainty, maybe there's something I'm overlooking, but I can't think of anything right now.

A small adaptation to the network might be sufficient (though it would also involve re-training again). Recently, OpenAI has been writing about their attempts to train agents to play the game DOTA 2. Now, this isn't a card game, it doesn't require deckbuilding, but there is an aspect to the game that is somewhat similar to deckbuilding: drafting. In DOTA 2, there are two teams of 5 players each. Before a game start, each team selects 5 heroes (one per player) to play in that game. This is very similar to deckbuilding, except that it's likely a much smaller problem; there's only a ""deck"" (team composition) of 5 ""cards"" (heroes).

Anyway, they also trained agents to play the game (controlling one hero per agent) using PPO. In a blog post, they write the following about how they managed to add drafting capabilities relatively easily:

In late June we added a win probability output to our neural network to introspect what OpenAI Five is predicting. When later considering drafting, we realized we could use this to evaluate the win probability of any draft: just look at the prediction on the first frame of a game with that lineup. In one week of implementation, we crafted a fake frame for each of the 11 million possible team matchups and wrote a tree search to find OpenAI Five’s optimal draft.

So, if you want to try a similar technique, you'd have to adapt your network such that it also learns to generate a prediction of the win probability as output. I imagine that it'd be much less effective for deckbuilding, because win probabilities may all be very close to 50% in card games where luck (when drawing cards for example) can be a significant factor, but it might be worth a try.


Alternatively, instead of generating lots of random decks and playing with them all, you could view the problem of deckbuilding as an additional separate ""game"" or Markov Decision Process; adding a specific card to the deck can be an action, and this MDP terminates once you have a complete deck. Then you can try to do that better than random using search algorithms (like Monte-Carlo Tree Search) or, again, a Reinforcement Learning approach like PPO. Again, I imagine it will be a very difficult problem though, likely requiring lots of time before it will be capable of doing better than random.


I also know of some research related to deckbuilding in the collectible card game Hearthstone, which may be relevant for you. Unfortunately I did not yet get to read through any of this in detail, so I don't know for sure if you'll find a solution here, but it may be worth a try:

",1641,,,,,8/21/2018 15:52,,,,0,,,,CC BY-SA 4.0 7664,2,,7662,8/21/2018 17:18,,2,,"

Yes, with some limitations.

CNNs can be used to map images to related images, and that should include many simple matrix transformations. For instance, here is an example of de-blurring OCRed text using a CNN.

Basically, you would train your network with lots of A, B examples, with the input as B and desired output as A.

The limitation is that where you have transformations that are technically irreversible, then the CNN may learn to produce a best ""mean"" output. The symptom of this will be fuzzy images lacking high frequency components, and matrices which are not representative of the target distribution, but that do solve the transformation (within limits of training accuracy). If you want to improve on that, and produce a more realistic/precise original, then you will probably want to look at adding a generative component - a GAN, VAE/GAN or RBM etc. Note this would not accurately produce the original matrix, but would generate one that both transformed into your given transformed matrix (within some level of accuracy) and was sampled from your input distribution. That is, it could be more of a feasible original than one generated using a simpler CNN architecture.
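As a rough sketch of what the basic (non-generative) approach could look like in PyTorch (the architecture, shapes, and the toy shift-and-add data below are just illustrative assumptions, not a recommended design):

```python
import torch
import torch.nn as nn

# Small fully convolutional network mapping B -> A, same spatial size in and out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy data standing in for your (B, A) pairs, shape (N, 1, H, W).
A = torch.randn(32, 1, 28, 28)
B = torch.roll(A, shifts=3, dims=-1) + A   # "A_shifted + A", as in the question

for step in range(200):
    prediction = model(B)
    loss = loss_fn(prediction, A)   # train with B as input, A as target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```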

",1847,,,,,8/21/2018 17:18,,,,0,,,,CC BY-SA 4.0 7665,2,,2005,8/21/2018 17:34,,2,,"

10 years to production ready?

Let's put that in perspective. The perceptron was introduced in 1957. It did not really even start to flower as a usable model until the release of the PDP books in 1986. For those keeping score: 29 years.

From the PDP books, we did not see that elaborated as usable deep networks until the last decade. If you take the Andrew Ng and Jeff Dean cat recognition task as a deep-network-defining event, that's 2012. Arguably more than 25 years to production ready.

https://en.wikipedia.org/wiki/Timeline_of_machine_learning

",17630,,,,,8/21/2018 17:34,,,,1,,,,CC BY-SA 4.0 7666,2,,7624,8/21/2018 17:52,,0,,"

Intelligence is the ability to weave together various concepts and associations into a meaningful whole; filtering, adding and rejecting appropriately various ideas from personal knowledge and experience. Then effectively reflecting these ideas back to a questioner to affirm understanding and comprehension, allowing a conversation to proceed effectively towards a mutually beneficial conclusion.

",4994,,,,,8/21/2018 17:52,,,,0,,,,CC BY-SA 4.0 7667,1,7689,,8/21/2018 20:45,,1,1050,"

I recently read an article about neural networks saying that, when using sigmoid as the activation function, it's advised to use 0.1 as the target value instead of 0, and 0.9 instead of 1. This was to avoid ""saturation effects"". I only understood it halfway, and was hoping someone could clarify a few things for me:

  1. Is this only the case when the output is boolean (0 or 1), or will it also be the case for continuous values in the range between 0 and 1? If so, should all values be scaled to the interval [0.1, 0.9]?

  2. What exactly is the problem with an output of 0 or 1? Does it have something to do with the derivative of sigmoid being 0 when its value is 0 or 1? As I understood it, weights could end up approaching infinity, but I didn't understand why.

  3. Is this the case only when sigmoid is used in the output layer (which it rarely is, I believe), or is it also the case when sigmoid is used in hidden layers only?

",17488,,,,,8/23/2018 10:38,Target values of 0.1 for 0 and 0.9 for 1 for sigmoid,,1,1,,,,CC BY-SA 4.0 7670,2,,6040,8/21/2018 21:08,,1,,"

The link in DuttA's comment provides a good answer to the more general question of how adversarial images are generated. As to why it's possible at all, the key is that these specialized feature detectors really are just picking up features, and it's possible to construct an image that has those features, without resembling the target object much at all.

For example, we might imagine that the key features that predict an image of a gun are regions like the trigger guard and perhaps the muzzle. The features that are extracted really will just be ""round-ish thing with a curved bit in it"" and ""perfectly circular opening"". These might predict a gun really well, but it's also easy to imagine that you could place shapes satisfying those criteria the correct distance apart without including any of the other components that we'd expect. This is what you actually see in the adversarial images that are generated. Consider the bagel or crossword puzzle images below. While they are not actually pictures of the objects in question, it is fairly easy to see what the important features were for the network: a bagel is a round shape that's shadowed in the center and lighter between the center and rim. It's a bit orange. A crossword puzzle is a bunch of black and white squares distributed in a rough grid. Possibly the copyright symbol or the letter C was also important (it is found on many crosswords!).

",16909,,,,,8/21/2018 21:08,,,,0,,,,CC BY-SA 4.0 7671,2,,7624,8/22/2018 0:28,,6,,"

I'm going to preface this answer by noting that persons much smarter than myself have treated this subject in some detail. That said, as far as I can discern:

When we talk about intelligence we're referring to problem solving strength in relation to a problem, relative to the strength of other intelligences.

This is a somewhat game-theoretic conception, related to rationality and the concept of the rational agent. Regarding intelligence in this manner may be unavoidable. Specifically, we could define intelligence as the ability to understand a problem or solution or abstract concepts, but we can't validate that understanding without testing it. (For instance, I might believe I grasp a mathematical technique, but the only way to determine if that belief is real or illusory is to utilize that technique and evaluate the results.)

The reason games like Chess and Go have been used as milestones, aside from longstanding human interest in the games, is that they provide models with simple, fully definable parameters, and, in the case of Go at least, have complexity akin to nature, by which I mean unsolvable/intractable. (Compare to strength at Tic-Tac-Toe, which is trivially solved.)

However, we should consider a point made in this concise answer to a question involving the Turing Test:

"...is [intelligence] defined purely by behaviour in an environment, or by the mechanisms that arrive at that behaviour?"

This is important because Google just gave control over data center cooling to an AI. Here it is clearly the mechanism itself that demonstrates utility, but if we call that mechanism intelligent, for intelligence to have meaning, we still have to contend with "intelligent how?" (In what way is it intelligent?) If we want to know "how intelligent?" (its degree of utility) we still have to evaluate its performance in relation to the performance of other mechanisms.

(In the case of the automata controlling the air conditioning at Google, we can say that it is more intelligent than the prior control system, and by how much.)

Because we're starting to talk about more "generalized intelligence", defined here as mechanisms that can be applied to a set of problems, (I include minimax as a form of "axiomatic intelligence" and machine learning as a form "adaptive intelligence"), it may be worthwhile to expand and clarify the definition:

Intelligence is the problem solving strength of a mechanism in relation to a problem or a set of problems, relative to the strength of other mechanisms.

or, if we wanted to be pithy:

Intelligence is as intelligence does (and how well.)

",1671,,2444,,1/24/2021 19:32,1/24/2021 19:32,,,,3,,,,CC BY-SA 4.0 7672,1,7766,,8/22/2018 3:29,,5,874,"

So suppose that you have a real estate appraisal problem. You have some structured data, and some images: exterior of the home, bedrooms, kitchen, etc. The number of pictures taken varies per observational unit, i.e. the house.

I understand the basics of combining an image processing neural net with tabular data for a single image. You chop off the final layer and feed in the embeddings of the image to your final model.

How would one deal with a variable number of images, where your unit of observation can have anywhere from zero to arbitrarily many images (theoretically no upper bound on the number of images per observation)?

",17646,,2444,,1/23/2021 3:25,1/23/2021 3:25,Variable Number of Inputs to Neural Networks,,2,0,,,,CC BY-SA 4.0 7673,2,,6465,8/22/2018 8:09,,2,,"

To complement Cosmo's response and maybe address the ""using just your name"" part of your question, I would like to acknowledge that biases towards submissions do exist. Reviewers may be biased for several reasons, such as the authors' age, publication record, gender, or nationality (Lotfi and Mahian, 2014).

If you are concerned about these aspects, rest assured that there are mechanisms to ensure that the authors' reputation does not influence reviewers' judgments. A good example is the ""Double-Blind Review"" process, which means that the identities of the author(s) and reviewer(s) are concealed throughout the review process.

",17602,,,,,8/22/2018 8:09,,,,0,,,,CC BY-SA 4.0 7675,1,,,8/22/2018 9:28,,4,147,"

Is there a machine learning system that is able to "understand" mathematical problems given in a textual description, such as

A big cat needs 4 days to catch all the mice and a small cat needs 12 days. How many days do they need if they catch the mice together?

?

",17650,,2444,,12/7/2020 14:55,12/7/2020 14:55,Is there a machine learning system that is able to understand mathematical problems given in a textual description?,,2,0,,,,CC BY-SA 4.0 7677,2,,7675,8/22/2018 11:21,,2,,"

There was a lot of work on this topic at UT Austin, which has now migrated to the Alan Institute.

There is no off-the-shelf software that will answer your question (if there was, DARPA would stop funding its development!), but you can read about the latest development in a number of recent papers.

This paper (Seo et al. EMNLP 2015) discusses the techniques that are used to interpret diagrams that accompany geometry problems, while this one (Hosseini et al. EMNLP 2014) talks about how to automatically parse verbs to interpret the meaning of a question. The 2015 TACL paper (Koncel-Kedziorski et al. 2015) completes this by discussing how to extract the relevant equations from a word problem. Once you have the equations, know what question is being asked, and can interpret any diagrams, you can do most high school math problems.

However, I don't think this is yet a fully reliable system. It is one part of a larger, long-running effort to create a program that can achieve higher education certifications in many subjects. You can see many projects related to this at the Allen Institute's website.

",16909,,2444,,12/7/2020 14:53,12/7/2020 14:53,,,,0,,,,CC BY-SA 4.0 7679,2,,7675,8/22/2018 15:42,,1,,"

Well, this is a relatively new problem, closely tied to Question Answering. One of the recent systems is EUCLID, which can answer this type of question on the public Dolphin algebra question set by using a tree transducer cascade approach.

This paper details the proposed model Hopkins, M., Petrescu-Prahova, C., Levin, R., Le Bras, R., Herrasti, A., & Joshi, V. (2017). Beyond sentential semantic parsing: Tackling the math sat with a cascade of tree transducers. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (pp. 795-804).

In the same vein, SemEval has released a task related to Math QA; you can see the related bibliography and referenced works for SemEval 2019 Task 10.

",17463,,,,,8/22/2018 15:42,,,,3,,,,CC BY-SA 4.0 7680,1,7681,,8/22/2018 18:06,,16,1751,"

I was reading the book Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto (complete draft, November 5, 2017).

On page 271, the pseudo-code for the episodic Monte-Carlo Policy-Gradient Method is presented. Looking at this pseudo-code, I can't understand why it seems that the discount rate appears 2 times, once in the update step and a second time inside the return. [See the figure below]

It seems that the returns for the steps after step 1 are just truncations of the return of the first step. Also, if you look just one page above in the book, you find an equation with just 1 discount rate (the one inside the return).

Why then does the pseudo-code seem to be different? My guess is that I am misunderstanding something:

$$ {\mathbf{\theta}}_{t+1} ~\dot{=}~\mathbf{\theta}_t + \alpha G_t \frac{{\nabla}_{\mathbf{\theta}} \pi \left(A_t \middle| S_t, \mathbf{\theta}_{t} \right)}{\pi \left(A_t \middle| S_t, \mathbf{\theta}_{t} \right)}. \tag{13.6} $$

",17565,,2444,,6/29/2019 11:59,7/6/2022 0:43,Why does the discount rate in the REINFORCE algorithm appear twice?,,4,0,,,,CC BY-SA 4.0 7681,2,,7680,8/22/2018 18:58,,9,,"

The discount factor does appear twice, and this is correct.

This is because the function you are trying to maximise in REINFORCE for an episodic problem (by taking the gradient) is the expected return from a given (distribution of) start state:

$$J(\theta) = \mathbb{E}_{\pi(\theta)}[G_t|S_t = s_0, t=0]$$

Therefore, during the episode, when you sample the returns $G_1$, $G_2$, etc., these will be less relevant to the problem you are solving, reduced by the discount factor a second time, as you noted. At the extreme, with an episodic problem and $\gamma = 0$, REINFORCE will only find an optimal policy for the first action.

In continuing problems, you would use different formulations for $J(\theta)$, and these do not lead to the extra factor of $\gamma^t$.
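To make the two appearances of $\gamma$ explicit, here is a minimal Python-style sketch of one episodic REINFORCE update along the lines of the book's pseudocode; grad_log_pi is a placeholder argument for however you compute $\nabla_\theta \ln \pi(a|s,\theta)$ for your parameterisation:

```python
def reinforce_update(theta, episode, alpha, gamma, grad_log_pi):
    """One episodic REINFORCE update, roughly following the book's pseudocode.

    episode: list of (state, action, reward) tuples, where reward is R_{t+1}.
    grad_log_pi(theta, state, action): gradient of log pi(action | state, theta).
    """
    T = len(episode)
    for t, (state, action, _) in enumerate(episode):
        # First appearance of gamma: inside the return G_t.
        G = sum(gamma ** (k - t) * episode[k][2] for k in range(t, T))
        # Second appearance of gamma: the extra gamma^t factor in the update,
        # which down-weights later time steps in the start-state objective.
        theta = theta + alpha * (gamma ** t) * G * grad_log_pi(theta, state, action)
    return theta
```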

",1847,,1847,,4/6/2021 13:46,4/6/2021 13:46,,,,0,,,,CC BY-SA 4.0 7682,2,,7680,8/22/2018 19:11,,6,,"

Neil's answer already provides some intuition as to why the pseudocode (with the extra $\gamma^t$ term) is correct.

I'd just like to additionally clarify that you do not seem to be misunderstanding anything, Equation (13.6) in the book is indeed different from the pseudocode.

Now, I don't have the edition of the book that you mentioned right here, but I do have a later draft from March 22, 2018, and the text on this particular topic seems to be similar. In this edition:

  • Near the end of page 326, it is explicitly mentioned that they'll assume $\gamma = 1$ in their proof for the Policy Gradient Theorem.
  • That proof eventually leads to the same Equation (13.6) on page 329.
  • Immediately below the pseudocode, on page 330, they actually briefly address the difference between the Equation and the pseudocode, saying that that difference is due to the assumption of $\gamma = 1$ in the proof.
  • Right below that, in Exercise 13.2, they give some hints as to what you should be looking at if you'd like to derive the modified proof for the case where $\gamma < 1$.
",1641,,,,,8/22/2018 19:11,,,,2,,,,CC BY-SA 4.0 7683,1,7686,,8/23/2018 3:48,,8,2863,"

I am reading the Deep Learning book by Goodfellow et al. I found it difficult to understand the difference between the definition of the hypothesis space and the representational capacity of a model.

In Chapter 5, it is written about hypothesis space:

One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to select as being the solution.

And about representational capacity:

The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model.

If we take the linear regression model as an example and allow our output $y$ to takes polynomial inputs, I understand the hypothesis space as the ensemble of quadratic functions taking input $x$, i.e $y = a_0 + a_1x + a_2x^2$.

How is it different from the definition of the representational capacity, where parameters are $a_0$, $a_1$ and $a_2$?

",17664,,2444,,1/22/2021 15:59,1/22/2021 15:59,What is the difference between hypothesis space and representational capacity?,,3,0,,,,CC BY-SA 4.0 7684,1,7754,,8/23/2018 7:17,,5,1793,"

Where can I find (more) pre-trained language models? I am especially interested in neural network-based models for English and German.

I am aware only of Language Model on One Billion Word Benchmark and TF-LM: TensorFlow-based Language Modeling Toolkit.

I am surprised not to find a greater wealth of models for different frameworks and languages.

",17670,,2444,,11/1/2019 3:18,11/1/2019 3:18,Where can I find pre-trained language models in English and German?,,2,0,,1/6/2022 12:03,,CC BY-SA 4.0 7685,1,,,8/23/2018 7:17,,7,1410,"

In the Trust-Region Policy Optimisation (TRPO) algorithm (and subsequently in PPO also), I do not understand the motivation behind replacing the log probability term from standard policy gradients

$$L^{PG}(\theta) = \hat{\mathbb{E}}_t[\log \pi_{\theta}(a_t | s_t)\hat{A}_t],$$

with the importance sampling term of the policy output probability over the old policy output probability

$$L^{IS}_{\theta_{old}}(\theta) = \hat{\mathbb{E}}_t \left[\frac{\pi_{\theta}(a_t | s_t)}{\pi_{\theta_{old}}(a_t | s_t)}\hat{A}_t \right]$$

Could someone please explain this step to me?

I understand once we have done this why we then need to constrain the updates within a 'trust region' (to avoid the $\pi_{\theta_{old}}$ increasing the gradient updates outwith the bounds in which the approximations of the gradient direction are accurate). I'm just not sure of the reasons behind including this term in the first place.

",17671,,2444,,11/5/2020 22:22,11/5/2020 22:22,Why is the log probability replaced with the importance sampling in the loss function?,,2,0,,,,CC BY-SA 4.0 7686,2,,7683,8/23/2018 7:32,,1,,"

Consider a target function $f: x \mapsto f(x)$.

A hypothesis refers to an approximation of $f$. A hypothesis space refers to the set of possible approximations that an algorithm can create for $f$. The hypothesis space consists of the set of functions the model is limited to learn. For instance, linear regression can be limited to linear functions as its hypothesis space, or it can be expanded to learn polynomials.

The representational capacity of a model determines its flexibility, i.e. its ability to fit a variety of functions (which functions the model is able to learn). It specifies the family of functions the learning algorithm can choose from when varying the parameters.
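As a concrete illustration (a minimal sketch using scikit-learn; the data and the polynomial degree are just example choices), widening the hypothesis space from linear to quadratic functions gives the same learning algorithm more representational capacity:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    x = np.linspace(-3, 3, 50).reshape(-1, 1)
    y = 1.0 + 2.0 * x.ravel() + 0.5 * x.ravel() ** 2   # a quadratic target function

    # Hypothesis space 1: linear functions y = a0 + a1*x
    linear_model = LinearRegression().fit(x, y)

    # Hypothesis space 2: quadratic functions y = a0 + a1*x + a2*x^2
    quadratic_features = PolynomialFeatures(degree=2).fit_transform(x)
    quadratic_model = LinearRegression().fit(quadratic_features, y)

    print(linear_model.score(x, y))                      # limited capacity, imperfect fit
    print(quadratic_model.score(quadratic_features, y))  # can represent the target exactly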

",8041,,2444,,8/12/2020 23:22,8/12/2020 23:22,,,,2,,,,CC BY-SA 4.0 7687,2,,7672,8/23/2018 7:43,,0,,"

I can think of 4 options:

  1. One option is to divide the data so that each data point has only one picture, with multiple data points per property and the structured data duplicated. Then calculate the forecasts and average the resulting predicted price over the data points belonging to one property. Here we implicitly assume that the quality of the pictures matters more than what is being photographed.

  2. The other option is to divide the pictures into categories: bedroom1, bathroom, bedroom2, kitchen, and so on. Then, for the missing pictures, use a black square. This way you will be able to have multiple pictures in one data point.

  3. The third option would be to reserve slots for the maximum number of pictures, fill them with whatever pictures are available, and fill the remaining slots with black squares/NaNs.

  4. The best option would probably be to combine options two and three, so that you have categories and a maximum number of pictures per category. For example, if the maximum is 4 pictures of the first bathroom, you will have four slots for bathroom1:

    Bathroom1_1, Bathroom1_2, Bathroom1_3, Bathroom1_4, Bedroom1_1, Bedroom1_2, Bedroom1_3, Bedroom2_1, Bedroom2_2 ... and so on.

Gradient boosting algorithms are very good at handling missing values, so LightGBM and XGBoost as the last layers could give good results.

So in the end you can check all the variants and even do ensembles from the variants that give the best results.

",13178,,,,,8/23/2018 7:43,,,,5,,,,CC BY-SA 4.0 7689,2,,7667,8/23/2018 10:38,,2,,"

The derivative of the sigmoid approaches 0 as its output approaches 0 or 1, because the curve flattens out at both extremes. The technique you are referring to is called label smoothing, which is used in various applications (e.g. GANs), but I can see how it would be applicable here as well, by keeping the targets away from the saturated regions and thus avoiding near-zero gradients that stall learning.
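As a quick numerical illustration (a minimal sketch; the example inputs are arbitrary):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_grad(z):
        s = sigmoid(z)
        return s * (1.0 - s)   # derivative of the sigmoid w.r.t. its input

    for z in [0.0, 2.0, 6.0, 12.0]:
        print(z, sigmoid(z), sigmoid_grad(z))
    # As the output approaches 1 (or 0), the gradient approaches 0, so weight
    # updates flowing back through this unit become vanishingly small.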

To answer your third question, sigmoids are rarely used in the intermediate layers of networks these days. They have been replaced by ReLUs (or variants of ReLU) for this exact reason - sigmoids are prone to causing 'vanishing gradients' where the gradient becomes 0 and therefore does not get backpropagated any further. ReLUs alleviate this problem by always providing a gradient of 1 for positive input values.

",17671,,,,,8/23/2018 10:38,,,,0,,,,CC BY-SA 4.0 7690,1,,,8/23/2018 11:01,,4,263,"

In the context of autonomous driving, two main stages are typically implemented: an image processing stage and a control stage. The first aims at extracting useful information from the acquired image, while the second uses that information to control the vehicle.

As far as the processing stage is concerned, semantic segmentation is typically used. The input image is divided into different areas with a specific meaning (road, sky, car, etc.). Here is an example of semantic segmentation:

The output of the segmentation stage is very complex. I am trying to understand how this information is typically used in the control stage, and how to use the information on the segmented areas to control the vehicle.

For simplicity, let's just consider a vehicle that has to follow a path.

TL;DR: what are the typical control algorithms for autonomous driving based on semantic segmentation?

",16671,,16671,,2/18/2019 8:26,6/14/2019 20:03,Self-driving control logic based on semantic segmentation,,1,0,,,,CC BY-SA 4.0 7692,2,,7618,8/23/2018 20:22,,0,,"

Thanks to Dennis Soemers for helping to explain and unpack. I'm still in the early stages of approaching the function, so any thoughts on my current line of thinking would be appreciated.

-----------------------------------------

There are two conditions that need to be separated: the number of branches leading to a given node (raw probability) and the number of branches leading from a given node (outdegree, which I like to think of as ""chaotictivity"";)

Raw probability here is the indegree of all nodes leading to the evaluated node.

Aggregate indegree seems appropriate for a basic time discounting function because it reflects the unqualified probability of any given future state or position.

My thought is that I can use it to ""normalize"" the reward distributions. An expressed position $p_{x_1}$ yields an aggregate of $30R$ over 20 potential positions $p$; $p_{x_2}$ yields an aggregate of $15R$ over 10 $p$: $$\frac{30}{20} = \frac{15}{10}$$

This ""squeezes"" the R sum for a given ply into a single number.

The fractions below represent: $\frac{choice}{choices}$

$$\left \{\frac{1}{4} \right \}$$ $$\left \{ \frac{1}{2} | \frac{1}{2} | \frac{1}{2} |\frac{1}{2} \right \}$$ $$\left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \} | \left \{ \frac{1}{1} | \frac{1}{1} \right \}$$

The $\beta$ for any given node on the final ply would be: $$\frac{1}{4} * \frac{1}{2} = \frac{1}{8} = .125$$

Fractions are attractive because integers may be used until the $\beta$ needs to be expressed to whatever digit, and the distinct cardinalities of the factors are maintained for ancillary evaluation.

This function should allow the automata to do things like make the choice with the greatest certainty of the least maximal potential downside.

(If greater uncertainty is desirable, the automata can pursue more chaotic nodes, where aggregate least maximal downside is equivalent. In that ""personalities"" can be desirable in automata that play games with humans, there might be a maximax ""gambler"" persona that seeks chaos;)

The function can also be used to evaluate individual target nodes, and modified by any number of additional functions for ""tuning"".

I'm thinking about how I might use the degree of variance between the most probable and least probable node for a given ply. If the least probable node is $\frac{1}{20}$, and most probable node for a given ply is $\frac{1}{5} = \frac{4}{20}$, the variance is $\frac{3}{20} = .15$

This is initially designed for partisan sudoku. I'm thinking it is worthwhile to utilize the native $R$, which is an integer value between $\{-m^n, .., 0, .., m^n\}$ for an $(m^n)^n$ sudoku grid, as it segregates gains and losses.

-------------------------------

EXPERIMENT DESIGN

Partisan sudoku is an attractive model for gauging relative strength of automata because outcomes are a set of ratios.

In the 2-player game where the number of regions in the sudoku is even, the game is perfectly symmetrical and neither player has a material, positional, or turn number advantage. Because the game is intractable on higher order grids, perfectly symmetrical games will not necessarily result in perfect ties because optimality of a given position is only presumed. (The second player can mirror the starting player for a perfect tie, where all outcome ratios are equal, or employ symmetry breaking if it perceives an opponent's position as less optimal than alternate positions.)

In 2-player games with an odd number of regions, matched sets of games may be employed with the automata alternating as starting player.

My initial plan is to evaluate individual heuristics, where one agent employs the branch discount $\beta$, and one agent does not.

",1671,,1671,,9/7/2018 20:43,9/7/2018 20:43,,,,0,,,,CC BY-SA 4.0 7693,1,,,8/23/2018 20:42,,1,149,"

A linear activation function (or none at all) should only be used when the relation between input and output is linear. Why doesn't the same rule apply for other activation functions? For example, why doesn't sigmoid only work when the relation between input and output is ""of sigmoid shape""?

",17488,,2444,,11/12/2018 20:20,1/7/2019 20:14,Why do non-linear activation functions not require a specific non-linear relation between its inputs and outputs?,,2,1,,,,CC BY-SA 4.0 7695,1,,,8/24/2018 4:06,,3,683,"

Recent advances in deep learning and dedicated hardware have made it possible to detect images with much better accuracy than ever. Neural networks are the gold standard for computer vision applications and are used widely in industry, for example, in internet search engines and autonomous cars. In real-life problems, an image contains regions with different objects, so it is not enough to classify the whole picture; we also need to identify the elements within it.

A while ago, an alternative to the well-known sliding window algorithm was described in the literature, called Region Proposal Networks. It is basically a convolutional neural network extended with a component that proposes candidate regions.

Problem that I am trying to solve:

In a given video frame, I want to pick some regions of interest (literally) and perform classification on those regions.

How is it currently implemented

  1. Capture the video frame
  2. Split the video frame into multiple images each representing a region of interest
  3. Perform image classification(inference) on each of the image (corresponding to a part of the frame)
  4. Aggregate the results of #3

Problem with the current approach

Multiple inferences per frame.

Question

I am looking for a solution where I specify the locations of interest in a frame, and the inference task, be it object detection or image classification, is performed only on those regions. Can you please point me to the references which I need to study (or use) to do this?

",17688,,,user11571,8/24/2018 8:25,1/2/2023 0:09,Alternative to sliding window neural network (was: Object detect (or) image classification at specific locations in the frame),,2,1,,,,CC BY-SA 4.0 7696,1,,,8/24/2018 8:16,,6,193,"

I wrote a simple feed-forward neural network that plays tic-tac-toe:

  • 9 neurons in the input layer: 1 - my sign, -1 - opponent's sign, 0 - empty;
  • 9 neurons in hidden layer: value calculated using ReLU;
  • 9 neurons in output layer: value calculated using softmax;

I am using an evolutionary approach: 100 individuals play against each other (all-play-all). The top 10 best are selected to mutate and reproduce into the next generation. The fitness score is calculated as: +1 for a legal move (the network can try to place its sign on an already occupied tile), +9 for a victory, -9 for a defeat.

What I notice is that the network's fitness keeps climbing up and falling back down again. It seems that my current approach only evolves certain patterns of placing signs on the board, and, once a random mutation interrupts the current pattern, a new one emerges. My network goes in circles without ever evolving an actual strategy. I suspect the solution would be to pit the network against a tic-tac-toe AI, but is there any way to evolve an actual strategy just by making it play against itself?

",17693,,2444,,12/4/2021 9:12,12/4/2021 9:12,Why does the fitness of my neural network to play tic-tac-toe keep oscillating?,,2,9,,,,CC BY-SA 4.0 7698,2,,7685,8/24/2018 10:08,,6,,"

I am not 100% sure if the following is the only/complete story, but I'm quite confident it's at least part of the story:

In the PPO paper, after describing the standard policy gradient objective $L^{PG}$, they mention the following:

While it is appealing to perform multiple steps of optimization on this loss $L^{PG}$ using the same trajectory, doing so is not well-justified, and empirically it often leads to destructively large policy updates

This is because, as soon as you've performed one update using a trajectory generated with the previous policy, you land in an off-policy situation; the experience gained in that trajectory is no longer representative of your current policy, and all the estimators (like the advantage estimator) technically become incorrect.

With importance sampling, you can correct for this. This is also commonly used in multi-step off-policy value learning algorithms. Intuitively, the importance sampling term emphasizes estimates of advantage $\hat{A}_t$ corresponding to actions $a_t$ that have become more likely in the new policy relative to the old policy, and it de-emphasizes advantages corresponding to actions that have already become less likely in the new policy relative to the old policy.

If an action $a_t$ in the old trajectory has already become highly unlikely since that trajectory of experience was generated, we have $\pi_{\theta} (a_t \vert s_t) < \pi_{\theta_{\text{old}}} (a_t \vert s_t)$, which means that $\frac{\pi_{\theta} (a_t \vert s_t)}{\pi_{\theta_{\text{old}}} (a_t \vert s_t)}$ becomes close to $0$, which means that we'll reduce the influence of that particular chunk of experience on our subsequent updates. This makes sense because, due to previous updates since the generation of that trajectory, that particular part of the trajectory has already become highly unlikely anyway, and should therefore no longer be relevant for our updates.

The ability to perform multiple updates using the same (old) trajectory anyway is useful because this increases sample-efficiency, we can re-use the same samples of experience more than once rather than using them once and then discarding them again.
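To make this concrete, here is a minimal sketch of how the importance sampling ratio enters the (clipped) PPO surrogate loss, assuming the old log-probabilities were stored when the trajectory was generated. This is PyTorch-style illustration code; the tensor names and the clipping constant are just example choices.

    import torch

    def ppo_surrogate_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
        # Importance sampling ratio pi_theta(a|s) / pi_theta_old(a|s),
        # computed in log space for numerical stability
        ratio = torch.exp(new_log_probs - old_log_probs)
        # L^IS: re-weight stored advantages by how much more/less likely
        # each action has become under the current policy
        unclipped = ratio * advantages
        # PPO's clipping keeps repeated updates on the same old trajectory
        # from moving the policy too far away from the old one
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        return -torch.min(unclipped, clipped).mean()   # minimise the negative objective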

",1641,,1641,,5/2/2019 18:48,5/2/2019 18:48,,,,0,,,,CC BY-SA 4.0 7700,2,,3758,8/24/2018 15:02,,5,,"

The value $Q(s', ~\cdot~)$ should always be implemented to simply be equal to $0$ for any terminal state $s'$ (the dot instead of an action as second argument there indicates that what I just wrote should hold for any action, as long as $s'$ is terminal).

It is easier to understand why this should be the case by dissecting what the different terms in the update rule mean:

$$Q(s, a) \gets \color{red}{Q(s, a)} + \alpha \left[ \color{blue}{r + \gamma Q(s', a')} - \color{red}{Q(s, a)} \right]$$

In this update, the red term $\color{red}{Q(s, a)}$ (which appears twice) is our old estimate of the value $Q(s, a)$ of being in state $s$ and executing action $a$. The blue term $\color{blue}{r + \gamma Q(s', a')}$ is a different version of estimating the same quantity $Q(s, a)$. This second version is assumed to be slightly more accurate, because it is not ""just"" a prediction, but it's a combination of:

  • something that we really observed: $r$, plus
  • a prediction: $\gamma Q(s', a')$

Here, the $r$ component is the immediate reward that we observed after executing $a$ in $s$, and then $Q(s', a')$ is everything we expect to still be collecting afterwards (i.e., after executing $a$ in $s$ and transitioning to $s'$).

Now, suppose that $s'$ is a terminal state, what rewards do we still expect to be collecting in the future within that same episode? Since $s'$ is terminal, and the episode has ended, there can only be one correct answer; we expect to collect exactly $0$ rewards in the future.
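In an implementation, this is typically handled with a `done` flag, so that the bootstrap term is simply dropped for terminal transitions (a minimal sketch with a tabular Q stored in a dictionary; all names are illustrative):

    def sarsa_update(Q, s, a, r, s_next, a_next, done, alpha, gamma):
        """One SARSA update; Q is e.g. a dict mapping (state, action) -> value."""
        if done:
            target = r   # Q(s', .) is defined to be 0 for any terminal s'
        else:
            target = r + gamma * Q[(s_next, a_next)]
        Q[(s, a)] += alpha * (target - Q[(s, a)])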

",1641,,1641,,8/24/2018 15:55,8/24/2018 15:55,,,,0,,,,CC BY-SA 4.0 7701,1,7702,,8/24/2018 17:34,,10,3653,"

For example, AFAIK, the pooling layer in a CNN is not differentiable, but it can be used because it's not learning. Is it always true?

",17358,,2444,,12/23/2021 22:22,12/23/2021 22:22,"Can non-differentiable layer be used in a neural network, if it's not learned?",,1,0,,,,CC BY-SA 4.0 7702,2,,7701,8/24/2018 18:58,,8,,"

It is not possible to backpropagate gradients through a layer with non-differentiable functions. However, the pooling layer function is differentiable*, and usually trivially so.

For example:

  • If an average pooling layer has inputs $z$ and outputs $a$, and each output is average of 4 inputs then $\frac{da}{dz} = 0.25$ (if pooling layers overlap it gets a little more complicated, but you just add things up where they overlap).

  • A max pooling layer has $\frac{da}{dz} = 1$ for the maximum z, and $\frac{da}{dz} = 0$ for all others.

A pooling layer usually has no learnable parameters, but if you know the gradient of a function at its outputs, you can assign gradient correctly to its inputs using the chain rule. That is essentially all that back propagation is, the chain rule applied to the functions of a neural network.
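To illustrate how simple these gradients are, here is a minimal NumPy sketch of a non-overlapping 2x2 average-pooling layer together with its backward pass (assuming the input height and width are divisible by 2):

    import numpy as np

    def avg_pool_2x2_forward(x):
        """x has shape (H, W) with H, W divisible by 2; returns (H/2, W/2)."""
        h, w = x.shape
        return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def avg_pool_2x2_backward(grad_out):
        """Each input contributed 1/4 to its output, so da/dz = 0.25 everywhere."""
        return np.repeat(np.repeat(grad_out, 2, axis=0), 2, axis=1) * 0.25

    x = np.arange(16.0).reshape(4, 4)
    y = avg_pool_2x2_forward(x)
    dx = avg_pool_2x2_backward(np.ones_like(y))   # every entry is 0.25

A max-pooling backward pass is equally simple: route the incoming gradient to the input that was the maximum and give zero to the rest.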

To answer your question more directly:

Can non-differentiable layer be used in a neural network, if it's not learned?

No.

There is one exception: If this layer appears directly after the input, then as it has no parameters to learn, and you generally do not care about the gradient of the input data, so you can have a non-differentiable function there. However, this is just the same as transforming your input data in some non-differentiable way, and training the NN with that transformed data instead.


* Technically there are some discontinuities in the gradient of a max function (where any two inputs are equal). However, this is not a problem in practice, as the gradients are well behaved close to these values. When you can safely do this or not is probably the topic of another question.

",1847,,1847,,1/8/2019 20:12,1/8/2019 20:12,,,,7,,,,CC BY-SA 4.0 7703,1,,,8/24/2018 19:05,,3,82,"

Sutton and Barto 2018 define the discounted return $G_t$ the following way (p 55): $G_t \doteq R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \dots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$.

Is my interpretation correct?

Or should all ""1"" be in the same column?

",17703,,2444,,2/16/2019 2:47,2/16/2019 2:47,Is my interpretation of the return correct?,,1,0,,,,CC BY-SA 4.0 7704,2,,7703,8/24/2018 19:11,,3,,"

Your table is almost correct. It is a minor difference, you should not have a $R_0$, the top row, leftmost column of numbers should be empty. That is because the first reward is $R_1$ (a result of taking action $A_0$ in state $S_0$). The alignment of the columns on the right hand side is correct though.

It might help to add the time step number at the top. But the important detail is that $G_t$ is a measure of all future rewards.

For instance it should always be zero when you reach a terminal state, which is what your example shows. Whilst it is quite common to receive a reward at the end of an episode (i.e. whilst arriving at a terminal state), also as your example shows.

The decision to have the reward time step match that of the next state is a convention that can be altered. A few RL sources, but not Sutton & Barto, will have the reward on the same time step as the state and action that decided it, and thus $R_0$ will exist. The reward of 1 for reaching the terminal state would then be 1 time step earlier in your table, and there would be no $R_4$. The definition of $G_t$ would need to change to match ($G_t = R_t + \gamma G_{t+1}$), as well as other equations. That would change your table also - the reward sequence (top row) would shift to the left.
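For completeness, here is a minimal sketch of computing the returns backwards under the Sutton & Barto convention ($G_t = R_{t+1} + \gamma G_{t+1}$); the reward list and discount value are just example numbers:

    def returns_from_rewards(rewards, gamma):
        """rewards = [R_1, R_2, ..., R_T]; returns [G_0, G_1, ..., G_{T-1}]."""
        G = 0.0
        returns = []
        for r in reversed(rewards):
            G = r + gamma * G            # G_t = R_{t+1} + gamma * G_{t+1}
            returns.append(G)
        return list(reversed(returns))

    # e.g. rewards [0, 0, 0, 1] with gamma = 0.5 gives G_0..G_3 = [0.125, 0.25, 0.5, 1.0]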

",1847,,1847,,8/25/2018 7:02,8/25/2018 7:02,,,,0,,,,CC BY-SA 4.0 7705,1,7706,,8/24/2018 20:21,,3,146,"

I am confused about the definition of the optimal value ($V^*$) and optimal action-value (Q*) in reinforcement learning, so I need some clarification, because some blogs I read on Medium and GitHub are inconsistent with the literature.

Originally, I thought the optimal action value, $Q^*$, represents you performing the action that maximizes your current reward, and then acting optimally thereafter.

And I thought the optimal value, $V^*$, was the average of the $Q$ values in that state, meaning that, if you're in this state, the average "goodness" is this.

For example: If I am in a toy store and I can buy a pencil, yo-yo, or Lego.

Q(toy store, pencil) = -10
Q(toy store, yo-yo) = 5
Q(toy store, Lego) = 50

And therefore my $Q^* = 50$

But my $V^*$ in this case is:

V* = (-10 + 5 + 50) / 3 = 15

Representing no matter what action I take, the average future projected reward is $15$.

And for the advantage of learning, my baseline would be $15$. So anything less than $0$ is worse than average and anything above $0$ is better than average.

However, now I am reading about how $V^*$ actually assumes the optimal action in a given state, meaning $V^*$ would be 50 in the above case.

I am wondering which definition is correct.

",17706,,2444,,11/1/2020 12:17,11/1/2020 12:20,"In reinforcement learning, does the optimal value correspond to performing the best action in a given state?",,1,0,,,,CC BY-SA 4.0 7706,2,,7705,8/24/2018 20:39,,3,,"

I am wondering which definition is correct.

The asterisk * in both the definitions stands for "optimal" in the sense of "value when following the optimal policy"

So this one is correct:

$V^*$ actually assumes the optimal action in a given state, meaning $V^*$ would be $50$ in the above case

However, you have got the definition of Q slightly wrong.

I think this is because you are omitting the parameters.

The state value function uses the state as a parameter, $V_{\pi}(s)$, it returns the value of being in state $s$ and following a fixed policy $\pi$. The * is used to denote following an optimal policy.

The action value function has two parameters - a state and an action that is possible in that state, $Q_{\pi}(s, a)$, it returns the value of being in state $s$, taking action $a$ (regardless of whether it is the best action or not) and following the policy $\pi$ after that point.

Your assertion in the question:

And therefore my $Q^* = 50$

is wrong, or rather not meaningful, as you have not stated the parameters. You already list all the possible values of Q with the parameters. You could say $\text{max}_a Q(\text{toy store}, a) = 50$, or to choose the best action $\pi(\text{toy store}) = \text{argmax}_a Q(\text{toy store}, a) = \text{Lego}$
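In code, the relationship for your toy store example looks like this (a minimal sketch; the numbers are the ones from the question):

    Q_star = {
        ("toy store", "pencil"): -10,
        ("toy store", "yo-yo"): 5,
        ("toy store", "Lego"): 50,
    }

    def v_star(state, actions):
        # V*(s) = max_a Q*(s, a): the value of the state when acting optimally
        return max(Q_star[(state, a)] for a in actions)

    def pi_star(state, actions):
        # The optimal (greedy) policy picks the argmax action
        return max(actions, key=lambda a: Q_star[(state, a)])

    actions = ["pencil", "yo-yo", "Lego"]
    print(v_star("toy store", actions))    # 50
    print(pi_star("toy store", actions))   # 'Lego'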

",1847,,2444,,11/1/2020 12:20,11/1/2020 12:20,,,,2,,,,CC BY-SA 4.0 7707,1,,,8/24/2018 21:11,,7,1064,"

I've built a deep deterministic policy gradient reinforcement learning agent to be able to handle any games/tasks that have only one action. However, the agent seems to fail horribly when there are two or more actions. I tried to look online for any examples of somebody implementing DDPG on a multiple-action system, but people mostly applied it to the pendulum problem, which is a single-action problem.

My current system has 3 states and 2 continuous control actions (one adjusts the temperature of the system, the other adjusts a mechanical position; both are continuous). However, I froze the second continuous action to be the optimal action all the time, so RL only has to manipulate one action. It solves within 30 episodes. However, the moment I allow the RL to try both continuous actions, it doesn't even converge after 1000 episodes. In fact, it diverges aggressively. The output of the actor network seems to always be the max action, possibly because I am using a tanh activation for the actor to provide an output constraint. I added a penalty on large actions, but it does not seem to work for the case with 2 continuous control actions.

For my exploratory noise, I used Ornstein-Ulhenbeck noise, with means adjusted for the two different continuous actions. The mean of the noise is 10% of the mean of the action.

Is there any massive difference between single action and multiple action DDPG?

I changed the reward function to take into account both actions, have tried making a bigger network, tried priority replay, etc., but it appears I am missing something.

Does anyone here have any experience building a multiple-action DDPG and could give me some pointers?

",17706,,2444,,3/28/2021 1:32,3/28/2021 1:32,Is there a difference in the architecture of deep reinforcement learning when multiple actions are performed instead of a single action?,,0,7,,,,CC BY-SA 4.0 7708,2,,7518,8/25/2018 0:37,,2,,"

I'm going to start by trying to restate your problem as I understand it.

  1. You have a game which contains weapons.
  2. Weapons are characterized by 5 different numbers, which can range over different values (1-5 in your examples?).
  3. You have a way to simulate combat involving the two weapons.
  4. The combat is random, but can be repeated many times. An average win rate can be determined.
  5. You are looking for an AI algorithm that would take in a lot of pairs of statistics, along with the average win rates for one over the other, and give you insight into how to make the average win rate as close to 50% as possible.

If this sounds right, then fundamentally your problem is a form of regression, which is something you could use AI for, but probably don't need to. However, your problem is probably not linear, so you need the interactions between the features. Here's what I suggest:

For each pair of weapons, store a comma-separated list consisting of the stats for each weapon (one by one), followed by wins1 - wins2 (call this last column dist). At the top, list out the names of each attribute, separated by commas (e.g. weapon1Str, weapon1Range, ..., dist). Then use a language like R that has simple support for complex forms of regression.

In R, this is then as simple as:

# Read the CSV of weapon stats and win-rate differences
data <- read.csv(file="Myfile.csv")
# Fit all main effects plus all pairwise interactions (the .*. formula)
lm(formula = dist ~ .*., data = data)

This should produce a list of ""coefficients"", one for each of the attributes, and one for the interaction between each pair of attributes, which form a lengthy quadratic equation in 10 variables.

Any zero of that equation should be a pair of weapons that minimizes this difference.

That's probably the place to start. If it doesn't work out, maybe come post a different question and we can help more.

",16909,,,,,8/25/2018 0:37,,,,0,,,,CC BY-SA 4.0 7709,2,,7624,8/25/2018 4:00,,2,,"

This is an important question – maybe the most important of all – for the research field of Artificial Intelligence. I mean, if AI is science, then its experiments will be empirically testable. There has to be a way to decide pass or fail. So what are the tests for intelligence? Before you even design a test, you need a clear idea of what intelligence amounts to, otherwise how could you design a competent test for it?

Sure, I'm part of the research and development project known as Building Watertight Submarines, and sure, I'm totally confident my submarine is watertight, but I have no idea how to test whether it is or not because I don't know what "watertight" means. This whole idea is absurd. But ask AI what "intelligence" means. The answers you get, on analysis, are almost the same as the submarine example.

Base Answer - Behavior

The word (idea, concept) "Intelligence" is usually defined by AI in terms of behavior. I.e. the Turing test approach. A machine is intelligent if it behaves in a way that, were a human to behave in that same way, the human would be said to be performing an action that required human intelligence.

Problem 1: player pianos are intelligent. Playing a Scott Joplin tune obviously requires intelligence in a human.

Problem 2. If a machine passes the test, it only shows that the machine is "intelligent" for the tested behaviors. What about untested behaviors? This is actually a life-and-death problem today with self-driving vehicle AI control systems. The AI systems are acceptably good at driving a car (which obviously requires human intelligence) in specific environments, e.g. freeways with well-marked lanes, no tight corners, and a median barrier separating the two directions. But the systems go disastrously wrong in "edge cases" – unusual situations.

Problem 3. Who would put their child on a school bus driven by a robot that had passed the Turing test for driving school buses? What about a storm when a live power line falls across the road? Or a twister in the distance is coming this way? What about a thousand other untested possibilities? A responsible parent would want to know (a) what are the principles of the internal processes and structures of human intelligence, and (b) that the digital bus driver had adequately similar internal processes and structures – i.e., not behavior but the right inner elements, the right inner causation.

Desired answer – inner principles

I would want to know that the machine was running the right inner processes and that it was running these processes (algorithms) on the right inner (memory) structures. Problem is, no one seems to know what the right inner processes and structures of human intelligence are. (A huge problem to be sure – but one that hasn't held AI back – or self-driving system developers - one bit.) The implication of this is that what AI ought to be doing now is working out what are the inner processes and structures of human intelligence. But it's not doing this – rather, it's commercializing its flawed technology.

Elements of a definition – 1. Generalization

We do know some things about human intelligence. Some tests really do test whether a machine has certain properties of the human mind. One of these properties is generalization. In his 1950 paper, Turing, as a sort of joke, gave a really good example of conversational generalization: (The witness is the machine.)

Interrogator: In the first line of your sonnet which reads 'Shall I compare thee to a summer's day', would not 'a spring day' do as well or better?

Witness: It wouldn't scan.

Interrogator: How about 'a winter's day' That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter's day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

Current AI has nothing that comes even remotely near being able to generalize like this. Failure to generalize is regarded as perhaps the greatest failing of current AI. The ability to generalize would be one part of an adequate definition of "intelligence". But what generalization amounts to would need to be explicated.

The problem of generalization is also behind several of the severe philosophical objections to AI theory, including the frame problem, the problem of common-sense knowledge, and the problem of combinatorial explosion.

Elements of a definition – 2. Perception

Sensory perception is fairly obviously fundamental to human learning and intelligence. Data (in some form) is emitted by the human senses then processed by the central system. In the computer, binary values exit the digital sensor and travel to the machine. However, nothing in the values themselves indicates what was sensed. Yet the only thing the computer gets is the binary values. How could the machine ever come to know what is sensed? (The classic Chinese room argument problem.)

So another element of human-like intelligence is the ability to perceive in a human-like way. What "human-like way" means here is that the machine processes sensory input using the same principles that apply in human perception. The problem is that no one seems to know how a semantics (knowledge) can be built from the data emitted by digital sensors (or organic senses). But still, human-like perception needs to be an element of an adequate definition of "intelligence".

Once AI gets these two issues sorted out – generalization and perception – then it will probably, hopefully, be well on the way to realizing its original goal of almost 70 years past – building a machine with (or that could acquire) a human-like general intelligence. And maybe the principles of generalization and the principles of perception are one and the same. And maybe there is actually only one principle. It shouldn't be assumed that the answers are complex. Sometimes the hardest things to understand are the most simple.

So the question "What do we mean when we say "intelligence"? is really important to AI. And the conclusion is that AI ought to replace its current behavioral definition of "intelligence" with one that includes the human elements of generalization and perception. And then get on and try to work out the operating principles, or principle, of both of these.

",17709,,2444,,1/24/2021 19:13,1/24/2021 19:13,,,,2,,,,CC BY-SA 4.0 7712,2,,7684,8/25/2018 13:08,,0,,"

This will depend to some extent on what you want to do with the language models.

Some possible resources are:

TensorFlow offers 3 pre-trained language models in the research package.

Caffe's Model Zoo has a single pre-trained model that does video -> captions.

Other packages like Caffe2 offer pre-trained models, but the documentation does not suggest any of them are suitable for language.

Failing this, a good approach might be to email the authors of a paper that adopts an approach you like. Some (but far from all) researchers will be happy to share their models, which you can then use as a starting point for your own.

",16909,,,,,8/25/2018 13:08,,,,3,,,,CC BY-SA 4.0 7715,1,,,8/25/2018 21:24,,4,77,"

I’m training a network to do image classification on zoo animals.

I’m a software engineer and not an ML expert, so I’ve been retraining Google’s Inception model and the latest models is trained using Google AutoML Vision.

The network performs really well, but I have trouble with images of animals that I don’t want any labels for. Basically I would like images of those animals to be classified as unknowns or achieve low scores.

I do have images of the animals that I don't want labels for, and I tried putting them all into one “nothing” label, together with images I've collected of the animals' habitats without any animals. This doesn't really yield any good results, though. The network performs well for the labeled animals, but ends up assigning one of those labels to the other animals as well, usually with a really high score.

I have 14 labels and 10,000 images. I should also mention that the “nothing” label ends up having a lot of images compared to the actual labels. Those images are not included in the 10,000.

Are there any tricks to achieve better results with this? Should I create multiple labels for the images in the “nothing” category, maybe?

",17725,,,,,8/28/2018 1:47,Optimizing image recognition results for unknown labels,,1,1,,,,CC BY-SA 4.0 7717,1,7718,,8/26/2018 8:49,,3,164,"

In A3C, there are several child processes and one master process. The child processes calculate the loss and the gradients via backpropagation, and the master process sums them up and updates the parameters, if I understand it correctly.

But I wonder how I should decide how many child processes to use in an implementation. I think that the more child processes there are, the better the correlation between samples is broken up, but I'm not sure what the cons of setting a large number of child processes are.

Maybe the more child processes there are, the larger the variance of the gradient, leading to instability in learning? Or is there some other reason?

And finally, how should I decide the number of child processes?

",7402,,2444,,1/23/2021 14:12,1/23/2021 14:12,What is the pros and cons of increasing and decreasing the number of worker processes in A3C?,,1,0,,,,CC BY-SA 4.0 7718,2,,7717,8/26/2018 11:20,,3,,"

The correct number of child processes will depend on the hardware available to you.

Simplifying a bit, child processes can be in one of two states: waiting for memory or disk access, or running.

If your problem fits nicely in your computer's memory, then processes will spend almost all of their time running. If it's too big for memory, they will periodically need to wait for disk.

You should use approximately 1 Child process per CPU core. If you are training on a GPU, then it depends whether the process can make use of the entire GPU at once (in which case, use just 1), or whether a ""process"" is really more like a CUDA thread here (in which case you'd want one per CUDA core).

If you think your processes will wait for disk, use more than one per core. About 50% more is a good starting point. You can use a program like top to monitor CPU usage and adjust the number of processes accordingly.
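As a rough starting point in Python, you can derive the worker count from the number of available cores and adjust from there (a minimal sketch; the 1.5 multiplier is just the "50% more" rule of thumb above):

    import os

    def suggested_worker_count(io_bound=False):
        cores = os.cpu_count() or 1
        # One worker per core if mostly compute-bound; roughly 50% more if the
        # workers regularly wait for disk or other I/O.
        return int(cores * 1.5) if io_bound else cores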

To answer your question more explicitly:

  • Having more child processes (up to a point, discussed above) will increase hardware utilization and make your training run faster. With a Core i7 CPU, for instance, you might be able to run 8 or 16 child processes at a time, so you'd train 8-16 times faster.
  • Having more child processes than processing units (CPU cores, CUDA cores), will begin to cause frequent context switching, where the processing units have to pause to change between different jobs. Changing jobs is extremely expensive, and ultimately, your program cannot train faster than it would by using all the available hardware. If you have more processes than processing units, reducing the number should make your program train faster.
",16909,,,,,8/26/2018 11:20,,,,0,,,,CC BY-SA 4.0 7720,2,,7715,8/26/2018 21:08,,2,,"

Welcome to AI.SE @Stromgren!

A likely explanation is that the animals in the ""nothing"" group do not have much in common with each other.

This means it will be difficult for the network to learn which features from the images are associated with that label (in fact, there aren't any!). As a result, the network is probably assigning very low confidence to any estimate for the nothing label. You should be able to check if this is the case (i.e. examine whether it is ever confident about the label ""nothing"").

I am not completely sure how the Inception network encodes its labels. A common scheme though is to use one output neuron for each class. The correct label for an image is thus always a ""1-hot"" vector, where one of the elements is set to 1, and the others all to zero.

If that representation is being used, you can incorporate the ""nothing"" data by labelling it with a vector that is all zero: none of the output neurons should activate for it. That would produce precisely the training signal you want.

",16909,,16909,,8/28/2018 1:47,8/28/2018 1:47,,,,0,,,,CC BY-SA 4.0 7721,1,7725,,8/27/2018 1:58,,18,16404,"

In the paper Deep Recurrent Q-Learning for Partially Observable MDPs, the author processed the Atari game frames with an LSTM layer at the end. My questions are:

  • How does this method differ from the experience replay, as they both use past information in the training?

  • What's the typical application of both techniques?

  • Can they work together?

  • If they can work together, does it mean that the state is no longer a single state but a set of contiguous states?

",17365,,2444,,4/22/2019 12:08,10/22/2019 16:23,How does LSTM in deep reinforcement learning differ from experience replay?,,1,0,,,,CC BY-SA 4.0 7723,1,,,8/27/2018 7:32,,4,1854,"

I am using the PPO algorithm implemented by tensorforce: https://github.com/reinforceio/tensorforce . It works great and I am very happy with the results.

However, I notice that there are many metaparameters available to give to the PPO algorithm:

 # the tensorforce agent configuration ------------------------------------------
    network_spec = [
        dict(type='dense', size=256),
        dict(type='dense', size=256),
    ]

    agent = PPOAgent(
        states=environment.states,
        actions=environment.actions,
        network=network_spec,
        # Agent
        states_preprocessing=None,
        actions_exploration=None,
        reward_preprocessing=None,
        # MemoryModel
        update_mode=dict(
            unit='episodes',
            # 10 episodes per update
            batch_size=10,
            # Every 10 episodes
            frequency=10
        ),
        memory=dict(
            type='latest',
            include_next_states=False,
            capacity=200000
        ),
        # DistributionModel
        distributions=None,
        entropy_regularization=0.01,
        # PGModel
        baseline_mode='states',
        baseline=dict(
            type='mlp',
            sizes=[32, 32]
        ),
        baseline_optimizer=dict(
            type='multi_step',
            optimizer=dict(
                type='adam',
                learning_rate=1e-3
            ),
            num_steps=5
        ),
        gae_lambda=0.97,
        # PGLRModel
        likelihood_ratio_clipping=0.2,
        # PPOAgent
        step_optimizer=dict(
            type='adam',
            learning_rate=1e-3
        ),
        subsampling_fraction=0.2,
        optimization_steps=25,
        execution=dict(
            type='single',
            session_config=None,
            distributed_spec=None
        )
    )

So my question is: is there a way to understand, intuitively, the meaning / effect of all these metaparameters and use this intuitive understanding to improve training performance?

So far I have come - from a mix of reading the PPO paper and the surrounding literature, and playing with the code - to the following conclusions. Can anybody complete / correct them?

  • effect of network_spec: this is size of the 'main network'. Quite classical: need it big enough to get valuable predictions, not too big either otherwise it is hard to train.

  • effect of the parameters in update_mode: this is how often the network updates are performed.

    • batch_size is how many units (here, episodes) are used for a batch update. Not sure of the effect, nor what this exactly means in practice (are all samples taken from only 10 episodes of the replay memory?).

    • frequency is how often the update is performed. I guess a higher frequency value would make the training slower but more stable (as samples come from more different batches)?

    • unit: no idea what this does

  • memory: this is the replay memory buffer.

    • type: not sure what this does or how it works.

    • include_next_states: not sure what this does or how it works

    • capacity: I think this is how many tuples (state, action, reward) are stored. I think this is an important metaparameter. In my experience, if this is too low compared to the number of actions in one episode, the learning is very bad. I guess this is because it must be large enough to store MANY episodes, otherwise the network learns from correlated data - which is bad.

  • DistributionMode: guess this is the model for the distribution of the controls? No idea what the parameters there do.

  • PGModel: No idea what the paramaters there do. Would be interesting to know if some should be tweaked / which ones.

  • PGLRModel: idem, no idea what all these parameters do / if they should be tweaked.

  • PPOAgend: idem, no idea what all these parameters do / if they should be tweaked.

Summary

So in summary, would be great to get some help about:

  • Which parameters should be tweaked
  • How should these parameters be tweaked? Is there a 'high level intuition' about how they should be tweaked / in which circumstances?
",17753,,,,,8/27/2018 13:24,Tuning of PPO metaparameters: a high level overview of what each parameter does,,1,0,,,,CC BY-SA 4.0 7725,2,,7721,8/27/2018 9:28,,17,,"

How does this method differ from the experience replay, as they both use past information in the training? What's the typical application of both techniques?

Using a recurrent neural network is one way for an agent to build a model of hidden or unobserved state in order to improve its predictions when direct observations do not give enough information, but a history of observations might give better information. Another way is to learn a Hidden Markov model. Both of these approaches build an internal representation that is effectively considered part of the state when making a decision by the agent. They are a way to approach solving POMDPs.

You can consider using individual frame images from Atari games as state as a POMDP, because each individual frame does not contain information about velocity. Velocity of objects in play is an important concept in many video games. By formulating the problem as a POMDP with individual image inputs, this challenges the agent to find some representation of velocity (or something similar conceptually) from a sequence of images. Technically a NN may also do this using fixed inputs of 4 frames at a time (as per the original DQN Atari paper), but in that case the designers have deliberately ""solved"" the partially observable part of the problem for the agent in advance, by selecting a better state representation from the start.

Experience replay solves some different problems:

  • Efficient use of experience, by learning repeatedly from observed transitions. This is important when the agent needs to use a low learning rate, as it does when the environment has stochastic elements or when the agent includes a complex non-linear function approximator like a neural network.

  • De-correlating samples to avoid problems with function approximators that work best with i.i.d. data. If you didn't effectively shuffle the dataset, the correlations between each time step could cause significant issues with a feed-forward neural network.

These two issues are important to learning stability for neural networks in DQN. Without experience replay, often Q-learning with neural networks will fail to converge at all.

Can they work together?

Sort of, but not quite directly, because LSTM requires input of multiple related time steps at once, as opposed to randomly sampled individual time steps. However, you could keep a history of longer trajectories, and sample sections from it for the history in order to train a LSTM. This would still achieve the goal of using experience efficiently. Depending on the LSTM architecture, you may need to sample quite long trajectories or even complete episodes in order to do this.

From comments by Muppet, it seems that it is even possible to sample more randomly with individual steps by saving the LSTM state. For instance, there is a paper ""Deep reinforcement learning for time series: playing idealized trading games"" where the authors get a working system doing this. I have no experience of this approach myself, and there are theoretical reasons why this may not work in all cases, but it is an option.
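A minimal sketch of what sampling sections of stored trajectories could look like, so the LSTM still receives contiguous time steps while the batch as a whole remains reasonably de-correlated (the function name, data layout and fixed sequence length are illustrative assumptions, not from any particular implementation):

    import random

    def sample_sequences(episodes, batch_size, seq_len):
        """episodes: list of trajectories, each a list of (obs, action, reward, done).

        Returns batch_size contiguous sub-sequences of length seq_len, drawn
        from random positions in random episodes."""
        usable = [ep for ep in episodes if len(ep) >= seq_len]
        batch = []
        for _ in range(batch_size):
            ep = random.choice(usable)
            start = random.randrange(len(ep) - seq_len + 1)
            batch.append(ep[start:start + seq_len])
        return batch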

If they can work together, does it mean that the state is no longer a single state but a set of contiguous states?

Not really, the state at any time step is still a single state representation, is separate conceptually from an observation, and is separate conceptually from a trajectory or sequence of states used to train a RNN (other RL approaches such as TD($\lambda$) also require longer trajectories). Using an LSTM implies you have hidden state on each time step (compared to what you are able to observe), and that you hope the LSTM will discover a way to represent it.

One way to think of this is that the state is the current observation, plus a summary of observation history. The original Atari DQN paper simply used the previous three observations hard-coded as this ""summary"", which appeared to capture enough information to make predicting value functions reliable.

The LSTM approach is partly of interest, because it does not rely on human input to decide how to construct state from the observations, but discovers this by itself. One key goal of deep learning is designs and architectures that are much less dependent on human interpretations of the problem (typically these use feature engineering to assist in learning process). An agent that can work directly from raw observations has solved more of the problem by itself without injection of knowledge by the engineers that built it.

",1847,,1847,,10/22/2019 16:23,10/22/2019 16:23,,,,3,,,,CC BY-SA 4.0 7726,2,,7723,8/27/2018 10:12,,1,,"

Some investigation about the memory dict: The current type is latest, which means you're not using a memory replay, but a latest memory. Switching to replay may help. Also, include_next_state means that you store tuples (state, action, reward, next state). It's not a real parameter though, because in PPO it must be set to False, otherwise an error is raised. Your interpretation of capacity looks OK.

About the update mode spec dict, your current settings mean that : every 10 (frequency) episodes (unit), you pull a batch of 10 (batch_size) episodes (unit) from the memory (the pulling method is defined through the memory dict), and you perform an optimization step over this batch. Be aware that the unit defines both the unit of the optimization frequency and the type of the object fetched from the memory.

",17759,,75,,8/27/2018 13:24,8/27/2018 13:24,,,,0,,,,CC BY-SA 4.0 7727,1,,,8/27/2018 10:18,,5,1702,"

I am trying to understand how it works. How do you teach it, say, to add 1 to each number it gets? I am pretty new to the subject, and I learned how it works when you teach it to identify a picture of a number. I can understand how it identifies a number: using the pixels, assigning weights, and then learning whether a picture resembles the weights assigned to each pixel. But I can't logically understand how it would learn the concept of adding 1 to a number. Suppose I showed it thousands of examples of 7 turning into 8, 152 turning into 153, and so on. Would it get that every number in the world has to be increased by one? How would it get that, having no such operation as + ? Since addition is not at its disposal, how can it realize that it has to add one to every number? Even after seeing thousands of examples, but having no such operation as plus, I can't understand it. I could understand identifying pixels and such, but for such an operation I can't get the theoretical logic behind it. Can you explain the logic in layman's terms?

",17760,,,,,8/27/2018 14:41,How is it possible to teach a neural network to perform addition?,,2,2,,,,CC BY-SA 4.0 7728,2,,7727,8/27/2018 12:27,,1,,"

Welcome to AI.SE @bilanush.

Here's an example approach that might make things clearer. There are other ways to train a neural network to do this however.

In your earlier example with an image, you probably noticed that the network receives the image as a series of values, representing each pixel in the image. The network then learns which of a series of output neurons should be active in response to a given set of pixel values. Those output neurons, when read in an appropriate way, correspond to the correct label for the image. The difference between the set of outputs that should have been active, and the set that were active, forms the basis of the error signal that allows the network to learn.

You've probably heard that computers represent numbers with binary digits. So you could think of the number 16 as being: 00010000 in ""8-bit binary"". In 16-bit binary, this number would be 0000000000010000, and so on.

So one way of viewing your problem is a function mapping binary inputs to binary outputs (very similar to labelling a black-and-white image). For instance, the input 00010000 (16) should produce the output 00010001 (17). The input 00100011 (35) should produce the output 00100100 (36), and so on.

As before, you will have a set of output neurons. In this case, it should be as wide as the set of input neurons. As before, the error signal is the difference between the expected and actual outputs.

As to the question of how they can learn this function ""without plus"", in fact the individual neurons in a network perform just two operations: a weighted sum of their inputs, and a non-linear transformation of that sum. It has been proven that these are sufficient to learn any function from inputs to outputs, as long as the network contains 3 layers or more and the middle layer is wide enough, but here it should be easy to see how addition might emerge.
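A minimal sketch of this idea using Keras (the bit width, layer sizes, epoch count and training setup are just illustrative choices, not the only way to do it):

    import numpy as np
    from tensorflow import keras

    def to_bits(n, width=8):
        return np.array([(n >> i) & 1 for i in range(width)], dtype=np.float32)

    # Training data: every 8-bit number paired with that number plus one
    xs = np.stack([to_bits(n) for n in range(255)])
    ys = np.stack([to_bits(n + 1) for n in range(255)])

    model = keras.Sequential([
        keras.Input(shape=(8,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(8, activation="sigmoid"),   # one output per input bit
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(xs, ys, epochs=300, verbose=0)

    print(np.round(model.predict(to_bits(41).reshape(1, 8))))   # hopefully the bits of 42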

",16909,,,,,8/27/2018 12:27,,,,2,,,,CC BY-SA 4.0 7729,2,,7617,8/27/2018 12:35,,5,,"

I was hoping to see more answers here, but I'll get us started with some examples:

Combinatorial Search Problems: If your problem can be phrased as movement through a combinatorial graph, you don't need a neural network. In particular, your problem should have discrete states, a clear set of actions that are possible in each state, a clear definition of where we start, and a clear definition of what the goal state looks like. The most effective general purpose technique is iterative deepening search. If you have an idea about which moves might be more effective, or better, a function that estimates how far each state is from the goal, you may be able to build a heuristic function and use A* search instead. Common applications for these techniques include pathfinding in video games (or directions in other applications), AI planning, and Automated Theorem Proving.

I'll add some more topics later, but I suspect others have expertise to share here. Let's see some more ideas!

",16909,,,,,8/27/2018 12:35,,,,0,,,,CC BY-SA 4.0 7731,2,,1655,8/27/2018 13:24,,2,,"

ASIC - It stands for Application-specific integrated circuit. Basically, you write programs to design a chip in HDL. I'll take cases of how modern computers work to explain my point:

  • CPUs - CPUs are basically a microprocessor with many helper ICs performing specific tasks. In a microprocessor, there is only a single arithmetic processing unit (a made-up term) called the accumulator, in which a value has to be stored, as computations are performed only on the values stored in the accumulator. Thus every instruction, every operation, every R/W operation has to go through the accumulator (that is why older computers used to freeze when you wrote from a file to some device, although nowadays the process has been refined and may not require the accumulator to come in between, specifically with DMA). Now, ML algorithms need to perform matrix multiplications, which can be easily parallelized, but a CPU has only a single processing unit, and so came the GPUs.
  • GPUs - GPUs have hundreds of processing units, but they lack the multipurpose facilities of a CPU, so they are good for parallelizable calculations. Since there is no memory overlap (the same part of memory being manipulated by 2 processes) in matrix multiplication, GPUs work very well. Though, since a GPU is not multi-functional, it will work only as fast as a CPU feeds data into its memory.
  • ASIC - An ASIC can be anything: a GPU, a CPU or a processor of your own design, with any amount of memory you want to give to it. Let's say you want to design your own specialized ML processor: design a processor on an ASIC. Do you want a 256-bit FP number? Create a 256-bit processor. You want your summing to be fast? Implement a parallel adder with more bits than conventional processors. You want n cores? No problem. You want to define the data flow from different processing units to different places? You can do it. Also, with careful planning, you can get a trade-off between ASIC area vs power vs speed. The only problem is that for all of this you need to create your own standards. Generally, some well-defined standards are followed in designing processors, like the number of pins and their functionality, the IEEE 754 standard for floating-point representation, etc., which have come about after lots of trial and error. So if you can overcome all of these, you can easily create your own ASIC.

I do not know what Google is doing with their TPUs, but apparently they designed some sort of integer and FP standard for their 8-bit cores, depending on the requirements at hand. They are probably implementing it on an ASIC for power, area and speed considerations.

",,user9947,36737,,3/31/2021 22:25,3/31/2021 22:25,,,,0,,,,CC BY-SA 4.0 7732,2,,7727,8/27/2018 13:47,,0,,"

This is what we call a regression problem. Although @John has provided a novel method, I do not think it will work, since you are decomposing the number into its minimal representation. Teaching that will be quite tough due to long-term dependencies (e.g. 0111111 changing to 1000000), so you would have to train on almost all examples, with no actual learning.

Let's see your problem from a different viewpoint. Why are you thinking only of integers? Your problem can be generalised to approximating this curve:

This is clearly an example of $W^{T}X + B$. A single input node will feed into 2 output nodes with Leaky ReLU activation, plus a bias node. The bias node will be your $c$, and the 2-3 Leaky ReLUs will adjust their weights to create a straight line. Training this might be a problem (with co-adaptation between nodes), but mathematically a solution will be achieved by this neural net structure.

It is also better to train on real values, for better and finer weight adjustment (although, theoretically, for a single independent variable $x$ you should need just 3-4 values to make this neural net learn, but who knows?).
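
For concreteness, here is a minimal sketch of that structure, assuming Keras; the target curve below is only a placeholder, so substitute whatever function you actually want the net to approximate:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(1,)),     # the single input node
    layers.Dense(2),             # 2 units fed from the input
    layers.LeakyReLU(),          # Leaky ReLU activation on those units
    layers.Dense(1),             # combines the pieces; its bias plays the role of c
])
model.compile(optimizer='adam', loss='mse')

x = np.linspace(-10, 10, 200).reshape(-1, 1)
y = np.maximum(x, 0)             # placeholder target curve
model.fit(x, y, epochs=200, verbose=0)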

NOTE: The approximation in the negative region might not be that great.

",,user9947,-1,,6/17/2020 9:57,8/27/2018 14:41,,,,0,,,,CC BY-SA 4.0 7733,1,7745,,8/27/2018 15:25,,4,67,"

I'm trying to use an ANN to learn from a large amount of forest measurement data obtained from sampling plots across Ontario, Canada and associated climate data provided by regional climate modelling in this province.

So the following are the inputs to the ANN:

  • Location (GPS coordinates)
  • Measurement year and month
  • Tree species
  • Age
  • Soil type
  • Soil moisture regime
  • Seasonal or monthly average temperature
  • Seasonal or monthly average precipitation
  • Some more data are available to select

And the targets include:

  • Average total tree height
  • Average tree diameter at breast height

For each sampling plot, the trees have been measured 1-4 times. So my question is: what type of ANN can best be used to learn from the data, so that it can then be used for prediction with a set of new input data?

",17766,,2193,,8/28/2018 14:08,8/28/2018 17:33,Type of artificial neural network suitable for learning and then predicting forest growth,,1,2,,,,CC BY-SA 4.0 7734,1,,,8/27/2018 16:13,,7,522,"

What are the best machine learning models that have been used to compose music? Are there some good research papers (or books) on this topic out there?

I would say, if I use a neural network, I would opt for a recurrent one, because it needs to have a concept of timing, chord progressions, and so on.

I am also wondering what the loss function would look like, and how I could give the AI as much feedback as such models usually need.

",17769,,2444,,1/20/2021 22:05,1/20/2021 22:05,What are the best machine learning models for music composition?,,2,0,,,,CC BY-SA 4.0 7735,2,,7734,8/27/2018 16:17,,1,,"

There are a few of them. The most recent I've found is from DeepMind: The challenge of realistic music generation: modelling raw audio at scale. This video is a great analysis of it.

",7496,,2444,,1/20/2021 22:01,1/20/2021 22:01,,,,2,,,,CC BY-SA 4.0 7736,1,,,8/27/2018 18:41,,7,119,"

Due to my RL algorithm having difficulties learning some control actions, I've decided to use imitation learning/apprenticeship learning to guide my RL to perform the optimal actions. I've read a few articles on the subject and just want to confirm how to implement it.

Do I simply sample a state $s$, then perform the optimal action $a^*$ in that state $s$, calculate the reward for the action $r$, and then observe the next state $s'$, and finally put that into the experience replay?

If this is the case, I am thinking of implementing it as follows:

  1. Initialize the optimal replay buffer $D_O$
  2. Add the optimal tuple of experience $(s, a^*, r, s')$ into the replay buffer $D_O$
  3. Initialize the normal replay buffer $D_N$
  4. During the simulation, initially sample $(s, a^*, r, s')$ only from the optimal replay buffer $D_O$, while populating the normal replay buffer $D_N$ with the simulation results.
  5. As training/learning proceeds, anneal out the use of the optimal replay buffer, and sample only from the normal replay buffer.

Would such an architecture work?

",17706,,2444,,11/5/2020 23:52,11/5/2020 23:52,"In imitation learning, do you simply inject optimal tuples of experience $(s, a, r, s')$ into your experience replay buffer?",,1,1,,,,CC BY-SA 4.0 7738,1,7748,,8/28/2018 0:25,,0,527,"

I want to create a simple object detection tool. So, basically, an image will be provided to the tool, and, from that image, it has to detect the number of objects.

For example, an image of a dining table that has certain items present on it, such as plates, cups, forks, spoons, bottles, etc.

The tool has to count the number of objects, irrespective of the type of object. After counting, it should return the position of the object with its size, so that I can draw a border over it.

I would like not to use any library or API present such as TensorFlow, OpenCV, etc., given that I want to learn the details.

If the process is very difficult to implement without using an API, then the number or type of objects that it counts can also be limited; for example, it may ignore a napkin present on the table rather than counting it as an object. Since this project is for my educational/learning purposes, can anyone help me understand the logic by which this can be achieved?

",17776,,2444,,9/12/2020 15:29,9/12/2020 15:31,How can I develop an object detection system that counts the number of objects and determines their position in an image?,,1,3,,12/28/2021 10:17,,CC BY-SA 4.0 7739,1,7741,,8/28/2018 4:55,,4,1533,"

What happens after you have used machine learning to train your model? What happens to the training data?

Let's pretend it predicted correctly 99.99999% of the time and you were happy with it and wanted to share it with the world. If you put in 10GB of training data, is the file you share with the world 10GB? If it was all trained on AWS, can people only use your service if they connect to AWS through an API?

What happens to all the old training data? Does the model still need all of it to make new predictions?

",12770,,2444,,12/16/2021 18:14,12/16/2021 18:14,What happens to the training data after your machine learning model has been trained?,,1,0,,,,CC BY-SA 4.0 7740,2,,7617,8/28/2018 8:46,,4,,"

A nice example is Markov Decision Processes (MDPs), which can be solved by classic reinforcement learning techniques like Q-learning.

A Markov Decision Process consists of

  1. A set of discrete states (or continuous states that have been discretized)
  2. A set of possible actions that can be taken in each state.
  3. A set of transition probabilities that describe how an agent stochastically moves from its current state to the next, based on the agent's actions.
  4. A reward function quantitatively describing how nice it is to be in each state.
  5. A discounting factor that describes how much worse it is to receive a reward in the future than today.

Very small MDPs can be solved directly and exactly, using techniques like value iteration, but the computational cost of these approaches grows extremely fast.

Reinforcement Learning (RL) was developed as a machine learning approach for MDPs. There is a loop: the agent gets the state of the environment, chooses an action, executes this action on the environment, gets back a reward and the new state of the environment, and so on. You want the agent to maximize the cumulative reward over time.

The basic concept of Q-learning doesn't use ANNs. In Q-learning, you build a state-action matrix, called the Q matrix. Thus, you must discretize the states of your environment and the actions available to your agent. Then, the coefficient $Q_{ij}$ is the expected return when you perform action $j$ in state $i$. In basic Q-learning, you explore and build this matrix, and it should converge and give an ""optimal rule of action"" for your agent.
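
To make the tabular idea concrete, here is a minimal Python sketch of the Q matrix and its update rule; the state and action counts are arbitrary placeholders:

import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))   # rows = states, columns = actions
alpha, gamma, epsilon = 0.1, 0.99, 0.1

# one Q-learning update for an observed transition (s, a, r, s_next)
def q_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# epsilon-greedy action selection used while exploring
def choose_action(s):
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(Q[s].argmax())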

However, the situation is often too complex, and you often want a non-discretized space of states or actions. This is where Deep Q-learning comes in: the Q matrix is replaced by an ANN.

You can find a nice QL tutorial here (normal and deep).

And a lecture about QL here.

Keep in mind that only ANNs perform well in complex situations, so you'll always see examples with ANNs, even if the basic theory doesn't require ANNs.

",17759,,2444,,1/18/2021 11:30,1/18/2021 11:30,,,,1,,,,CC BY-SA 4.0 7741,2,,7739,8/28/2018 9:39,,4,,"

In many cases, a production-ready model has everything it needs to make predictions without retaining training data. For example: a linear model might only need the coefficients, a decision tree just needs rules/splits, and a neural network needs architecture and weights. The training data isn't required as all the information needed to make a prediction is incorporated into the model.

However, some algorithms retain some or all of the training data. A support vector machine stores the points ('support vectors') closest to the separating hyperplane, so that portion of the training data will be stored with the model. Further, k-nearest neighbours must evaluate all points in the dataset every time a prediction is made, and as a result the model incorporates the entire training set.

Having said that, where possible the training data would be retained. If additional data is received, a new model can be trained on the enlarged dataset. If it is decided a different approach is required, or if there are concerns about concept drift, then it's good to have the original data still on hand. In many cases, the training data might comprise personal data or constitute a company's competitive advantage, so the model and the data should stay separate.

If you'd like to see how this can work, this Keras blog post has some information (note: no training data required to make predictions once a model is re-instantiated).
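
As a small illustration of this, with Keras you can persist only the architecture and weights and restore them later; the model and input names below are placeholders:

from tensorflow import keras

# assuming `model` is an already-trained Keras model
model.save('trained_model.h5')                       # architecture + weights, no training data

restored = keras.models.load_model('trained_model.h5')
predictions = restored.predict(new_samples)          # only the new inputs are needed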

",9091,,9091,,8/29/2018 10:50,8/29/2018 10:50,,,,2,,,,CC BY-SA 4.0 7743,2,,4095,8/28/2018 11:31,,1,,"

All the methods in the GameState class, which is used to represent state, are stubs, and without them, the MCTS algorithm won't do anything at all. In particular, the DoMove method just changes whose turn it is, without actually taking any action.

Probably the reason the players can't see each other's cards is that this is not a completed implementation. Someone is either still working on this, or gave up halfway.

",16909,,,,,8/28/2018 11:31,,,,1,,,,CC BY-SA 4.0 7744,2,,4140,8/28/2018 17:23,,3,,"

The second equation is correct. In TD($\lambda$), the $\lambda$ parameter can be tuned to smoothly vary between single-step updates (essentially what Sarsa does) in the case of $\lambda = 0$, and Monte-Carlo returns (using the full episode's returns) in the case of $\lambda = 1$.

In the first equation, $\sum_{j = i}^{N - 1} \lambda^{j - i} d_i$ could be interpreted as summing up exactly the same temporal-difference term $d_i$ a number of times (specifically, $N - i$ times), but multiplied by a different scalar every time. I'm not sure how that could be useful in any way.

In the second equation, $\sum_{m = i}^{N - 1} \lambda^{m - i} d_m$ can be interpreted as a weighted combination:

  • $1 \times d_i$: this is the difference between what we predict our returns will be at time $i + 1$, and what we previously predicted our returns would be at time $i$.
  • $+ \lambda^1 \times d_{i + 1}$: a very similar difference between two of our own predictions, now the predictions at time $i + 2$ and $i + 1$. This time the temporal-difference term is weighted by the parameter $\lambda$ (completely ignored if $\lambda = 0$, full weight if $\lambda = 1$, somewhere in between if $0 < \lambda < 1$).
  • $+ \lambda^2 \times d_{i + 2}$: again a similar temporal-difference term, now again one step further into the future. Downweighted a bit more than the previous term in cases where $0 < \lambda < 1$.
  • etc.

For people who are familiar with temporal-difference algorithms like TD($\lambda$), sarsa($\lambda$), eligibility traces, etc. from Reinforcement Learning literature, this makes a lot more sense. The notation is still a bit different from the standard literature on algorithms like TD($\lambda$), but in fact becomes equivalent once you note that in this paper they discuss domains where there are only rewards associated with terminal states, and no intermediate rewards.

Intuitively, what they're doing with the $\lambda$ parameter is assigning more weight (or ""credit"" or ""importance"") to short-term predictions / short-term ""expectations"" (in the English sense of the word, rather than the mathematical sense of the word) or observations of rewards, over long-term predictions/observations. In the extreme case of $\lambda = 0$, you completely ignore long-term predictions/observations and only propagate observed rewards very slowly, one-by-one in single steps. In the other extreme case of $\lambda = 1$, you propagate rewards observed at the end of episodes with equal weight all the way to the beginning of the episodes, through all states that you went to, giving them all equal weight for that observed reward. With $0 < \lambda < 1$, you choose a balance between those two extremes.
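
As a quick sanity check of the second equation, here is a small Python sketch that computes the weighted sum of temporal-difference terms for a list of errors d (the numbers are arbitrary):

# sum_{m=i}^{N-1} lambda**(m - i) * d[m]
def lambda_weighted_error(d, i, lam):
    return sum(lam ** (m - i) * d[m] for m in range(i, len(d)))

d = [0.5, -0.2, 0.1, 0.8]                       # TD errors from a short episode
print(lambda_weighted_error(d, i=0, lam=0.0))   # only d[0]: single-step behaviour
print(lambda_weighted_error(d, i=0, lam=1.0))   # plain sum: Monte-Carlo-like behaviour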


Also note that Equation (5) in the KnightCap paper (where they similarly discuss the extreme case of $\lambda = 1$, like I did above) is incorrect if we take the first equation from your question, but is correct if we take the second equation.

",1641,,,,,8/28/2018 17:23,,,,0,,,,CC BY-SA 4.0 7745,2,,7733,8/28/2018 17:33,,1,,"

My suggestion is not to use an ANN, but instead to use a simpler regression algorithm. The main reason for this is that ANNs take a long time to train, and work better when a lot of data is used. They also require a lot of expertise in parameter tuning to apply well. Since you say you don't have a lot of data, and also don't have a lot of experience using them, I think you will be better off applying something else first. If the other techniques don't work at all, then you might think about using ANNs, but again, they tend to want a lot of data.

If you have tried ordinary least squares regression, and found it does not work well, my next choice would be a Classification And Regression Tree. These models can make good decisions with small amounts of data, and do not require a lot of time to train. They can handle real-valued outputs like the height and width of a tree. Weka's REPTree might be a good starting place.

If Trees don't work out, my next suggestion would be to try regression using a Support Vector Machine. SciKitLearn's SVR is a good choice for this. SVRs can sometimes be very effective when data is limited, because they make assumptions about how to handle data-poor regions that seem to be generally applicable. An SVM can also report low confidence when estimating in those regions. They also train fairly fast when using small amounts of data, and can learn non-linear functions from the data.
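
For instance, a minimal scikit-learn sketch might look like the following, where X_train, y_train and X_new are placeholders for your plot features and targets:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# scale features, then fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.1))
model.fit(X_train, y_train)               # e.g. age, temperature, precipitation -> height
height_predictions = model.predict(X_new)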

If you really want to use an ANN, I would start with a simple Multi-layer perceptron. This model has few parameters to play with, and can probably fit well to your regression. It may make strange decisions in regions with less data however.

Hope this helps!

",16909,,,,,8/28/2018 17:33,,,,4,,,,CC BY-SA 4.0 7746,1,7753,,8/28/2018 20:09,,4,199,"

Cognitive psychology has been researched since the 1940s. The idea was to understand human problem solving and the importance of heuristics in it. George Katona (an early psychologist) published a paper in the 1940s about human learning and teaching. He mentioned the so-called Katona problem, which is a geometric task.

Squares

Katona-style problems are ones where you remove straws from a given configuration of straws so that n unit squares remain. In the end, every straw is an edge of a unit square. Some variations also allow squares of size 2x2 or 3x3, as long as no two squares overlap, i.e. a bigger 2x2 square can't contain a smaller square of size 1x1. Some problems use matchsticks as a variation, some use straws, others use lines. Some variations allow bigger squares to contain smaller ones, as long as they don't share an edge, viz. https://puzzling.stackexchange.com/questions/59316/matchstick-squares

  • Is there a way we can view it as a graph and removing straws/matchsticks as deleting edges between nodes in a graph?

  • If so, can I train a bot where I can plugin some random, yet valid conditions for the game and goal state to get the required solution?

Edit #1: The following problem is just a sample to show what I am getting at. The requirement for my game is much larger. Also, I chose uninformed search to make things simpler, without bothering about complex heuristics and optimization techniques. Please feel free to explore ideas with me.

Scenario #1:

Consider this scenario. In the following diagram, each dashed line or pipe character represents a straw. Letters denote junctions where straws meet. Let's say my bot can explore each junction and remove zero, one, two, three or four straws, such that the resultant state has

  • no straw that dangles off by being not connected to a square.
  • a small mxm square isn't contained in a larger nxn square (m<n)
  • Once a straw is removed, it can't be put back.

The initial configuration is shown here. I always need to start from the top-left corner node P, and the objective is to remove straws by hopping from node to node, using the minimum number of moves, by the time the goal state is reached.

       P------Q------R------S------T
       |      |      |      |      |
       |      |      |      |      |
       E------A------B------F------G
       |      |      |      |      |
       |      |      |      |      |
       J------C------D------H------I
       |      |      |      |      |
       |      |      |      |      |
       K------L------M------N------O
       |      |      |      |      |
       |      |      |      |      |
       U------V------W------X------Y

Goal 1 : I wish to create a large 2x2 square.

At some point during, say, a BFS (although it could be any uninformed search on a partially observable universe, i.e. viewing one node at a time), I could technically reach A and blow out all edges on A to create the following.

       P------Q------R------S------T
       |             |      |      |
       |             |      |      |
       E      A      B------F------G
       |             |      |      |
       |             |      |      |
       J------C------D------H------I
       |      |      |      |      |
       |      |      |      |      |
       K------L------M------N------O
       |      |      |      |      |
       |      |      |      |      |
       U------V------W------X------Y

That is one move.

Goal 2 : I want to create a 3x3 square instead.

I can't do that in one move. I need a record of the successive nodes explored, and possibly to backtrack to a given point if the state fails to produce the desired result. Each intermediate state might produce rectangles, which are not allowed (also, how would one know how many more straws, and which ones, to remove to get to a square?), or dangle a straw, or, worse, get stuck in an infinite loop, since I can choose not to remove any straw. How do I approach this problem?

Edit #2:

For validation, figures 3, 4 and 5 are given below.

       P------Q------R------S------T
       |             |      |      |
       |             |      |      |
       E      A      B------F      G
       |             |      |      |
       |             |      |      |
       J------C------D------H      I
       |      |      |      |      |
       |      |      |      |      |
       K------L------M------N      O
       |      |      |      |      |
       |      |      |      |      |
       U------V------W------X      Y

The above figure (3) is invalid, as we can't have dangling sticks like TG, GI, etc.

       P------Q------R------S------T
       |      |                    |
       |      |                    |
       E------A                    G
       |                           |
       |                           |
       J                           I
       |                           |
       |                           |
       K                           O
       |                           |
       |                           |
       U------V------W------X------Y

The above figure (4) is invalid, as we can't have overlapping squares.

       P------Q------R      S      T
       |             |             
       |             |            
       E      A      B------F------G
       |             |      |      |
       |             |      |      |
       J------C------D------H------I
       |      |      |      |      |
       |      |      |      |      |
       K------L------M------N------O
       |      |      |      |      |
       |      |      |      |      |
       U------V------W------X------Y

Figure (5) is a valid configuration.

",17799,,2444,,12/10/2021 15:43,12/10/2021 15:43,How do I train a bot to solve Katona style problems?,,1,0,,,,CC BY-SA 4.0 7747,1,7752,,8/28/2018 23:10,,4,668,"

I've seen many Lorem Ipsum generators on the web, and not only those: there are also ""bacon ipsum"", ""space ipsum"", etc. So, how do these generators generate the text? Are they powered by an AI?

",17801,,2444,,5/13/2020 21:18,5/13/2020 21:18,"How does the ""Lorem Ipsum"" generator work?",,2,0,0,,,CC BY-SA 4.0 7748,2,,7738,8/29/2018 8:00,,4,,"

If you want to get experience, you should probably start with some easier task. Object detection and localization are relatively hard and writing a neural network and image processing pipeline from scratch will take you a long time.

If you want to build up an intuition about how NN's work, you might want to code some simple task from scratch. This is an example.

Once you have some intuition about how NNs work, you should proceed to your task. Here, you have a similar question with an answer provided. The current state-of-the-art approach for your task would probably be point 3, that is, an object detection network like YOLO or Faster R-CNN.

",16929,,2444,,9/12/2020 15:31,9/12/2020 15:31,,,,1,,,,CC BY-SA 4.0 7749,1,7750,,8/29/2018 8:01,,3,214,"

Tensorflow/Lucid is able to visualize what a ""channel"" of a layer of a neural network (image recognition, Inception-v1) responds to. Even after studying the tutorial, the source code, the three research papers on lucid and comments by the authors on Hacker News, I'm still not clear on how ""channels"" are supposed to be defined and individuated. Can somebody shed some light on this? Thank you.

https://github.com/tensorflow/lucid
https://news.ycombinator.com/item?id=15649456

",17807,,17221,,8/29/2018 16:49,8/29/2018 16:55,"What exactly does ""channel"" refer to in tensorflow/lucid?",,1,4,,,,CC BY-SA 4.0 7750,2,,7749,8/29/2018 8:24,,3,,"

The channel they are talking about is the depth dimension of the layer L.

In this image, the number of channels is 5: there are 5 $3 \times 2$ filters.

They optimize an image, starting from noise, to better respond to the filter F in the layer L. You then obtain this kind of image, and you can try to interpret the purpose of that filter (it learns to detect eyes, faces, wheels, and so on).

EDIT: To be more precise, you visualise the channel only if you optimise images for every filter; otherwise, you obtain a filter visualisation (if you do the optimisation one time, for one filter).

They call it ""channel"" because, in a colored image, you have 3 dimensions: width, height, and channel (for color), just as layers also have 3 dimensions.
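
If it helps, a typical lucid call for this looks roughly like the sketch below; the layer name and channel index are just illustrative examples:

import lucid.modelzoo.vision_models as models
from lucid.optvis import render, objectives

model = models.InceptionV1()
model.load_graphdef()

# optimise an input image from noise to excite one channel of a chosen layer
_ = render.render_vis(model, objectives.channel('mixed4a_pre_relu', 476))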

",17221,,,user9947,8/29/2018 16:55,8/29/2018 16:55,,,,0,,,,CC BY-SA 4.0 7751,2,,7747,8/29/2018 8:42,,3,,"

If you wanted to generate more, I guess you could take the string, convert it to a list of words, and then randomly select as many words as you want from the list.

Using Python

import numpy as np

# the source text, split into individual words
lorem = ""Lorem ipsum dolor sit amet, consectetur adipiscing elit."".split()

number_of_words_needed = 20

new_text = []

# pick a random word from the list, as many times as needed
for i in range(number_of_words_needed):
    new_text.append(lorem[np.random.randint(len(lorem))])

print(new_text)

Example output: ['sit', 'dolor', 'elit.', 'sit', 'sit', 'sit', 'elit.', 'dolor', 'amet,', 'ipsum', 'amet,', 'ipsum', 'dolor', 'Lorem', 'Lorem', 'adipiscing', 'sit', 'elit.', 'consectetur', 'adipiscing']

for reference

Source: ""Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. Richard McClintock, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source. Lorem Ipsum comes from sections 1.10.32 and 1.10.33 of ""de Finibus Bonorum et Malorum"" (The Extremes of Good and Evil) by Cicero, written in 45 BC. This book is a treatise on the theory of ethics, very popular during the Renaissance. The first line of Lorem Ipsum, ""Lorem ipsum dolor sit amet.."", comes from a line in section 1.10.32.

The standard chunk of Lorem Ipsum used since the 1500s is reproduced below for those interested. Sections 1.10.32 and 1.10.33 from ""de Finibus Bonorum et Malorum"" by Cicero are also reproduced in their exact original form, accompanied by English versions from the 1914 translation by H. Rackham.""

",17809,,1671,,8/29/2018 21:26,8/29/2018 21:26,,,,1,,,,CC BY-SA 4.0 7752,2,,7747,8/29/2018 8:56,,5,,"

Lorem ipsum generators don't typically use anything considered as AI. Usually they just store large pieces of text and select sections from it randomly - they are very simple. The main goal is to produce ""nonsense"" text that fills space but does not distract from issues of layout and design. The variations of it are usually just for fun, and like the original, are mostly simple generators which select strings of text from a core data source randomly and without using any AI techniques.

It is possible to build more sophisticated random text generators that work using data structures from Natural Language Processing (NLP).

One popular and easy-to-code data structure is N-grams, which store the frequencies/probabilities of the Nth word given words 1 to N-1. E.g. a bigram structure can tell you all the possible words that come after ""fish"", e.g. ""fish"" => [""food"" => 0.2, ""swims"" => 0.3, ""and"" => 0.4, ""scale"" => 0.1]. To use that structure to generate text, use a random number generator to select a word based on looking up the next word's frequency, then shift the list of words being considered and repeat.
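
A minimal bigram generator along those lines might look like this sketch (the tiny corpus is just for illustration):

import random
from collections import defaultdict

# count, for each word, which words follow it and how often
def build_bigrams(words):
    counts = defaultdict(lambda: defaultdict(int))
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def generate(counts, start, length=10):
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return ' '.join(out)

corpus = 'the fish swims and the fish eats fish food and swims'.split()
print(generate(build_bigrams(corpus), start='fish'))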

A more recent text generating NLP model is recurrent neural networks (RNNs), which have a variety of designs. Popular right now are LSTM networks, and these are capable of some quite sophisticated generation, provided they are trained with enough data for long enough. The blog The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy has quite a few really interesting examples of using RNNs for text generation. In practice this works similarly to n-grams: Use the RNN to suggest probabilities for next word given words so far, choose one randomly, then feed back the generated word into the RNN and repeat.

",1847,,1847,,8/29/2018 16:06,8/29/2018 16:06,,,,0,,,,CC BY-SA 4.0 7753,2,,7746,8/29/2018 11:40,,2,,"

Your intuition is right: this is fundamentally a problem for combinatorial search.

You're also right that problems are created by the fact that not every move is valid in every state. To fix this, you need to add a function that can determine whether a given state is valid or not, in addition to the usual function that checks whether it is your goal state or not. Before adding each node to the queue of your search algorithm, check whether it is a valid state. If it isn't, don't add it.

The second issue you raise is that your search might enter an infinite loop. Since it is possible to remove zero edges from a state, this is a serious concern. There are two approaches to solving this. First, you can try storing all states that you have already visited in a fast data structure like a Hash Table. Before adding a node to your queue, check if it's already been processed. If it has been, don't add it. This may work, but the memory requirements grow exponentially in the number of moves required for a solution. It's sometimes worth it, but I think you can likely skip it for this problem.
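
A sketch of that bookkeeping, assuming states are hashable (e.g. frozensets of remaining sticks) and that you supply the successor, goal and validity functions:

from collections import deque

def bfs(start, neighbours, is_goal, is_valid):
    queue = deque([start])
    visited = {start}                     # hash set of states already enqueued
    while queue:
        state = queue.popleft()
        if is_goal(state):
            return state
        for nxt in neighbours(state):
            if nxt not in visited and is_valid(nxt):
                visited.add(nxt)
                queue.append(nxt)
    return None                           # no valid goal state reachable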

A better approach if you're worried about speed is to switch your algorithm to something like iterative deepening, which has the good properties of BFS, but with much lower memory requirements; or to A* search if you can come up with an admissible heuristic for your domain (a good starting point: counting the number of junctions you'd need to remove sticks from to finish, if the robot could teleport, would be admissible).

Hope this helps!

Edit: Here's some pseudo-code for filtering out invalid moves:

function valid_state(State s){
    for stick in s.remaining_sticks:
        if stick is vertical:
           1. let side = walk up from the middle of stick until it becomes possible to turn right.
           2. let side += walk down from the middle of stick until it becomes possible to turn right.
           3. From the first junction above stick where we can turn right, try to walk *side* steps to the right.
           4. Then try to walk side steps down.
           5. Then try to walk side steps left.
           6. Repeat previous 5 steps but for the nearest junctions where we can turn left instead of right.
        else:
           Do exactly what's in the if above, but substitute "left" for "up" and "right" for "down".      
    if we could walk in a square successfully for every stick, this is a valid state, so return true. Otherwise, return false.     
}
",16909,,2444,,12/10/2021 15:43,12/10/2021 15:43,,,,0,,,,CC BY-SA 4.0 7754,2,,7684,8/29/2018 13:52,,1,,"

Of course, there has now been a huge development: Huggingface published pytorch-transformers, a library for the highly successful Transformer models (BERT and its variants, GPT-2, XLNet, etc.), including many pretrained (mostly English or multilingual) models (docs here). It also includes one German BERT model. SpaCy offers a convenient wrapper (blog post).
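
For example, loading the German BERT with the library (now simply called transformers) looks roughly like this; the example sentence is arbitrary:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
model = BertModel.from_pretrained('bert-base-german-cased')

ids = tokenizer.encode('Das ist ein Beispielsatz.', return_tensors='pt')
with torch.no_grad():
    outputs = model(ids)
print(outputs[0].shape)   # contextual embeddings: (batch, tokens, hidden size)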

Update: Now, Salesforce published the English model CTRL, which allows for use of ""control codes"" that influence the style, genre and content of the generated text.

For completeness, here is the old, now less relevant version of my answer:


Since I posed the question, I found this pretrained German language model: https://lernapparat.de/german-lm/

It is an instance of a 3-layer ""averaged stochastic gradient descent weight-dropped"" LSTM (AWD-LSTM), which was implemented based on an implementation by Salesforce.

",17670,,17670,,9/23/2019 6:47,9/23/2019 6:47,,,,1,,,,CC BY-SA 4.0 7755,1,8564,,8/29/2018 16:04,,16,6502,"

I'm coding a Proximal Policy Optimization (PPO) agent with the Tensorforce library (which is built on top of TensorFlow).

The first environment was very simple. Now, I'm diving into a more complex environment, where all the actions are not available at each step.

Let's say there are 5 actions and their availability depends on an internal state (which is defined by the previous action and/or the new state/observation space):

  • 2 actions (0 and 1) are always available
  • 2 actions (2 and 3) are only available when the internal state is 0
  • 1 action (4) is only available when the internal state is 1

Hence, there are 4 actions available when the internal state is 0 and 3 actions available when the internal state is 1.

I'm thinking of a few possibilities to implement that:

  1. Change the action space at each step, depending on the internal state. I assume this is nonsense.

  2. Do nothing: let the model understand that choosing an unavailable action has no impact.

  3. Do almost nothing: impact slightly negatively the reward when the model chooses an unavailable action.

  4. Help the model by incorporating an integer into the state/observation space that informs the model of the internal state value, combined with option 2 or 3.

Are there other ways to implement this? From your experience, which one would be the best?

",17818,,2444,,11/17/2020 19:15,11/17/2020 19:15,How to implement a variable action space in Proximal Policy Optimization?,,3,1,,,,CC BY-SA 4.0 7756,1,7757,,8/29/2018 16:15,,4,308,"

I have recently been reading about model selection algorithms (for example, to decide which value of the regularisation parameter or what size of a neural network to use; broadly, hyper-parameters). This is done by dividing the examples into three sets (training 60%, cross-validation 20%, test 20%): training is done on the first set for all candidate parameter values, then the best parameter is chosen based on the results on the cross-validation set, and finally the performance is estimated using the test set.

I understand the need for a dataset different from the training and test sets for selecting the model; however, once the model is selected, why not use the cross-validation examples to improve the hypothesis before estimating the performance?

The only reason I could see is that this could cause the hypothesis to worsen and we wouldn't be able to detect it, but, is it really possible that by adding much more examples (60% -> 80%) the hypothesis gets worse?

",11303,,,user9947,8/29/2018 17:10,8/30/2018 4:05,Use cross-validation to train after model selection,,1,0,,,,CC BY-SA 4.0 7757,2,,7756,8/29/2018 16:44,,3,,"

You are quite correct. If you have properly followed the cross-validation procedure and indeed selected the best model, then you can use the CV set as part of the training set for the final model. And no, it will not cause your hypothesis to worsen (for that set maybe, but not for new examples) if you have selected the model correctly. In fact, you may use the entire 100% of the dataset.

Justin Johnson, a TA at Stanford University, answered a similar type of question about training CNNs using 100% of the dataset. He said that if you have enough computational resources and want to squeeze an extra 1% or 2% accuracy from your model, you can use the entire dataset after model selection.

NOTE: As @NeilSlater pointed out, if you need the model for reporting purposes you should only use 80% of the dataset, otherwise you'll lose your only source of unbiased model verification. But if you are looking to deploy the model in the field, you can use 100% of the dataset.
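
A small scikit-learn sketch of this workflow, where X, y and best_model are placeholders for your data and the model chosen during selection:

import numpy as np
from sklearn.model_selection import train_test_split

# 60% train, 20% cross-validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# ... select hyper-parameters using (X_train, y_train) and (X_cv, y_cv) ...

# after selection, refit the chosen model on train + CV (80% of the data)
best_model.fit(np.concatenate([X_train, X_cv]), np.concatenate([y_train, y_cv]))
score = best_model.score(X_test, y_test)   # unbiased estimate from the untouched 20%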

",,user9947,,user9947,8/30/2018 4:05,8/30/2018 4:05,,,,5,,,,CC BY-SA 4.0 7758,2,,7646,8/29/2018 22:51,,1,,"

There is no easy way to play around with the hyper-parameters (number of layers, layer configuration, number of outputs per layer) of a CNN and get an accurate view of how these will affect the resulting performance of your model. However there are a few things that you can do to avoid wasting too much time training and re-training.

Why?

When training a CNN, we aim to minimize a loss function; thus, a better CNN model is defined as one which converges to a set of model parameters with a lower loss. Identifying the minimum of the loss function for a given CNN is already very difficult, and there is no guarantee that the true minimum will ever be reached through gradient descent.

Each variation of the CNN resulting from the different hyper-parameters will still result in a very large number of model parameters. There is no way to know how the loss function will look in this high-dimensional space, and it is even harder to estimate its minimum.

What to do?

First, you should try and understand how each layer in the network affects its inputs. You should know what kind of layers to use for what kind of data. You should also know what kinds of activation functions to use for different data distributions. You should also know how many model parameters per layer to try in order to sufficiently compress your data whilst not losing significant information.

You can get a lot of this intuition by reading papers which have found successful models for specific tasks.

In addition, you can train with smaller amounts of data and estimate the potential minimum loss function by seeing how fast the loss function moves towards its minimal value (momentum). Usually a lower minimum is achieved when the loss function decreases faster. However, this is in no way always true. A loss function can converge slowly at first and then speed up later. This is entirely possible. But, you can get some sense of the potential of your model in this way.

",5925,,,,,8/29/2018 22:51,,,,2,,,,CC BY-SA 4.0 7759,2,,3801,8/30/2018 8:46,,1,,"

Dynamic Computational Graphs are simply modified CGs with a higher level of abstraction. The word 'Dynamic' explains it all: how data flows through the graph depends on the input structure, i.e. the DCG structure is mutable and not static. One of its important applications is in NLP neural networks.
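
A tiny PyTorch sketch illustrates the idea: the depth of the computation graph below depends on an input argument, and the graph is rebuilt on every forward pass (the sizes are arbitrary):

import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x, n_steps):
        # the loop length, and hence the graph depth, depends on the input
        for _ in range(n_steps):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
short = net(torch.randn(1, 8), n_steps=2)   # a 2-step graph
longer = net(torch.randn(1, 8), n_steps=5)  # a 5-step graph for another input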

",17833,,,,,8/30/2018 8:46,,,,0,,,,CC BY-SA 4.0 7760,2,,7734,8/30/2018 14:29,,0,,"

I am also new to the neural network architecture game but from what I have learned so far I think you have a few good options to choose from.

A recurrent neural network (RNN) would be a standard approach, but if you're looking for something more robust you could look into a Long Short-Term Memory network (LSTM). Its units have a memory of past events and can recall them later on. It is a type of RNN.

Perhaps you could go a little further and use a Convolutional Neural Network (CNN). So far, these types of networks have been highly successful for image recognition. You could abstract a piece of a song as an image: each pixel could be a step forward in time, and the value of the pixel could be the actual note.

Also take a look at this article for a good overview of several different neural network types.

",17840,,,,,8/30/2018 14:29,,,,0,,,,CC BY-SA 4.0 7761,1,,,8/30/2018 19:11,,7,1687,"

How does the Dempster-Shafer theory differ from Bayesian reasoning? How do these two methods handle uncertainty and compute posterior distributions?

",17847,,2444,,10/18/2021 20:18,10/18/2021 20:18,How does the Dempster-Shafer theory differ from Bayesian reasoning?,,1,0,,,,CC BY-SA 4.0 7762,1,7943,,8/30/2018 20:05,,6,3147,"

How can I create an artificially intelligent aimbot for a game like Counter-Strike Global Offensive (CS:GO)?

I have an initial solution (or approach) in mind. We can train an image recognition model that will recognize the head of the enemy (in the visible area of the player, so excluding the invisible area behind the player, to avoid being easily detected by VAC) and move the cursor to the position of the enemy's head and fire.

It would be much preferable to train the recognition model in real time rather than using demos. Most of the available demos might be 32-tick, but while playing the game, it runs at 64-tick.

It is a very fresh idea in my mind, so I haven't actually thought a lot about it yet. Let's ignore facts like detection by VAC for a few moments.

Is there any research work on the topic? What are the common machine learning approaches to tackle such a problem?

Later on, this idea can be expanded to a completely autonomous bot that can play the game by itself, but that is a bit too much initially.

",17849,,2444,,11/22/2019 18:31,11/22/2019 18:31,How can I create an artificially intelligent aimbot for a game like CS:GO?,,2,0,,,,CC BY-SA 4.0 7763,1,7771,,8/30/2018 23:45,,19,14594,"

I am studying reinforcement learning and the variants of it. I am starting to get an understanding of how the algorithms work and how they apply to an MDP.

What I don't understand is the process of defining the states of the MDP. In most examples and tutorials, they represent something simple like a square in a grid or similar.

For more complex problems, like a robot learning to walk, etc.,

  • How do you go about defining those states?
  • Can you use learning or classification algorithms to "learn" those states?
",17853,,2444,,11/18/2021 12:01,11/18/2021 12:01,How to define states in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 7764,2,,7763,8/31/2018 2:41,,11,,"

A common early approach to modeling complex problems was discretization. At a basic level, this is splitting a complex and continuous space into a grid. Then you can use any of the classic RL techniques that are designed for discrete, linear, spaces. However, as you might imagine, if you aren't careful, this can cause a lot of trouble!

Sutton & Barto's classic book Reinforcement Learning has some suggestions for other ways to go about this. One is tile coding, covered in section 9.5.4 of the new, second edition. In tile coding, we generate a large number of grids, each with different grid spacing. We then overlay the grids on top of each other. This creates discrete regions of non-uniform shape, and can work well for a variety of problems.
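
A rough sketch of the idea in Python, with arbitrary bounds, bin counts and offsets, just to show how several shifted grids map one continuous state to several discrete cells:

import numpy as np

def tile_indices(state, low, high, n_bins, n_tilings):
    # one active grid cell per tiling; each tiling is shifted by a fraction of a cell
    indices = []
    for t in range(n_tilings):
        offset = (high - low) * t / (n_bins * n_tilings)
        scaled = (np.asarray(state, dtype=float) - low + offset) / (high - low) * n_bins
        indices.append(tuple(np.clip(scaled.astype(int), 0, n_bins - 1)))
    return indices

# a 2-D state in [0, 1] x [0, 1], 8x8 grids, 4 overlapping tilings
print(tile_indices([0.3, 0.7], low=0.0, high=1.0, n_bins=8, n_tilings=4))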

Section 9.5 also covers a variety of other ways to encode a continuous space into a discrete MDP, including radial-basis functions, and coarse codings. Check it out!

",16909,,1671,,7/18/2019 19:40,7/18/2019 19:40,,,,0,,,,CC BY-SA 4.0 7765,2,,7736,8/31/2018 10:50,,3,,"

That seems to be functional.

That is a great approach, as long as you are using an off-policy algorithm (since the samples you are learning from were not generated by the policy currently being performed), like Q-learning.

By annealing the sample rate from the optimal buffer to the regular one, you introduce noise into the network and emphasize exploration (albeit more limited). This is helpful when you (the researcher) have no access to optimal policies, but merely "good" ones, and you still want the network to try and improve on those.

",7496,,-1,,6/17/2020 9:57,8/31/2018 10:50,,,,2,,,,CC BY-SA 4.0 7766,2,,7672,8/31/2018 11:37,,1,,"

We can generalize both problem and solution by removing the specifics of housing.

Representing Forward Propagation

We have a function $f$ we wish to obtain via the training of an artificial network that produces a scalar result $s$, the sole dependent variable and the generalization of market price.

$s = f(s_1, s_2, ..., s_k, c_1, c_2, ..., c_v)$

The independent scalar variables $s_1$ through $s_k$ are the generalization of a constant number $k$ of property features from the tax authority, assessor's office, or inspection document. The question calls this structured data, however it is questionable whether $k$ is truly effectively constant. In normal practice, some of $s_i$ will be unassigned. Since the question overlooks the additional complexity of missing scalars, so will this answer.

The independent cube variables $c_1$ through $c_v$ are the generalizations of a variable number $v$ of property images from Google photography, real estate agents, buyers, sellers, and other potential sources. The dimensions of each cube are horizontal and vertical positions and pixel structure element number.

It is unlikely that the resolution values for each cube are uniform between samples in real life, which the question did not mention, so this answer will overlook that complexity and focus on the variability of $v$, the quantity of cubes representing images for a given example. Since $v$ cannot meaningfully be either negative or infinite, we can assume $v$ to be a non-negative integer.

Terminology

The observational unit should not be considered the house or the property but an image of it, which may be a member of a camera location and orientation category relative to the elements of the property. Each image capture is an observation, distinct in both Bergsonian and clock time. Each item under evaluation is an example from the sample of all items, in this case, all properties in the region for which prediction is attempted.

Design of a Solution

Each of the $v$ cubes demonstrates zero or more additional features of the item being evaluated, real properties in the question's specific case.

It may be reasonable to assume that such features may either positively or negatively affect the example's label corresponding to the result $s$, but not both positively and negatively affect it. If that is the case, we can aggregate the features across the set of cubes for each example, under the reasonable assumption that there are no points of inflection. Such may be reasonable because, for instance, a feature regarding the uniformity of paint coverage, lawn care, or roofing material may have no inflection point. Such aspects of the property cannot be too uniform. That makes the substitution straightforward.

A reasonably versatile way to generalize the aggregation of directionally consistent features is to use a substitution, which may be what the question meant by feed in the embeddings of the image to your final model.

$s = f = f'\Big(s_1, s_2, ..., s_k, v, h\big(v, \sum_{j = 1}^v g(c_j)\big)\Big)$

Note the elements of this substitution.

  • $h$ is a vector function that normalizes the distribution of each feature found in the feature vector before using it as a set of inputs to $f'$.
  • $g$ is a generalization of the cube, with the input (independent variable) being a cube representing the image and with the output (dependent variable) being a vector of features extracted.
  • $v$, the number of cubes (visual observations) is fed along with extracted features into the function $f'$, which can be realized through the convergence of a multilayer perceptron.

If the images are grouped in terms of the location of the camera, then this principle can be applied iteratively, where $o$ is the number of distinct categories of camera orientation. In this case, the pairs $(v_z, h_z)$ represent the cube quantity and feature aggregations for image camera location category $z$.

$s = f = f'(s_1, s_2, ..., s_k, v_1, h_1, v_2, h_2, ... v_o, h_o)$

Given sensible models $h$ and $g$, training for prediction is straightforward.

Image feature extraction can be realized through ConvNet approaches such as OverFeat [1], AlexNet [2], CaffeNet [3], GoogLeNet [4], VGG [5], or PatreoNet [6]. Tuning such models produces $g$.

The nature of function $h$ may be homogeneous or heterogeneous across dimensions. Each component of the feature vector arising from extraction can have applied to it a function such as any of these or others, where $q$ is the feature index and $p_{qi}$ is learning parameter $i$.

  1. $h_q(x) = x$
  2. $h_q(x) = \large{x^{p_{q1} + p_{q2} v}}$
  3. $h_q(x) = \log (x + p_{q1} v + p_{q2})$
  4. $h_q(x) = \large{\epsilon^x}$
  5. Others

Scalar function 1 is best when the designer wishes the convergence during the training of $f'$ to occur in such a way that normalization is accomplished in the net. It is a good choice for features whose frequency of occurrence and magnitude, across the entire set of images for a given example item, are roughly proportional to the resulting value of that item.

Function 2 presents flexibility in normalization curvature with respect to the number of cubes. Function 3 presents attenuation of the frequency of feature occurrence. Function 4 presents compounding of feature effect with recurrence in the images of the same example.

The key is then the selection of how to deal with the substitution in the training in terms of procedure and wiring of corrective signaling. Procedurally, there are three options.

  • Train both the ConvNet function $g$ and the multilayer perceptron function $f'$ together, extending the applicable principles of back-propagation and gradient descent.
  • Extract features first, tuning the ConvNet corresponding to $g$ prior to training the network corresponding to $f'$. The advantage of this approach is manual control over and interim evaluation of feature extraction.
  • Use something similar to the mini-batch approach to find a balance between the above two extremes.

———

Footnotes

[1] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, Y. LeCun, Overfeat: Integrated recognition, localization and detection using convolutional networks, arXiv preprint arXiv:1312.6229v4

[2] A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Neural Information Processing Systems, 2012, pp. 1106–1114

[3] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional architecture for fast feature embedding, arXiv preprint arXiv:1408.5093

[4] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, arXiv preprint arXiv:1409.4842

[5] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556

[6] K. Nogueira, W. O. Miranda, J. A. Dos Santos, Improving spatial feature representation from aerial scenes by using convolutional networks, in: Graphics, Patterns and Images (SIBGRAPI), 2015 28th SIBGRAPI Conference on, IEEE, 2015, pp. 289–296.

",4302,,4302,,9/16/2018 14:07,9/16/2018 14:07,,,,0,,,,CC BY-SA 4.0 7771,2,,7763,8/31/2018 16:02,,22,,"

The problem of state representation in Reinforcement Learning (RL) is similar to problems of feature representation, feature selection and feature engineering in supervised or unsupervised learning.

Literature that teaches the basics of RL tends to use very simple environments so that all states can be enumerated. This simplifies value estimates into basic rolling averages in a table, which are easier to understand and implement. Tabular learning algorithms also have reasonable theoretical guarantees of convergence, which means if you can simplify your problem so that it has, say, less than a few million states, then this is worth trying.

Most interesting control problems will not fit into that number of states, even if you discretise them. This is due to the ""curse of dimensionality"". For those problems, you will typically represent your state as a vector of different features - e.g. for a robot, various positions, angles, velocities of mechanical parts. As with supervised learning, you may want to treat these for use with a specific learning process. For instance, typically you will want them all to be numeric, and if you want to use a neural network you should also normalise them to a standard range (e.g. -1 to 1).

In addition to the above concerns which apply for other machine learning, for RL, you also need to be concerned with the Markov Property - that the state provides enough information, so that you can accurately predict expected next rewards and next states given an action, without the need for any additional information. This does not need to be perfect, small differences due to e.g. variations in air density or temperature for a wheeled robot will not usually have a large impact on its navigation, and can be ignored. Any factor which is essentially random can also be ignored whilst sticking to RL theory - it may make the agent less optimal overall, but the theory will still work.

If there are consistent unknown factors that influence the result, and they could logically be deduced - maybe from the history of states or actions - but you have excluded them from the state representation, then you may have a more serious problem, and the agent may fail to learn.

It is worth noting the difference here between observation and state. An observation is some data that you can collect. E.g. you may have sensors on your robot that feed back the positions of its joints. Because the state should possess the Markov Property, a single raw observation might not be enough data to make a suitable state. If that is the case, you can either apply your domain knowledge in order to construct a better state from available data, or you can try to use techniques designed for partially observable MDPs (POMDPs) - these effectively try to build missing parts of state data statistically. You could use a RNN or hidden markov model (also called a ""belief state"") for this, and in some way this is using a ""learning or classification algorithms to ""learn"" those states"" as you asked.

Finally, you need to consider the type of approximation model you want to use. A similar approach applies here as for supervised learning:

  • A simple linear regression with features engineered based on domain knowledge can do very well. You may need to work hard on trying different state representations so that the linear approximation works. The advantage is that this simpler approach is more robust against stability issues than non-linear approximation

  • A more complex non-linear function approximator, such as a multi-layer neural network. You can feed in a more ""raw"" state vector and hope that the hidden layers will find some structure or representation that leads to good estimates. In some ways, this too is ""learning or classification algorithms to ""learn"" those states"" , but in a different way to a RNN or HMM. This might be a sensible approach if your state was expressed naturally as a screen image - figuring out the feature engineering for image data by hand is very hard.

The Atari DQN work by DeepMind team used a combination of feature engineering and relying on deep neural network to achieve its results. The feature engineering included downsampling the image, reducing it to grey-scale and - importantly for the Markov Property - using four consecutive frames to represent a single state, so that information about velocity of objects was present in the state representation. The DNN then processed the images into higher-level features that could be used to make predictions about state values.
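
A crude sketch of that kind of preprocessing (grey-scale, downsample, stack the last four frames), with naive striding in place of proper resizing and arbitrary sizes:

import numpy as np
from collections import deque

class FrameStack:
    def __init__(self, k=4, size=84):
        self.k, self.size = k, size
        self.frames = deque(maxlen=k)

    def preprocess(self, rgb_frame):
        gray = rgb_frame.mean(axis=2)                  # crude grey-scale
        h, w = gray.shape
        small = gray[::max(h // self.size, 1), ::max(w // self.size, 1)]
        return small[: self.size, : self.size]         # crude downsample + crop

    def observe(self, rgb_frame):
        self.frames.append(self.preprocess(rgb_frame))
        while len(self.frames) < self.k:               # pad at the start of an episode
            self.frames.append(self.frames[-1])
        return np.stack(list(self.frames), axis=-1)    # state shape: (84, 84, 4)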

",1847,,,,,8/31/2018 16:02,,,,0,,,,CC BY-SA 4.0 7772,2,,7761,8/31/2018 16:49,,3,,"

Dempster-Shafer Theory and Bayesian Networks were both techniques that rose to prominence within AI in the 1970s and 1980s, as AI started to seriously grapple with uncertainty in the world, and to move beyond the sterilized environments that most early systems worked in.

In the 1970's and perhaps even earlier, it became apparent that direct applications of probability theory to AI were not going to work out because of the curse of dimensionality. As more variables needed to be considered in a given problem, the amount of storage space and processing time needed increased exponentially. This led to a search for new methods to handle uncertainty within AI.

Bayesian Networks and Bayesian Learning remained firmly rooted in probabilistic reasoning, but allowed for the assignment of subjective priors to probabilities, to incorporate expert knowledge. It also allowed problems to be factored into graphical structures to avoid the curse of dimensionality in most cases.

Dempster-Shafer was a further generalization of Bayesian Networks, in which malformed probability distributions were permitted as a way to capture uncertainty. So, for example, the probability of all possible events was not required to add up to 1, because there might be events we don't know about. While on the surface this might seem reasonable, most modern AI researchers view this as a deeply flawed approach. Cheeseman's criticism of DS and other non-probabilistic methods is the basis from which a lot of this view stems. Judea Pearl was another harsh and influential critic of DS Theory.


The basic difference in the fusion of new information is that in Bayesian Networks, after observing new evidence $E$, we apply Bayes' rule:

$$ P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)} $$

to yield a posterior for every hypothesis.

In DS theory, we look for overlap between the worlds suggested by the new evidence and the old data. This can lead to nonsensical results.

Here's an example:

Our prior belief is that our Robot is located at position (0,1) with probability 0.95, and position (0,2) with probability 0.05.

A new signal appears. The signal indicates that the robot is at position (0,0) with probability 0.95, and position (0,2) with probability 0.05.

Under Bayes rule, we consider the probability that these signals were generated under each of our original hypotheses, and the probability of observing these signals at all, as shown in the equation above. Under DS-Theory, we would do the same thing.

However, DS-theory provides a second way to interpret the signal: as a second prior distribution, rather than as evidence. We can then combine this second prior with the first, to compute a sort of joint-prior:

$$P( H_{A,B}) \propto P(H_A) * P(H_B)$$

That is, the "probability" (it's not always a true probability, which is one of the criticisms) of a hypothesis after the fusion will be the product of the "probabilities" of the hypothesis under each of the separate priors.

In the example above, this gives a wacky result: the "joint prior" says the Robot is at (0,2) with probability 1.0. This and other problems are why this mode of information combination has mostly been abandoned. There are many more examples on the wikipedia page for DS.
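
To make the arithmetic concrete, here is a minimal sketch in plain Python contrasting the two fusion rules on the robot example (the sensor model used for the Bayesian case is made up purely for illustration):

# Prior belief over the robot's position
prior  = {(0, 1): 0.95, (0, 2): 0.05}
# "Probabilities" suggested by the new signal
signal = {(0, 0): 0.95, (0, 2): 0.05}

# DS-style combination: treat the signal as a second prior, multiply,
# and renormalise.  Only (0,2) is supported by both, so it gets mass 1.0.
joint = {pos: prior.get(pos, 0.0) * signal.get(pos, 0.0)
         for pos in set(prior) | set(signal)}
total = sum(joint.values())
print({pos: p / total for pos, p in joint.items() if p > 0})
# -> {(0, 2): 1.0} -- the wacky result described above

# Bayesian fusion instead treats the signal as evidence and needs a sensor
# model P(reading | true position); the numbers below are made up purely
# for illustration (a noisy sensor that can report the wrong cell).
def likelihood(reading, true_pos):
    return 0.8 if reading == true_pos else 0.1

reading = (0, 0)          # the cell most strongly indicated by the signal
posterior = {pos: likelihood(reading, pos) * p for pos, p in prior.items()}
norm = sum(posterior.values())
print({pos: p / norm for pos, p in posterior.items()})
# -> {(0, 1): 0.95, (0, 2): 0.05}: the reading supports neither prior
#    hypothesis strongly, so the belief barely moves.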

I think there's a discussion of this in more detail in Section IV of Russell & Norvig, at the end of one of the chapters.

",16909,,2444,,10/18/2021 20:18,10/18/2021 20:18,,,,0,,,,CC BY-SA 4.0 7773,2,,7555,8/31/2018 18:12,,1,,"

Approaches to the Game

It is true that the board has $16!$ possible states. It is also true that using a hash set is what students learn in a first-year algorithms course to avoid redundancy and endless looping when searching a graph that may contain cycles.

However, those trivial facts are not pertinent if the goal is to complete the puzzle in the fewest computing cycles. Breadth-first search isn't a practical way to complete an orthogonal move puzzle. The very high cost of a breadth-first search would only be necessary if the number of moves is of paramount importance for some reason.

Sub-sequence Descent

Most of the vertices representing states will never be visited, and each state that is visited can have between two and four outgoing edges. Each block has an initial position and a final position and the board is symmetric. The greatest freedom of choice exists when the open space is one of the four middle positions. The least is when the open space is one of the four corner positions.

A reasonable disparity (error) function is simply the sum of all x disparities plus the sum of all y disparities and a number heuristically representing which of the three levels of freedom of movement exists because of the resulting placement of the open space (middle, edge, corner).
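
A minimal sketch of such a disparity function in Python (the particular freedom-of-movement weights are arbitrary choices of mine, not a standard heuristic):

# state: a tuple of 16 values read row by row on the 4x4 board,
# 0 for the open space and 1..15 for the blocks.
GOAL = tuple(list(range(1, 16)) + [0])

def freedom_penalty(open_index):
    row, col = divmod(open_index, 4)
    corner = row in (0, 3) and col in (0, 3)
    edge = row in (0, 3) or col in (0, 3)
    if corner:
        return 2   # only two legal moves
    if edge:
        return 1   # three legal moves
    return 0       # four legal moves (one of the middle positions)

def disparity(state):
    total = 0
    for index, block in enumerate(state):
        if block == 0:
            continue
        goal_index = block - 1
        total += abs(index // 4 - goal_index // 4)   # y disparity
        total += abs(index % 4 - goal_index % 4)     # x disparity
    return total + freedom_penalty(state.index(0))

print(disparity(GOAL))   # 2: only the corner-freedom term remains once all
                         # blocks are home, so drop or down-weight that term
                         # when testing for completion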

Although blocks may temporarily move away from their destinations to support a strategy toward completion requiring a sequence of moves, there is rarely a case where such a strategy exceeds eight moves, generating, on average, 5,184 permutations for which the final states can be compared using the disparity function above.

If the empty space and the positions of blocks 1 through 15 are encoded as an array of nibbles, only addition, subtraction, and bit-wise operations are needed, making the algorithm fast. The eight-move brute-force search can be repeated until the disparity falls to zero.

Summary

This algorithm cannot cycle because there is always at least one of the permutations of eight moves that decreases disparity, regardless of the initial state, with the exception of a starting state that is already complete.

",4302,,,,,8/31/2018 18:12,,,,0,,,,CC BY-SA 4.0 7774,1,7777,,8/31/2018 19:03,,6,3320,"

I'm having a little trouble with the definition of rationality, which goes something like:

An agent is rational if it maximizes its performance measure given its current knowledge.

I've read that a simple reflex agent will not act rationally in a lot of environments. For example, a simple reflex agent can't act rationally when driving a car, as it needs previous perceptions to make correct decisions.

However, if it does its best with the information it's got, wouldn't that be rational behaviour, as the definition contains ""given its current knowledge""? Or is it more like: ""given the knowledge it could have had at this point if it had stored all the knowledge it has ever received""?

Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?

",17488,,2444,,11/16/2019 23:12,11/16/2019 23:12,What is the definition of rationality?,,2,0,,,,CC BY-SA 4.0 7775,2,,7774,8/31/2018 19:18,,6,,"

When we use the term rationality in AI, it tends to conform to the game theory/decision theory definition of rational agent.

In a solved or tractable game, an agent can have perfect rationality. If the game is intractable, rationality is necessarily bounded. (Here, ""game"" can be taken to mean any problem.)

There is also the issue of imperfect information and incomplete information.

Rationality isn't restricted to objectively optimal decisions but includes subjectively optimal decisions, where the optimality can only be presumed. (That's why defection is the optimal strategy in 1-shot Prisoner's Dilemma, where the agents don't know the decision-making process of the competitor.)

  • Rationality here conforms to Russell & Norvig's definition, where it is related to performance in an environment.

What may be rational in one environment may not be rational in a different environment. Additionally, what may be locally rational for a simple reflex agent will not appear rational from the perspective of an agent with more knowledge, or a learning agent.

Iterated Dilemmas, where there is communication in the form of prior choices, may provide an analogy. An agent that always defects, even where the competitor has shown willingness to cooperate, may not be regarded as rational because defecting vs. a cooperative agent does not maximize utility. A simple reflex agent wouldn't have the capacity to alter its strategy.

However, rationality used in the most general sense might allow that, to the agent making the decision, if the decision is based on achieving an objective, and the decision is reached utilizing the information available to that agent, the decision may be regarded as rational, regardless of actual optimality.

",1671,,1671,,10/21/2018 22:37,10/21/2018 22:37,,,,1,,,,CC BY-SA 4.0 7776,1,,,8/31/2018 19:28,,6,782,"

My question relates to but doesn't duplicate a question that has been asked here.

I've Googled a lot for an answer to the question: Can you find the dimensions of an object in a photo if you don't know the distance between the lens and the object, and there are no ""scales"" in the image?

The overwhelming answer to this has been ""no"". This is, from my understanding, due to the fact that, in order to solve this problem with this equation,

$$Distance\ to\ object(mm) = \frac{f(mm) * real\ height(mm) * image\ height(pixels)}{object\ height(pixels) * sensor\ height(mm)} $$

you will need to know either the ""real height"" or the ""distance to object"". It's the age old issue of ""two unknowns, one equation"". That's unsolvable. A way around this is to place an object in the photo with a known dimension in the same plane as the unknown object, find the distance to this object and use that distance to calculate the size of the unknown (this relates to answer from the question I linked above). This is an equivalent of putting a ruler in the photo and it's a fine way to solve this problem easily.

This is where my question remains unanswered. What if there is no ruler? What if you want to find a way to solve the unsolvable problem? Can we train an Artificial Neural Network to approximate the value of the real height without the value of the object distance or use of a scale? Is there a way to leverage the unexpected solutions we can get from AI to solve a problem that is seemingly unsolvable?

Here is an example to solidify the nature of my question:

I would like to make an application where someone can pull out their phone, take a photo of a hail stone against the ground at a distance of ~1-3 ft, and have the application give them the hail stone dimensions. My project leader wants to make the application accessible, which means he doesn't want to force users to carry around a quarter or a special object of known dimensions to use as a scale.

In order to avoid the use of a scale, would it be possible to use all of the EXIF meta-data from these photos to train a neural network to approximate the size of the hail stone within a reasonable error tolerance? For some reason, I have it in my head that if there are enough relevant variables, we can design an ANN that can pick out some pattern to this problem that we humans are just unable to identify. Does anyone know if this is possible? If so, is there a deep learning model that can best suit this problem? If not, please put me out of my misery and tell me why it's impossible.

",17866,,1671,,8/31/2018 20:56,9/25/2018 7:51,Can one use an Artificial Neural Network to determine the size of an object in a photograph?,,2,6,,,,CC BY-SA 4.0 7777,2,,7774,8/31/2018 19:48,,4,,"

I've read that a simple reflex agent will not act rationally in a lot of environments. E.g. a simple reflex agent can't act rationally when driving a car as it needs previous perceptions to make correct decisions.

I wouldn't say that the need for previous perceptions is the reason why a simple reflex agent doesn't act rationally. I'd say the more serious issue with simple reflex agents is that they do not perform long-term planning. I think that is the primary issue that causes them to not always act rationally, and that is also consistent with the definition of rationality you provided. A reflex-based agent typically doesn't involve long-term planning, and that's why it in fact does not often do best given the knowledge it has.

Another question about the definition of rationality: Is a chess engine rational as it picks the best move given the time it's allowed to use, or is it not rational as it doesn't actually (always) find the best solution (would need more time to do so)?

An algorithm like minimax in its ""purest"" formulation (without a limit on search depth) would be rational for games like chess, since it would play optimally. However, that is not feasible in practice, it would take too long to run. In practice, we'll run algorithms with a limit on search depth, to make sure that they stop thinking and pick a move in a reasonable amount of time. Those will not necessarily be rational. This gets back to bounded rationality as described by DukeZhou in his answer.
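
As a toy illustration of what a depth limit does to the search (a self-contained sketch using a simple take-away game rather than chess, just to show the mechanics of the cut-off):

# Toy game: players alternately remove 1-3 stones; whoever takes the last
# stone wins.  +1 means a win for the maximizing player, -1 a loss.
def evaluate(stones):
    return 0   # crude heuristic for cut-off nodes: "don't know"

def minimax(stones, depth, maximizing):
    if stones == 0:
        # the previous player took the last stone and won
        return -1 if maximizing else 1
    if depth == 0:
        return evaluate(stones)
    values = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

# Unlimited depth = play that is rational in the game-theoretic sense;
# a small depth limit falls back on the heuristic, i.e. bounded rationality.
print(minimax(10, depth=12, maximizing=True))   # 1: exact value of the game
print(minimax(10, depth=2,  maximizing=True))   # 0: cut off, heuristic guess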

The story is not really clear if we try to talk about this in terms of ""picking the best move given the time it's allowed to use"" though, because what is or isn't possible given a certain amount of time depends very much on factors such as:

  • algorithm we choose to implement
  • speed of our hardware
  • efficiency of implementation / programming language used
  • etc.

For example, hypothetically I could say that I implement an algorithm that requires a database of pre-computed optimal solutions, and the algorithm just looks up the solutions in the database and instantly plays the optimal moves. Such an algorithm would be able to truly be rational, even given a highly limited amount of time. It would be difficult to implement in practice because we'd have difficulties constructing such a database in the first place, but the algorithm itself is well-defined. So, you can't really include something like ""given the time it's allowed to use"" in your definition of rationality.

",1641,,,,,8/31/2018 19:48,,,,5,,,,CC BY-SA 4.0 7778,2,,5461,9/1/2018 3:43,,1,,"

What you might want to look for is called video captioning. Earlier examples from this line of research are:

Below is a screenshot of the results (positive and negative) reported in those papers:

For the ICCV paper, it's not hard to find some implementations, e.g. here.

For more recent results, I would suggest to look into the ActivityNet 2017 Challenge - dense captioning or its 2018 version. Some winning solutions include:

However, I am not sure whether any open-source implementation has been released.

",15493,,,,,9/1/2018 3:43,,,,0,,,,CC BY-SA 4.0 7779,1,,,9/1/2018 4:03,,3,294,"

I am trying to train a supervised model where the output from the model is the output of a linear function $WX + b$. Kindly note that I'm not using any softmax or $\log$ softmax on the result of the linear layer. I am using the negative log-likelihood loss function, which takes as input the linear output from the model and the true labels. I am getting decent accuracy by doing this, but I have read that the input to the negative log-likelihood function must be probabilities. Am I doing something wrong?

",17372,,2444,,4/29/2019 14:58,5/23/2020 15:01,Should the input to the negative log likelihood loss function be probabilities?,,1,1,,,,CC BY-SA 4.0 7780,2,,3801,9/1/2018 4:19,,2,,"

In short, dynamic computation graphs can solve some problems that static ones cannot, or are inefficient due to not allowing training in batches.

To be more specific, modern neural network training is usually done in batches, i.e. processing more than one data instance at a time. Some researchers choose batch sizes like 32 or 128, while others use batch sizes larger than 10,000. Single-instance training is usually very slow because it cannot benefit from hardware parallelism.

For example, in Natural Language Processing, researchers want to train neural networks with sentences of different lengths. Using static computation graphs, they would usually have to first do padding, i.e. adding meaningless symbols to the beginning or end of shorter sentences to make all sentences of the same length. This operation complicates the training a lot (e.g. need masking, re-define evaluation metrics, waste a significant amount of computation time on those padded symbols). With a dynamic computation graph, padding is no longer needed (or only needed within each batch).
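
A minimal sketch of what that looks like with a dynamic-graph framework (PyTorch here, with made-up sizes): each sentence is simply fed through at its own length, with no padding or masking.

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=32, batch_first=True)

# Three 'sentences' of different lengths, each a sequence of 50-d embeddings.
sentences = [torch.randn(1, length, 50) for length in (4, 9, 17)]

for sentence in sentences:
    # The computation graph is rebuilt on the fly for each length,
    # so no padding is needed for this single-instance case.
    output, (h, c) = lstm(sentence)
    print(output.shape)   # (1, length, 32)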

A more complicated example would be to (use a neural network to) process sentences based on their parsing trees. Since each sentence has its own parsing tree, each requires a different computation graph, which means training with a static computation graph can only allow single-instance training. An example similar to this is Recursive Neural Networks.

",15493,,,,,9/1/2018 4:19,,,,0,,,,CC BY-SA 4.0 7781,1,,,9/1/2018 11:32,,6,865,"

I am new to the field and I am trying to understand how it is possible to use categorical variables / enums.

Let's say we have a data set and 2 of its features are home_team and away_team, and the possible values of these 2 features are all the NBA teams. How can we ""normalize"" these features to be able to use them to create a deep network model (e.g. with tensorflow)?

Any references about modelling techniques would also be very appreciated.

",17876,,,,,9/1/2018 14:16,How to model categorical variables / enums?,,2,0,,,,CC BY-SA 4.0 7782,2,,7779,9/1/2018 11:55,,1,,"

This seems pretty reasonable to me. You can optimize any function that is proportional to the negative log-likelihood. Conventionally, we assume that the likelihood of a piece of data under a linear model is proportional to some sort of Gaussian function of the difference between the predicted value and the observed value. If you're a Bayesian you'd say this is a probability. If you're a hardcore frequentist, you might quibble about that, but it's still a number between 0 and 1.

If you take the negative log of this likelihood function, however, you'll get a quadratic function of the difference, with some scaling constants that you don't need to worry about. So you ought to minimize:

$$\sum_y (y-\hat{y})^2$$

This is not a probability, but since it is proportional to the negative log of the original likelihood function (which was a probability, in some sense), minimizing it will maximize the original likelihood.
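
To spell that step out (assuming a Gaussian likelihood with some fixed variance $\sigma^2$):

$$ -\log p(y \mid \hat{y}) = -\log\left(\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(y-\hat{y})^2}{2\sigma^2}}\right) = \frac{(y-\hat{y})^2}{2\sigma^2} + \frac{1}{2}\log(2\pi\sigma^2) $$

Summing over the data and dropping the terms that do not depend on $\hat{y}$ leaves exactly the sum of squared differences above, so minimizing that sum maximizes the (Gaussian) likelihood.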

Hope that helps!

",16909,,,,,9/1/2018 11:55,,,,2,,,,CC BY-SA 4.0 7783,2,,7781,9/1/2018 12:14,,5,,"

Authors use many different approaches.

One approach is to have a different input neuron for each possible category, and then use a ""1-hot"" encoding. So if you have 10 categories, then you can encode this as 10 binary features.

Another is to use some sort of binary encoding. If you have 10 categories, it is sufficient to use 4 neurons to represent all possible categories by using binary numbers.

A third approach is to convert your categories to cardinal values, and then normalize them. This may be more effective if your categories really are cardinal (i.e. orderable). If there isn't a natural ordering to them though, this might lead to strange results or make the problem difficult to learn (since it ends up embedding non-linear relationships in the learning problem that don't need to exist).
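
A minimal sketch of the first two encodings in Python/numpy (the tiny team list is just for illustration):

import numpy as np

teams = ['Lakers', 'Celtics', 'Warriors', 'Bulls']           # 4 categories
index = {team: i for i, team in enumerate(teams)}

def one_hot(team):
    v = np.zeros(len(teams))
    v[index[team]] = 1.0
    return v

def binary_code(team, bits=2):                               # ceil(log2(4)) = 2
    i = index[team]
    return np.array([(i >> b) & 1 for b in range(bits)], dtype=float)

print(one_hot('Celtics'))        # [0. 1. 0. 0.]
print(binary_code('Celtics'))    # [1. 0.]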

",16909,,,,,9/1/2018 12:14,,,,2,,,,CC BY-SA 4.0 7784,2,,7781,9/1/2018 14:16,,2,,"

A one-hot encoding, as described in John's answer, is probably the most straightforward / simple solution (maybe even the most common?). It is not without its problems though. For example, if you have a large number of such categorical variables, and each has a large number of possible values, the number of binary inputs you need for one-hot encodings may grow too large.

Let's say we have a data set and 2 of its features are home_team and away_team, the possible values of these 2 features are all the NBA teams.

In this specific example, a different possible solution might be not to use the ""identity"" of a team as a feature itself, but try to find a number of (ideally numeric) features corresponding to that team.

For example, instead of trying to encode ""home_team"" in some way in your inputs, you could (if you manage to find the data you need to do this) use the following features (not really familiar with NBA, so not sure if all these make sense):

  • Win percentage of home_team in recent X amount of time
  • Historical win percentage of home_team against away_team
  • Average points scored per match by this team
  • In football there's a statistic for how many minutes per game a team is ""in control"" of the ball; is there something similar in the NBA?
  • etc.

And then you can try to get a similar list of features for the away_team.

This kind of solution would work for your example, and maybe also for various other examples. It might not work in all cases of categorical features though, in some cases you'd have to revert to solutions like those in John's answer.

",1641,,,,,9/1/2018 14:16,,,,0,,,,CC BY-SA 4.0 7785,1,7790,,9/1/2018 19:40,,1,84,"

I have been researching LSTM neural networks. I have seen this diagram a lot and I have a few questions about it. Firstly, is this diagram used for most LSTM neural networks?

Secondly, if it is, wouldn't only having single layers reduce its usefulness?

",17881,,1581,,9/2/2018 14:01,9/2/2018 14:01,Need Help With LSTM Neural Networks,,2,0,,,,CC BY-SA 4.0 7786,2,,7785,9/1/2018 21:05,,1,,"

(1) Yes, this is the diagram for a classical LSTM unit. Of course there are some variants, and those diagrams would look slightly different.

(2) It is very common for researchers to use more than one layer of LSTM and achieve better performance than with a single-layer one. A common way to ""stack"" LSTMs is to use the previous layer's output ($h_t$ in your diagram) as the input to the next layer ($x_t$). However, I have seldom seen any successful application of 5+ layers of LSTMs, while for CNNs it is common to use tens or even hundreds of layers.
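
A minimal sketch of that stacking in Keras (layer sizes are arbitrary; return_sequences=True is what passes the first layer's full output sequence $h_t$ on as the next layer's input $x_t$):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # First LSTM layer returns its output at every time step ...
    layers.LSTM(64, return_sequences=True, input_shape=(None, 10)),
    # ... which becomes the input sequence of the second LSTM layer.
    layers.LSTM(64),
    layers.Dense(1),
])
model.summary()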

",15493,,,,,9/1/2018 21:05,,,,0,,,,CC BY-SA 4.0 7787,1,7797,,9/2/2018 8:24,,1,109,"

I am currently looking into LSTMs. I found this nice blog post, which is already very helpful, but still, there are things I don't understand, mostly because of the collapsed layers.

  • The input $X_t$, and the output of the previous time step $H_{t-1}$, how do they get combined? Multiplied, added or what?
  • The input weights and the weights of the input of the previous time step, those are just the weights of the connections between the time-steps/units, right?
",17769,,2444,,10/22/2019 21:01,10/22/2019 21:01,How do the current input and the output of the previous time step get combined in an LSTM?,,1,1,,,,CC BY-SA 4.0 7788,1,7798,,9/2/2018 8:30,,10,1613,"

What benefits can we get by applying a Graph Convolutional Neural Network instead of an ordinary CNN? I mean, if we can solve a problem with a CNN, why should we switch to a Graph Convolutional Neural Network to solve it? Are there any examples, i.e. papers, that show that replacing an ordinary CNN with a Graph Convolutional Neural Network achieves an accuracy increase, a quality improvement, or a performance gain? Can anyone introduce some examples in image classification or image recognition, especially in medical imaging, bioinformatics, or biomedical areas?

",14948,,2444,,3/12/2019 10:37,4/27/2020 23:45,What benefits can be got by applying Graph Convolutional Neural Network instead of ordinary CNN?,,2,0,0,,,CC BY-SA 4.0 7790,2,,7785,9/2/2018 9:02,,0,,"

Just to be 100% sure - the diagram you refer to is a diagram of an LSTM CELL, not a NETWORK. The operands you see on the diagram are operations within a cell, not separate ""neurons"". I think it is quite obvious, however reading your questions I just wanted to be 100% sure we are on the same page.

Now, about layers. RNN networks (LSTM in particular) are just like any other ANN structure. Theoretically, a network with one hidden layer can do any computation of a ""deeper"" network. An ANN is a universal approximator of mathematical functions. Still, multi-layer ANNs typically work better on more complex problems. A multi-layer network typically needs fewer total connections, learns better, and is less resource-demanding.

In particular, multi-layer LSTMs are believed to be better at determining complex temporal patterns. I think there is no rigorous proof for this, however. Also, in practical applications I did not see much improvement in network capabilities from adding additional LSTM layers. Adding more dense layers before/after the LSTM seemed to have a much better effect.

",15397,,,,,9/2/2018 9:02,,,,0,,,,CC BY-SA 4.0 7792,1,,,9/2/2018 10:00,,1,117,"

Is it possible to form a table that simply holds the shortest distance from each source to the destination, using Q-learning?

If not, can you suggest another learning algorithm?

",17885,,2444,,2/16/2019 2:45,2/16/2019 2:45,Can Q-learning be used to find the shortest distance from each source to destination?,,1,1,,,,CC BY-SA 4.0 7793,1,7796,,9/2/2018 10:20,,5,100,"

Stuart Russell and Peter Norvig pointed out four possible goals to pursue in artificial intelligence: systems that think/act humanly/rationally.

What are the differences between an agent that thinks rationally and an agent that acts rationally?

",17886,,2444,,2/9/2021 11:19,2/9/2021 12:47,What are the differences between an agent that thinks rationally and an agent that acts rationally?,,1,1,,,,CC BY-SA 4.0 7794,1,7910,,9/2/2018 11:25,,5,3092,"

I want to explore and experiment with the ways in which I could use a neural network to identify patterns in text.

examples:

  1. Prices of XYZ stock went down at 11:00 am today
  2. Retrieve a list of items exchanged on 03/04/2018
  3. Show error logs between 3 - 5 am yesterday.
  4. Reserve a flight for 3rd October.
  5. Do I have any meetings this Friday?
  6. Remind to me wake up early tue, 4th sept

This is for a project, so I am not using regular expressions. Papers, projects and ideas are all welcome, but I want an approach to feature extraction/pattern detection so that I can train a model that can identify patterns it has already seen.

",7998,,7998,,9/2/2018 11:58,9/10/2018 20:27,How can I detect datetime patterns in text?,,3,4,,1/6/2022 13:12,,CC BY-SA 4.0 7795,2,,7792,9/2/2018 14:05,,1,,"

Welcome to AI.SE Adarsh!

This is fairly simple to do with Q-Learning.

If you assign a reward equal to -1 for each location other than the goals, and 0 for the goals, then Q-learning with no discount factor will learn (as negative values) the length of the shortest path from each location to the goal. Alternatively, you can use a discounting factor, and then invert the equation on the learned values of the Q function to obtain the number of steps.
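
A minimal sketch of the undiscounted scheme on a tiny deterministic grid (plain Python; the grid, the hyper-parameters and the purely random exploration policy are all made up for illustration):

import random

SIZE, GOAL = 4, (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}
alpha = 0.5

def step(state, action):
    row = min(max(state[0] + action[0], 0), SIZE - 1)
    col = min(max(state[1] + action[1], 0), SIZE - 1)
    return (row, col)

for _ in range(10000):
    s = (random.randrange(SIZE), random.randrange(SIZE))
    while s != GOAL:
        a = random.choice(ACTIONS)          # pure exploration is fine for Q-learning
        s2 = step(s, a)
        reward = 0.0 if s2 == GOAL else -1.0
        bootstrap = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + bootstrap - Q[(s, a)])
        s = s2

# With this reward scheme, max_a Q(s, a) converges to -(shortest distance - 1):
print(max(Q[((0, 0), a)] for a in ACTIONS))  # about -5.0: (0,0) is 6 moves from the goal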

Hope this helps!

",16909,,,,,9/2/2018 14:05,,,,0,,,,CC BY-SA 4.0 7796,2,,7793,9/2/2018 14:11,,5,,"

Maybe a good example to think about would be something like the Sphex wasp story. The wasp in the story appears to behave like a rational being: it seems to have a plan of action, it seems to be able to do advanced operations like counting, and it seems to execute the plan well. However, if you disrupt the wasp's plan, it becomes apparent that it is not thinking rationally. Instead, it has evolved a very complex behaviour that appears to include rational components, but is still just an instinct.

In the context of AI, consider a GOFAI system for planning like GraphPlan as opposed to a machine learning system for generating plans. While the former is a general-purpose algorithm for reasoning about planning problems, the latter is an input/output mapping that may mimic reasoning, or may be more "instinctual". Some AI'ers would say that the latter system is not really engaged in rational thought, while the former is. Both systems exhibit rational action, however.

Instinctive systems often work very well, and I'd say that the increasing effectiveness of machine learning approaches since that edition of R&N was published (almost 10 years ago now?) makes this a fuzzier distinction in practice than the book might suggest.

",16909,,2444,,2/9/2021 12:47,2/9/2021 12:47,,,,0,,,,CC BY-SA 4.0 7797,2,,7787,9/2/2018 18:30,,1,,"

(1) $X_t$ and $H_{t-1}$ are concatenated. The blog you cited explained its notation ""Lines merging denote concatenation"". For example, if $X_t=[1,2,3]$ and $H_{t-1}=[4,5,6,7]$, then their concatenation is $[1,2,3,4,5,6,7]$

(2) When you say ""input weights"" or ""weights of the input of the previous time step"", are you referring to the $W_i$ in your cited blog? If so, they are not the weights of the connections between the time-steps/units. They are part of the input gate only. The connections between the time-steps/units do not have weights applied to them.

",15493,,,,,9/2/2018 18:30,,,,3,,,,CC BY-SA 4.0 7798,2,,7788,9/2/2018 18:47,,3,,"

Generally speaking, a graph CNN is applied to data represented by graphs, not images.

  • a graph is a collection of nodes and edges connecting them.

  • an image is a 2D or 3D matrix, in which each element denotes a pixel in space

If your data are just images, or something similar (e.g. some fMRI data), you usually cannot benefit from a graph CNN compared with a usual CNN.

Sometimes, the class labels of your images may be organized in a graph-like (or tree-like) structure. In that case, you may have a chance to benefit from graph CNN.

",15493,,23503,,4/27/2020 23:45,4/27/2020 23:45,,,,0,,,,CC BY-SA 4.0 7800,2,,7788,9/3/2018 5:36,,2,,"

Bioinformatics is an area where Graph Convolutional Neural Networks are useful. Consider protein networks or gene-gene networks. Such biological networks are naturally represented as graphs, which is exactly the structure a GCN is designed to exploit.

",6014,,,,,9/3/2018 5:36,,,,0,,,,CC BY-SA 4.0 7803,1,,,9/3/2018 9:56,,11,612,"

Hopfield networks are able to store a vector and retrieve it starting from a noisy version of it. They do so by setting weights in order to minimize the energy function when all neurons are set equal to the vector values, and they retrieve the vector using the noisy version of it as input and allowing the net to settle to an energy minimum.

Leaving aside problems like the fact that there is no guarantee that the net will settle in the nearest minimum etc – problems later addressed with Boltzmann machines and eventually with back-propagation – the breakthrough was that they are a starting point for having abstract representations. Two versions of the same document would recall the same state; they would be represented, in the network, by the same state.

As Hopfield himself wrote in his 1982 paper Neural networks and physical systems with emergent collective computational abilities

The present modeling might then be related to how an entity or Gestalt is remembered or categorized on the basis of inputs representing a collection of its features.

On the other side, the breakthrough of deep learning was the ability to build multiple, hierarchical representations of the input, eventually leading to making AI-practitioners' life easier, simplifying feature engineering. (see e.g. Representation Learning: A Review and New Perspectives, Bengio, Courville, Vincent).

From a conceptual point of view, I believe one can see deep learning as a generalization of Hopfield nets: from one single representation to a hierarchy of representation.

Is that true from a computational/topological point of view as well? Not considering how ""simple"" Hopfield networks were (2-state neurons, undirected, energy function), can one see each layer of a network as a Hopfield network and the whole process as a sequential extraction of previously memorized Gestalt, and a reorganization of these Gestalt?

",17901,,2444,,3/7/2020 23:02,4/7/2020 1:01,Can layers of deep neural networks be seen as Hopfield networks?,,1,0,,,,CC BY-SA 4.0 7812,2,,7541,9/4/2018 6:20,,3,,"

Driving Priorities

When considering the kind of modeling needed to create reliable and safe autonomous vehicles, the following driving safety and efficacy criteria should be considered, listed in priority with the most important first.

  • The safety of those inside the vehicle and outside the vehicle
  • Reduction of wear on passengers
  • The safety of property
  • The arrival at the given destination
  • Reduction of wear on the vehicle
  • Thrift in fuel resources
  • Fairness to other vehicles
  • The thrift in time

These are ordered in a way that makes civic and global sense, but they are not the priorities exhibited by human drivers.

Copy Humans or Reevaluate and Design from Scratch?

Whoever said that the goal of autonomous car design is to model the portions of a human mind that can drive should not be designing autonomous cars for actual manufacture. It is well known that most humans, although they may have heard of the following safety tips, cannot bring them into consciousness with sufficient speed to benefit from them in actual driving situations.

  • When the tires slip sideways, steer into the skid.
  • When a forward skid starts, pump the brakes.
  • If someone is headed tangentially into your car's rear, immediately accelerate and then brake.
  • On an on-ramp, accelerate to match the speed of the cars in the lane into which you merge, unless there is no space to merge.
  • If you see a patch of ice, steer straight and neither accelerate nor decelerate once you reach it.

Many collisions between locomotives and cars occur because a red light causes traffic to queue across the tracks in multiple lanes. Frequently, a person will move onto the railroad tracks to gain one car's length on the other cars. When others move to make undoing that choice problematic, a serious risk emerges.

As absurd as this behavior is to anyone watching, many deaths occur as a fast traveling 2,000 ton locomotive hits what feels like a dust speck to the train passengers.

Predictability and Adaptability

Humans are unpredictable, as the question indicates, but although adaptability may be unpredictable, unpredictability may not be adaptive. It is adaptability that is needed, and it is needed in five main ways.

  • Adaptive in the moment to surprises
  • Adaptive through general driving experience
  • Adaptive to the specific car
  • Adaptive to passenger expression
  • Adaptive to particular map regions

In addition, driving a car is

  • Highly mechanical,
  • Visual,
  • Auditory,
  • Plan oriented
  • Geographical, and
  • Preemptive in surprise situations.

Modelling Driving Complexities

This requires a model or models comprising several kinds of objects.

  • Maps
  • The vehicle
  • The passenger intentions
  • Other vehicles
  • Other obstructions
  • Pedestrians
  • Animals
  • Crossings
  • Traffic signals
  • Road signs
  • Road side

Neither Mystery nor Indeterminacy

Although these models are cognitively approximated in the human brain, how well they are modeled and how effective those models are at reaching something close to a reasonable balance of the above priorities varies from driver to driver, and varies from trip to trip for the same driver.

However, as complex as driving is, it is not mysterious. Each of the above models is easy to consider at a high level in terms of how they interact and what mechanical and probabilistic properties they have. Detailing these is an enormous task, and making the system work reliably is a significant engineering challenge, in addition to the training question.

Inevitability of Achievement

Regardless of the complexity, because of the economics involved and the fact that it is largely a problem of mechanics, probability, and pattern recognition, it will be done, and it will eventually be done well.

When it is, as unlikely as this sounds to the person who accepts our current culture as permanent, human driving may become illegal in this century in some jurisdictions. Any traffic analyst can mount heaps of evidence that most humans are ill equipped to drive a machine that weighs a ton at common speeds. The licensing of unprofessional drivers has only become widely accepted because of public insistence on transportation convenience and comfort and because the workforce economy requires it.

Autonomous cars may reflect the best of human capabilities, but they will likely far surpass them because, although the objects in the model are complex, they are largely predictable, with the notable exception of children playing. AV technology will use the standard solution for this. The entire scenario can be brought into slow motion to adapt for children playing simply by slowing way down. AI components that specifically detect children and dogs are likely to emerge soon, if they do not already exist.

Randomness

Randomness is important in training. For instance, a race car driver will deliberately create skids of various types to get used to how to control them. In machine learning we see some pseudo-random perturbations introduced during training to ensure that the gradient descent process does not get caught in a local minimum but rather is more likely to find a global minimum (optimum).

Deadlock

The question is correct in stating that, ""A dose of unpredictability could have its uses."" The deadlock scenario is an interesting one, but is unlikely to occur as standards develop. When four drivers come to a stop sign at the same time, they really don't. It only seems like they did. The likelihood that none of them arrived more than a millisecond before the others is astronomically small.

People will not detect (and may not be honest enough to acknowledge) these small time differences, so it usually comes down to who is most gracious about waving the others on, and there can be some deadlock there too, which can become comical, especially since all of them really wish to get moving. Autonomous vehicles will extremely rarely encounter a deadlock that is not covered by the rule book the government licensing entity publishes, which can be programmed as driving rules into the system.

On those rare occasions, the vehicles could digitally draw lots, as suggested, which is one place where unpredictability is adaptive. Doing skid experimentation like a race car driver on Main Street at midnight may be what some drunk teen might do, but that is a form of unpredictability that is not adaptive toward a sensible ordering of the priorities of driving. Neither would be texting or trying to eat and drive.

Determinism

Regarding determinism, pseudo-random number generation from particular distributions will suffice for the uses discussed in this context:

  • Deadlock release, and
  • Training speed-ups and improved reliability when there are local minima that are not the global minimum during optimization.

Functional tests and unit testing technologies are not only able to handle the testing of components with pseudo-randomness, but they sometimes employ pseudo-randomness to provide better testing coverage. The key to doing this well is understanding of probability and statistics, and some engineers and AI designers understand it well.

Element of Surprise

Where randomness is most important in AV technology is not in the decision making but in the surprises. That is the bleeding edge of that engineering work today. How can one drive safely when a completely new scenario appears in the audio or visual channels? This is perhaps the place where the diversity of human thought may be most adept, but at highway speeds, humans are usually too slow to react in the way we see in movie chase scenes.

Correlation Between Risk and Speed

This brings up an interesting interaction of risk factors. It is assumed that higher speeds are more dangerous, but the actual mechanics and probability are not that clear-cut. Low speeds produce temporally longer trips and higher traffic densities. Some forms of accidents are less likely at higher speeds, specifically ones that are related mostly to either traffic density or happenstance. Other forms are more likely at higher speeds, specifically ones that are related to reaction time and tire friction.

With autonomous vehicles, tire slippage may be more accurately modeled and reaction time may be orders of magnitude faster, so minimum speed limits may be more widely imposed and upper limits may increase once we get humans out of the driver's seats.

",4302,,,,,9/4/2018 6:20,,,,3,,,,CC BY-SA 4.0 7815,1,,,9/4/2018 8:34,,1,84,"

For my university project, I am planning to make a prediction system as described in the title. My current idea is to use an age/gender classifier and run it on a video (taken in front of a shop), which outputs a CSV file of age/gender/customer ID. In addition, I will use the shop's existing data on who came in and who passed by without entering, and by running XGBoost on this CSV data I can predict whether a customer will come into the shop or not.

Do you think this idea is possible? Is there any other way to implement it? It would also be great if we could implement this in such a way that the deep learning model learns the various features of those who do or do not come into the shop.

",17651,,,,,9/4/2018 11:11,Automatic prediction of whether a customer will come into the shop or not,,1,4,,,,CC BY-SA 4.0 7816,2,,7815,9/4/2018 11:11,,1,,"

This should be possible, but it's not completely clear what you are trying to do.

If you're trying to predict customer age and gender from a video, then you've got a computer vision problem. Deep learning methods are the state of the art for this, and probably some sort of convolutional deep neural network is your friend.

If you're trying to predict whether the customer will enter the shop from their age and gender, then @DuttA's suggestion to start with simple classification techniques is probably the best bet. Try out logistic regression as a starting point.

If you really want to do both of these things at once, you can again try out a deep convolutional neural network.

All that said, it's not completely clear that there is signal in the videos you want to collect: it might be that predicting whether they'll enter the shop is not really possible on the basis of a short video. The only way to know for sure is to give it a try, however.

",16909,,,,,9/4/2018 11:11,,,,8,,,,CC BY-SA 4.0 7817,1,7818,,9/4/2018 14:55,,3,328,"

I came across RNN's a few minutes ago, which might solve a problem with sequenced data I've had for a while now.

Let's say I have a set of input features, generated every second. Corresponding with these input features is an output feature (also available every second). One set of input features does not carry enough data to correlate with the output feature, but a sequence of them most definitely does.

I read that RNN's can have node connections along sequences of inputs, which is exactly what I need, but almost all implementations/explanations show prediction of the next word or number in a text-sentence or in a sequence of numbers.

They predict what would be the next input value, the one that completes the sequence. However, in my case, the output feature will only be available during training. During inference, the model will only have the input features available.

Is it possible to use RNN in this case? Can it also predict features that are not part of the input features?

Thanks in advance!

",16932,,,,,9/4/2018 15:30,Is it possible to use an RNN to predict a feature that is not an input feature?,,1,0,,,,CC BY-SA 4.0 7818,2,,7817,9/4/2018 15:24,,2,,"

Is it possible to use RNN in this case? Can it also predict features that are not part of the input features?

Yes.

No changes are required to a RNN in order to do this. All you need is correctly labelled data mapping a sequence of $x$ to correct $y$ in order to train, and of course a RNN architecture which has input vectors matching shape of $x$ and output vectors matching shape of $y$. The case where $x$ and $y$ are the same data type is just a special case of RNN design, and not a requirement.

You may need to consider some details:

  • If the relationship between $x$ and $y$ is complex and non-linear even accounting for accumulated hidden state during the sequence, you may need to add deeper layers. The output of the LSTM can be some vector $h$ and you can add fully-connected layers to help with predicting $y$ from $h$. This, or adding more LSTM layers, is a choice of hyperparameter that you may want to experiment with. Start with a basic LSTM to see how that goes first.

  • If you wish to predict a sequence of output features that is either not the same length as the input feature sequence, or logically should come after the whole sequence (think language translation) then this may need a slight change in setup to get best results. For a predict-same-kind sequence you can feed your predicted output value into the next input, but if input and output have different data types, this will not work. Instead, you will need to have some dummy input or other setup for creating sequences of $y$.

In your specific case the second point does not seem to apply, as you want to predict a single $y$ immediately after a sequence of $x$.
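
As a minimal sketch of that case in Keras (made-up dimensions: a window of 20 time steps with 6 input features mapped to a single scalar $y$ of a different type):

from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

timesteps, n_features = 20, 6

model = keras.Sequential([
    layers.LSTM(32, input_shape=(timesteps, n_features)),  # summarises the x-sequence
    layers.Dense(16, activation='relu'),                    # optional extra non-linearity
    layers.Dense(1),                                        # the y feature, not part of x
])
model.compile(optimizer='adam', loss='mse')

# Dummy data just to show the shapes: y is only needed at training time.
X = np.random.randn(100, timesteps, n_features)
y = np.random.randn(100, 1)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1]).shape)    # (1, 1) at inference, from x alone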

",1847,,1847,,9/4/2018 15:30,9/4/2018 15:30,,,,1,,,,CC BY-SA 4.0 7819,1,,,9/4/2018 15:38,,3,396,"

I am not sure if I can use the words binomial and binary and boolean as synonyms to describe a data attribute of a data set which has two values (yes or no). Are there any differences in the meaning on a deeper level?

Moreover, if I have an attribute with three possible values (yes, no, unknown), this would be an attribute of type polynominal. What further names are also available for this type of attribute? Are they termed as ""symbolic""?

I am interested in the relation between the following attribute types: binary, boolean, binominal, polynominal (and alternative descriptions) and nominal.

",13295,,6014,,9/4/2018 23:22,9/5/2018 0:11,Is a binary attribute type the same as binomial attribute type?,,2,0,,,,CC BY-SA 4.0 7825,2,,7819,9/4/2018 23:25,,0,,"

A binomial distribution is characterised by $p$, the probability of success for an independent trial (the single-trial case is also called a Bernoulli distribution). Each sample you get from a single trial is a binary variable, 0 or 1.

",6014,,,,,9/4/2018 23:25,,,,0,,,,CC BY-SA 4.0 7826,2,,7819,9/5/2018 0:11,,1,,"

@SmallChess's answer is a good start, but there are some additional parts to the question.

binary variables or binary data consist of data with the values 0 or 1, and no other values. We usually don't talk about ""binary distributions"", because it's only data, variables, or outcomes that can be binary. A distribution might produce binary data, but is not itself binary because its parameters typically take on real-values.

A binomial distribution is a distribution that produces binary data. In particular, it is a random process that produces the value 1 with probability $p$, and the value 0 with probability $1-p$. Notice that although it makes binary data, it is not itself a kind of data, and is in fact characterized by a non-binary number ($p$).

Boolean data takes on the values true or false. Often, but not always, these are stored as 0's and 1's. The distinction is that boolean data may not be stored numerically. There might also be different expectations about how Boolean data should be processed (for instance, $true + true = true$, but $1 + 1 = 2$).

I am not aware of the term polynomial being applied to data. However, multinomial distributions are probability distributions that produce 0 with probability $p_0$, 1 with probability $p_1$, 2 with probability $p_2$, and so on, producing the value $n$ with probability $1 - \sum_{i=0}^{n-1} p_i$, for $n+1$ different values in total. Like binomial distributions, multinomial distributions are characterized by a set of real-valued numbers, and are distinct from the kind of data they generate.

Categorical data takes on values from a set of categories. The example you give (yes, no, maybe) is not strictly multinomial data, but could be generated from a multinomial distribution by mapping the values 0, 1 and 2 onto yes, no and maybe. Note again that categorical data might be non-numeric. Operations like adding might be non-sensical.
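
For reference, a small numpy sketch showing that mapping (the probabilities are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

# Binomial distribution with one trial: produces binary data (0 or 1)
binary_sample = rng.binomial(n=1, p=0.3, size=10)

# Multinomial over 3 outcomes: one draw per row, mapped onto categories
counts = rng.multinomial(n=1, pvals=[0.2, 0.5, 0.3], size=10)
categories = np.array(['yes', 'no', 'maybe'])[counts.argmax(axis=1)]

print(binary_sample)   # e.g. [0 1 0 0 ...]
print(categories)      # e.g. ['no' 'yes' 'maybe' ...]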

Cardinal data isn't something you asked about, but arises when data can be nicely ordered. For example, playing cards are easily mapped to the numbers 1-13, and can have reasonable semantic meaning when represented this way (e.g. A + 2 = 3, and 1 + 2 = 3).

Nominal Data is just literal numbers that mean exactly what they purport to mean. For example, if you store the number of cans of beer a customer purchased, that would be nominal data.

",16909,,,,,9/5/2018 0:11,,,,2,,,,CC BY-SA 4.0 7827,2,,6052,9/5/2018 8:18,,1,,"

For what it's worth (and having done a bit of study on this and being really interested in the topic): the answer seems to go back to the beginnings of AI and even earlier (Turing's 1936 paper in which he introduces what's now called the Turing machine).

John McCarthy's proposal for the 1956 Dartmouth College summer workshop on ""Artificial Intelligence"" (which name introduced the term ""Artificial Intelligence"") in part says:

""The study [workshop] is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.""

This references Turing's 1936 paper where a machine or natural system is described, and the description is run in a computer. To simulate is to quite precisely describe a system then run the description (transformed a bit – but the result is still a description) in a computer. The description is the program. The description needs to be precise, as indicated in the Church-Turing thesis.

So the idea of simulation is core to the computational theory of what a digital computer can or might do. So it's also core to the computational theory of mind (the organic brain being a natural system), and hence to AI.

That said, it's obviously a crazy idea to try to quite precisely describe the organic machine that is a human brain. I mean how many neurons? 100 billion. Quite precisely describe each and every single one of these, and each and every of the up to 10,000 connections that connect to each and every single neuron. Crazy with a capital C. And to suppose there are degrees of simulation of the brain, or that the mind is somehow a simplification of the brain, or that the description can be in higher level concepts, not neurological ones, is just to admit that the description is not quite precise. An adequate simulation of a brain would be terribly detailed.

So why do we hear so much about AI trying to simulate the brain? Answer: AI has no other word to express what it does.

In my view, AI ought to be trying to work out the data-processing principles of the organic brain, not trying to describe the causation of the brain. AI doesn't know the principles of perception or the principles of general knowledge. It's incredible to say this – seeing as both are so absolutely fundamental to human intelligence. But AI doesn't know the principles. It ought to be trying to work them out. Then – once discovered – to work out how these principles could be realised in a computer.

You suggest that there's a binary choice between AI trying to get a computer to simulate the organic brain, and trying to grow organic brains in a dish. But there's actually a third option. Computer can do things other than simulate (i.e., other than compute). Maybe these other things might include embodying the principles of organic brains.

There are two really big areas here: (1) what are the principles of intelligence? (2) what are the non-computational things computers can do?

You ask why AI is concerned with the digital environment rather than, say, growing organic brains in a vat. But AI is basically an engineering project (building something with a designed causality) and even though AI knows only a little about the causality of what it's trying to build, the digital computer seems to be the only viable platform, at present, with enough individually addressable memory locations and processor speed to cope with semantic structures that would result from an adequate sensory interaction with the environment.

",17709,,,,,9/5/2018 8:18,,,,0,,,,CC BY-SA 4.0 7829,1,,,9/5/2018 9:13,,2,151,"

What is the process for integrating sentiment analysis in a CRM? What I am searching for is a system which analyzes the customer comments or reviews using the CRM and finds out the customer sentiment on the services provided by the system or company or a product.

I have done a sentiment analyzer which takes text and shows the sentiment of the text. Now I want to integrate the above-mentioned sentiment analyzer into a CRM. How can I do that?

",17058,,17058,,9/5/2018 10:33,12/11/2022 16:04,Integration of Sentiment analysis in CRM,,1,1,,,,CC BY-SA 4.0 7830,1,,,9/5/2018 14:06,,1,163,"

I am interested in the field of artificial intelligence. I began by learning the various machine learning algorithms. The maths behind some were quite hard. For example, back-propagation in convolutional neural networks.

Then when getting to the implementation part, I learnt about TensorFlow, Keras, PyTorch, etc. If these provide much faster and more robust results, will there be a necessity to code a neural network (say) from scratch using the knowledge of the maths behind back-prop, activation functions, dimensions of layers, etc., or is the role of a data scientist only to tune the hyper-parameters?

Further, as of now the field of AI does not seem to have any way to solve for these hyperparameters, and they are arrived at through trial and error. Which begs the question, can a person with just basic intuition about what the algorithms do be able to make a model just as good as a person who knows the detailed mathematics of these algorithms?

",17143,,2444,,4/1/2020 15:27,4/2/2020 0:44,Is it necessary to know the details behind the AI algorithms and models?,,2,0,,,,CC BY-SA 4.0 7832,1,7871,,9/5/2018 15:42,,4,725,"

In Proximal Policy Optimization Algorithms (2017), Schulman et al. write

With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse.

I don't understand why the clipped surrogate objective works. How can it work if it doesn't take into account the objective improvements?

",17759,,2444,,1/7/2022 16:25,1/7/2022 16:25,Why does the clipped surrogate objective work in Proximal Policy Optimization?,,2,0,,,,CC BY-SA 4.0 7833,2,,7794,9/5/2018 15:59,,1,,"

If you want to use deep learning approaches, you should look to recurrent neural networks (RNNs). Recurrent networks take temporal dependencies into account and could detect that ""this"" in ""this Friday"" belongs to a datetime expression, but not in ""this apple"".

As a simple approach, you could create a model with a bidirectional LSTM layer (a type of RNN):

  • Input: the sequences of characters.
  • Output: whether the character belongs to datetime or not.

The longest part will be gathering many sentences with their corresponding labels to create a training/testing dataset. Keras might be a good framework to start playing around with, and it has many examples.
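
A minimal sketch of such a model in Keras (the character vocabulary size, sequence length and layer sizes are placeholder values):

from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len = 80, 200   # placeholder values

model = keras.Sequential([
    layers.Embedding(vocab_size, 32, input_length=max_len),        # characters -> vectors
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # context from both sides
    layers.TimeDistributed(layers.Dense(1, activation='sigmoid')), # per-character: datetime or not
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()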

",17936,,,,,9/5/2018 15:59,,,,2,,,,CC BY-SA 4.0 7836,1,,,9/5/2018 22:45,,2,533,"

I'm trying to solve the OpenAI BipedalWalker-v2 by using a one-step actor-critic agent. I'm implementing the solution using python and tensorflow.

I'm following this pseudo-code taken from the book Reinforcement Learning An Introduction by Richard S. Sutton and Andrew G. Barto.

In summary, my question can be reduced to the following:

  • Is it a good idea to implement a one-step actor-critic algorithm to solve the OpenAI BipedalWalker-v2 problem? If not what would be a good approach? If yes; how long would it take to converge?
  • I ran the algorithm for 20,000 episodes; each episode has an average of 400 steps, and for each step I immediately update the weights. The results are not better than random. I have tried different standard deviations (for my normal distribution that represents pi), different NN sizes for the Critic and Actor, and different learning rates (step sizes) for the optimizer. The results never improve. I don't know what I'm doing wrong.

My Agent Class

import tensorflow as tf
import numpy as np
import gym
import matplotlib.pyplot as plt

class agent_episodic_continuous_action():
    def __init__(self, lr,gamma,sample_variance, s_size,a_size,dist_type):
       ... #agent parameters

    def save_model(self,path,sess):    
    def load_model(self,path,sess):       
    def weights_init_actor(self,hidd_layer,mean,stddev): #to have control over the weights initialization      
    def weights_init_critic(self,hidd_layer,mean,stddev):  #to have control over the weights initialization            
    def create_actor_brain(self,hidd_layer,hidd_act_fn,output_act_fn,mean,stddev):  #actor is represented by a fully connected NN      
    def create_critic_brain(self,hidd_layer,hidd_act_fn,output_act_fn,mean,stddev): #critic is represented by a fully connected NN      
    def critic(self):            
    def get_delta(self,sess):                 
    def normal_dist_prob(self): #Actor pi distribution is a normal distribution whose mean comes from the NN 
    def create_actor_loss(self): 
    def create_critic_loss(self):
    def sample_action(self,sess,state): #Sample actions from the normal dist. Whose mean was aprox. By the NN
    def calculate_actor_loss_gradient(self):
    def calculate_critic_loss_gradient(self):   
    def update_actor_weights(self):
    def update_critic_weights(self):
    def update_I(self):  
    def reset_I(self):      
    def update_time_step_info(self,s,a,r,s1,d):  
    def create_graph_connections(self):
    def bound_actions(self,sess,state,lower_limit,uper_limit):  

Agent instantiation

tf.reset_default_graph()
agent = agent_episodic_continuous_action(lr=1e-3, gamma=0.99, sample_variance=0.02, s_size=24, a_size=4, dist_type="normal")
agent.create_actor_brain(hidd_layer=[12, 5], hidd_act_fn="relu", output_act_fn="linear", mean=0.0, stddev=0.14)
agent.create_critic_brain(hidd_layer=[12, 5], hidd_act_fn="relu", output_act_fn="linear", mean=0.0, stddev=0.14)
agent.create_graph_connections()

path = "/home/diego/Desktop/Study/RL/projects/models/biped/model.ckt"   
env = gym.make('BipedalWalker-v2')
uper_action_limit = env.action_space.high
lower_action_limit = env.action_space.low   
total_returns=[]

Training loops

with tf.Session() as sess:
    try:
        sess.run(agent.init)
        sess.graph.finalize()
        #agent.load_model(path,sess)        
        for i in range(1000): 
            agent.reset_I()
            s = env.reset()    
            d = False
            while (not d):
                a=agent.bound_actions(sess,s,lower_action_limit,uper_action_limit)  
                s1,r,d,_ = env.step(a)
                #env.render()
                agent.update_time_step_info([s],[a],[r],[s1],d)                 
                agent.get_delta(sess)
                sess.run([agent.update_critic_weights,agent.update_actor_weights],feed_dict={agent.state_in:agent.time_step_info['s']})
                agent.update_I()  
                s = s1
        agent.save_model(path,sess)    
    except Exception as e:
        print(e)
",17565,,-1,,6/17/2020 9:57,9/10/2018 16:06,How many episodes does it take for a vanilla one-step actor-critic agent to master the OpenAI BipedalWalker-v2 problem?,,0,6,,,,CC BY-SA 4.0 7837,2,,7830,9/6/2018 2:20,,6,,"

This is a good question. I tend to think the answer is yes, it is necessary to know the details, because a person without mathematical understanding of these algorithms cannot consistently make a model as good as someone who does have that understanding.

The reason is right at the core of computer science: abstractions are useful, but usually obscure details. When those details matter, someone who only knows the abstraction and not the details that lie beneath can't understand what's going on.

As an example, if you don't understand the math behind optimizing the weights of a neural network, it might not be apparent how parameters like the learning rate are impacted by properties like network depth when some of the inputs have not been properly normalized. If you understand the optimization process mathematically, you can reason through the effects even if you are trying to work on an unfamiliar problem. This ability to reason through the probable effects of parameter decisions in new domains is the main thing that you miss by working from intuition.
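
To make this concrete, here is a minimal, hypothetical sketch (plain NumPy, made-up data, none of it from the original question) of batch gradient descent on a linear model: with one unnormalized input feature, the same learning rate that works fine on normalized data makes the optimization blow up.

import numpy as np

def fit_linear(X, y, lr=0.1, steps=200):
    # Plain batch gradient descent on mean squared error.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # d(MSE)/dw
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X_raw = np.c_[rng.normal(size=200), rng.normal(size=200) * 1000]  # second feature unnormalized
y = X_raw @ np.array([1.0, 0.002]) + rng.normal(scale=0.1, size=200)

w_raw = fit_linear(X_raw, y)                      # diverges: weights blow up to inf/nan
X_norm = (X_raw - X_raw.mean(0)) / X_raw.std(0)
w_norm = fit_linear(X_norm, y)                    # converges with the same learning rate

print('raw features:       ', w_raw)
print('normalized features:', w_norm)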

",16909,,16909,,4/2/2020 0:44,4/2/2020 0:44,,,,0,,,,CC BY-SA 4.0 7838,1,,,9/6/2018 2:40,,22,1993,"

What is the definition of artificial intelligence?

",17948,,2444,,1/16/2021 0:01,8/18/2021 12:05,What is artificial intelligence?,,7,1,,,,CC BY-SA 4.0 7841,2,,7794,9/6/2018 4:46,,0,,"

If you don't want to use machine learning, you can use a date-time parser in Python, such as dateparser. A few examples are given below. It returns a datetime object parsed from the given string, and it works with many languages.

>>> import dateparser
>>> dateparser.parse('12/12/12')
datetime.datetime(2012, 12, 12, 0, 0)
>>> dateparser.parse(u'Fri, 12 Dec 2014 10:55:50')
datetime.datetime(2014, 12, 12, 10, 55, 50)
>>> dateparser.parse(u'Martes 21 de Octubre de 2014')  # Spanish (Tuesday 21 October 2014)
datetime.datetime(2014, 10, 21, 0, 0)
>>> dateparser.parse(u'Le 11 Décembre 2014 à 09:00')  # French (11 December 2014 at 09:00)
datetime.datetime(2014, 12, 11, 9, 0)
>>> dateparser.parse(u'13 января 2015 г. в 13:34')  # Russian (13 January 2015 at 13:34)
datetime.datetime(2015, 1, 13, 13, 34)
>>> dateparser.parse(u'1 เดือนตุลาคม 2005, 1:00 AM')  # Thai (1 October 2005, 1:00 AM)
datetime.datetime(2005, 10, 1, 1, 0)
",3773,,,,,9/6/2018 4:46,,,,1,,,,CC BY-SA 4.0 7842,1,,,9/6/2018 4:53,,2,4180,"

What are some good approaches that I can use to count the number of people in a crowd?

Tracking each person individually is obviously not an option. Any good approaches or some references to research papers would be very helpful.

",14592,,2444,,4/4/2022 9:42,4/4/2022 9:42,What are some good approaches that I can use to count the number of people in a crowd?,,1,0,,,,CC BY-SA 4.0 7843,2,,92,9/6/2018 4:55,,5,,"

How is it possible that deep neural networks are so easily fooled?

Deep neural networks are easily fooled by giving high confidence predictions for unrecognizable images. How is this possible? Can you please explain ideally in plain English?

Intuitively, extra hidden layers ought to make the network able to learn more complex classification functions, and thus do a better job of classifying. While it may be called deep learning, the network's understanding is actually shallow.

Test your own knowledge: which animal in the grid below is a Felis silvestris catus? Take your time and no cheating. Here's a hint: which one is a domestic house cat?

For a better understanding, check out "Adversarial Attack to Vulnerable Visualizations" and "Why are deep neural networks hard to train?".

The problem is analogous to aliasing, an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled, and the stagecoach-wheel effect, where a spoked wheel appears to rotate differently from its true rotation.

The neural network doesn't know what it's looking at or which way it's going.

Deep neural networks aren't experts on anything; they are trained to decide mathematically that some goal has been met. If they are not trained to reject wrong answers, they don't have a concept of what is wrong; they only know what is correct and what is not correct. "Wrong" and "not correct" are not necessarily the same thing, and neither are "correct" and "true".

The neural network doesn't know right from wrong.

Just like most people wouldn't know a house cat if they saw one, two or more, or none. How many house cats are in the photo grid above? None. Any accusations of including cute cat pictures are unfounded; those are all dangerous wild animals.

Here's another example. Does answering the question make Bart and Lisa smarter? Does the person they are asking even know? Are there unknown variables that can come into play?

We aren't there yet, but a neural network can quickly provide an answer that is likely to be correct, especially if it was properly trained to avoid missteps.

",17742,,-1,,6/17/2020 9:57,9/6/2018 5:01,,,,0,,,,CC BY-SA 4.0 7844,2,,7842,9/6/2018 7:12,,2,,"

Here, a convolutional neural network (CNN) based approach is presented: Image Crowd Counting Using Convolutional Neural Network and Markov Random Field

This blog post has more of a tutorial character and also presents an approach based on CNNs: Counting Crowds and Lines
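
As a rough illustration of the density-map idea that many of these CNN approaches build on (this is a hypothetical minimal sketch, not the architecture from either reference), the network regresses a per-pixel density map whose sum is the estimated head count:

from keras.models import Sequential
from keras.layers import Conv2D

# Minimal fully convolutional density-map regressor (illustrative only, not the
# architecture from the cited paper). Input: grayscale crowd-image patches.
model = Sequential([
    Conv2D(32, (9, 9), activation='relu', padding='same', input_shape=(None, None, 1)),
    Conv2D(64, (7, 7), activation='relu', padding='same'),
    Conv2D(32, (5, 5), activation='relu', padding='same'),
    Conv2D(1, (1, 1), activation='relu', padding='same'),  # predicted density map
])
model.compile(optimizer='adam', loss='mse')  # targets are ground-truth density maps

# At inference time the crowd count is simply the sum over the predicted density map:
# count = model.predict(image[None, ..., None]).sum()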

",2585,,,,,9/6/2018 7:12,,,,0,,,,CC BY-SA 4.0 7850,2,,7830,9/6/2018 9:32,,0,,"

In my answer, I will call people who do not know the mathematics behind ML algorithms data science practitioners and those who do data scientists (these terms may not match how they are used in real life).

With the advent of neural networks, the importance of understanding the maths behind ML algorithms has diminished significantly, since earlier you had to do something called feature engineering based on different data parameters. This needed some knowledge of statistics and basic coordinate geometry.

Nowadays, practitioners can easily apply well-known models to problems without much thought or mathematics involved. Examples of this include CNN architectures like AlexNet, LeNet, ResNet, etc., and RNN architectures like LSTMs and GRUs. We are even copying the weights of pre-trained models.

So what edge does a data scientist hold over practitioners? Here is a list of points on which, to me, data scientists hold an edge:

  • Hyper-parameter tuning: In any NN there are a minimum of 3-4 hyper-parameters, which already gives $\sum_{n=0}^{4} \binom{4}{n} = 2^4 = 16$ possible subsets of hyper-parameters to tune. From loss curves, accuracy graphs, and other score graphs, a data scientist will easily be able to narrow down the tuning required to make the NN perform best. A practitioner, on the other hand, may have to try out all 16 combinations (each requiring many candidate values) to get to a solution, which is time and resource consuming (see the small sketch after this list).
  • Special architectures: Some problems require out-of-the-box thinking to come up with the best solution. For example, a combination of a CNN and an RNN is used to predict captions for images, and CNNs may be used in the field of sequence processing (like genetic sequences). Such uncommon/unconventional solutions can only be applied by a person who knows how a NN works in detail.
  • Intuition: Although strongly advised against by Andrew Ng in his course, I believe every ML programmer out there uses NN architectures and solutions which they believe will work best based entirely on gut feeling (rather than going through the tedious, methodical way of testing them on small-scale problems of the same type). In this scenario, data scientists are bound to get a more accurate model working, simply because they know the mathematics and the inner workings of a NN. They will have an intuitive understanding of how the model will work on the data to perform the task at hand.
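
As a small illustration of how quickly an exhaustive hyper-parameter grid grows (the hyper-parameters and candidate values below are made up for the example):

from itertools import combinations, product

# Hypothetical search space: 4 hyper-parameters with a few candidate values each.
grid = {
    'learning_rate': [1e-2, 1e-3, 1e-4],
    'batch_size':    [32, 64, 128],
    'dropout':       [0.0, 0.3, 0.5],
    'hidden_units':  [64, 128],
}

# 2^4 = 16 possible subsets of hyper-parameters one could decide to tune.
subsets = [s for r in range(len(grid) + 1) for s in combinations(grid, r)]
print(len(subsets))                         # 16

# Tuning them exhaustively means a full training run per value combination.
print(len(list(product(*grid.values()))))   # 3 * 3 * 3 * 2 = 54 full trainings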

These, I feel, are some of the places where a data scientist holds an edge over practitioners. I may have missed some other edges that a data scientist may hold; you are free to edit them in.

",,user9947,,,,9/6/2018 9:32,,,,1,,,,CC BY-SA 4.0 7851,2,,7838,9/6/2018 9:47,,7,,"

Over the years, many people attempted to define artificial intelligence. A lot of those definitions are summed up by Stuart Russell and Peter Norvig in their book Artificial Intelligence - A Modern Approach

The definitions of AI can be summarised as falling into the following categories:

  1. Those that address thought process and reasoning (how an AI thinks/reasons)
  2. Those that address behaviour (how an AI acts given what it knows)

Furthermore, the above 2 categories are further divided into definitions that:

I. assess the success of an AI (to do the above) based on its ability to replicate human performance

II. or an ability to replicate an ideal performance measure called 'rationality' (does it do the 'right' thing based on what it knows?)

I will cite you definitions that fit into each of the above categories:

  • 1.I. ""The [automation of] activities that we associate with human thinking, activities such as decision making, problem solving, learning.."" - Bellman 1978
  • 1.II. ""The study of the computations that make it possible to perceive, reason, and act."" - Winston, 1992
  • 2.I. ""The study of how to make computers do things at which, at the moment, people do better"" - Rich and Knight, 1991
  • 2.II. ""The study of the design of intelligent agents"" - Poole et al., 1998

In summary, AI is devoted to the creation of intelligent and rational machines that can make rational decisions and take rational actions.

I would suggest you read up on the Turing test, which Alan Turing proposed to test if a computer was intelligent. However, the Turing test has a few issues, because it is anthropomorphic.

When aeronautical engineers created the airplane, they didn't set the goal that planes should fly exactly like birds; rather, they started learning how lift forces are generated, based on the study of aerodynamics. Using this knowledge, they created planes.

Similarly, people in the AI world shouldn't put, IMHO, human intelligence as the standard to strive for, but, rather, we could use, say, rationality as a standard (amongst others).

",17965,,2444,,6/21/2019 16:48,6/21/2019 16:48,,,,3,,,,CC BY-SA 4.0 7853,1,,,9/6/2018 12:37,,2,153,"

I developed a CNN for image analysis. I have around 100K labeled images. I'm getting an accuracy of around 85% and a validation accuracy of around 82%, so it looks like the model generalizes reasonably well rather than overfitting. So, I'm playing with different hyper-parameters: number of filters, number of layers, number of neurons in the dense layers, etc.

For every test, I'm using all the training data, and it is very slow and time-consuming.

Is there a way to have an early idea about if a model will perform better than another?

",17960,,2444,,5/4/2019 17:10,10/21/2021 14:14,Is there a way of pre-determining whether a CNN model will perform better than another?,,4,4,,,,CC BY-SA 4.0 7854,1,7858,,9/6/2018 12:44,,4,587,"

In the Berkeley RL class CS294-112 Fa18 9/5/18, they mention the following gradient would be 0 if the policy is deterministic.

$$ \nabla_{\theta} J(\theta)=E_{\tau \sim \pi_{\theta}(\tau)}\left[\left(\sum_{t=1}^{T} \nabla_{\theta} \log \pi_{\theta}\left(\mathbf{a}_{t} \mid \mathbf{s}_{t}\right)\right)\left(\sum_{t=1}^{T} r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)\right)\right] $$

Why is that?

",17966,,2444,,1/1/2022 12:57,1/1/2022 13:00,Why is the derivative of this objective function 0 if the policy is deterministic?,,2,0,,,,CC BY-SA 4.0 7855,2,,7853,9/6/2018 13:35,,0,,"

I would suggest trying the following changes to the model.

  • Introduce a batch normalization layer in the model.
  • Try a batch size of 32-64.
  • Try a different architecture, like VGG, ResNet, etc.

There is no foolproof answer to this question, but you can get the best results by trying some known strategies.

",3773,,,,,9/6/2018 13:35,,,,1,,,,CC BY-SA 4.0 7856,1,,,9/6/2018 13:42,,2,153,"

One way of ranking human intelligence is based on IQ tests. But how can we compare the intelligence of AI systems?

For example, is there a test that tells me that a spam filter system is more intelligent than a self-driving car, or can I say that a chess program is more intelligent than AlphaGo?

",7681,,2444,,4/1/2020 14:44,4/1/2020 14:44,How can we compare the intelligence of AI systems?,,1,0,,,,CC BY-SA 4.0 7857,2,,7854,9/6/2018 14:22,,3,,"

Well, I'd rather comment, but I don't yet have that privilege, so here are some remarks.

First, having a deterministic policy inside the log would create trivial terms (the log of a probability of 1 is 0).

Secondly, for me, in policy gradient methods, it makes no sense to have a deterministic policy during the optimization, because you want to explore the space of weights. In my experience, you only set the policy to deterministic (in a PG method) when you're done with the optimization and you want to test your network.

",17759,,17759,,9/6/2018 14:55,9/6/2018 14:55,,,,0,,,,CC BY-SA 4.0 7858,2,,7854,9/6/2018 14:44,,12,,"

Here is the gradient that they are discussing in the video:

$$\nabla_{\theta} J(\theta) \approx \frac{1}{N} \sum_{i=1}^N \left( \sum_{t=1}^T \nabla_{\theta} \log \pi_{\theta} (\mathbf{a}_{i, t} \vert \mathbf{s}_{i, t}) \right) \left( \sum_{t = 1}^T r(\mathbf{s}_{i,t}, \mathbf{a}_{i, t}) \right)$$

In this equation, $\pi_{\theta} (\mathbf{a}_{i, t} \vert \mathbf{s}_{i, t})$ denotes the probability of our policy $\pi_{\theta}$ selecting the actions $\mathbf{a}_{i, t}$ that it actually ended up selecting in practice, given the states $\mathbf{s}_{i, t}$ that it encountered during the episode that we're looking at.

In the case of a deterministic policy $\pi_{\theta}$, we know for sure that the probability of it selecting the actions that it did select must be $1$ (and the probability of it selecting any other actions would be $0$, but such a term does not show up in the equation). So, we have $\pi_{\theta} (\mathbf{a}_{i, t} \vert \mathbf{s}_{i, t}) = 1$ for every instance of that term in the above equation. Because $\log 1 = 0$, this leads to:

\begin{aligned} \nabla_{\theta} J(\theta) &\approx \frac{1}{N} \sum_{i=1}^N \left( \sum_{t=1}^T \nabla_{\theta} \log \pi_{\theta} (\mathbf{a}_{i, t} \vert \mathbf{s}_{i, t}) \right) \left( \sum_{t = 1}^T r(\mathbf{s}_{i,t}, \mathbf{a}_{i, t}) \right) \\ % &= \frac{1}{N} \sum_{i=1}^N \left( \sum_{t=1}^T \nabla_{\theta} \log 1 \right) \left( \sum_{t = 1}^T r(\mathbf{s}_{i,t}, \mathbf{a}_{i, t}) \right) \\ % &= \frac{1}{N} \sum_{i=1}^N \left( \sum_{t=1}^T \nabla_{\theta} 0 \right) \left( \sum_{t = 1}^T r(\mathbf{s}_{i,t}, \mathbf{a}_{i, t}) \right) \\ % &= \frac{1}{N} \sum_{i=1}^N 0 \left( \sum_{t = 1}^T r(\mathbf{s}_{i,t}, \mathbf{a}_{i, t}) \right) \\ % &= 0 \\ \end{aligned}

(i.e. you end up with a sum of terms that are all multiplied by $0$).

",1641,,2444,,1/1/2022 13:00,1/1/2022 13:00,,,,3,,,,CC BY-SA 4.0 7861,1,7864,,9/6/2018 17:53,,6,273,"

In theoretical computer science, there is a massive categorization of the difficulty of various computational problems in terms of their asymptotic worst-case computational complexity. There doesn't seem to be any analogous analysis of what problems are ""hard for AI"" or even ""impossible for AI."" This is in some sense quite reasonable, because most research is focused on what can be solved. I'm interested in the opposite. What do I need to prove about a problem to show that it is ""not reasonably solvable"" by AI?

Many papers say something along the lines of

AI allows us to find real-world solutions to real-world instances of NP-complete problems.

Is there a theoretical, principled reason for saying this instead of ""... PSPACE-complete problems""? Is there some sense in which AI doesn't work on PSPACE-complete, or EXPTIME-complete, or Turing complete problems?

My ideal answer would be a reference to a paper that shows AI cannot be used to solve a particular kind of problem based on theoretical or statistical reasoning. Any answer exhibiting and justifying a benchmark for ""too hard for AI"" would be fine though (bonus points if the benchmark has a connection to complexity and computability theory).

If this question doesn't have an answer in general, answers about specific techniques would also be interesting to me.

",12732,,2444,,10/23/2021 16:01,10/23/2021 16:01,"What does ""hard for AI"" look like?",,1,0,,,,CC BY-SA 4.0 7862,2,,3972,9/6/2018 19:57,,2,,"

I'm not familiar with any existing, robust methods to generate such a dataset. Here are some thoughts though.

You propose using an MLP with a single hidden layer. That means we have two weight matrices, and two activation functions (one for hidden layer, one for output layer). Some notation:

  • $d$: dimensionality of input vectors
  • $n$: dimensionality of hidden layer
  • You mentioned binary output, so I'll assume that dimensionality is $2$
  • $W_n^{(1)} \in \mathbb{R}^{d \times n}$: the first weight matrix in the scenario where we're using $n$ hidden nodes
  • $W_n^{(2)} \in \mathbb{R}^{n \times 2}$: the second weight matrix in the scenario where we're using $n$ hidden nodes
  • $g$: first activation function
  • $h$: second activation function

Then, given an input vector $x \in \mathbb{R}^d$, our neural network will generate output $f_n(x) = h(W_n^{(2)} (g(W_n^{(1)}x)))$.

Now, in general we of course expect Neural Networks to only find a local optimum, but if you want a robust solution you'll want it to be able to handle the worst case, and the worst case for your ""adversarial"" task is when the Neural Network manages to find the global optimum. So, we'll assume it can find the global optimum.

Essentially, what you're looking for is a dataset $D_n$ containing a number of input vectors $x$, such that:

  1. $f_n(x) = h(W_n^{(2)} (g(W_n^{(1)}x)))$ provides good / optimal results (after training to the global optimum)
  2. $f_{n - 1}(x) = h(W_{n - 1}^{(2)} (g(W_{n - 1}^{(1)}x)))$ provides poor results (even after training to the global optimum of this setup).

In other words, you want to find a collection of vectors $x$ such that it becomes impossible to find a collection of weights in the $n - 1$ case where $f_{n-1}(x)$ is a good approximation of $f_n(x)$. You want to make it impossible that $f_{n - 1}(x) \approx f_n(x)$.


Now this has turned into a clear mathematical problem. I'm not familiar with any established methods in mathematics to solve a problem like this, maybe there are though. My best guess at this point in time would be a procedure like the following:

  1. Generate random ""ground truth"" versions of the weight matrices $W^{(1)*}_{n}$ and $W^{(2)*}_{n}$ (just completely random matrices).
  2. Generate completely random input vectors $x$. Compute the corresponding ground truth labels as $f^*_n(x) = h(W_n^{(2)*} (g(W_n^{(1)*}x)))$.
  3. Hope that the neural network with $n$ hidden nodes can recover the ground truth weight matrices that were previously generated randomly.
  4. Hope that the neural network with $n - 1$ hidden nodes cannot find an accurate approximation.

In theory, the MLP with $n$ hidden nodes should be able to learn the exact ground truth function. In theory, under certain conditions, the MLP with $n$ hidden nodes should not be able to learn the exact ground truth function. I suspect those ""certain conditions"" would be that the rows/columns of the weight matrices should be linearly independent, which is likely with randomly generated matrices, but I'm not 100% sure on this. Even if it can't learn the exact ground truth, it may still be capable of learning an approximation... there may be ways to find upper bounds on how close such an approximation could get, but I'm not sure.

",1641,,,,,9/6/2018 19:57,,,,1,,,,CC BY-SA 4.0 7863,2,,7856,9/6/2018 20:03,,2,,"

There are different ways to compare different kinds of AI techniques.

As a starting point, be aware that ""AI System"" can mean an incredibly broad range of things. In popular culture, we usually think of a deployed system that uses AI techniques. These systems can only be compared on the basis of their performance, and their performance may have relatively little to do with AI itself (e.g. their behaviours might be more strongly affected by user interface decisions, not the AI techniques under the hood).

In contrast, AI researchers are usually more interested in comparing the performance of different AI algorithms at solving the ""AI-ish"" parts of the problem a fully developed system aims to solve. A common way to do this is with benchmark problems. For example, in machine learning it is common to compare two algorithms by running each of them on a commonly used dataset, and comparing the performance of the models they create. In AI Planning, it is common to issue planning challenges to the community, and compare the quality of the plans on several different axes (e.g. average wait times, maximum wait times, whether goals were accomplished, how long it took to create a plan, etc.).

There is no generally agreed upon way to compare techniques across different areas of AI, but a commonly adopted approach is the Turing Test. In the Turing Test, we care only about the ability of the system to mimic something like human intelligence. It's fair game to ask about planning problems, or learning problems, or other topics, so you could in some sense judge one technique to be better than another. However, most judgements made in the Turing Test are subjective, so it's not clear that it really solves the problem you posed.

",16909,,,,,9/6/2018 20:03,,,,3,,,,CC BY-SA 4.0 7864,2,,7861,9/6/2018 20:19,,5,,"

Nice Question!

This is a perennial topic of discussion among AI researchers. The short answer is ""we don't really know which topics are hard in general, but we do know which we haven't got good techniques for yet.""

Let's start by explaining why AI is not concerned with notions of computational complexity like NP-Completeness. AI researchers figured out in the 90's that most problems that are computationally hard in theory aren't actually hard in practice at all! For example, Boolean Satisfiability, the canonical NP-Hard problem, is known to have hard instances, but in practice, these almost never show up. Even when they do, we can usually get good approximate solutions with minimal computation time. Since many AI problems are reducible to SAT, whole areas of the field just use these approximation techniques and solvers. There's a good, if a bit old, survey here. Since 2008, things have only gotten better. Basically, NP-Hard stuff just isn't that hard. Worst-case complexity is therefore probably the wrong tool to guess at which problems are hard for AI.

At the other end of the scale, we have subjective complexity based on things like how ""large"" the problem is. This has proven to be pretty unreliable. One example is Go, which I was told would ""never be solved by AI"" as late as 2010. Clearly, we were completely wrong. We just didn't know which techniques to use yet. Another example is language. Rule-based AI'ers tried at it for decades with minimal success. Probabilistic methods have achieved essentially human-level performance in less time. If you asked a researcher in the 1970s if language was hard for AI, they'd have said yes, but they'd have been wrong. This has been closely related to the advances in computing hardware: techniques that seemed wasteful and slow 40 years ago are now entirely practical. Sometimes they turn out to solve problems really well.

Part of this ties into the issue that AI'ers don't really know or agree on what intelligence is, or what it means to solve a problem. Some AI'ers maintain that language really is hard, and that the systems we use now aren't really solving language, they're just making blind guesses that happen to be right a lot. Under that view, language is still hard. Fodor was a strong proponent of this view in the past, and his writings are a good place to read about it, but I've still heard people espouse views like this at AI conferences as recently as 2015.

Something that is usually considered hard right now is making a good general intelligence, one that integrates knowledge across many domains and could plausibly pass something like the Turing test. Yet, there has been some progress here (look at virtual assistants), and some AI'ers reckon this might not be as hard as we thought either.

So, basically, we don't know what's hard because we don't know when someone will come up with a new exciting technique that solves problems previously thought to be hard, we know that worst-case complexity isn't a good measurement system for ""intelligence"" requirements, and we don't know what intelligence really is.

",16909,,2444,,7/7/2019 13:49,7/7/2019 13:49,,,,1,,,,CC BY-SA 4.0 7865,1,7872,,9/6/2018 23:27,,13,6586,"

In a convolutional neural network, which layer consumes more training time: convolution layers or fully connected layers?

We can take the AlexNet architecture to understand this. I want to see the time breakdown of the training process. A relative time comparison is enough, so we can assume any fixed GPU configuration.

",17980,,2444,,5/19/2020 19:48,5/19/2020 19:48,Which layer in a CNN consumes more training time: convolution layers or fully connected layers?,,2,0,,,,CC BY-SA 4.0 7866,2,,7838,9/7/2018 1:15,,2,,"

Intelligence

A measure of the strength of a decision-making agent relative to other decision-making agents, in regard to a given task or set of tasks. The medium is irrelevant—intelligence is exhibited by both organic and intentionally created mechanisms. May also be the capability to solve a problem, as in the case of a solved game.

Artificial

Relates to the term artifact, a thing which is intentionally created. Typically this term has been used to connote physical objects, but algorithms created by humans are also regarded as artifacts.

The etymology is derived from the Latin words ars and faciō: ""To skillfully construct"", or, ""the art of making"".

Artificial Intelligence

  • Any decision-making agent that is skillfully (intentionally) constructed.

APPENDIX: The meaning of ""intelligence""

The original meaning of ""intelligence"" seems to be ""to acquire"", going back to the Indo-European root. See: intelligence (etymology); *leg/*leh₂w-

The OED's first definition of intelligence is not incorrect, extending the meaning to the acquisition of capability (demonstrable utility); it's just that the second definition is the older and more fundamental one: ""The collection of information of [strategic] value; 2.3 (archaic) Information in general; news.""

You can regard the universe as being composed of information, whatever form that information takes (matter, energy, states, relative positions, etc.). From the standpoint of an algorithm, this makes sense, since the only means it has to gauge the universe are percepts.

Take a flat text file. It may just be data, but you could try to execute it. If it actually runs, it might demonstrate utility at some task (for instance, if it is a minimax algorithm).

""Intelligence as a measure of utility"" is itself ""intelligence"" in the sense of information, specifically that information by which we measure intelligence, as a degree, relative to a task or to other intelligences.

",1671,,1671,,3/24/2019 0:09,3/24/2019 0:09,,,,1,,,,CC BY-SA 4.0 7867,1,,,9/7/2018 4:19,,2,765,"

I have an application where I want to find the locations of objects on a simple, relatively constant background (fixed camera angle, etc). For investigative purposes, I've created a test dataset that displays many characteristics of the actual problem.

Here's a sample from my test dataset.

Our problem description is to find the bounding box of the single circle in the image. If there is more than one circle or no circles, we don't care about the bounding box (but we at least need to know that there is no valid single bounding box).

For my attempt to solve this, I built a CNN that would regress (min_x, min_y, max_x, max_y), as well as one more value that could indicate how many circles were in the image.

I played with different architecture variations, but, in general, the architecture was a very standard CNN (3-4 ReLU convolutional layers with max-pooling in between, followed by a dense layer and an output layer with linear activation for the bounding box outputs, set to minimise the mean squared error between the outputs and the ground truth bounding boxes).

Regardless of the architecture, hyperparameters, optimizers, etc, the result was always the same - the CNN could not even get close to building a model that was able to regress an accurate bounding box, even with over 50000 training examples to work with.

What gives? Do I need to look at using another type of network, as CNNs are more suited to classification than to localisation tasks?

Obviously, there are computer vision techniques that could solve this easily, but due to the fact that the actual application is more involved, I want to know strictly about NN/AI approaches to this problem.

",17985,,2444,,1/29/2021 23:22,1/29/2021 23:22,How to architect a network to find bounding boxes in simple images?,,2,0,,,,CC BY-SA 4.0 7868,2,,7867,9/7/2018 6:34,,1,,"

There are some special CNN architectures which are designed exactly for the task you mention. The Detector library includes a collection of these architectures; this paper describes the Mask R-CNN network, which is designed for image segmentation tasks, in detail.

",2585,,,,,9/7/2018 6:34,,,,1,,,,CC BY-SA 4.0 7869,2,,7762,9/7/2018 7:00,,0,,"

There are many ways to approach this. One way to start would be to describe the problem using a formalism closer to reinforcement learning.

  • Output:

Aiming in any shooter-type game, as I recall, involves moving the mouse, so the output of your aimbot has two dimensions. Depending on the required accuracy, you can consider these two dimensions continuous, with a limited range, or, if you treat each pixel as an integer, you might be able to discretize your action space. (I assume mouse XY coordinates should be input to the game, not increments.)

  • Input:

You definitely need screen information. You can take the whole screen as input to a CNN, similarly to Deep Q-Learning for Atari.

  • Reward function

This might be tricky: your rewards need to be as dense as possible, but the only feedback you get from the game is that someone was shot. It might be enough, but this will definitely increase your training time.

  • Training data / Environment:

Your training environment is the game itself. Using a curriculum learning approach would probably make the training process more efficient.

You can also try an imitation learning approach, since I assume you are happy to provide expert training examples (in this case probably headshots in the game environment).

You can read more about how to apply reinforcement learning for games here. The Unity ML-Agents Library also includes sample tracking problems and their solutions.

",2585,,,,,9/7/2018 7:00,,,,0,,,,CC BY-SA 4.0 7871,2,,7832,9/7/2018 8:59,,2,,"

Ok, so I think I have a better understanding of this now.

Firstly, let's recall the main idea of PPO: staying close to the previous policy. It's the same idea as in TRPO, but the surrogate objective L is improved.

So, you want to make ""small but safe steps"". With the clipped surrogate objective, you don't give too much importance to promising actions. You learn that bad actions are bad, so you decrease their probability according to ""how bad"" they are. But for good actions, you only learn that they are ""a little bit good"", and their probability is only slightly increased.

This mechanism allows you to perform small but relevant updates of your policy.

Hope this helps someone :)

",17759,,,,,9/7/2018 8:59,,,,1,,,,CC BY-SA 4.0 7872,2,,7865,9/7/2018 11:31,,13,,"

NOTE: I did these calculations speculatively, so some errors might have crept in. Please inform me of any such errors so I can correct them.

In general, in any CNN, the largest share of training time goes into back-propagating errors in the fully connected layers (this depends on the image size). They also occupy the most memory. Here is a slide from Stanford about the VGG Net parameters:

You can clearly see that the fully connected layers contribute about 90% of the parameters, so they occupy most of the memory.

As far as training time goes, it somewhat depends on the size (pixels*pixels) of the image being used. In FC layers it is straightforward: the number of derivatives you have to calculate is equal to the number of parameters. As far as convolutional layers go, let's look at an example, taking the case of the 2nd layer: it has 64 filters of dimensions $(3*3*3)$ to be updated. The error is propagated from the 3rd layer. Each channel in the 3rd layer propagates its error to its corresponding $(3*3*3)$ filter. Thus $224*224$ pixels contribute about $224*224*(3*3*3)$ weight updates, and since there are $64$ such $224*224$ channels, the total number of calculations to be performed is $64*224*224*(3*3*3) \approx 87*10^6$ calculations.

Now let us take the last layer, of size $56*56*256$. It will pass its gradients to the previous layer. Each $56*56$ channel of pixels will update a $(3*3*256)$ filter, and since there are 256 such $56*56$ channels, the total number of calculations required is $256 * 56 * 56 * (3*3*256) \approx 1850 *10^6$ calculations.

So the number of calculations in a convolutional layer really depends on the number of filters and the size of the picture. In general, I have used the following formula to calculate the number of updates required for the filters in a layer; I have also assumed $stride = 1$, since it is the worst case:

$channels_{output} * (pixelOutput_{height} * pixelOutput_{width}) * (filter_{height} * filter_{width} * channels_{input})$
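
For instance, a small helper applying this formula to the two layers discussed above gives back-of-the-envelope counts (these are rough counts, not profiler measurements):

def conv_updates(out_channels, out_h, out_w, k_h, k_w, in_channels):
    # Number of weight-gradient contributions for one convolutional layer (stride 1).
    return out_channels * (out_h * out_w) * (k_h * k_w * in_channels)

# 2nd-layer example from above: 64 filters of 3*3*3 over a 224*224 output.
print(conv_updates(64, 224, 224, 3, 3, 3))    # ~87 million

# Last 56*56*256 layer example from above.
print(conv_updates(256, 56, 56, 3, 3, 256))   # ~1850 million

# For a fully connected layer the count is simply its parameter count,
# e.g. the first FC layer of VGG-16: 7*7*512 inputs to 4096 outputs.
print(7 * 7 * 512 * 4096)                     # ~103 million parameters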

Thanks to fast GPUs, we are easily able to handle these huge numbers of calculations. But in FC layers the entire weight matrix needs to be loaded, which causes memory problems; this is generally not the case for convolutional layers, so training convolutional layers remains easy. Also, all of these have to be loaded into the GPU memory itself, and not the RAM of the CPU.

Also here is the parameter chart of AlexNet:

And here is a performance comparison of various CNN architectures:

I suggest you check out the CS231n Lecture 9 by Stanford University for better understanding of the nooks and crannies of CNN architectures.

",,user9947,,user9947,9/8/2018 9:14,9/8/2018 9:14,,,,0,,,,CC BY-SA 4.0 7873,1,7874,,9/7/2018 13:04,,3,790,"

I'm new to AI. I was told that depth-first search is not an optimal search algorithm since ""it finds the 'leftmost' solution, regardless of depth or cost"". Therefore, does this mean that, in practice, when we implement DFS, we should always have a check to stop the search when it finds the first solution (i.e. the leftmost one)?

",17996,,2444,,11/21/2019 16:41,11/21/2019 16:41,Does depth-first search always stop when it has found the leftmost solution?,,1,0,,,,CC BY-SA 4.0 7874,2,,7873,9/7/2018 13:12,,2,,"

One of the more standard assumptions when first introducing new students to search algorithms (like Depth-First Search, Breadth-First Search which you've also likely heard about or will hear about soon, etc.) is indeed that our goal is to find some sort of solution, and only find one.

If our intention is to find just a single solution, then yes, you will need to check at every node whether that is a solution node, and you can stop the search process once you've found one.
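
As an illustrative sketch (a generic recursive DFS over an arbitrary successor function, not tied to any particular problem), stopping at the first solution looks like this:

def dfs_first_solution(node, is_goal, successors, path=None):
    # Return the first (leftmost) path to a goal node, or None if there is none.
    path = (path or []) + [node]
    if is_goal(node):           # check every node we reach
        return path             # stop as soon as a solution is found
    for child in successors(node):
        result = dfs_first_solution(child, is_goal, successors, path)
        if result is not None:
            return result       # propagate the first solution upwards, no further search
    return None

# Tiny example: search a dictionary-encoded tree for the node 'G'.
tree = {'A': ['B', 'C'], 'B': ['D', 'G'], 'C': ['G'], 'D': [], 'G': []}
print(dfs_first_solution('A', lambda n: n == 'G', lambda n: tree.get(n, [])))
# -> ['A', 'B', 'G']  (the leftmost solution; the one below 'C' is never explored)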

In practice, there can be all kinds of variants of this idea. Maybe in a different case you are interested in finding all solutions, rather than a single one; in such a case you would naturally not stop the search process after finding the first one, but continue searching.

So, to conclude, it really depends on exactly what you want, why are you using a search algorithm. If you only care about finding a solution, you can stop when you have one.

",1641,,,,,9/7/2018 13:12,,,,0,,,,CC BY-SA 4.0 7875,1,,,9/7/2018 13:31,,13,710,"

The term Singularity is often used in mainstream media for describing visionary technology. It was introduced by Ray Kurzweil in a popular book The Singularity Is Near: When Humans Transcend Biology (2005).

In his book, Kurzweil gives an outlook to a potential future of mankind which includes nanotechnology, computers, genetic modification and artificial intelligence. He argues that Moore's law will allow computers an exponential growth which results in a superintelligence.

Is the technological singularity something that is taken seriously by A.I. developers or is this theory just a load of popular hype?

",17978,,2444,,12/12/2021 17:32,12/12/2021 17:32,Is the singularity something to be taken seriously?,,4,1,,,,CC BY-SA 4.0 7877,1,7878,,9/7/2018 14:42,,6,1461,"

I'm confused regarding a specific detail of MCTS.

To illustrate my question, let's take the simple example of tic-tac-toe. After the selection phase, when a leaf node is reached, the tree is expanded in the so-called expansion phase. Let's say a particular leaf node has 6 children. Would the expansion phase expand all the children and run the simulation on them? Or would the expansion phase only pick a single child at random and run simulation, and only expand the other children if the selection policy arrives at them at some later point?

Alternatively, if both of these are accepted variants, what are the pros/cons of each one?

",12201,,2444,,11/19/2019 16:24,11/19/2019 16:24,Which nodes are expanded in the expansion phase of MCTS?,,1,0,,,,CC BY-SA 4.0 7878,2,,7877,9/7/2018 15:56,,3,,"

By far the most common (and likely also the most simple / straightforward) implementation is to expand exactly one node in the Expansion Phase; specifically, the node corresponding to the very first state selected (semi-)randomly by the Play-Out Phase. This is also pretty much the bare minimum you have to do if you want any form of tree growing at all (which you do).
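
As a rough sketch of this standard variant (the Node class and the game interface below are hypothetical, and this is not a complete MCTS implementation), exactly one child is added per iteration and the play-out then starts from it:

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0

def expand_one(leaf, legal_moves, apply_move):
    # Standard expansion: add a single child for one untried move and return it.
    # legal_moves(state) and apply_move(state, move) are assumed to be provided
    # by the game implementation.
    tried = {child.state for child in leaf.children}
    for move in legal_moves(leaf.state):
        next_state = apply_move(leaf.state, move)
        if next_state not in tried:
            child = Node(next_state, parent=leaf)
            leaf.children.append(child)
            return child        # exactly one new node; the play-out starts from here
    return leaf                 # node was already fully expanded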


Other variants are possible too, but are much less common. The variant you suggest in the question is to expand all the children of the final node encountered during the Selection Phase, and run a Play-Out for all of them. I am not familiar with any literature on such a strategy really, never tried that myself. Intuitively, I would expect it to perform very similarly, perhaps slightly worse. Essentially what this would do is that it moves the ""behaviour"" of the search algorithm slightly more towards Breadth-First Search behaviour, rather than Best-First search behaviour. You spend a bit less of your computation time in the Selection Phase, because every Selection Phase is followed up by for example 6 (or whatever branching factor you have) Play-Outs instead of just a single one. On average I'd expect this to be slightly worse, because the Selection Phase is the primary source of the ""Best-First Search"" behaviour of the algorithm. I certainly don't expect a change like this to cause a large difference in performance though, if any. It will also likely be domain-dependent; worse in some cases, better in other cases.


A different variant that I did once use myself is to expand every node in the complete line of play followed by the Play-Out phase. You can visualize this as a very ""thin"", but ""deep"" expansion, whereas your suggestion discussed above would be visualized as a ""shallow"" but ""wide"" expansion (and the conventional expansion strategy of a single node would be ""thin"" and ""shallow"").

For this strategy, it is much easier to clearly define the advantages and disadvantages that it has in comparison to the standard strategy. The main advantage is that you retain more information from your Play-Outs, you throw less information away. This is because the Backpropagation phase, after the Play-Out terminates, can only store information in nodes that exist. If you immediately expand the complete line of play followed in the Play-Out Phase, you can store the result (the evaluation in the terminal state) in all of those nodes. If you don't expand the complete Play-Out (e.g. only expand the very first node), you'll have to ""skip"" all of those nodes which you didn't expand yet (they don't exist), and you can't store results in there yet.

The main disadvantage of this approach is that it requires more memory, the tree grows a lot more quickly.

I would personally recommend this approach if you have very strict limitations on computation time, if you expect to be able to only run very few iterations of the MCTS algorithm. For example, I personally used this in my General Video Game AI agent; this is a real-time game where you have to make decisions every ~40 milliseconds. In such a low amount of time, you cannot run many MCTS simulations. This means that:

  1. You do not expect to run out of memory, even if you grow your tree very quickly, so the increased memory requirements become a non-issue.
  2. Due to the low expected number of iterations, it is extremely important to retain as much information as possible, not throw any information away. If we can't run many simulations, we want to make sure to squeeze every little bit of information we can out of each of them.

For contrast, if you're developing an agent to play a board game, and it has in the order of multiple minutes of thinking time per turn, the standard approach of only expanding a single node per Expansion Phase becomes a lot more appealing. If you're capable of running tens of thousands of iterations or more, it really doesn't hurt if you ""forget"" about a little bit of information deep down in the tree. The risk of running out of memory also becomes a lot more serious if you're running many iterations, so you don't want to grow the tree too quickly.

",1641,,,,,9/7/2018 15:56,,,,2,,,,CC BY-SA 4.0 7879,1,8905,,9/7/2018 17:14,,3,123,"

Object tracking is finding the trajectory of each object in consecutive frames. Human tracking is a subset of object tracking which considers only humans.

I've seen many papers that divide tracking methods into two parts:

  1. Online tracking: Tracker just uses current and previous frames.
  2. Offline tracking: Tracker uses all frames.

All of them mention that online tracking is suitable for autonomous driving and robotics, but I don't understand this part. What are the applications of object/human tracking in autonomous driving?

Do you know some related papers?

",10051,,2444,,3/22/2019 14:55,3/23/2019 22:05,What are applications of object/human tracking in autonomous cars?,,2,6,,,,CC BY-SA 4.0 7880,1,7883,,9/7/2018 22:39,,4,955,"

Is there an accepted way in NLP to parse conjunctions (and/or) in a sentence?

By following the example below, how would I parse

I drink orange juice if its the weekend or if it's late and I'm tired.

into

it's the weekend

and

it's late

and

I'm tired

?

Implying an action will be taken when one of the above elements at the 1st level of depth is true.

I know when I hear the sentence that it means ""its the weekend"" OR (""it's late"" AND ""I'm tired""), but how could this be determined computationally?

Can an existing python/other library do this?

",10623,,2444,,6/2/2020 23:14,6/2/2020 23:14,How to parse conjunctions in natural language processing?,,1,0,,,,CC BY-SA 4.0 7881,1,,,9/7/2018 23:43,,4,939,"

Connect6 is an example of a game with a very high branching factor. It is about 45 thousand, dwarfing even the impressive Go.

Which algorithms can we use on games with such high branching factors?

I tried MCTS (soft rollouts, counting a ply as placing one stone), but it does not even block the opponent, due to the high branching factor.

In the case of Connect6, there are stronger AIs out there, but they aren't described in any research papers that I know of.

",18006,,2444,,1/2/2022 10:05,1/2/2022 10:05,Which algorithms can we use on games with high branching factors (e.g. Connect6)?,,1,1,,,,CC BY-SA 4.0 7882,1,,,9/8/2018 0:05,,2,35,"

Formal semantics of natural language perceives sentences as logical expressions. Full paragraphs and even stories of natural language texts are researched and formalized using discourse analysis (Discourse Representation Theory is one example). My question is: is there a research trend that applies the notion of ""discourse"" to images, sounds and even animation? Is there such a notion as ""visual discourse""?

Google gives very few, older research papers, so maybe the field exists, but it uses different terms and Google cannot relate those terms to my keyword ""visual discourse"".

Basically, there are visual grammars and other pattern-matching methods that can discover objects in a picture and relate them. But one should be able to read a whole story from the picture (or musical piece, or multimedia content), and I imagine that such reading could be researched as multimedia discourse analysis. But there is no work under such terms. How is it done, and what is it called, in reality?

",8332,,8332,,9/8/2018 0:11,9/8/2018 0:11,VIsual/musical/multimedia discourse (analysis) - are there such notions?,,0,1,,,,CC BY-SA 4.0 7883,2,,7880,9/8/2018 1:37,,1,,"

This does not seem easy for NLP. I doubt that state-of-the-art NLP tools can reliably determine the correct hierarchical structure of independent clauses. Examples below.

The Berkeley parser gets it basically right, in the sense that it puts its late and I'm tired in parallel, and the two of them together in parallel with the weekend. But it is still not perfect (the weekend should be in the same subtree as It's, rather than with if its late and I'm tired).

The Stanford parser, which is available in Python (NLTK), incorrectly parsed I'm tired to the same level as I drink orange juice.
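
For reference, a constituency parse like the ones above can be obtained through NLTK's CoreNLP wrapper, assuming a Stanford CoreNLP server is running locally (the URL/port below are assumptions):

from nltk.parse.corenlp import CoreNLPParser

# Assumes a CoreNLP server was started separately, e.g. with:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
parser = CoreNLPParser(url='http://localhost:9000')

sentence = "I drink orange juice if its the weekend or if it's late and I'm tired."
tree = next(parser.raw_parse(sentence))
tree.pretty_print()  # inspect where the conjunction subtrees end up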

",15493,,,,,9/8/2018 1:37,,,,3,,,,CC BY-SA 4.0 7885,2,,7865,9/8/2018 7:12,,5,,"

A CNN contains convolution operations, while a DNN uses contrastive divergence for training, so a CNN is more complex in terms of Big-O notation.

For reference:

  1. See Convolutional Neural Networks at Constrained Time Cost for more details about the time complexity of CNNs

  2. See What is the time complexity of the forward pass algorithm of a neural network? and What is the time complexity for training a neural network using back-propagation? for more details about the time complexity of the forward and backward passes of an MLP

",18010,,2444,,5/19/2020 19:46,5/19/2020 19:46,,,,1,,,,CC BY-SA 4.0 7886,2,,7875,9/8/2018 10:31,,1,,"

In order to be on the same page, you should give references about ""technological singularity"", as it comprises multiple fields (mathematics, statistics, philosophy of science, epistemology, sociology, politics, economics, to mention a few).

Generally, when you consider concepts related to adj + AI (where adj = {weak, strong, full, narrow, ...}), the breadth of speculation is quite large and still in fieri, so as a developer (where by developer I assume you mean someone who works on coding-related problems, not a project manager at Google X and the like) I would not be worried, unless you are enjoying a cup of tea with your colleagues during a break.

",7988,,,,,9/8/2018 10:31,,,,0,,,,CC BY-SA 4.0 7887,2,,7881,9/8/2018 10:52,,5,,"

Typically, Monte-Carlo Tree Search (MCTS) actually is the go-to ""solution"" for such problems with large branching factors. I can understand that ""vanilla"" MCTS may still have unsatisfactory performance, but there is a plethora of extensions/enhancements available.

I don't have experience with the specific game you mentioned (Connect6), but from a quick look at how the game works, I imagine there will be a huge number of transpositions in the search tree (positions that are the same but can be reached through multiple different paths in the search tree). This will especially be very common if you treat placing one stone as a single ply; every ""combined move"" (of placing two stones in two positions subsequently) can be reached in two different ways, simply by switching the order in which the player places them. There has been research in using Transposition Tables with MCTS, so that may be a promising direction to look into.

I also suspect there will be great value in using Deep (Reinforcement) Learning approaches. If there is a large board on which to place stones, there will likely be many moves that are ""absurd"" and can easily be dismissed altogether by Deep Learning approaches (e.g., placing stones far away in a corner of the board where none of the ""action"" is going on). Vanilla MCTS, without Deep Learning extensions, will not be able to recognize and dismiss such absurd moves, and play them way too often (in Play-Out but also Selection phase due to the high branching factor). The most obvious source of inspiration here would be AlphaGo Zero.

Finally, there's definitely some published research on Connect6 AI (and even MCTS in Connect6). For example: Two-Stage Monte Carlo Tree Search for Connect6. You can likely also find more relevant research by checking that paper's list of References, and checking later papers on google scholar that cite this one.

",1641,,,,,9/8/2018 10:52,,,,0,,,,CC BY-SA 4.0 7888,2,,7875,9/8/2018 12:07,,10,,"

I can say that among AI researchers I interact with, it far more common to view it as wild speculation than as settled fact.

This is borne out by surveys of AI researchers, with 80% thinking strong forms of AI will emerge in ""more than 50 years"" or ""never"", and just a few percent thinking that such forms of AI are ""near"".

Software Developers are not the same as AI researchers, and I have found the Singularity myth to be much more widespread among developers. It has a nice ring to it: Computers keep getting faster, at some point they'll be faster than brains, at that point we just simulate brains. Soon after, we simulate something better than brains.

I suspect that the reasons AI researchers are less optimistic are rooted in the fact that we still don't have a good understanding of human intelligence, or even enough of an understanding of the brain to simulate it. For example, in the last two weeks we have discovered previously unknown types of brain cells. This gives the (correct) impression that even if we had a fast enough computer, we are not at all close to being able to accurately simulate a human brain. We don't really know what a human brain is.

Even if we did know that, simulations are necessarily lossy. We may not have good simulation techniques. Even if we did have good techniques, we may simulate the brain and discover our simulation does not behave as expected for reasons that we don't understand. This is very probable when simulating new systems. In some sense, proponents of the Singularity resemble people predicting that weather control was near in the 1940s. After all, we could simulate simple weather patterns already then, and generate forecasts that sort of worked. How much more complex could it really be to generate perfect forecasts?

",16909,,,,,9/8/2018 12:07,,,,0,,,,CC BY-SA 4.0 7890,2,,3850,9/9/2018 4:51,,5,,"

Old question, but I thought it's worth one practical answer. I happened to stumble upon it right after looking at a guide on how to build such a neural network, demonstrating an echo of Python's randint as an example. Here is the final code, without detailed explanation, still quite simple and useful in case the link goes offline:

from random import randint
from numpy import array
from numpy import argmax
from pandas import concat
from pandas import DataFrame
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense

# generate a sequence of random numbers in [0, 99]
def generate_sequence(length=25):
    return [randint(0, 99) for _ in range(length)]

# one hot encode sequence
def one_hot_encode(sequence, n_unique=100):
    encoding = list()
    for value in sequence:
        vector = [0 for _ in range(n_unique)]
        vector[value] = 1
        encoding.append(vector)
    return array(encoding)

# decode a one hot encoded string
def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]

# generate data for the lstm
def generate_data():
    # generate sequence
    sequence = generate_sequence()
    # one hot encode
    encoded = one_hot_encode(sequence)
    # create lag inputs
    df = DataFrame(encoded)
    df = concat([df.shift(4), df.shift(3), df.shift(2), df.shift(1), df], axis=1)
    # remove non-viable rows
    values = df.values
    values = values[5:,:]
    # convert to 3d for input
    X = values.reshape(len(values), 5, 100)
    # drop last value from y
    y = encoded[4:-1,:]
    return X, y

# define model
model = Sequential()
model.add(LSTM(50, batch_input_shape=(5, 5, 100), stateful=True))
model.add(Dense(100, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
# fit model
for i in range(2000):
    X, y = generate_data()
    model.fit(X, y, epochs=1, batch_size=5, verbose=2, shuffle=False)
    model.reset_states()
# evaluate model on new data
X, y = generate_data()
yhat = model.predict(X, batch_size=5)
print('Expected:  %s' % one_hot_decode(y))
print('Predicted: %s' % one_hot_decode(yhat))

I've just tried it and it indeed works quite well! It took just a couple of minutes on my old, slow netbook. Here's my very own output, different from the link above; you can see the match isn't perfect, so I suppose the exit criterion is a bit too permissive:

...
 - 0s - loss: 0.2545 - acc: 1.0000
Epoch 1/1
 - 0s - loss: 0.1845 - acc: 1.0000
Epoch 1/1
 - 0s - loss: 0.3113 - acc: 0.9500
Expected:  [14, 37, 0, 65, 30, 7, 11, 6, 16, 19, 68, 4, 25, 2, 79, 45, 95, 92, 32, 33]
Predicted: [14, 37, 0, 65, 30, 7, 11, 6, 16, 19, 68, 4, 25, 2, 95, 45, 95, 92, 32, 33]
",18025,,,,,9/9/2018 4:51,,,,2,,,,CC BY-SA 4.0 7891,1,8069,,9/9/2018 5:11,,3,405,"

I want to generate images of children's drawings consistent with the developmental state of children of a given age. The training data set will include drawings made by real children in a school setting. The generated images will be used for developmental analysis.

I have heard that Generative Adversarial Networks are a good tool for this kind of problem. If this is true, how would I go about applying a GAN to this challenge?

",18027,,4302,,9/13/2018 8:15,9/21/2018 0:19,How to use a Generative Adversarial Network to generate images for developmental analysis?,,2,8,,,,CC BY-SA 4.0 7892,2,,5982,9/9/2018 8:07,,0,,"

As Andreas has commented, this is a statistical language modelling problem (a probability distribution over a sequence of words). The important thing you need is a hash table mapping fixed-length word prefixes to the expected chains of words that complete them in your dictionary (a small sketch is given below the list).
Things that can make your prediction better:

  • Add better and more words to your dictionary.
  • Use text expansion.
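
A minimal sketch of the hash-table idea (a toy bigram model over a made-up corpus; a real system would use a much larger corpus and longer prefixes):

from collections import defaultdict, Counter

corpus = 'i am going home . i am going to work . i am happy .'.split()

# Hash table: fixed-length prefix (here a single word) -> counts of following words.
model = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word][next_word] += 1

def predict(prefix, k=3):
    # Return the k most likely next words after the given prefix.
    return [word for word, _ in model[prefix].most_common(k)]

print(predict('am'))     # ['going', 'happy']
print(predict('going'))  # ['home', 'to']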

What you are looking for will require a pinch of reinforcement learning too: you need to figure out a way to penalize and reward the predictions and then use the result in the future. Your case also requires you to build your own corpus, which is the hardest part. If your corpus is good, it will give better results.
This is the research paper that will help you a lot.

",3005,,,,,9/9/2018 8:07,,,,1,,,,CC BY-SA 4.0 7893,2,,3850,9/9/2018 8:38,,2,,"

Adding to what Demento said, the extent of randomness in the random number generation algorithm is the key issue. The following are some designs that can make an RNG weak:
Concealed Sequences
Suppose these are the last few sequences of characters generated (just an example; in practice a larger range is used):

lwjVJA
Ls3Ajg
xpKr+A
XleXYg
9hyCzA
jeFuNg
JaZZoA

Initially, you can't observe any pattern in the generated values, but decoding them from Base64 and converting to hex, we get the following:

9708D524
2ECDC08E
C692ABF8
5E579762
F61C82CC
8DE16E36
25A659A0

Now if we subtract each number from the previous one, we get this:

FF97C4EB6A
97C4EB6A
FF97C4EB6A
97C4EB6A
FF97C4EB6A
FF97C4EB6A

This indicates that the algorithm just adds 0x97C4EB6A to the previous value, truncates the result to a 32-bit number, and Base64-encodes the data.
The above is a basic example. Today's ML algorithms and systems are capable enough to learn and predict more complex patterns.
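
This analysis is easy to reproduce; here is a small sketch using the values above (padding is added so that each Base64 string decodes to 4 bytes):

import base64

samples = ['lwjVJA', 'Ls3Ajg', 'xpKr+A', 'XleXYg', '9hyCzA', 'jeFuNg', 'JaZZoA']

# Decode each Base64 string to a 32-bit integer ('==' padding restores the 4 bytes).
values = [int.from_bytes(base64.b64decode(s + '=='), 'big') for s in samples]
print([hex(v) for v in values])      # 0x9708d524, 0x2ecdc08e, ...

# Successive differences modulo 2**32 reveal the hidden increment.
diffs = [(b - a) % 2**32 for a, b in zip(values, values[1:])]
print([hex(d) for d in diffs])       # every entry is 0x97c4eb6a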

Time Dependency
Some RNG algorithms use time as the major input for generating random numbers, especially the ones created by developers themselves to be used within their application.

Whenever a weak RNG algorithm that merely appears to be stochastic is implemented, its output can be extrapolated forwards or backwards with perfect accuracy if a sufficient dataset is available.

",3005,,,,,9/9/2018 8:38,,,,0,,,,CC BY-SA 4.0 7895,1,7902,,9/9/2018 19:07,,8,300,"

Drawing parallels between Machine Learning techniques and a human brain is a dangerous operation. When it is done successfully, it can be a powerful tool for vulgarisation, but when it is done with no precaution, it can lead to major misunderstandings.

I was recently attending a conference where the speaker described Experience Replay in RL as a way of making the net "dream". I'm wondering how true this assertion is. The speaker argued that a dream is a random addition of memories, just as experience replay is. However, I doubt the brain remembers its dreams or learns from them. What is your analysis?

",17759,,2444,,1/31/2021 12:58,1/31/2021 12:58,Is Experience Replay like dreaming?,,1,0,,,,CC BY-SA 4.0 7896,1,,,9/9/2018 20:31,,9,981,"

In the homework for the Berkeley RL class, problem 1, it asks you to show that the policy gradient is still unbiased if the baseline subtracted is a function of the state at time step $t$.

$$ \triangledown _\theta \sum_{t=1}^T \mathbb{E}_{(s_t,a_t) \sim p(s_t,a_t)} [b(s_t)] = 0 $$

I am struggling through what the first step of such a proof might be.

Can someone point me in the right direction? My initial thought was to somehow use the law of total expectation to make the expectation of $b(s_t)$ conditional on $T$, but I am not sure.

",18043,,2444,,6/10/2020 16:39,6/10/2020 16:42,Why is baseline conditional on state at some timestep unbiased?,,2,3,,,,CC BY-SA 4.0 7897,1,8067,,9/9/2018 20:33,,7,194,"

Is there research that employs realistic models of neurons? Usually, the model of a neuron for a neural network is quite simple as opposed to the realistic neuron, which involves hundreds of proteins and millions of molecules (or even greater numbers). Is there research that draws implications from this reality and tries to design realistic models of neurons?

In particular, the Rosehip neuron was recently discovered. Such a neuron has, so far, been found only in the human brain (and in no other species). Are there some implications for neural network design and operation that can be drawn by realistically modelling this Rosehip neuron?

",8332,,2444,,3/19/2019 21:49,3/19/2019 21:49,Is there research that employs realistic models of neurons?,,3,0,,,,CC BY-SA 4.0 7898,2,,7897,9/9/2018 21:03,,4,,"

It looks like you really have two questions here. I'll try to answer the first one, and you should think about making a separate question for the second.

There is research into using simulated models of biologically realistic neurons. While there are large projects like the Human Brain Project aimed at simulating human brains, there is also a lot of lower-level AI research. Spaun is an interesting system that got a lot of press a few years ago, and has continued to be developed since then. It uses realistic neurons to simulate several brain regions at once, creating a surprisingly general AI system that could perform many types of motor and vision tasks using the same basic design.

",16909,,,,,9/9/2018 21:03,,,,2,,,,CC BY-SA 4.0 7899,1,7931,,9/9/2018 22:20,,1,106,"

Image captioning is a hot research topic in the AI community. There are several image captioning models available for research use, such as NIC, Neural Talk 2, etc. But can these research models be used for commercial purposes? Or should we build much more complex, structured ones for commercial usage? Or can we make some improvements based on these models to fit business application scenarios? If so, what improvements should we make? Are there any existing commercial image captioning applications that can be referenced?

",14948,,,,,9/12/2018 6:23,How to build a commercial image captioning system?,,1,0,,,,CC BY-SA 4.0 7901,2,,1308,9/10/2018 3:35,,3,,"

Self-flight is already running on all commercial airlines, but only at cruising altitude. When you fly from London to New York, most of the time the pilots are monitoring the autopilot systems. Running AI at cruising altitude is arguably a simpler problem than self-driving cars. The system has been in place for years.

The real danger for airlines is taking off and landing. As you can imagine, the cost of a bad judgement could be quite serious. Anything could happen while the plane is still under the clouds. While our AI technology is improving, it will never replace human intervention, especially when the cost of a collision is very high. If you don't believe me, google how Facebook is hiring humans for reading ""fake news"". Facebook has publicly acknowledged AI will never be sufficient for them, so why would AI be good enough to replace well-trained pilots?

AI on commercial airplanes is already in place, but there is no possibility that it will ever take over completely. We will always need pilots who know how to work with the system and make important decisions.

Trained pilots + good AI assistance will make the world a better place.

",6014,,6014,,9/10/2018 3:41,9/10/2018 3:41,,,,1,,,,CC BY-SA 4.0 7902,2,,7895,9/10/2018 6:20,,8,,"

The speaker argued that a dream is a random addition of memories, just as experience replay.

The speaker is taking some liberties due to a general lack of scientific understanding of what dreams are. We don't even have strong consensus on why sleep is a necessary feature of animals, let alone what part dreaming plays in it. However, there are some widely-accepted theories, with supporting evidence, that dreams are part of a learning and memorisation process. Studies that manipulate sleep or dreaming have shown changes in the speed that skills are learned for example.

Experience replay in reinforcement learning is a far more precise and well-understood affair, whereby individual time steps that occurred in the past are visited and re-assessed in light of current knowledge about long-term value, at random. If dreams were really like experience replay as it is practiced in RL today, then they would consist of a random jumble of tiny seemingly inconsequential events strung together, and all taken very exactly from the events of the past day. Sometimes dreams do contain content like this, but typically the content is far more varied.

Taken with a large dose of artistic license, then yes, the speaker is referring to real theories and conjectures about dreaming, that do have scientific support. Although it is equally good to draw parallels between dreams and a higher-level management of the memory or experience replay data - which items to replay, and which to keep, depending on what is salient about the information. For instance, there is good evidence that dreams help filter what is forgotten, and also evidence that events associated with strong emotional state are more likely to feature in dreams.

It is important to separate the speaker's analogy from any suggestion that a current reinforcement learning agent has a subjective experience. We are still a long way away from anything like that, and other similar use of a dreaming metaphor in machine learning - e.g. ""Deep Dream"" - is equally not an assertion that the devices are having an experience of any kind.

",1847,,1847,,9/10/2018 9:24,9/10/2018 9:24,,,,0,,,,CC BY-SA 4.0 7903,1,,,9/10/2018 7:47,,1,65,"

I have an average laptop.

How can I connect specialized AI neural network processors (say, Nvidia or Intel Nervana https://venturebeat.com/2018/05/23/intel-unveils-nervana-neural-net-l-1000-for-accelerated-ai-training/) to the laptop? Should I buy some external motherboard or even a server unit with NN processors inside, or is there a more lightweight solution available, like an external HDD?

",8332,,8332,,9/10/2018 11:02,9/10/2018 11:02,How to connect AI neural network processor to laptop?,,1,2,,,,CC BY-SA 4.0 7905,2,,7903,9/10/2018 10:59,,2,,"

Search for deep learning boxes on the internet. You will have to think hard in order to make a good and cost-effective one.

But there are a few people who have made one of their own and shared their experiences.

Here are some medium articles:

Building your own Deep Learning dream machine

Building a 270* Teraflops Deep Learning Box for Under $10,000

",16920,,,,,9/10/2018 10:59,,,,0,,,,CC BY-SA 4.0 7906,2,,7897,9/10/2018 11:18,,3,,"

It is true that current machine learning is based on treating neurons as components in the whole complex mesh of neurons. The focus is more on the architecture rather than on understanding or imitating its basic building block, i.e. the neuron, more closely.

Anirban Bandhopadhyay is a researcher who has studied how harmonics change the memory elements and decision-making power of microtubules inside neurons.

Here is a snippet of him explaining, and trying to see, what exactly computation is and how the brain does computation.

How does the Brain Act?

",16920,,,,,9/10/2018 11:18,,,,0,,,,CC BY-SA 4.0 7907,1,7909,,9/10/2018 12:24,,3,134,"

I'm trying to learn AI and thinking to apply it to our system. We have an application for the translation industry. What we are doing now is the coordinator $C$ assigns a file to a translator $T$. The coordinator usually considers these criteria (but not limited to):

  • the deadline of the file and availability of a translator
  • the language pair that the translator can translate
  • has the translator already reached his target? (maybe we can give the file to other translators so they can reach their targets)
  • the difficulty level of the file for the translator (basic translation, medical field, IT field)
  • accuracy of translator
  • speed of translator

Given the above, is it possible to make a recommendation to the coordinator as to whom she can assign a particular file?

What are the methods/topics that I need to research?

(I'm considering javascript as the primary tool, and maybe python if javascript will be more of a hindrance in implementation.)

In addition to suggesting a translator, we are also looking into suggesting the "deadline of the translator". Basically, we have "deadline of the customer" and "deadline of the translator"

The reason for this is that, if the translators are occupied throughout the day, it makes sense to suggest a busy translator but allow him to finish the file by the next day.

",18053,,2444,,1/23/2021 3:19,1/23/2021 3:19,What AI technique should I use to assign a person to a task?,,1,0,,,,CC BY-SA 4.0 7909,2,,7907,9/10/2018 14:47,,4,,"

What you have could be well described as a Task Allocation problem, which is studied as part of the planning subfield of AI. Chapters 10 & 11 of Russell & Norvig provide a good overview of this area, although I think they don't talk too much about Task Allocation in particular.

There are two basic approaches to this problem: centralized approaches, and decentralized approaches.

In centralized approaches, the properties of each task (or sub-task) and the skills of each processing entity are recorded in a central database. The task is phrased as an optimization problem. For example, given the skills of the processors and the tasks' types, find the schedule that minimizes average processing time (or cost, or usage of rare-resource types, or whatever you're interested in). Common approaches include phrasing the optimization task as a linear-programming problem; phrasing the problem as a graph and using something like the graphplan algorithm; or phrasing the problem as a constraint satisfaction problem and using some kind of heuristic-guided local search.
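
As an illustration of the centralized view, here is a minimal sketch using the Hungarian algorithm from SciPy; the cost matrix is entirely hypothetical and would, in your case, be computed from deadlines, language pairs, targets, difficulty, accuracy, and speed:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical costs: rows are files, columns are translators; lower is better.
    cost = np.array([
        [2.0, 5.5, 9.0],   # file 0
        [4.0, 1.5, 6.0],   # file 1
        [8.0, 7.0, 2.5],   # file 2
    ])

    files, translators = linear_sum_assignment(cost)  # optimal one-to-one assignment
    for f, t in zip(files, translators):
        print('Assign file', f, 'to translator', t, 'at cost', cost[f, t])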

There are all kinds of other more modern techniques too. I'm not aware of a survey paper for translation tasks in particular, but there are lots of examples in robotics and distributed computing.

Although good AI techniques exist for scheduling the task, they are predicated on being able to quantify the tasks' properties and the abilities of the agents, and on the translators accepting the decisions of the system. If you want an interactive system, you may need to look at techniques from Natural Language Processing. The work on Mixed Initiative Scheduling Systems might also be relevant if you have to go that route.

",16909,,2444,,1/23/2021 3:13,1/23/2021 3:13,,,,0,,,,CC BY-SA 4.0 7910,2,,7794,9/10/2018 20:27,,1,,"

Approaches

There are two main approaches to detecting any human readable representation of a discrete quantity within text.

  1. Detect well known and stable patterns in the input stream and by adjacency determine the output stream.
  2. Window through the text in the input stream and directly detect the quantities.

There are other approaches, as well as hybrids of these two (or of one of these and the other approaches), but these two are theoretically the most straightforward and the most likely to produce both reliability and accuracy.

Re-entrant Learning

Whether the training involves re-entrant learning techniques, such as reinforcement, is a tangential issue that this answer will not address, but know that whether all training is solely a deployment component or whether adaptation and/or convergence occurs in real time is an architectural decision to be made.

Practical Concerns

Practically, the outputs of each recognition are as follows.

  • Starting index
  • Ending index
  • Integer year or null
  • Integer day of year or null
  • Integer hour in military time or null
  • Minute or null
  • Second or null
  • Time zone or null
  • Probability the recognition unit was correctly identified
  • Probability the recognition produced accurate results

Also practically, the input must either be from within one particular locale's norms in terms of

  • Calendar,
  • Time,
  • Written language,
  • Character encoding, and
  • Collation,

... or ...

  • The learning must occur using training sets that include the locales that will be encountered during system use

... or ...

  • Much of the locale specific syntax must be normalized to a general date and time language such as this:

    जनवरी --> D_01

    Enero --> D_01

    Janúar --> D_01

so that the Hindi, Filipino, and Icelandic names for the first month of the year enter the artificial network as the same binary pattern.
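
A minimal sketch of that normalization step in Python (the D_xx token scheme and the tiny month dictionary are assumptions; a real system would cover all months, weekdays, and every locale expected in use):

    import re

    MONTHS = {'january': 'D_01', 'jan': 'D_01', 'enero': 'D_01',
              'janúar': 'D_01'}  # extend per month, locale, and script

    month_re = re.compile(r'\b(' + '|'.join(map(re.escape, MONTHS)) + r')\b',
                          re.IGNORECASE)

    def normalize(text):
        # Replace each recognized month name with its locale-independent token,
        # so 'Enero' and 'January' enter the network as the same binary pattern.
        return month_re.sub(lambda m: MONTHS[m.group(0).lower()], text)

    print(normalize('Meeting on 3 Enero 2018'))  # -> 'Meeting on 3 D_01 2018'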

Date and Time Specifically

In the case of 1. above, which is semi-heuristic in nature, and assuming that the locale is entirely en-US.utf-8, the CASE INSENSITIVE patterns for a PCRE library or equivalent to use as a search orientation heuristic include the following.

(^|[^0-9a-z])((19|20|21)[0-9][0-9])([^0-9a-z]|$)
(^|[^0-9a-z])(Mon|Monday|Tue|Tues|Tuesday|Wed|Wednesday|Thu|Thur|Thurs|Thursday|Fri|Friday|Sat|Saturday|Sun)([^0-9a-z]|$)
(^|[^0-9a-z])(Jan|January|Feb|February|Mar|March|Apr|April|May|Jun|June|Jul|July|Aug|August|Sep|Sept|September|Oct|October|Nov|November|Dec|December)([^0-9a-z]|$)
(^|[^0-9a-z])(Today|Yesterday|Tomorrow)([^0-9a-z]|$)
(^|[^0-9])([AP]M|[AP][.]M[.]|Noon|Midnight)([^0-9a-z]|$)
(^|[^a-z])(0?[1-9])(:[0-5][0-9]){1,2}([^a-z]|$)

There should be others for time, hyphenated or slash delimited dates, or time zone.

The positions and normalized encoding of these date and time artifacts are then substituted into the artificial network inputs instead of the original text in the stream, reducing redundancy and improving both the speed of training and the resulting accuracy and reliability of recognition.

In the case of 2. above, the entire burden of recognition is left to the artificial network. The advantage is less reliance on date and time conventions. The disadvantage is a much larger burden placed on training data variety and training epochs, meaning a much higher burden on computing resources and the patience of the project's stakeholders.

Windowing

An overlapping windowing strategy is necessary. Unlike FFT spectral analysis in real time, the windowing must be rectangular, because the size of the window is the width of the input layer of the artificial network. Experimenting with the normalization of the input (that is, with how the text and the date and time components are encoded as they enter the input layer) can greatly vary the results in terms of training speed, recognition accuracy, reliability, and adaptability to varying statistical distributions of date and time instances and relationships.

",4302,,,,,9/10/2018 20:27,,,,0,,,,CC BY-SA 4.0 7911,1,7913,,9/10/2018 21:25,,1,938,"

I would like to create a neural network, which, given the training data (e.g. 58, 2), outputs a non-binary number (e.g. 100). Perhaps I am not searching for the correct thing, but all the examples I have found have shown classifiers using a sigmoid function (range of 0 to 1). I am looking for something that would output non-binary numbers.

",18070,,2444,,12/20/2021 23:56,12/20/2021 23:56,How would I go about creating a neural network that outputs a non-binary number?,,1,0,,,,CC BY-SA 4.0 7912,2,,5454,9/10/2018 21:56,,1,,"

So my question now is, after the training is complete, which of the two networks mu or mu' should be used for making predictions?

After training is ""complete"", use mu, i.e., the online network, because this is the most up-to-date actor network you have. In the ideal case, mu' will be equal to mu; if your training is truly complete, mu = mu'.

Equivalently to the training phase I suppose that mu should be used without the exploration noise but since it is mu' that is used during the training for predicting the ""true"" (unnoisy) action for the reward computation, I'm apt to use mu'.

In the training phase, you must use exploration noise when mu maps states to actions. The actor is a policy-gradient method, i.e. it is based on policy iteration, so it maps directly to an action every time. If no exploratory noise is added, it will map to the same value every time, unless you update your network weights. What this means is that, given some states s, without exploration noise the actor will always output the same actions and never explore.
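
As a minimal sketch (not the original DDPG code), this is the difference between action selection during training and during prediction; mu stands for the online actor, and the Gaussian noise is a simplifying assumption (the DDPG paper uses Ornstein-Uhlenbeck noise):

    import numpy as np

    def select_action(mu, state, training, noise_scale=0.1):
        action = mu(state)   # deterministic output of the online actor
        if training:
            # Exploration noise is only added while training.
            action = action + noise_scale * np.random.randn(*np.shape(action))
        return action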

mu' is ONLY used for the supervised learning portion of DDPG, where it stabilizes training of 2 neural networks that essentially train off each other.

Imagine this problem: a worker and his supervisor in a factory. Both the worker and the supervisor knows nothing at the start (initialized neural networks). The worker's first action is to pick up a 200 lb box. The supervisor then positively rewards him. So from this, the worker learned that picking up heavy objects = good. The worker then picks up another heavy box, and this time the supervisor yells at him because the supervisor learned that its dangerous. So now the worker himself is confused, because he does not know if picking up heavy objects is good or bad because the supervisor himself changes his mind.

In DDPG, the actor and critic behaves the same way. So we introduce target networks to make it so both the actor and critic don't keep changing their minds, so they can actually learn things.

Or does this even matter? If the training was to last long enough shouldn't both versions of the actor have converged to the same state?

That is correct, but in reality, that is very rare.

",17706,,,,,9/10/2018 21:56,,,,0,,,,CC BY-SA 4.0 7913,2,,7911,9/10/2018 22:01,,3,,"

First of all, sigmoid does not output 0 or 1, it outputs any real number in the range between 0 and 1.

Furthermore, neural networks don't usually output binary values, unless the output layer uses the step function as an activation function (which is rare).

I'm not really sure if you want the neural network to be a classifier or regressor, but it sounds like you want a regressor.

Regression is when you are interested in the value of the output neuron(s) itself. A simple example is if you want the network to predict the sum of two input neurons.

If you want to change the network from a classifier to a regressor you should probably reduce the number of neurons in the output layer to 1, and change the activation function of that neuron from softmax to the identity function ($f(x)=x$; which is the same as no activation function at all).
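
As a minimal sketch of such a regressor using scikit-learn (the toy data is made up and far too small to learn anything useful; it only shows the shape of the problem):

    from sklearn.neural_network import MLPRegressor

    X = [[58, 2], [10, 5], [3, 7]]   # toy input pairs (made up)
    y = [100, 50, 21]                # toy real-valued targets (made up)

    # MLPRegressor applies the identity function at the output,
    # so it predicts real numbers instead of classes.
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X, y)
    print(model.predict([[58, 2]]))  # a single real number, not a class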

",17488,,2444,,12/20/2021 23:55,12/20/2021 23:55,,,,5,,,,CC BY-SA 4.0 7914,1,7920,,9/10/2018 22:31,,7,295,"

Would AlphaGo Zero become theoretically perfect with enough training time? If not, what would be the limiting factor?

(By perfect, I mean it always wins the game if possible, even against another perfect opponent.)

",18006,,18006,,9/11/2018 2:19,9/11/2018 14:46,Would AlphaGo Zero become perfect with enough training time?,,3,0,,,,CC BY-SA 4.0 7915,2,,7914,9/11/2018 2:16,,2,,"

Yes, AlphaGo Zero could become undeniably perfect.

It has won 100:0 against AlphaGo Lee (which won 4:1 against 18-time world champion (human) Lee Sedol) and 89:11 against AlphaGo Master (which won 60 straight online games against human professional Go players from 29 December 2016 to 4 January 2017).

From the official AlphaGo website:

"AlphaGo's 4-1 victory in Seoul, South Korea, in March 2016 was watched by over 200 million people worldwide. It was a landmark achievement that experts agreed was a decade ahead of its time, and earned AlphaGo a 9 dan professional ranking (the highest certification) - the first time a computer Go player had ever received the accolade.".

From AlphaGo's webpage: "AlphaGo's next move":

"We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems. Just like our first AlphaGo paper, we hope that other developers will pick up the baton, and use these new advances to build their own set of strong Go programs.

...

Since our match with Lee Sedol, AlphaGo has become its own teacher, playing millions of high level training games against itself to continually improve.".

",17742,,-1,,6/17/2020 9:57,9/11/2018 2:16,,,,1,,,,CC BY-SA 4.0 7916,1,7918,,9/11/2018 4:19,,2,110,"

I am currently reading the research paper Image Crowd Counting Using Convolutional Neural Network and Markov Random Field by Kang Han, Wanggen Wan, Haiyan Yao, and Li Hou.
I did not understand the following context properly:

We employ the residual network, which is trained on ImageNet dataset for image classification task, to extract the deep features to represent the density of the crowd. This pre-trained CNN network created a residual item for every three convolution layer to bring the layer of the network to 152. We resize the image patches to the size of 224 × 224 as the input of the model and extract the output of the fc1000 layer to get the 1000 dimensional features. The features are then used to train 5 layers fully connected neural network. The network's input is 1000 dimensional, and the number of neurons in the network is given by 100-100-50-50-1. The network's output is the local crowd count

Can anyone explain the above part in detail?

",14592,,1671,,9/11/2018 18:49,9/11/2018 18:49,"Clarification regarding ""Image Crowd Counting Using Convolutional Neural Network and Markov Random Field""",,1,0,,,,CC BY-SA 4.0 7917,2,,7829,9/11/2018 6:45,,0,,"

I got one paper on the integration of opinion mining in CRM, which helps to get an idea about the process involved.

",17058,,,,,9/11/2018 6:45,,,,0,,,,CC BY-SA 4.0 7918,2,,7916,9/11/2018 7:44,,3,,"

I will try to do it part by part:

We employ the residual network, which is trained on ImageNet dataset for image classification task, to extract the deep features to represent the density of the crowd.

If you look at figure 2, you can see that they use the neural network architecture ResNet. This is a deep network; here is the paper. It has good performance and does image classification.

This pre-trained CNN network created a residual item for every three convolution layer to bring the layer of the network to 152

If you are in layer k, it means this layer takes as input the output of the (k-3)th layer. See the paper; figure 5 explains it well without much explanation needed. Furthermore, ResNet has 3 different architectures with different numbers of layers, and they take the deepest one, the 152-layer-deep ResNet.

We resize the image patches to the size of 224 × 224 as the input of the model and extract the output of the fc1000 layer to get the 1000 dimensional features

The input of ResNet is images of size 224x224, so they need to resize the patches to fit the input requirements of ResNet. The output of ResNet is 1000-dimensional because ImageNet is a dataset of 1000 classes.

The features are then used to train 5 layers fully connected neural network. The network's input is 1000 dimensional, and the number of neurons in the network is given by 100-100-50-50-1.

Then they give the output of ResNet to their own network, which is 5 layers deep. See figure 2 of their paper. Obviously, the input layer has 1000 inputs because of the output of ResNet. The network has layers of 100, 100, 50, 50 and finally 1 neuron. See figure 2.

The network's output is the local crowd count

I don't think I need to explain it: they want only the number of people in the crowd, so they need only one output. This is obviously not a classification problem, but a regression problem.
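
To make the quoted architecture concrete, here is a minimal Keras sketch of the 5-layer fully connected regressor; the ReLU activations are an assumption on my part, since the excerpt does not state them:

    from tensorflow import keras

    # Maps the 1000-dimensional ResNet features to a single local crowd count.
    model = keras.Sequential([
        keras.layers.Input(shape=(1000,)),
        keras.layers.Dense(100, activation='relu'),
        keras.layers.Dense(100, activation='relu'),
        keras.layers.Dense(50, activation='relu'),
        keras.layers.Dense(50, activation='relu'),
        keras.layers.Dense(1),  # regression output: the count
    ])
    model.compile(optimizer='adam', loss='mse')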

As you don't really point out what you don't understand, I haven't explained everything in detail. Feel free to ask more precise questions if some parts are still unclear to you!

",17221,,17221,,9/11/2018 7:49,9/11/2018 7:49,,,,4,,,,CC BY-SA 4.0 7920,2,,7914,9/11/2018 8:25,,3,,"

We cannot tell with certainty whether AlphaGo Zero would become perfect with enough training time. This is because none of the parts (Neural Network) that would benefit from infinite training time (= a nice approximation of ""enough"" training time) are guaranteed to ever converge to a perfect solution.

The main limiting factor is that we do not know whether the Neural Network used is big enough. Sure, it's pretty big, it has a lot of capacity... but is that enough? Imagine if they had used a tiny Neural Network (for example just a single hidden layer with a very low number of nodes, like 2 hidden nodes). Such a network certainly wouldn't have enough capacity to ever learn a truly, perfectly optimal policy. With a bigger network it becomes more plausible that it may have sufficient capacity, but we still cannot tell for sure.


Note that AlphaGo Zero does not just involve a trained part; it also has a Monte-Carlo Tree Search component. After running through the Neural Network to generate an initial policy (which in practice turns out to already often be extremely good, but we cannot tell for certain if it's perfect), it does run some MCTS simulations during its ""thinking time"" to refine that policy.

MCTS doesn't benefit* from increased training time, but it does benefit from increased thinking time (i.e. processing time per turn during the actual game being played, rather than offline training time / self-play time before the evaluation game). In the most common implementation of MCTS (UCT, using the UCB1 equation in the Selection Phase), we can prove that it does in fact learn to play truly perfectly if it is given an infinite amount of thinking time. Now, AlphaGo Zero does use a slightly different implementation of the Selection Phase (which involves the policy generated by the trained Neural Network as prior probabilities), so without a formal analysis I can't tell for sure whether that proof still holds up here. Intuitively it looks like it still should hold up fine though.


*Note: I wrote above that ""MCTS doesn't benefit from increased training time"". In practice, of course it does tend to benefit from increased training time because that tends to result in a better Network, which tends to result in better decisions during the Selection Phase and better evaluations of later game states in the tree. What I mean is that MCTS is not theoretically guaranteed to always keep benefitting from increases in training time as we tend to infinity, precisely because that's also where we don't have theoretical guarantees that the Neural Network itself will forever keep improving.

",1641,,1641,,9/11/2018 8:33,9/11/2018 8:33,,,,1,,,,CC BY-SA 4.0 7921,2,,7914,9/11/2018 8:40,,4,,"

Assuming you mean a mathematically perfect player, similar to what we can achieve trivially in Tic Tac Toe, then the answer is ""maybe"". The underlying reinforcement learning algorithms that it uses do have some convergence guarantees, but there are some caveats:

  • Theories of convergence that apply to value and policy functions learned by RL assume unrealistic timescales and decays of learning parameters. If you have actual infinite time and resources then it is possible to explore all board states and learn their values accurately. But then, if you have those resources at your disposal, a brute-force search would work too.

  • Using neural network approximations to true value functions can put bounds on how well value functions are learned, as they rely on generalisation, and are characterised by an error metric. The values that they calculate are in fact guaranteed to not be mathematically perfect, and in part that is by design (because you want learning from similar states to apply to new unseen states, and as part of that need to accept the compromise that most state values will be slightly incorrect). This is especially true of the fast policy network used to drive Monte Carlo Tree Search (MCTS).

  • Running longer MCTS during play improves performance at cost of spending more time per move decision. Given infinite resources, MCTS can play perfectly, even from very crude heuristics.

The difference between AlphaGo Zero and a more traditional game search algorithm is to do with optimal use of available computing resources, as set by available hardware, training time and decision time when playing. It is orders of magnitude more effective to use the RL self-play approach combined with the focused MCTS as in AlphaGo Zero, than any basic search. At least for similar puzzles and games, it is more efficient than any other game playing technique that has been explored. We are likely still more orders of magnitude of effort away from a perfect Go player, and there is no reason yet to suspect that this will ever become practical, and Go become ""solved"".

",1847,,1847,,9/11/2018 14:46,9/11/2018 14:46,,,,0,,,,CC BY-SA 4.0 7923,1,7924,,9/11/2018 10:47,,6,1842,"

Should I be decaying the learning rate and the exploration rate in the same manner? What's too slow and too fast of an exploration and learning rate decay? Or is it specific from model to model?

",18076,,2444,,1/7/2022 16:15,1/7/2022 16:18,Should I be decaying the learning rate and the exploration rate in the same manner?,,1,2,,,,CC BY-SA 4.0 7924,2,,7923,9/11/2018 11:13,,6,,"

First of all, I'd say that there is a reason to give Learning Rate (LR) and Exploration Rate (ER) the same decay: they play at the same scale (the number of successive batches you'll train your model on). But if I refine the analysis, I would rather say that it's a reason to choose them in the same range, i.e. close to 1, but not specifically at the same number.

  • For LR decay, people often choose it very close to one (which can mean really different things like 0.98 or 0.997), because it plays on a large scale, and you don't want the LR to disappear too brutally.

  • However, the choice of ER decay can have more variation from model to model. It depends on the initial value of ER (you don't want to decay ER quickly if your ER is initially low), and also on the "learning speed" of your model: if your model learns efficiently at the beginning, you may want to decrease ER quickly in order to reduce the noise on the actions, supposing that you did enough exploration at the beginning (but I think this last opinion is more controversial). You can find an interesting paper here, where the author tries different ER decays and finds out that 0.99 is the best for the CartPole environment. A minimal sketch combining both points is shown below.
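
Here is that sketch: a multiplicative decay applied once per episode. The numbers are purely illustrative, not recommendations:

    lr, lr_decay = 1e-3, 0.997                 # learning rate and its decay
    eps, eps_decay, eps_min = 1.0, 0.99, 0.05  # exploration rate, decay, floor

    for episode in range(500):
        # ... run the episode, train with learning rate lr, explore with probability eps ...
        lr *= lr_decay
        eps = max(eps_min, eps * eps_decay)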

",17759,,2444,,1/7/2022 16:18,1/7/2022 16:18,,,,0,,,,CC BY-SA 4.0 7925,1,,,9/11/2018 20:19,,3,250,"

I was recently perusing the paper Some Studies in Machine Learning Using the Game of Checkers II--Recent Progress (A.L. Samuel, 1967), which is interesting historically.

I was looking at this figure, which involved Alpha-Beta pruning.

It occurred to me that the types of non-trivial, non-chance, perfect information, zero-sum, sequential, partisan games utilized (Chess, Checkers, Go) involve game states that cannot be precisely quantified. For instance, there is no way to ascribe an objective value to a piece in Chess, or any given board state. In some sense, the assignment of values is arbitrary, consisting of estimates.

The combinatorial games I'm working on are forms of partisan Sudoku, which are bidding/scoring (economic) games involving territory control. In these models, any given board state produces an array of ratios allowing precise quantification of player status. Token values and positions can be precisely quantified.

This project involves a consumer product, and the approach we're taking currently is to utilize a series of agents of increasing sophistication to provide different levels of challenge for human players. These agents also reflect what is known as a ""strategy ladder"".

Reflex Agents (beginner)
Model-based Reflex Agents (intermediate)
Model-based Utility Agents (advanced)

Goals may also be incorporated into these agents, such as a desired margin of victory (regional outcome ratios), which will likely have an effect on performance in that narrower margins of victory appear to entail less risk.

The ""respectably weak"" vs. human performance of the first generation of reflex agents suggests that strong GOFAI might be possible. (The branching factors are extreme in the early and mid-game due to the factorial nature of the models, but initial calculations suggest that even a naive minimax lookahead will be able to look farther more effectively than humans.) Alpha-Beta pruning in partisan Sudoku, even sans a learning algorithm, should provide greater utility than in previous combinatorial game models where the values are estimates.

  • Is the historical weakness of GOFAI in relation to non-trivial combinatorial games partly a function of the structure of the games studied, where game states and token values cannot be precisely quantified?

Looking for any papers that might comment on this subject, research into combinatorial games where precise quantification is possible, and thoughts in general.

I'm trying to determine if it might be worth attempting to develop a strong GOFAI for these models prior to moving up the ladder to learning algorithms, and, if such a result would have research value.

There would definitely be commercial value in that strong GOFAI with no long-term memory would allow minimal local file size for the apps, which must run on lowest-common-denominator smartphones with no assumption of connectivity.

PS- My previous work on this has involved defining the core heuristics that emerge from the structure of the models, and I'm slowly dipping my toes into the look ahead pool. Please don't hesitate to let me know if I've made any incorrect assumptions.

",1671,,2444,,12/31/2021 13:22,12/31/2021 13:22,Historical weakness of GOFAI in relation to partisan combinatorial games?,,1,6,,,,CC BY-SA 4.0 7926,1,7936,,9/11/2018 23:01,,5,2223,"

I don’t believe in free will, but most people do. Although I’m not sure how an act of free will could even be described (let alone replicated), is libertarian freewill something that is considered for AI? Or is AI understood to be deterministic?

",16480,,2444,,4/8/2020 0:57,6/4/2020 16:06,Does AI rely on determinism?,,10,2,,,,CC BY-SA 4.0 7927,1,,,9/12/2018 0:32,,0,99,"

I am trying to use a Deep Q-learning environment to learn Super Mario Bros. The implementation is on Github.

I have a neural network whose Q values update within an episode when I use a very small learning rate (0.00005). However, if I increase the learning rate to 0.00025, the Q values do not change within an episode: the network predicts the same Q values regardless of what state it is in. For example, if Mario moves right, the Q values stay the same. When I start a new episode, the Q values do change, though.

I think that the Q values should be changing within an episode as the game should be seeing different parts and taking different actions. Why don't I observe this?

",18076,,18076,,9/12/2018 13:03,9/18/2018 6:47,Should Q values be changing within an epoch/episode or should they change after one episode/epoch?,,1,11,,,,CC BY-SA 4.0 7929,2,,7926,9/12/2018 3:48,,0,,"

AI is algorithmic, not free-willed in the sense that humans have free will. So, in that sense, it is deterministic. Give it the same data each time and you would expect the same result. Change something (i.e. feed it new data to learn from) and it will give a different result. Hence the determinism.

EDIT: Some algorithms do use some randomising (e.g. some versions of hill climbing), but, if we want to get technical, there's no such thing as true random numbers anyway (unless you're using one of those supercomputers that use radiation from the sun as a seeding factor).

",9413,,9413,,9/12/2018 3:53,9/12/2018 3:53,,,,11,,,,CC BY-SA 4.0 7930,2,,7926,9/12/2018 3:57,,3,,"

AI is ""deterministic"" in the sense that it follows exactly the algorithm. ""Deterministic"" means different things to a data scientist/programmer, but let's not go into details here.

There is no ""free will"" in AI; it's all about mathematics and algorithms. Don't watch too many science-fiction movies!

",6014,,,,,9/12/2018 3:57,,,,2,,,,CC BY-SA 4.0 7931,2,,7899,9/12/2018 6:23,,1,,"

Thinking Commercially

A commercial solution will need to be able to ascertain, continuously verify, and utilize the best options for captioning learning models.

Each fairly successful image captioning learning model could be placed in an adapter to provide a common training, optimizing, testing, evaluation, and usage interface. The hot-swappable container-addon mega-pattern used in peripheral device installation, RAID, J2EE containers, browsers, and other containment sub-systems can be applied.

System Description

The system acceptance criteria are as follows.

  • A new model could be added without stopping or starting the system if touted as successful by others
  • Any model can be deleted without starting or stopping the system if it has not been performing well
  • Each possible state of each model added can be given some percentage of the total system resources
  • Various training processes and hyper parameter tuning processes can be applied to any of those model-state combinations
  • A-B testing can be done on any model-state combination
  • Stats can be displayed on any model-state combination any model overall or any state across all added models
  • The interface for each model requires that its associated model produces an array of caption-reliability pairs with zero or more pairs (see the interface sketch after the states list below)
  • The reliability measure that is included in the pair with the caption is an indication of the model's assessment of appropriateness to the image
  • The most appropriate caption is picked from the proposals of the various models

States can be idle or any of these.

  • Training
  • Testing
  • Evaluation
  • Hyper parameter tuning
  • In use
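
Putting the interface requirement and the states together, a minimal sketch of the common adapter interface could look as follows (all names are assumptions, not an existing API):

    from abc import ABC, abstractmethod
    from typing import List, Tuple

    class CaptioningModelAdapter(ABC):
        @abstractmethod
        def caption(self, image) -> List[Tuple[str, float]]:
            # Returns zero or more (caption, reliability) pairs for the image.
            ...

        @abstractmethod
        def set_state(self, state: str) -> None:
            # state is one of: idle, training, testing, evaluation, tuning, in use.
            ...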

For example, for the two learning models usable in captioning systems suggested in this question, NIC and Neural Talk 2, we could have a system resource allocation like this:

  • 30% NIC Training
  • 5% NIC Hyper parameter tuning
  • 45% NT2 In use
  • 15% NT2 Evaluation
  • 5% NT2 Hyper parameter tuning

Samples may be pulled from a pool of samples that have been vetted. That pool may be augmented by real images passing through the system, filtered in accordance with security criteria to avoid external attempts at control.

In the assignment of resources, the sample pool selection criteria must be specified. If the system is already at 100%, the model-state combinations from which the resources shall be drawn must also be specified.

Handling Multiple Output Options

Since there may be more than one model in use and each model may have zero, one, or multiple caption suggestions for each image, each with a reliability measure, the outputs must be analyzed to provide the best choice to associate with the image being analyzed. Additional system criteria must cover this process scenario. For any given image, final evaluation must follow the following general guidelines.

  • If multiple models produce similar or exactly the same captions, they must be weighted higher in final evaluation.
  • If the model's reliability is proven in actual use based on feedback from end users, the model's output must be weighed higher accordingly
  • Re-entrant models (such as reinforcement learning network models) must have access to the end user feedback for additional learning even in in-use state
  • Clear winners are chosen
  • Close races are disambiguated through trained functionality
  • Exact ties are broken through pseudo random index generation

Another artificial network may be placed at the output and appropriate encoding and normalization may be applied before training so that a properly trained network, converged using a quantification of the above additional criteria, can select the best caption from the options for each image.

Phased Development Approach

Phase one of such a system would likely require manual handling of model-state allocation. Phase two would be semi-automation. The location of new models would still require expert attention. Perhaps further in the future a hunt for new models could be automated too.

",4302,,,,,9/12/2018 6:23,,,,1,,,,CC BY-SA 4.0 7934,2,,7925,9/12/2018 12:03,,2,,"

Nice question!

I think there are a couple of issues at work here.

Is the historical weakness of GOFAI in relation to non-trivial combinatorial games partly a function of the structure of the games studied, where game states and token values cannot be precisely quantified?

I think the short answer is yes. The real issue is in the last part:

token values cannot be precisely quantified

The most successful GOFAI approaches to these games were all some variation on A* search, combining combinatorial search with some form of heuristic function that estimated the value of the pieces and their positions in any given state. Piece counting is probably a better heuristic than not counting anything at all, but it's still clearly incorrect, because a player with less material may still have an overwhelming positional advantage. Some heuristics can try to estimate this positional advantage as well however.

The real problem that GOFAI encounters in these games is that positional advantage can be emergent in ways that require incredible heuristic power to detect. Checkers is a good example. In the 1990's, the Chinook project at the University of Alberta set out to solve it completely. Checkers is notable because it had the same world champion for more than 15 consecutive years, Marion Tinsley. Tinsley lost a total of 7 competitive matches over 40 years of play. This makes him an especially interesting person to examine when we look at combinatorial games. Figuring out how Tinsley plays can help us understand how human intelligence works in games like this. In the course of solving checkers, the researchers noted that Tinsley was making moves that required up to 42 move lookaheads to reveal an advantage (See Schaeffer et al., AI Magazine, Vol. 17, Issue 1).

This strongly suggests that Tinsley was not methodically considering each possible move. Instead, by his own admission, his thinking was guided by a combination of memory over his 40 year career (in one match against Chinook in 1992, he indicated he was trying to recall a sequence from a match 30 years prior when making a move (AI Magazine Volume 14, Number 2); and of attentional heuristics (i.e. not thinking about every move sequence, and being able to reliably rule out parts of the search space without looking at them).

The key is that for GOFAI to solve checkers without heuristics (i.e. to solve it exactly), required enormous amounts of computational power, because some moves yield positional advantages that require 40+ moves of followthrough. Even an incredibly simple game (branching factor of 2) would be hard under that constraint.

In contrast though, self-play techniques like those pioneered in Backgammon with TD-Gammon (Tesauro, Comm. of the ACM 1995) mimic the process through which Tinsley became so good: they play lots and lots of games, learn a good heuristic estimate of position and material value, and more importantly, can learn to remember odd circumstances that require careful play. TD-Gammon achieved worldclass play despite only explicitly looking 2 moves ahead. GOFAI search techniques weren't even close despite searching much more deeply.

Modern research on attention could salvage the GOFAI approach however. If you can learn to tell what's important, you might be able to get a lot more value out of deeper lookaheads. This seems even closer to how Tinsley played: great ability to estimate value was used to guide an explicit analysis of a specific chain of moves.

",16909,,,,,9/12/2018 12:03,,,,1,,,,CC BY-SA 4.0 7936,2,,7926,9/12/2018 12:22,,14,,"

I'm going to assume that by free will, you mean something like the philosophical concept of libertarian free will, which is defended by philosophers like Robert Kane. In Libertarian Free Will, individuals have some capability to make choices about their actions. The classic way to argue this is by assuming some kind of spirit-stuff (e.g. a soul) that exists outside the material world, and that this spirit-stuff constitutes the consciousness of a person. Kane tries some mental gymnastics to avoid this, but then concedes something like it in a footnote. I'm not aware of any serious work that doesn't make some kind of non-physical assumption to justify this view. If someone can point at one, I'll update the answer.

By determinism, I'm going to assume you mean the usual notion of philosophical determinism: since people's decisions depend on what happened in the past, and where they are in the present, they don't really have a choice in any meaningful sense. Philosopher's like Dennett adopt a slightly softer view (compatibilism, essentially: you don't get to make big choices, but you do get to make small ones). Appeals to Quantum Mechanics are common to justify that view. In this context, free action means something more like ""did something we couldn't predict exactly"". An example might be: you are pre-destined to put a can of campbell's brand tomato soup in your shopping cart, but ""make a choice"" about exactly which of the dozens of cans you will put in. Since small choices can have large impacts (maybe that can will give you food poisoning, and the others wouldn't), this can make all sorts of things impossible to predict exactly.

I think most AI researchers don't worry too much about these issues, but Turing actually addresses them in his paper right at the start of the field, Computing Machinary and Intelligence. The deterministic/compatibilist view point is introduced as Lady Lovelace's objection: Computers only know how to do what we tell them to, so they can't be called intelligence.

Turing's counterargument uses two prongs. First, Turing notes that computers can probably be made to learn (he was right!). If they can learn, they can do things that we don't expect, just like children do. Second, Turing notes that computers already do all sorts of things we don't expect: they have bugs. Anytime a computer exhibits a bug, it did something that was unexpected. Since we cannot generally rule out bugs in programs, computers will always do surprising things. Therefore, computers satisfy the deterministic notion of free will.

Turing also addresses the libertarian notion of free will, which is part of what he calls the ""Theological Objection"". The objection is that intelligence requires some kind of divine spark (like free will). Turing argues that we can't detect sparks like this right now (he actually thought we would be able to one day, and spent a lot of time looking at supernatural phenomena too). However, there's no reason to suppose that computers with the right programs won't be endowed with them. A divine creator could decide that anytime you build something brain-like, it gets a spark. If we build a program that's brain-like, maybe it gets a spark too. In the absence of some way to detect souls, it seems like we ought to just agree to treat things that seem intelligent as though they had these souls, since otherwise we don't have a really clear way to decide who is and isn't intelligent. The only remaining way is to say ""only things made of human meat have souls and are intelligent"". While a lot of people do actually say things like this (e.g. animals have no souls), this is a pretty arbitrary view, and I think there's no hope arguing against it. Turing suggests the ""Anthropic Principle"": we shouldn't assume that we're special, because the earth isn't in a special place in the galaxy, or in the universe, and we have pretty compelling evidence that we're an evolved version of other animals around us, but some groups (e.g. biblical literalists) find this unconvincing.

",16909,,16909,,9/12/2018 19:18,9/12/2018 19:18,,,,1,,,,CC BY-SA 4.0 7937,2,,7926,9/12/2018 13:04,,3,,"

Although I’m not sure how an act of freewill could even be described (let alone replicated),

Well, one popular definition goes like this:

[Free will is] the freedom to act according to one's motives without arbitrary hindrance from other individuals or institutions

Source - Wikipedia entry on Compatibilism

Note that this definition is perfectly compatible with determinism (hence the name ""Compatibilism""). Actually, proponents usually argue that free will requires determinism, because if your choices were ultimately random, like rolling a dice, how could they be your free choices?

Now, if you assume that an AI can be said to have ""motives"", then according to this view, it would have free will - if no one hinders it.

The contrary view, Incompatibilism, has been described in another answer by John Doucette. I agree with him that most AI researchers probably don't worry about philosophical questions like that. All the proponents of indeterminism (sometimes called metaphysical free will or libertarianism) I'm aware of assume that there exist ""causes"" that are neither deterministic / physical, nor purely random. (e.g. Agent causation). Since AIs are more or less by definition based on physical processes, deterministic algorithms and possibly some random number generator, I don't see how they could possess this kind of freedom.

",16384,,16384,,9/12/2018 15:54,9/12/2018 15:54,,,,5,,,,CC BY-SA 4.0 7938,2,,7926,9/12/2018 14:55,,0,,"

Free will, in the spiritual sense, would be your right to follow, or not follow, the main path (God's way), taking into consideration that whatever path you follow, you will face its consequences (good or bad).

However, an artificial intelligence created to solve problems does not need these distractions. Determinism considers the feeling of freedom a subjective illusion. Trying to apply free will to an AI is to create an inner space within the ""mind""... but creating that is already impossible, because we cannot understand exactly what is going on in our own minds. The big question is: how do you program something that we cannot explain accurately? How do you induce an AI to learn something that does not make sense?

I believe that it is not just code and a database that will lead an AI to have faith in its existence or to believe in free will. But constructing an AI that reads and understands every human mind, every thought, every illusion, every paranoia, every confusion, every sadness, every lie... may be a good source of studies, but who qualifies for this kind of experiment? What would an AI conclude from understanding the whole confused and troubled human mind?

It's a very complex question... let's continue studying and raising this kind of subject. I have spent my last years trying to recreate human thoughts and actions in an AI... it's a study for a lifetime... my fear is the disappointment at the end of everything :(

",7800,,,,,9/12/2018 14:55,,,,2,,,,CC BY-SA 4.0 7940,1,7941,,9/12/2018 20:23,,3,712,"

This is a Q-learning snake using a neural network as a Q-function approximator, and I'm losing my mind here: the current model is worse than the initial one.

The current model uses a 32x32x32 MLPRegressor from scikit-learn, using ReLU as the activation function and the Adam solver.

The reward function is like following:

  • death reward = -100.0
  • alive reward = -10.0
  • apple reward = 100.0

The features extracted from each state are the following:

  1. what is in front of the snake's head (apple, empty, snake)
  2. what is in the left of the snake's head
  3. what is in the right of the snake's head
  4. Euclidean distance between the head and the apple
  5. the direction from head to the apple measured in radians
  6. length of the snake

One episode consists of the snake playing until it dies. In training, I'm also using a probability epsilon that represents the probability that the snake will take a random action; if this isn't satisfied, the snake will take the action for which the neural network gives the biggest score. This epsilon probability gradually decreases after each iteration.

The episode is learned by the regressor in reverse order, one state-action pair at a time.

However, the neural network fails to approximate the Q function; no matter how many iterations, the snake takes the same action for any state.

Things I tried:

  • changing the structure of the neural network
  • changing the reward function
  • changing the features extracted, I even tried passing the whole map to the network

Code (python): https://pastebin.com/57qLbjQZ

",18123,,18123,,9/12/2018 21:19,1/21/2020 13:36,Snake game: snake converges to going in the same direction every time,,2,3,,,,CC BY-SA 4.0 7941,2,,7940,9/13/2018 2:13,,4,,"

There are two problems here.

  1. The code you posted doesn't incrementally train your multilayer perceptron. Instead, it effectively re-randomizes the weights, and then re-fits the model each time you call .fit() at lines 35 & 54. Using scikit-learn's partial_fit() method (or constructing the regressor with warm_start=True) might solve this, or you can package up the data into a larger batch, and train on that offline instead after several episodes (a minimal sketch is shown at the end of this answer).

  2. Your reward function makes it painful to be alive, and doesn't give enough benefits through the Apples to make up for this. There are 100 squares that could contain the apple. On average, the apple will spawn about 5 squares from the snake in each direction. Since the snake can't move diagonally, that's 10 moves (5 left/right, 5 up/down). That means that if the snake plays perfectly, then on average, it might be able to get zero reward total. In practice, the snake will not play perfectly. This means living gives negative expected reward.

In contrast, if the snake can kill itself, it will stop getting negative rewards. The reward function you've used is maximized by getting big enough to run into your own tail as fast as possible. The snake should be able to do this after eating 3 apples I think. There is some incentive to hunt for food well, but not much compared with hitting your own tail as soon as possible.

If you want the snake to learn to hunt for the food, reduce the penalty for being alive to -1, or even -0.1. The snake will be much more responsive to signals from the food.
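
Here is the minimal sketch promised for point 1. The batch contents are hypothetical; the point is only that partial_fit() keeps the learned weights between calls, while fit() re-initializes them:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    q_net = MLPRegressor(hidden_layer_sizes=(32, 32, 32), activation='relu')

    # Hypothetical training batch gathered over one or more episodes:
    states = np.random.rand(64, 6)   # 6 features per state, as in the question
    targets = np.random.rand(64)     # bootstrapped Q-value targets

    q_net.partial_fit(states, targets)  # later calls continue from the current weights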

",16909,,16909,,9/13/2018 19:52,9/13/2018 19:52,,,,6,,,,CC BY-SA 4.0 7942,1,7948,,9/13/2018 3:37,,5,635,"

The Alpha Zero (as well as AlphaGo Zero) papers say they trained the value head of the network by ""minimizing the error between the predicted winner and the game winner"" throughout its many self-play games. As far as I could tell, further information was not given.

To my understanding, this is basically a supervised learning problem, where, from the self-play, we have games associated with their winners, and the network is being trained to map game states to the likelihood of winning. My understanding leads me to the following question:

What part of the game is the network trained to predict a winner on?

Obviously, after only five moves, the winner is not yet clear, and trying to predict a winner after five moves based on the game's eventual winner would learn a meaningless function. As a game progresses, it goes from tied in the initial position to won at the end.

How is the network trained to understand that if all it is told is who eventually won?

",12201,,2444,,4/4/2020 14:56,4/4/2020 14:56,What part of the game is the value network trained to predict a winner on?,,2,0,,,,CC BY-SA 4.0 7943,2,,7762,9/13/2018 4:48,,-1,,"

Automation of Game-play

Aimbots are indeed designed to provide assistance to the human game player when the complexity of game play escapes full cybernetic autonomy at the current state of technology. There are five basic components in any game player, DNA based or digital.

  • Acquisition of the current state of the game
  • Control over execution of move options
  • Intercommunication with other players
  • Models related to the game
  • Execution engine for applying these

The models are as follows for a CS:GO aimbot.

  • Model of game players
  • Model of the opposing team
  • Model of the game player being assisted
  • Model of that player's team
  • Model of the opposing team
  • Model of the game state
  • Model of legal game moves that transition state
  • Model of objectives (winning or maintaining a top score)
  • Models of game-play strategy involving the first three items in the previous list

Learning all of these is not in the scope of current deep learning strategies but not outside the scope of AI if the following problem analysis and system approaches are taken.

  • Assumptions are made, similar to those of Morgenstern and von Neumann in the later chapters of their Game Theory, that allow the decision-making of game players to be treated mathematically in a minimalistic way.
  • DSP, GPU, network realization hardware, cluster computing, or some other artificial network hardware acceleration is available
  • Models are programmed in Prolog, Drools, or some other production system and then leveraged by the execution engine in conjunction with other components such as deep learning networks, convolution processing, Markov trees, fuzzy logic, and the application of oversight functions or heuristics as needed

The two services, (a) the provision of suggestions and (b) the automation of minor tasks, may indeed represent the low hanging fruit from a software engineering perspective, but the problem analysis and system approach above may provide more.

Objectives in CS:GO

The CS:GO (Counter-Strike Global Offensive) game seems to have been written from a Westphalian geopolitical point of view. This is the typical western perspective, somewhat oblivious to the mindset of the true nature of asymmetric warfare [1]. This answer will focus on the creation of an aimbot for the existing models of game-play rather than a realistic simulation of geopolitical balance in this decade.

We have the objective types listed in online resources that provide a game overview, again, narrowed in authenticity by the prevailing western view of asymmetric war [1].

  • Terminating players of the opposing team
  • Planting a bomb toward that end (terrorists only)
  • Defend hostages (terrorists only)
  • Prevention of bomb casualties (counter-terrorists only)
  • Rescue of hostages (counter-terrorists only)

Ballistic Control

The targeting of the body or head of an opponent is within the scope of what image recognition can do in conjunction with a movement model. In military applications, aeronautic devices must be propelled against air friction and the propulsion requires a largely exothermic reaction like combustion. Thus all targets have a heat signature, which can be recognized in an infrared video stream in such a way as to plot an intercept course for the ballistic weapon.

The targeting formulation for CS:GO is not as complex, and aiming and firing may be fully automated with much less software machinery. An LSTM with sufficient speed can be trained to recognize a head in subsequent frames and terminate opponents even while they are moving. A simple web search for LSTM will provide a plethora of resources for a novice intending to learn about image recognition.
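
As an illustration only (not a working aimbot, and not tied to any particular game), a sequence model for this kind of tracking could be sketched roughly as follows; the frame size, sequence length, and training data pipeline are all assumptions.

```python
# A rough sketch: a per-frame CNN feeding an LSTM that regresses the 2D
# position of a target across a short frame sequence. All sizes are assumed.
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 8, 96, 96  # assumed sequence length and downscaled frame size

model = models.Sequential([
    layers.TimeDistributed(layers.Conv2D(16, 3, activation='relu'),
                           input_shape=(SEQ_LEN, H, W, 3)),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),      # integrates information across the frames
    layers.Dense(2),      # predicted (x, y) of the target in the last frame
])
model.compile(optimizer='adam', loss='mse')
```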

One Ambiguity

Whether the second objective can be met is dependent on what is meant by the term, ""Viewing angles,"" in the context of image recognition. Can the player see from perspectives other than the location of their eyes? If so, this answer can be adjusted if given a clear picture of what is meant.

Training and Re-entrant Learning

Training of an artificial neural net to target a head is unnecessary unless the 3D rendering of the game objects and players is distorted by a wide-angle virtual lens and trajectories and movements are curved. As mentioned, an LSTM can be used to locate a head in multiple frames and extrapolate an opposing player's trajectory.

Where deep learning may be most effective is in the training of how to interact with the player to best assist. Also, if there are other non-targeting techniques that are more discreet, those who play CS:GO well could record their interactions, and those recordings can be processed in preparation for use as training data.

Certainly a re-entrant learning strategy such as reinforcement learning is useful for game-play, especially if the make-up of teams changes and players exhibit different behaviors, execute differing strategies over different networks with different latencies and throughputs, and communicate with the game clients through different peripheral devices.

[DeepMind Lab Test Bed for Reinforcement Technology](https://github.com/deepmind/lab)

More than Suggestions

With proper architecture, more than suggestive strategies can be provided to the player. Statistical dashboards, identification of a bomb before or after planting, and identification of hostages should be among the aimbot services provided, which might suggest a new name, such as obot for objective bot or asbot for assistive bot.

It is not certain that the aimbot interface need be integrated with dashboards or bomb or hostage identifiers. Sometimes independent bots provide a more flexible arrangement for a user. Individual bots can always use the same underlying image recognition components and models.

Entry Points into Developing Such a System

Read some of the work on the above concepts and download what code you can find that demonstrates it in Python or Java, install what is necessary, and develop some proficiency with the components discussed above as well as the associated theory. Don't shy away from the math, since success will require some proficiency with feedback signalling and concepts like gradient descent and back-propagation.

Reinforcement in Games

LSTM Head Locating

Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013

Phased Approach

The following phased research and development approach is suggested.

  • Learn the theory
  • Practice the theory in code
  • Develop the image recognition front end
  • Develop the library to control a virtual player
  • Develop at least one of the above models
  • Create the simplest bot to use it
  • Expand automation from there

Footnotes

[1] In asymmetric power struggles, there are always at least two factions within each side because didactic legitimacy seeks division. Unity is not practically possible. Each real team usually has a more religious and a more secular faction, each of which has economic, philosophic, and historical justifications for its position and agenda. Also, terrorists don't seek the public detonation of bombs or the retention of hostages as an objective but rather as a means, with the total elimination of all who do not fully adhere to their view of legitimacy as the sole endgame objective. Suicide or high-risk bombing is considered by most of those who employ it as the poor man's nukes, so without nuclear strike capability for the counter-terrorists and their allies, the terrorism lacks the important dimension of last resort. The last-resort aspect of nuclear strike is missing from the counter-terrorist side too. CS:GO may sell better by glossing over these particular characteristics of asymmetric warfare, and such was likely left out deliberately. There may be some benefit to adding these features from an educational and anti-propaganda point of view.

",4302,,4302,,9/13/2018 6:45,9/13/2018 6:45,,,,1,,,,CC BY-SA 4.0 7946,2,,7942,9/13/2018 6:19,,0,,"

What part of the game is the network trained to predict a winner on?

The positional evaluation: how to give a static score to a chess position.

",6014,,,,,9/13/2018 6:19,,,,0,,,,CC BY-SA 4.0 7947,1,7980,,9/13/2018 6:36,,1,107,"

Note to the Duplicate Police

This question is not a duplicate of the Q&A thread referenced in the close request. The only text even remotely related in that other thread is the brief mention of climate change in the Q and two sentences in the sole answer: ""Identify deforestation and the rate at which it's happening using computer vision and help in fighting back based on how critical the rate is. The World Resources Institute had entered into a partnership with Orbital Insight on this.""

If you look at the four bullet items below, you will find that this question asks a very specific thing about the relationship between climate and emissions. Neither that question nor that answer overlaps with the content of this question in any meaningful way. For instance, it is well known that CO2 is NOT causing deforestation. The additional carbon dioxide in the atmosphere causes faster regrowth. This is because plants need CO2 to grow. Hydroponic containers deliberately boost it to improve growth rates. Plants manufacture their own oxygen from the CO2 via chlorophyll.

If you recall from fifth grade biology, that's why they are plants.


Now Back to the Question

Several climate models have been proposed and used to model the relationship between human carbon emissions, added to the natural carbon emissions of life forms on earth, and features of climate that could damage the biosphere.

Population growth and industrialization have many impacts on the biosphere, including loss of terrain and pollution. Negative oceanic effects, including unpredictable changes in plankton and cyanobacteria, are under study. Carbon emissions from combustion have received attention in recent decades just as sulfur emissions were central to concerns a century or more ago.

Predicting weather and climate is certainly difficult because it is complex and chaotic, as typical inaccuracies in forecasts clearly demonstrate, but that is looking forward. Looking backward, analyses of data already collected have shown a high probability that ocean and surface temperature rises followed increases in industrial and transportation related combustion of fuels.

How might AI be used to produce some of the key models humans need to protect the biosphere from severe damage?

  • A more reliable analysis of what has already occurred, since there is some legitimacy to the differing views as to how gross the effect of carbon emissions has been on extinctions of species in the biosphere and on arctic and antarctic melting

  • A better understanding as to whether the climate of the biosphere behaves as a buffer, always tending to re-balance after a volcanic eruption, meteor strike, or other event, or whether the runaway scenario described by some climatologists, where there is a point of no return, is realistic

  • A better model to use in trying out scenarios so that solutions can be applied in the order that makes sense from both environmental and economic perspectives

  • Automation of climate planning so that the harmful effects of the irresponsibility of one geopolitical entity wishing to industrialize without constraint on other geopolitical entities can be mitigated

Can pattern recognition, feature extraction, the learned functionality of deep networks, or generative techniques be used to accomplish these things? Can rules of climate be learned? Are there discrete or graph based tools that should be used?

",4302,,4302,,9/13/2018 23:34,9/14/2018 20:44,How can AI be used to more reliably analyze and plan around the tie between climate and emissions?,,1,0,,,,CC BY-SA 4.0 7948,2,,7942,9/13/2018 6:58,,4,,"

To my understanding, this is basically a supervised learning problem, where from the self play we have games associated with their winners, and the network is being trained to map game states to likelihood of winning.

Yes, although the data for this supervised learning problem was provided by self-play. As AlphaZero learned, the board evaluations of the same positions would need to change, so this is a non-stationary problem, requiring that the ML forgets the training on older examples over time.

What part of the game is the network trained to predict a winner on?

Potentially all of it, including the starting empty board. I am not sure if the empty board was evaluated in this way, but it is not only feasible, it can even be done accurately in practice for simpler games (Tic Tac Toe and Connect 4, for example), given known player policies.

Obviously after only five moves, the winner is not yet clear, and trying to predict a winner after five moves based on the game's eventual winner would learn a meaningless function.

Not at all. This is purely a matter of complexity and difficulty. In practice at such an early stage, the value network will output something non-committal, such as a $p=0.51$ win chance for player 1. And it will have learned to do this, because in its experience during self-play, similar positions at the start of the game led to almost equal numbers of player 1 and player 2 wins.

The function is not meaningless either; it can be used to assess results of look-ahead searches without needing to play to the end of the game. It completely replaces the position-evaluation heuristics used in more traditional game tree searches. In practice, very early position data in something as complex as chess or Go is not going to be as useful as later position evaluations, due to the ambivalent predictions. However, for consistency it can still be learned and used in the game algorithms.

How is the network trained to understand that, if all it is told is who eventually won?

If a supervised learning technique is given the same input data $X$ that on different examples predicts the labels $A, B, B, B, A, A, B, B$, then it should learn $p(B|X) = 0.625$. That would minimise a cross-entropy loss function, and is what is going on here.
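
As a quick numerical illustration of that last point (my own check, not from the paper), the probability that minimises the average cross-entropy over those eight labels is exactly the empirical frequency of $B$:

```python
# Brute-force check that p = 5/8 = 0.625 minimises the average binary
# cross-entropy over the labels A, B, B, B, A, A, B, B (with B mapped to 1).
import numpy as np

labels = np.array([0, 1, 1, 1, 0, 0, 1, 1])

def avg_cross_entropy(p):
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

ps = np.linspace(0.001, 0.999, 999)
best_p = ps[np.argmin([avg_cross_entropy(p) for p in ps])]
print(best_p)  # prints approximately 0.625
```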

",1847,,1847,,9/13/2018 7:17,9/13/2018 7:17,,,,0,,,,CC BY-SA 4.0 7949,1,,,9/13/2018 10:12,,3,250,"

In a CNN, does each new filter have different weights for each input channel, or are the same weights of each filter used across input channels?

This question helps me a lot.

Say I have an RGB input image (3 channels). Then each filter has n×n weights per channel, which means the filter actually has 3×n×n weights in total.

For channel R, it has its own n×n filter.

For channel G, it has its own n×n filter.

For channel B, it has its own n×n filter.

After the inner products, they are all added together to make one feature map. Am I right?

And then, my question starts here. For my purposes, I will only use greyscale images as input, so the input images always have the same values in each RGB channel.

Then, can I reduce the number of weights in the filters? In this case, using three different n×n filters and adding the results should be the same as using one n×n filter that is the sum of the three filters.
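
To check that claim numerically, here is a small sketch (my own illustration, not from any framework): for an input whose R, G, and B channels are identical, convolving each channel with its filter and summing gives the same result as convolving the single channel with the element-wise sum of the three filters.

```python
# Numerical check: sum of per-channel convolutions equals convolution with
# the summed filter when all three channels hold the same greyscale plane.
import numpy as np
from scipy.signal import correlate2d

n = 3
gray = np.random.rand(8, 8)                   # a single greyscale plane
img = np.stack([gray, gray, gray], axis=-1)   # identical R, G, B channels
filters = np.random.rand(3, n, n)             # one n x n filter per channel

out_rgb = sum(correlate2d(img[..., c], filters[c], mode='valid') for c in range(3))
out_gray = correlate2d(gray, filters.sum(axis=0), mode='valid')

print(np.allclose(out_rgb, out_gray))  # True
```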

Does this logic hold for a trained network? I have a network trained on RGB input, but it is too heavy to run in real time. Since I only use greyscale images as input, it seems I could make the network lighter (theoretically, to almost 1/3 of the original).

I'm quite new in this field, so detailed explanations will be really appreciated. Thank you.

",18139,,,user9947,9/13/2018 11:03,9/13/2018 11:22,"Can I reduce the ""number of weights"" in CNN to 1/3 by restricting the input as greyscale image?",,2,6,,,,CC BY-SA 4.0 7950,2,,7949,9/13/2018 10:22,,2,,"

After inner product, add them all to make one feature map. Am I right?

Yes, you are right.

Then, can I reduce the number of weights in the filters? Because in this case, using three different n×n filters and adding them is same with using one n×n filter that is the summation of three filters.

If you have transformed the image into greyscale then you no longer need 3 filters. You should retrain your model on greyscale images. In a greyscale image the value of each pixel is a single sample representing only an amount of light (the light intensity).

The network will run faster if that is the only architectural change you make, but keep in mind that by converting the image to greyscale you will lose information and probably some of the predictive power of your network.

",18138,,,,,9/13/2018 10:22,,,,3,,,,CC BY-SA 4.0 7951,2,,7891,9/13/2018 10:28,,-2,,"

I think your task is as follows. Let's assume 5-year-old children. You have a number of pictures drawn by them (these pictures are your training set), and you want to synthesize pictures similar to the training set, because you need more pictures for your study. Am I right?

From those pictures, you want to extract some meaningful information about the real children who drew them, right? Then I think a GAN is not a faithful tool for your study. Of course, a GAN can make pictures that are very similar to your training set, but that does not mean the synthesized images contain the things you actually want! A GAN just synthesizes ""fake pictures"" that cannot be distinguished from your training set, and those synthesized pictures may not carry anything meaningful, because they were not drawn by a real child.

Still, it is worth trying. A GAN may capture some features of what makes a drawing ""child-like"" (although I think that is hard). You can find lots of GAN variants for your research, especially DCGAN.

",18139,,,,,9/13/2018 10:28,,,,1,,,,CC BY-SA 4.0 7952,1,,,9/13/2018 10:52,,1,56,"

I am currently reading the research paper Image Crowd Counting Using Convolutional Neural Network and Markov Random Field by Kang Han, Wanggen Wan, Haiyan Yao, and Li Hou.
I did not understand the following context properly:

Formally, the Markov random field framework for crowd counting can be defined as follows (we follow the notation in [18]). Let $P$ be the set of patches in an image and $C$ be a possible set of counts. A counting $c$ assigns a count $c_p \in C$ to each patch $p \in P$. The quality of a counting is given by an energy function:

$$E(c) = \sum_{p \in P} D_p(c_p) + \sum_{(p,q) \in N} V(c_p - c_q) \quad (2)$$

where $N$ is the set of (undirected) edges in the four-connected image patch graph. $D_p(c_p)$ is the cost of assigning count $c_p$ to patch $p$, and is referred to as the data cost. $V(c_p - c_q)$ measures the cost of assigning counts $c_p$ and $c_q$ to two neighboring patches, and is normally referred to as the discontinuity cost. For the problem of smoothing the adjacent patch counts, $D_p(c_p)$ and $V(c_p - c_q)$ can take the form of the following functions:

$$D_p(c_p) = \lambda \min((I(p) - c_p)^2, DATA\_K) \quad (3)$$
$$V(c_p - c_q) = \min((c_p - c_q)^2, DISC\_K) \quad (4)$$

where $\lambda$ is a weight on the energy terms, $I(p)$ is the ground-truth count of patch $p$, and $DATA\_K$ and $DISC\_K$ are the truncation constants of $D_p(c_p)$ and $V(c_p - c_q)$, respectively.
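
To make the notation concrete, here is how I currently read the energy in equations (2)-(4) as code (my own sketch, not from the paper; the values of $\lambda$, DATA_K and DISC_K below are placeholders):

```python
# Brute-force evaluation of the energy E(c) from Eq. (2)-(4) over a grid of
# patch counts; lam, data_k and disc_k are placeholder constants.
import numpy as np

def energy(counts, gt_counts, lam=1.0, data_k=100.0, disc_k=10.0):
    # counts, gt_counts: 2D arrays with one (predicted / ground-truth) count per patch
    data_cost = lam * np.minimum((gt_counts - counts) ** 2, data_k)
    # Discontinuity cost over the 4-connected neighbours (right and down edges)
    dx = np.minimum((counts[:, 1:] - counts[:, :-1]) ** 2, disc_k)
    dy = np.minimum((counts[1:, :] - counts[:-1, :]) ** 2, disc_k)
    return data_cost.sum() + dx.sum() + dy.sum()
```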


Can anyone explain the above part in detail and give me a detailed insight into how I should implement this part of the project?

",14592,,14592,,9/13/2018 14:22,9/13/2018 14:22,Doubt regarding research paper on Crowd Counting using Convolutional neural networks and Markov Random Field,,0,2,,,,CC BY-SA 4.0 7953,2,,7949,9/13/2018 11:02,,0,,"

After inner product, add them all to make one feature map. Am I right?

Yes, you are right. Now I will try to derive a transformation that preserves accuracy; I cannot say much about the efficiency of the method.

Note: I have not worked on this type of problem, but knowing the maths behind CNNs, I will try to solve the problem theoretically.

First, you have to know which RGB-to-greyscale conversion formula has been used. Here are some common schemes.

So let us say each pixel had values $r, g, b$ and you converted it to $x_1*r + x_2*g +x_3*b$. Initially, for simplicity, let us say that we are talking about the corner pixel and the $valid$ convolution scheme, so the corner pixel's $RGB$ channels get multiplied by the values $w_r, w_g, w_b$ during convolution and the products get summed up.

But now you only have one pixel which is $x_1*r + x_2*g +x_3*b$. Now let us multiply this by $\frac {w_r}{x_1} + \frac {w_g}{x_2} + \frac {w_b}{x_3}$. This will result in: $(w_r*r + w_g*g +w_b*b) + (\frac {w_r*(x_2*g + x_3*b)}{x_1} +\frac {w_g*(x_1*r + x_3*b)}{x_2} + \frac {w_b*(x_2*g + x_1*r)}{x_3})$.

Now we have to try to remove the second term of the equation. The parameters $w_r, w_g, w_b, x_1, x_2, x_3$ are already predetermined. Taking the terms in $r$ from the second part of the equation gives $r*(\frac {w_g*x_1}{x_2} + \frac {w_b*x_1}{x_3})$. The bracketed factor has a predetermined value, and for $r$ itself I think that, somehow, modern image analysis techniques could give you an approximate value. Do this for $g$ and $b$ as well, subtract the results from the aforementioned equation, and you will finally get $(w_r*r + w_g*g +w_b*b)$, which was the term obtained by convolving the filters with the $RGB$ image.

I have done all this hypothetically; such image analysis techniques might not exist, but it is still worth a try. Probably better methods to reduce the second term exist in the mathematical literature. I will leave it up to mathematicians to point in the right direction.

",,user9947,,user9947,9/13/2018 11:22,9/13/2018 11:22,,,,3,,,,CC BY-SA 4.0 7955,1,,,9/13/2018 15:32,,1,47,"

Neurons can be simulated using different models that vary in the degree of biophysical realism. When designing an artificial neuronal network, I am interested in the consequences of choosing a degree of neuronal realism.

In terms of computational performance, the FLOPS vary from integrate-and-fire to the Hodgkin–Huxley model (Izhikevich, 2004). However, properties, such as refraction, also vary with the choice of neuron.

  1. When selecting a neuronal model, what are the consequences for the ANN other than performance? For example, would there be trade-offs in terms of stability/plasticity?

  2. Izhikevich investigated the performance question in 2004. What are the current benchmarks (other measures, new models)?

  3. How does selecting a neuron have consequences for scalability in terms of hardware for a deep learning network?

  4. When is the McCulloch-Pitts neuron inappropriate?


References

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15(5). https://www.izhikevich.org/publications/whichmod.pdf

",16411,,,,,9/13/2018 15:32,How does the degree of neuronal realism affect computing in a deep learning scenario?,,0,4,,,,CC BY-SA 4.0 7958,1,7960,,9/13/2018 19:57,,1,493,"

I found a video for the paper DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills on YouTube.

I looked in the related paper, but could not find details of how the environment was created, such as the physics engine it used. I would like to use it, or something similar.

",,user18189,1847,,9/14/2018 6:31,9/14/2018 6:31,What is the physics engine used by DeepMimic?,,1,4,,,,CC BY-SA 4.0 7959,1,7968,,9/13/2018 20:26,,2,85,"

I want to build a model to support decision making for loan insurance proposal.

There are three actors in the problem: a bank, a loan applicant (someone who asks for a loan), and a counselor. The counselor studies the application and, if the applicant has a good profile, proposes to them loans from banks that fit that profile. Then the application is sent to the bank, but the bank may refuse the applicant (based on criteria we don't know).

The counselor also has to decide whether or not to propose loan insurance to the applicant.

The risk is that some banks reject applicants who accept loan insurance, while other banks accept more applicants with loan insurance. But there are no fixed rules per bank, since some banks accept or reject applicants with loan insurance depending, for example, on the type of acquisition the applicants want to make with their loan.

Thus, the profile of the applicant can matter in their rejection from banks but all criteria influencing the decision are quite uncertain.

I've researched online and found several scholarly articles on using Monte Carlo for decision making. Should I use Monte Carlo or a simple classifier for this decision-making problem?

I saw that Monte Carlo (possibly Monte Carlo Tree Search) can be used in decision making and that it is good when there is uncertainty. But it seems that it would forecast by producing some strategy (after running a lot of simulations), whereas what I want is an outcome based on both the profile of the loan applicant and the bank, knowing that the banks' acceptance criteria could change every six months. And I would have to model the banks, which seems quite difficult.

A classifier seems to me to not really fit the problem, but I am not really sure. Actually, I don't see how a classifier like a decision tree, for example, would work here, because I have to predict the decision of the counselor to propose or not, based on the decision of banks (and I don't know their criteria) to refuse or accept applicants who were proposed loan insurance and accepted it.

The data I have consists of former applicants' profiles that were sent to banks, whether or not they were accepted by the bank, whether or not they wanted loan insurance, and the type of acquisition they wanted to make with their loan.

I am new to Decision Making. Thank you!

",18192,,18192,,9/13/2018 21:24,9/14/2018 8:42,Should I use Monte Carlo or a classifier for this Decision Making problem?,,1,0,,,,CC BY-SA 4.0 7960,2,,7958,9/13/2018 22:28,,2,,"

Bullet physics engine

Their paper says

Physics simulation is performed at 1.2kHz using the Bullet physics engine [Bullet 2015].
",15493,,,,,9/13/2018 22:28,,,,0,,,,CC BY-SA 4.0 7962,1,7973,,9/14/2018 2:32,,1,488,"

Is it possible for a genetic algorithm + neural network that is used to learn to play one game, such as a platform game, to be applied to another, different game of the same genre?

So, for example, could an AI that learns to play Mario also learn to play another, similar platform game?

Also, could anyone point me in the direction of material I should familiarise myself with in order to complete my project?

",18209,,1641,,9/14/2018 10:58,9/14/2018 11:46,Can genetic algorithms be used to learn to play multiple games of the same type?,,3,6,,,,CC BY-SA 4.0 7963,1,7967,,9/14/2018 3:38,,5,768,"

I started teaching myself about reinforcement learning a week ago and I have this confusion about the learning experience. Let's say we have the game Go, and we have an agent that we want to be able to play the game and win against anyone. But let's say this agent learns by playing against one opponent. My questions then are:

  1. Wouldn't the agent (after learning) be able to play only with that opponent and win? It estimated the value function of this specific behaviour only.
  2. Would it be able to play as well against weaker players?
  3. How do you develop an agent that can estimate a value function that generalizes against any behaviour and win? Self-play? If yes, how does that work?
",17582,,2444,,9/14/2018 12:33,9/14/2018 12:33,How can a reinforcement learning agent generalize if it is trained against only one opponent?,,2,0,,,,CC BY-SA 4.0 7965,1,7969,,9/14/2018 5:42,,4,44,"

In some situations, like risk detection and spam detection, the pattern of good users is stable, while the patterns of attackers change rapidly. How can I make a model for that? Which classifier/method should I use?

",18213,,,,,9/14/2018 9:01,How to design a classifier while the patterns of positive data are changing rapidly?,,1,0,,,,CC BY-SA 4.0 7966,1,,,9/14/2018 7:28,,9,960,"

Are there neural networks that can decide to add/delete neurons (or change the neuron models/activation functions or change the assigned meaning for neurons), links or even complete layers during execution time?

I guess that such neural networks overcome the usual separation of learning/inference phases and they continuously live their lives in which learning and self-improving occurs alongside performing inference and actual decision making for which these neural networks were built. Effectively, it could be a neural network that acts as a Gödel machine.

I have found the term dynamic neural network but it is connected to adding some delay functions and nothing more.

Of course, such self-improving networks completely redefine the learning strategy, possibly, single shot gradient methods can not be applicable to them.

My question is connected to the neural-symbolic integration, e.g. Neural-Symbolic Cognitive Reasoning by Artur S. D'Avila Garcez, 2009. Usually this approach assigns individual neurons to the variables (or groups of neurons to the formula/rule) in the set of formulas in some knowledge base. Of course, if knowledge base expands (e.g. from sensor readings or from inner nonmonotonic inference) then new variables should be added and hence the neural network should be expanded (or contracted) as well.

",8332,,2444,,8/27/2019 23:31,8/27/2019 23:31,Are there dynamic neural networks?,,2,4,,,,CC BY-SA 4.0 7967,2,,7963,9/14/2018 8:37,,6,,"

Reinforcement Learning (RL) at its core does not have anything directly to say about adversarial environments, such as board games. That means in a purely RL set up, it is not really possible to talk about the ""strength"" of a player.

Instead, RL is about solving consistent environments, and that consistency requirement extends to any opponents or adversarial components. Note that consistency is not the same as determinism - RL theory copes well with opponents that effectively make random decisions, provided the distribution of those decisions does not change based on something the RL agent cannot know.

Provided an opponent plays consistently, RL can learn to optimise against that opponent. This does not directly relate to the ""strength"" of an opponent, although usually strong opponents present a more challenging environment to learn overall.

  1. Wouldn't the agent (after learning) be able to play only with that opponent and win, since it estimated the value function of this specific behavior only?

If the RL agent has enough practice and time to optimise against the opponent, then yes, the value function (and any policy based on it) would be specific to that opponent. Assuming the opponent did not play flawlessly, the agent would learn to play such that it would win as often as possible against that opponent.

When playing against other opponents, the success of the RL agent will depend on how similar the new opponent was to the original that it trained against.

  2. Would it be able to play as well with weaker players?

As stated above, there is not really a concept of ""stronger"" or ""weaker"" in RL. It depends on the game, and how general the knowledge is that strong players require in order to win.

In theory you could construct a game, or deliberately play strongly, but with certain flaws, so that RL would play very much to counter one play style, and would fail against another player that did not have the same flaws.

It is difficult to measure this effect, because human players learn from their mistakes too, and are unlikely to repeat the exact same game time after time, but with small variations at key stages. Humans do not make consistent enough opponents, and individual humans do not play enough games at each stage of their ability to study fine-grained statistics of their effective policies.

In practice it seems likely that the effect of weakening against new players would be there in RL, due to sampling error if nothing else. However, it seems that the ""strength"" of players as we measure them in any game of skill such as chess or go, does correlate with a generalised ability. In part this is backed up by consistent results with human players and Elo ratings.

Any game where you can form ""rings"" of winning players:

  • Player B consistently beats Player A
  • Player C consistently beats Player B
  • Player A consistently beats Player C

Could cause issues of the type you are concerned about when applying RL to optimise an artificial agent.

  3. How do you develop an agent that can estimate a value function that generalizes against any behavior and win?

If it is possible to play perfectly, then a value function estimated for perfect play would work. No player could beat it. Think of Tic Tac Toe - it is relatively easy to construct perfect-play value functions for it.

This is not achievable in practice in more complex games. To address this, and improve the quality of its decisions, what AlphaGo does is common to many game-playing systems, using RL or not. It performs a look-ahead search of positions. The value function is used to guide this. The end result of the search is essentially a more accurate value function, but only for the current set of choices - the search focuses lots of computation on a tiny subset of all possible game states.

One important detail here is that this focus applies at run time whilst playing against any new opponent. This does not 100% address your concerns about differing opponents (it could still miss a future move by a different enough opponent when searching). But it does help mitigate smaller statistical differences between different opponents.

This search tree is such a powerful technique that for many successful game playing algorithms, it is possible to start with an inaccurate value function, or expert heuristics instead, which are fixed and general against all players equally. IBM's Deep Blue is an example of using heuristics.

Self-play? If yes, how does that work?

Self-play appears to help. Especially in games which have theoretical optimal play, value functions will progress towards assessing this optimal policy, forming better estimates of state value with enough training. This can give a better starting point than expert heuristics when searching.

",1847,,1847,,9/14/2018 8:43,9/14/2018 8:43,,,,0,,,,CC BY-SA 4.0 7968,2,,7959,9/14/2018 8:42,,2,,"

A classifier seems to me to not really fit the problem. I am not really sure. Actually, I don't see how a classifier like a decision tree, for example, would work here. Because I have to predict decision of the counselor to propose or not based on the decision of banks (and I don't know their criteria) to refuse or accept applicants who were proposed loan insurance and accepted it.

The data I have is former applicants profile who were sent to banks and if they were accepted or not by the bank, if they wanted a loan insurance or not and the type of acquisition they wanted to make with their loan.

Why does this seem to you like something where a classifier wouldn't fit? Unless I'm missing something, it sounds like a prototypical example of a classification problem to me.

You have:

  • Input features (applicants' profile)
  • A clear (binary?) prediction target: propose or don't propose (equivalent to predicting whether or not the bank would accept, because you'll always want to propose if the bank would accept, and never propose if the bank wouldn't accept).
  • Old training data containing both the input features and the matching prediction targets.
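
To make that concrete, here's a minimal sketch of what a first classifier could look like; the file name and all column names below are placeholders, not your real schema.

```python
# A minimal baseline classifier for 'will the bank accept this application?'.
# 'past_applications.csv' and the column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('past_applications.csv')
X = pd.get_dummies(df[['bank', 'acquisition_type', 'wanted_insurance']])  # plus profile columns
y = df['accepted_by_bank']   # 1 if the bank accepted the applicant

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier().fit(X_train, y_train)
print(clf.score(X_test, y_test))   # rough estimate of accuracy on held-out data
```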

Approaches like Monte-Carlo Tree Search can only be used if you have a forward model or simulator. In your setting, you could view the features (applicants' profile) as a ""game state"", and model the problem as a game with two actions (propose or not propose). However, you don't have a forward model (a function that, given a current state and action, generates a possible reward and subsequent state).

In applications where MCTS is often used (such as games), you do have such a forward model: for a game like Go or chess, you can easily program the game's rules, program how you transition from one state into another when you select an action, etc. This does not appear to be the case for you.

",1641,,,,,9/14/2018 8:42,,,,0,,,,CC BY-SA 4.0 7969,2,,7965,9/14/2018 9:01,,2,,"

The phenomenon where the prediction targets (in your case, behaviour) change over time is referred to as ""concept drift"".

If you search for that term, you'll find that there have been many publications attempting to tackle that over multiple decades, way too many papers to all summarize here in a single answer. It's still a difficult problem though, by no means a ""solved"" problem.

Two different, broad directions for ideas are:

  1. Frequently re-training (offline) static models on the most recent training data
  2. Using online learning approaches that can continuously be updated from a data stream, online as new labelled data becomes available.

This github page contains a large list of papers on credit card fraud detection, where the problem you describe occurs because fraudsters change their behaviour in an attempt to evade detection. Most of those papers discuss variants of the first approach. Basically, many of those papers use an ensemble of multiple Random Forests. Every day, new labelled data becomes available. They often then remove the oldest of multiple Random Forests, and add a new Random Forest trained on the most recent data made available that day.
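
As a rough illustration of that sliding-window ensemble idea (my own sketch, not taken from any specific paper in that list):

```python
# Keep the K most recent daily Random Forests, drop the oldest each day,
# and average their predicted fraud probabilities. K is an assumption.
from collections import deque
import numpy as np
from sklearn.ensemble import RandomForestClassifier

K = 7
models = deque(maxlen=K)   # appending beyond K discards the oldest model

def end_of_day_update(X_today, y_today):
    models.append(RandomForestClassifier(n_estimators=100).fit(X_today, y_today))

def predict_proba_positive(X):
    # Average the probability of the positive (e.g. fraud) class over the ensemble.
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
```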

There are also some variants where they don't always train new models at a fixed schedule (e.g., every day), but try to detect when the statistical properties of the data have changed using statistical tests, and only train new models when it is ""necessary"" (due to such changes).

For the second idea, you'll often be thinking of approaches that use Stochastic Gradient Descent-like approaches for learning; with a learning rate / step size that does not decay to zero, such techniques will naturally, slowly ""forget"" what they have learned from old data, and focus more on the most recent data.

If you have some method to obtain accurate labels for certain instances relatively quickly, you could consider an approach like the one proposed in this paper (disclaimer: I'm an author on this paper). For example, in that paper the assumption is that human experts can relatively quickly investigate and obtain accurate labels for a small selection of transactions, and this can be exploited to quickly learn in an online manner.

",1641,,,,,9/14/2018 9:01,,,,0,,,,CC BY-SA 4.0 7970,2,,7963,9/14/2018 9:30,,4,,"

Most of your questions are already addressed very well by Neil's answer, so I won't address those again. I'd just like to clarify additionally on the following point:

But let's say this agent learn from playing against one opponent

Precisely that assumption of learning from against a single opponent causes many issues. In fact, even if that ""single opponent"" is changing/improving, you can still have an unstable learning process. For instance, two agents that are simultaneously learning (or a single agent and a copy of itself) can keep infinitely going around in circles as Neil also already hinted at in a game like Rock-Paper-Scissors.

In the original AlphaGo publication (2016), learning from self-play was done by randomly selecting one of a set of (relatively recent) copies of the learning agent every game, rather than always playing against an exact copy of the single most recent version of the agent. Adding more diversity to the ""training partners"" in that way can help to learn a more robust policy that can handle different opponents. Of course, we shouldn't go overboard with this kind of randomization; you still want to make sure to train against strong training partners (or opponents that have roughly the same level of strength as the learning agent), since an agent that is already quite strong won't be able to learn a lot from playing against an extremely weak agent anymore.

In 2017, a new paper appeared on AlphaGo Zero. In this paper, they no longer used such randomization as described above, but still had a stable learning process from self-play. As far as I'm aware, the most likely hypothesis to explain this stability is the fact that Monte-Carlo Tree Search was used during the self-play training process to improve the update targets. This is different from the use of lookahead search that Neil already described during gameplay, after training. By also incorporating lookahead search during the training process, and using it to improve update targets, the hypothesis is that you can reduce the risk of ""overfitting"" against the training partner. The lookahead search actually ""thinks"" a bit about other moves that the training partner could have selected other than what it actually did, and incorporates that in the update targets. A similar combination of MCTS and self-play reinforcement learning was also independently published (by different authors) to result in stable learning in different games.

",1641,,,,,9/14/2018 9:30,,,,0,,,,CC BY-SA 4.0 7971,2,,7962,9/14/2018 10:28,,0,,"

Definitely depends on the design of your algorithm. According to my knowledge, almost all ML algorithms are targeting at specific issues, thus it’s difficult for general usage. And you have to train again for any new issues. It’s also difficult to understand the internal working mechanism of those AI algorithms due to general statistics methods (and yes, they call AI). I will recommend a regional/encapsulated method applied on general methods, therefore algorithms are not specific and micro-structured for general purposes. If games are similar, definitely we can apply on both with appropriate designed methods. Hinton has started his Capsule network which I think is a good direction. Just beware, training shouldn’t be specific object related. Instead, it should be micro-structure related or feature (it’s hard to differ current AI feature from human insight feature). For example, human can easily differ differences even though never see those before. And human do not have to re-train nerve units except for better understanding or accuracy. Genetic algorithms should have the same ability to survive in different but similar environments. Unfortunately, we are at the beginning of AI era but also luckily we have a lot to do. In fact, almost all current techs imitate the nature. If the nature can, definitely we can at one day.

",18221,,,,,9/14/2018 10:28,,,,0,,,,CC BY-SA 4.0 7972,2,,7962,9/14/2018 10:49,,0,,"

Genetic algorithms can learn multiple games, yes, in fact genetic algorithms is a bad term to describe this family, there is only one generic genetic algorithm with many variations depending on the problem at hand. I recommend this pdf for a introduction on how they work and how to build them:

http://www.boente.eti.br/fuzzy/ebook-fuzzy-mitchell.pdf

",18123,,,,,9/14/2018 10:49,,,,1,,,,CC BY-SA 4.0 7973,2,,7962,9/14/2018 11:46,,5,,"

Genetic algorithms and Neural Networks both are ""general"" methods, in the sense that they are not ""domain-specific"", they do not rely specifically on any domain knowledge of the game of Mario. So yes, if they can be used to successfully learn how to play Mario, it is likely that they can also be applied with similar success to other Platformers (or even completely different games). Of course, some games may be more complex than others. Learning Tic Tac Toe will likely be easier than Mario, and learning Mario will likely be easier than StarCraft. But in principle the techniques should be similarly applicable.

If you only want to learn in one environment (e.g., Mario), and then immediately play a different game without separately training again, that's much more complicated. For research in that area you'll want to look for Transfer Learning and/or Multi-Task learning. There has definitely been research there, with the latest developments that I'm aware of having been published yesterday (this is Deep Reinforcement Learning though, no GAs I think).

The most ""famous"" recent work on training Neural Networks to play games using Genetic Algorithms that I'm aware of is this work by Uber (blog post links to multiple papers). I'm not 100% sure if that really is the state of the art anymore, if it's the best work, etc... I didn't follow all the work on GAs in sufficient detail to tell for sure. It'll be relevant at least though.

I know there's also been quite a lot of work on AI in general for Mario / other platformers (for instance in venues such as the IEEE Conference on Computational Intelligence and Games, and the TCIAIG journal).

",1641,,,,,9/14/2018 11:46,,,,0,,,,CC BY-SA 4.0 7974,2,,6745,9/14/2018 12:07,,1,,"

This is not a bug, assuming you hve implemented the SOM properly with best decay rules of learning rate and neighbourhood strength (in short best hyperparameters).

Think of the map as finding many local clusters in the Iris-Versicolor class. Since SOM's behave somewhat like k-means clustering, I would say this implies Iris Versicolor has many local clusters which are as closely knit together as the entire classes of Iris setosa and Iris virginica.

",,user9947,,,,9/14/2018 12:07,,,,0,,,,CC BY-SA 4.0 7975,1,,,9/14/2018 12:07,,2,109,"

Problem:

We have a fairly big database that is built up by our own users. The way this data is entered is by asking the users 30ish questions that all have around 12 answers (x, a, A, B, C, ..., H). The letters stand for values that we can later interpret.

I have already tried and implemented some very basic predictors, like random forest, a small NN, a simple decision tree etc.

But all these models use the full dataset to do one final prediction. (fairly well already).

What I want to create is a system that will eliminate 7 to 10 of the possible answers a user can give at any question. This will reduce the amount of data we need to collect, store, or use to re-train future models.

I have already found several methods to decide what are the most discriminative variables in the full dataset. However, when a user starts answering the questions, I start to get lost on what to do. None of the models I have calculates the next question given some previous information.

It feels like I should use a Naive Bayes Classifier, but I'm not sure. Other approaches include recalculating the Gini or entropy value at every step. But as far as my knowledge goes, we can't take into account the answers given previously when recalculating.

",18225,,2444,,2/16/2019 2:43,12/30/2022 4:10,How can I minimize the number of answers that are relevant to a machine learning model?,,2,0,,,,CC BY-SA 4.0 7976,2,,7975,9/14/2018 12:42,,0,,"

You don't need to re-train on the fly. What you're looking for is an embedded feature selection algorithm, and even more precisely, one that minimizes the number of responses required.

I think this might be one of the rare cases where genetic and evolutionary approaches are the obviously correct choice.

Genetic Programming is a technique for finding models that are simply computer programs. You generate a bunch of computer programs at random, and then breed the ""better"" ones together. Repeating this process over time leads to highly optimized programs.

A nice feature of GP is that it is extremely flexible when picking what to optimize. So instead of ""better"" meaning just ""more accurate"", ""better"" can mean the sum of accuracy and $\frac{1}{\#answers\_used}$. The algorithm works the same way, and, with carefully chosen rewards, you may be able to get the best of both worlds.
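
To illustrate that kind of reward trade-off, here is a small sketch using a simple evolutionary search over question subsets (simpler than full genetic programming, but the same fitness idea); X, y, the classifier, and the weighting of the two terms are all placeholders to tune.

```python
# Evolutionary search over binary masks of the 30 questions; fitness rewards
# cross-validated accuracy and penalises the number of questions used.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N_QUESTIONS = 30

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(), X[:, mask], y, cv=3).mean()
    return acc + 1.0 / mask.sum()          # the weighting of the two terms is something to tune

def evolve(X, y, pop_size=30, generations=40):
    pop = rng.random((pop_size, N_QUESTIONS)) < 0.5
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)][-(pop_size // 2):]   # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        flips = rng.random(children.shape) < 0.05              # random mutation
        children[flips] = ~children[flips]
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, X, y) for ind in pop])]
```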

There are lots of variations on this. I would probably start with a simple toolkit like ECJ, and standard, boring, genetic programming.

There are specialized techniques for things like your problem too, but you'll probably get 80% of the benefit without needing to pursue them.

",16909,,16909,,9/14/2018 12:53,9/14/2018 12:53,,,,1,,,,CC BY-SA 4.0 7977,1,7978,,9/14/2018 14:44,,2,426,"

I would like to develop a chatbot that is able to pass the Turing test, i.e. a chatbot that is able to carry on a natural conversation with a human.

Can natural language processing (NLP) be used to do that? What if I combine NLP with neural networks?

",18233,,2444,,6/2/2020 23:09,6/2/2020 23:09,Can I develop a chatbot to carry on a natural conversation with a human using NLP and neural networks?,,1,0,,,,CC BY-SA 4.0 7978,2,,7977,9/14/2018 15:41,,5,,"

I would not recommend using neural networks and NLP together to create a system sufficiently capable of conversation/dialogue that it would pass that current crop of Turing-like tests.

Conversations follow certain rules and regularities (which we have only partially discovered so far), and training an ANN with dialogues in order to pick up those regularities is simply not feasible. In conversations you have a memory of what has been mentioned previously, you build up assumptions about the intentions of your dialogue partner, and keep track of the current topic and sub-topics. This is far too complex to be reduced to a machine learning approach.

As a starting point I would suggest looking at ELIZA, developed by Weizenbaum in the mid-1960s. There are plenty of implementations in various programming languages available. Use that as a starting point to extend the capabilities according to topics you want to talk about, and store in memory what the user has said before, trying to refer back to it, etc. This is a lot easier to do with 'symbolic' AI rather than subsymbolic processing.

A lot of current tech companies offer chatbot variants based on machine learning, but they rarely go beyond intent recognition or simple question-answer dialogues. For more sophisticated dialogues they are simply not suitable.

(Disclaimer: I work for a company producing conversational software)

",2193,,,,,9/14/2018 15:41,,,,0,,,,CC BY-SA 4.0 7979,1,7981,,9/14/2018 20:21,,15,2562,"

In AlphaZero, the policy network (or head of the network) maps game states to a distribution of the likelihood of taking each action. This distribution covers all possible actions from that state.

How is such a network possible? The possible actions from each state are vastly different than subsequent states. So, how would each possible action from a given state be represented in the network's output, and what about the network design would stop the network from considering an illegal action?

",12201,,12201,,11/19/2018 2:21,11/19/2018 2:21,Why does the policy network in AlphaZero work?,,1,0,,,,CC BY-SA 4.0 7980,2,,7947,9/14/2018 20:44,,2,,"

Can AI provide a more reliable analysis of the gross effects of carbon emissions on extinctions of species ice-cap melting, and other effects?

Yes. The work of Judea Pearl and others over the last 20 years began out of a desire to address uncertainty within AI. Eventually, this led Pearl to become fascinated by the need to quantifiably determine when one event has caused another, the problem at the root of "correlation is not causation". He substantively succeeded with the combination of causal modeling and the do-calculus. The do-calculus allows you to formulate queries of the form "To what degree did X cause Y?", and to automatically determine what measurements are needed to determine the answer in a statistically meaningful way. The algorithms used under the hood are closely related to those used in AI and robotics systems to reason about uncertainty (i.e. Bayesian Inference). Causal modeling is still relatively new, and is not yet as widely used as it could be. An open problem is how to specify what the model of the world looks like from data (rather than just reasoning over a model that is given). If this problem is solved, we could see major improvements in the ability to provide analyses like those you ask about.

Can AI provide a better understanding about whether the runaway scenario described by some climatologists is realistic?

Probably not. Predicting future events without any observable precedents is not something that anyone can be sure about. The questions raised in those kinds of scenarios aren't about calculation, but about modeling. For example, if you believe that there is a large amount of methane trapped in the arctic permafrost, and you believe that temperatures above a certain range release this much faster than in the past, and you believe that methane warms the climate rapidly, then you would tend to believe that the runaway scenarios are plausible. AI can't tell us whether the methane is there (we have to go measure it, though maybe AI can make the measurement more accurate), and can't tell us how temperature will affect the release rate (again, we have models of this, but they rely on different assumptions, and we have to go measure to find out which are right).

Can AI provide better simulations of the impacts of interventions on both climate and the economy, to inform decision making?

Maybe. Agent-based modeling can help with this to some degree, and is arguably part of AI. In fact, it already has been to some degree, by Beeger & Troost in J. Agriculture Economics, in 2014 and again in 2017, though interest in this kind of modeling looks like a pretty new development in this area. Although ABM can give us reasonable models and help simulate the impact of interventions, ultimately they are just one modeling tool among many. Their potency may improve if more realistic agent models are used, but it is not clear that AI is going to provide advances in this area in the near future.

Automation of climate planning so that the harmful effects of the irresponsibility of one geopolitical entity wishing to industrialize without constraint on other geopolitical entities can be mitigated

Probably not. Although AI techniques have made some kinds of economic planning problems a lot easier, the main barrier to the effects you describe is a social/political one: countries are sovereign, and the world operates as a de facto anarchy (i.e. the UN is impotent). Your AI model can tell, say, India not to industrialize, but Indians want to enjoy the same kind of lifestyle improvements that Americans do, and would rather enjoy them sooner than later. India would collectively rather that Americans put an enormous tax on their carbon emissions, drive less, eat much, much less beef, and stop flying everywhere, than that Indians continue living on the equivalent of $7,000 each per year. In contrast, Americans would rather that Indians just wait a few decades while the developed world decarbonizes, and only industrialize once we have enough solar panels to replace all our current needs within current industrial economies.

Basically this is a resource allocation problem within an anarchy: we can only burn X carbon within Y time, everyone wants to burn some, and the only way to enforce contracts is with the threat of massive violence (or massive economic sanctions, which in turn, are imposed through the threat of violence against other actors if they trade with the sanctioned country). AI can help us answer questions like "How much should the USA pay India in exchange for India not industrializing this year?", see, e.g. work in Auction Theory, but AI can't actually make nations do those things if they can't reach a diplomatic compromise.

",16909,,-1,,6/17/2020 9:57,9/14/2018 20:44,,,,0,,,,CC BY-SA 4.0 7981,2,,7979,9/14/2018 21:48,,20,,"

The output of the policy network is as described in the original paper:

A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy π(a|s) by a 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves. Each of the 8×8 positions identifies the square from which to “pick up” a piece. The first 56 planes encode possible ‘queen moves’ for any piece: a number of squares [1..7] in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible underpromotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.

So each move selector scores the relative probability of selecting a piece in a given square and moving it in a specific way. For example, there is always one output dedicated to representing picking up the piece in A3 and moving it to A6. This representation includes selecting opponent pieces, selecting empty squares, making knight moves for rooks, making long diagonal moves for pawns. It also includes moves that take pieces off the board or through other blocking pieces.

The typical branching factor in chess is around 35. The policy network described above always calculates discrete probabilities for 4672 moves.

Clearly this can select many non-valid moves, if pieces are not available, or cannot move as suggested. In fact it does this all the time, even when fully trained, as nothing is ever learned about avoiding the non-valid moves during training - they do not receive positive or negative feedback, as there is never any experience gained relating to them. However, the benefit is that this structure is simple and fixed, both useful traits when building a neural network.

The simple work-around is to filter out impossible moves logically, setting their effective probability to zero, and then re-normalise the probabilities for the remaining valid moves. That step involves asking the game engine for what the valid moves are - but that's fine, it's not ""cheating"".
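
A minimal sketch of that masking step (my own illustration, not the actual AlphaZero code) could look like this:

```python
# Zero out the probabilities of illegal moves and renormalise what remains.
import numpy as np

def masked_policy(policy, legal_mask):
    # policy: network output as probabilities, shape (4672,)
    # legal_mask: boolean array of shape (4672,), True for legal moves
    p = np.where(legal_mask, policy, 0.0)
    total = p.sum()
    if total == 0.0:                      # degenerate case: uniform over legal moves
        p = legal_mask.astype(float)
        total = p.sum()
    return p / total
```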

Whilst it might be possible to either have the agent learn to avoid non-valid moves, or some clever output structure that could only express valid moves, these would both distract from the core goal of learning how to play the game optimally.

",1847,,1847,,9/15/2018 8:50,9/15/2018 8:50,,,,2,,,,CC BY-SA 4.0 7983,1,8046,,9/15/2018 3:08,,4,176,"

I'm currently implementing the original NEAT algorithm in Swift.

Looking at figure 4 in Stanley's original paper, it seems to me there is a chance that node 5 will have no (enabled) outgoing connection if parent 1 is assumed the fittest parent and the connection is randomly picked from parent 2.

Is my understanding of the crossover function correct and can it indeed result in a node with no outgoing connections?

",18249,,16909,,9/15/2018 13:53,9/19/2018 18:03,Can a crossover result in a node with no outgoing connections?,,1,0,,,,CC BY-SA 4.0 7984,2,,4048,9/15/2018 4:23,,4,,"

Yes. It is feasible.

Overview of the Question

The design goal of the system seems to be to gain a winning strategic advantage by employing one or more artificial networks in conjunction with a card game playing engine.

The question shows a general awareness of the basics of game-play as outlined in Morgenstern and von Neumann's game theory.

  • At specific points during game-play a player may be required to execute a move.
  • There is a finite set of move options according to the rules of the game.
  • Some strategies for selecting a move produce higher winning records over multiple game plays than other strategies.
  • An artificial network can be employed to produce game-play strategies that are victorious more frequently than random move selection.

Other features of game-play may or may not be as obvious.

  • At each move point there is a game state, which is needed by any component involved in improving game-play success.
  • In addition to not knowing when the opponent will bluff, in card games, the secret order of shuffled cards can introduce the equivalent of a virtual player the moves of which approximate randomness.
  • In three or more player games, the signaling of partners or potential partners can add an element of complexity to determining the winning game strategy at any point. Based on the edits, it does not appear like this game has such complexities.
  • Psychological factors such as intimidation can also play a role in winning game-play. Whether or not the engine presents a face to the opponent is unknown, so this answer will skip over that.

Common Approach Hints

There is a common approach to mapping both inputs and outputs, but there is too much to explain in a Stack Exchange answer. These are just a few basic principles.

  • All of the modeling that can be done explicitly should be done. For instance, although an artificial net can theoretically learn how to count cards (keeping track of the possible locations of each of the cards), a simple counting algorithm can do that, so use the known algorithm and feed those results into the artificial network as input.
  • Use as input any information that is correlated with optimal output, but don't use as inputs any information that can not possibly correlate with optimal output.
  • Encode data to reduce redundancy in the input vector, both during training and during automated game-play. Abstraction and generalization are the two common ways of achieving this. Feature extraction can be used as a tool to either abstract or generalize. This can be done at both inputs and outputs. An example is that if, in this game, J > 10 in the same way that A > K, K > Q, Q > J and 10 > 9, then encode the cards as an integer from 2 through 14 or 0 through 12 by subtracting one. Encode the suits as 0 through 3 instead of four text strings.
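
As a small illustration of that last point, a minimal encoding might look like the following sketch (the rank/suit mapping here is just an assumption for illustration):

RANKS = '23456789TJQKA'   # assumed rank order, lowest to highest
SUITS = 'CDHS'            # clubs, diamonds, hearts, spades (no ordering implied)

def encode_card(card):
    # card is a two-character string such as 'QH' (queen of hearts)
    rank = RANKS.index(card[0])   # 0 through 12, preserving A > K > ... > 2
    suit = SUITS.index(card[1])   # 0 through 3
    return rank, suit

# encode_card('QH') -> (10, 2)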

The image recognition work is only remotely related, too different from card game-play to use directly, unless you need to recognize the cards from a visual image, in which case LSTM may be needed to see what the other players have chosen for moves. Learning winning strategies would more than likely benefit from MLP or RNN designs, or one of their derivative artificial network designs.

What an Artificial Network Would Do and Training Examples

The primary role of artificial networks of these types is to learn a function from example data. If you have the move sequences of real games, that is a great asset to have for your project. A very large number of them will be very helpful for training.

How you arrange the examples and whether and how you label them is worth consideration, however without the card game rules it is difficult to give any reliable direction. Whether there are partners, whether it is score based, whether the number of moves to a victory, and a dozen other factors provide the parameters of the scenario needed to make those decisions.

Study Up

The main advice I can give is to read: not so much general articles on the web, but some books and some of the papers you can understand on the above topics. Then find some code you can download and try, once you understand the terminology well enough to know what to download.

This means book searches and academic searches are much more likely to steer you in the right direction than general web searches. There are thousands of posers in the general web space, explaining AI principles with a large number of errors. Book and academic article publishers are more demanding of due diligence in their authors.

",4302,,4302,,10/15/2018 23:36,10/15/2018 23:36,,,,0,,,,CC BY-SA 4.0 7985,1,7991,,9/15/2018 6:05,,2,94,"

In conditional generative adversarial networks (GAN), the objective function (of a two-player minimax game) would be

$$\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x} | \boldsymbol{y})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z} | \boldsymbol{y})))]$$

The discriminator and generator both take $y$, the auxiliary information.

I am confused about what difference it makes to use $\log D(x,y)$ and $\log(1-D(G(z,y)))$, given that $y$ goes as input to $D$ and $G$ in addition to $x$ and $z$.

",18253,,2444,,5/18/2020 12:44,5/18/2020 12:44,"Why do we use $D(x \mid y)$ and not $D(x,y)$ in conditional generative adversarial networks?",,1,0,,,,CC BY-SA 4.0 7991,2,,7985,9/15/2018 13:52,,2,,"

It looks like you're asking about the difference between using conditional and joint probabilities.

The joint probability $$D(x,y)$$ is the probability of x and y both happening together.

The conditional probability $$D(x | y)$$ is the probability that x happens, given that y has already happened. So, $$D(x,y) = D(y) * D(x | y)$$.
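
As a small worked example (the numbers are invented purely for illustration), suppose we tally how often $y$ and $x$ co-occur in 100 samples:

# Toy counts out of 100 samples (invented for illustration):
#              x happens   x absent
# y happens        30         20
# y absent         10         40
p_x_and_y   = 30 / 100            # joint probability D(x, y) = 0.3
p_y         = (30 + 20) / 100     # marginal probability of y  = 0.5
p_x_given_y = p_x_and_y / p_y     # conditional D(x | y)       = 0.6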

Notice that, in a C-GAN, we have some extra information that is given, like a class label $y$. We actually don't care at all about how likely that information is to appear. We care only about how likely it is to appear with a given $x$ from the source distribution, versus how likely it is to appear with a given $z$ from the generated distribution.

If you tried to minimize the joint probabilities, you would be attempting to change something that the networks have no ability to control (the chance of $y$ appearing).

",16909,,,,,9/15/2018 13:52,,,,2,,,,CC BY-SA 4.0 7992,1,8002,,9/15/2018 15:17,,1,90,"

Currently, I am interested in how NNs or any other AI models can be used for composing music.

But there are many other interesting applications too, like language processing.

What I am wondering is this: NNs generally need a cost function for learning, but, for composing music for example, what would be an appropriate cost function? I mean, algorithms can't (yet) really 'calculate' how good music is, right?

",17769,,16920,,9/15/2018 21:04,9/16/2018 12:57,How to find a cost function for human data,,1,1,,,,CC BY-SA 4.0 7993,1,8003,,9/15/2018 16:14,,2,184,"

At the time when the basic building blocks of machine learning (the perceptron layer and the convolution kernel) were invented, the model of the neuron in the brain taught at the university level was simplistic.

Back when neurons were still just simple computers that electrically beeped untold bits to each other over cold axon wires, spikes were not seen as the hierarchical synthesis of every activity in the cell down to the molecular scale that we might say they are today. In other words, spikes were just a summary report of inputs to be integrated with the current state, and passed on. In comprehending the intimate relationships of mitochondria to spikes (and other molecular dignitaries like calcium) we might now more broadly interpret them as synced messages that a neuron sends to itself, and by implication its spatially extended inhabitants. Synapses weigh this information heavily but ultimately, but like the electoral college, fold in a heavy dose of local administration to their output. The sizes and positions within the cell to which mitochondria are deployed can not be idealized or anthropomorphized to be those metrics that the neuron decides are best for itself, but rather what is thermodynamically demanded.1

Notice the reference to summing in the first bolded phrase above. This is the astronomically oversimplified model of biology upon which contemporary machine learning was built. Of course ML has made progress and produced results. This question does not dismiss or criticize that, but rather seeks to widen the conception of what ML can become via a wider field of thought.

Notice the second two bolded phrases, both of which denote statefulness in the neurons. We see this in ML first as the parameters that attenuate the signals between arrays of artificial neurons in perceptrons and then, with back-propagation into deeper networks. We see this again as the trend in ML pushes toward embedded statefulness by integrating with object oriented models, the success of LSTM designs, the interrelationships of GAN designs, and the newer experimental attention based network strategies.

But does the achievement of higher level thought in machines, such as is needed to ...

  • Fly a passenger jet safely under varying conditions,
  • Drive a car in the city,
  • Understand complex verbal instructions,
  • Study and learn a topic,
  • Provide thoughtful (not mechanical) responses, or
  • Write a program to a given specification

... require from us a much more radical transition in thinking about what an artificial neuron should do?

Scientific research into brain structure, its complex chemistry, and the organelles inside brain neurons have revealed significant complexity. Performing a vector-matrix multiplication to apply learning parameters to the attenuation of signals between layers of activations is not nearly a simulation of a neuron. Artificial neurons are not very neuron-like, and the distinction is extreme.

A little study on the current state of the science of brain neuron structure and function reveals the likelihood that it would require a massive cluster of GPUs training for a month just to learn what a single neuron does.

Are artificial networks based on the perceptron design inherently limiting?

References

[1] Fast spiking axons take mitochondria for a ride, by John Hewitt, Medical Xpress, January 13, 2014, https://medicalxpress.com/news/2014-01-fast-spiking-axons-mitochondria.html

",4302,,4302,,9/20/2018 5:17,9/20/2018 5:17,Are artificial networks based on the perceptron design inherently limiting?,,2,0,,,,CC BY-SA 4.0 7996,1,,,9/15/2018 18:31,,2,141,"

I am trying to understand the dimensionality of the outputs of convolution operations. Suppose a convolutional layer with the following characteristics:

  • Input map $\textbf{x} \in R^{H\times W\times D}$
  • A set of $F$ filters, each of dimension $\textbf{f} \in R^{H'\times W'\times D}$
  • A stride of $<s_x, s_y>$ for the corresponding $x$ and $y$ dimensions of the input map
  • Either valid or same padding (explain for both if possible)

What should be the expected dimensionality of the output map expressed in terms of $H, W, D, F, H', W', s_x, s_y$?

",18267,,2444,,4/27/2020 12:33,4/27/2020 12:33,"What is the dimensionality of the output map, given the dimensionality of the input map, number of filters, stride and padding?",,1,0,,,,CC BY-SA 4.0 7998,1,,,9/15/2018 18:55,,4,579,"

Keras' convolutional and deconvolutional layers are designed for square grids. Is there a way to adapt them for use in hexagonal grids?

For example, if we were using axial coordinates, the input of the kernel of radius 1 centered at (x,y) should be:

[(x-1,y), (x-1,y+1), (x,y-1), (x,y+1), (x+1,y-1), (x+1, y)]

One option is to fudge it with a 3 by 3 box, but then you are using cells at different distances.

Some ideas:

  • Modify Keras's convolutional layer code to use those inputs instead of the default inputs. The problem is that Keras calls its backend instead of implementing it itself, which means we need to modify the backend too.
  • Use a 3 by 3 box, but set the weights at (x-1,y-1) and (x+1,y+1) to zero. Unfortunately, I do not know how to permanently set weights to a given value in Keras (a rough sketch of one way to do this follows this list).
  • Use cube coordinates instead of Axial coordinates. In this case, a 3 by 3 by 3 box will only contain the central hex's neighbors and inputs set to 0. The problem is that it makes the input array much bigger. Even more problematic, some coordinates that correspond to non-hexes (such as (1,0,0)) will be assigned non-zero outputs (since (0,0,0) falls within its 3 by 3 by 3 box).
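
A rough sketch of the second idea, assuming TensorFlow/Keras and a kernel constraint that keeps the two unwanted corners of a 3x3 kernel at zero (this only illustrates the approach and is not a tested layer):

import tensorflow as tf

# Mask for a 3x3 kernel in axial coordinates: the (x-1, y-1) and
# (x+1, y+1) corners are not hex neighbours, so they are kept at zero.
HEX_MASK = tf.constant([[0., 1., 1.],
                        [1., 1., 1.],
                        [1., 1., 0.]])

class HexMask(tf.keras.constraints.Constraint):
    # A constraint is re-applied after every weight update,
    # so the masked positions stay at zero throughout training.
    def __call__(self, w):
        # w has shape (3, 3, in_channels, out_channels)
        return w * HEX_MASK[:, :, None, None]

hex_conv = tf.keras.layers.Conv2D(32, (3, 3), padding='same',
                                  kernel_constraint=HexMask())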

Are there any better solutions?

",18006,,1581,,9/16/2018 10:49,4/16/2019 7:44,Convolutional Layers on a hexagonal grid in Keras,,1,2,,12/28/2021 13:09,,CC BY-SA 4.0 7999,1,,,9/15/2018 20:16,,1,49,"

We have data in text format as sentences. The goal is to detect rules which exist in this set of sentences.

I have a limited set of contextless sentences that fit a pattern and want to find the pattern. I might not have sentences that don't fit the pattern.

What should be an approach to do that?

",18271,,18271,,9/15/2018 20:46,9/17/2018 13:34,What is the approach to deduce formal rules based on data?,,1,1,0,,,CC BY-SA 4.0 8000,1,8012,,9/15/2018 22:29,,4,373,"

Usually neural networks consist of layers, but is there research effort that tries to investigate more general topologies for connections among neurons, e.g. arbitrary directed acyclic graphs (DAGs)?

I guess there can be 3 answers to my question:

  1. every imaginable DAG topology can be reduced to the layered DAGs already actively researched, so there is no sense in seeking more general topologies;
  2. general topologies exist, but there are fundamental restrictions that keep them from being used, e.g. maybe learning does not converge in them, maybe they generate chaotic oscillations, maybe they generate bifurcations and do not provide stability;
  3. general topologies exist and are promising, but scientists are not ready to work with them, e.g. maybe they have no motivation, standard layered topologies are good enough.

But I have no idea, which answer is the correct one. Reading the answer on https://stackoverflow.com/questions/46569998/calculating-neural-network-with-arbitrary-topology I start to think that answer 1 is the correct one, but there is no reference provided.

If answer 3 is correct, then a big revolution can be expected. E.g. layered topologies in many cases reduce learning to matrix multiplication, and good tools for this have been created - TensorFlow software and dedicated processors. But there seems to be no software or tools for general topologies, if they indeed make sense.

",8332,,10135,,10/17/2018 9:08,10/17/2018 9:08,Neural networks of arbitrary/general topology?,,1,1,,,,CC BY-SA 4.0 8001,2,,7999,9/15/2018 23:15,,1,,"

If you don't have non-examples of your pattern and don't have some kind of heuristic guide, unfortunately the answer is that you can't. ""All sentences"" will always be 100% compatible with your examples, and you'll never be able to collect evidence that disconfirms (or even decreases the likelihood of) that hypothesis. Even if you rule out that hypothesis by fiat, the hypothesis that the only non-accepted sentence is

The answers for “I have a bunch of books and want to infer topics” and “I have a set of contextless sentences that fit a pattern and another set of contextless sentences that don’t fit that pattern and want to find the pattern” are extremely different.

will be impossible to rule out.

With a good enough heuristic guide, you might be able to do something, but that would require knowing statistical facts about examples and non-examples and then sampling from the world IID, and just happening to not get any non-examples. That doesn't seem like the situation you're in. If you really have nothing to quantify the non-examples in any way, there's nothing that can be done.

",12732,,12732,,9/17/2018 13:34,9/17/2018 13:34,,,,0,,,,CC BY-SA 4.0 8002,2,,7992,9/15/2018 23:22,,1,,"

You've hit upon the central conundrum of supervised learning: if you want a machine to learn to do something, you need to know how to explain what that something is.

In the case of music, there are several possible approaches:

  • Make one set of ""bad"" songs, and one set of ""good"" songs. Develop a measure of how similar two songs are (maybe Euclidean distance between their discrete Fourier transforms is a good starting place?). Your cost function is then based on minimizing the average distance to ""good"" songs, and maximizing the average distance to ""bad"" songs. This may not work well though, because good and bad songs might differ only in an occasional misplaced note.

  • Move to a reinforcement learning paradigm. Listen to each song proposed by your network. Give it a score based on your subjective enjoyment. Your cost function is based on maximizing this score. This might work well, but again, it might not. Music is tricky.

  • Use unsupervised approaches. Reward your network just for making something that resembles music (perhaps using the Fourier transform approach above), without labelling good and bad. The advantage is that you don't need to decide what is good or bad, and so you can use a lot more music in your dataset. The drawback is music as a whole might be too diverse to learn easily from examples.

  • Treat your music as a sequence of notes, and train a generative model to predict future notes on the basis of past notes. You can then generate new music by starting the model with a set of notes and letting it generate new ones for a long time.
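
For the last approach, here is a minimal sketch of a next-note model in Keras; the vocabulary size, sequence length, and layer sizes are arbitrary assumptions rather than recommendations:

from tensorflow import keras

NUM_NOTES = 128   # assumed note vocabulary size (e.g. MIDI pitches)
SEQ_LEN = 64      # assumed number of past notes the model conditions on

model = keras.Sequential([
    keras.layers.Embedding(NUM_NOTES, 32, input_length=SEQ_LEN),
    keras.layers.LSTM(128),
    keras.layers.Dense(NUM_NOTES, activation='softmax'),  # distribution over the next note
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# Training data: X has shape (samples, SEQ_LEN) of note indices,
# y has shape (samples,) holding the index of the note that follows each sequence.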

",16909,,16909,,9/16/2018 12:57,9/16/2018 12:57,,,,3,,,,CC BY-SA 4.0 8003,2,,7993,9/16/2018 1:42,,1,,"

In my opinion, there are many functions in our brain - surely many more than in today's artificial neural networks. I guess this is the field of brain science or cognitive psychology.

Some brain structures may help for certain applications, but not all. The neural network, though simple as a model of our brain, has the most general usage. In other words, if you want to improve neural networks, different fields or different functions may need totally different structures. You can see this reflected in the many types of neural networks that exist today for different applications.

",18276,,18276,,9/16/2018 10:23,9/16/2018 10:23,,,,0,,,,CC BY-SA 4.0 8007,1,,,9/16/2018 14:55,,0,32,"

If one uses one of the open source implementations of the WaveNet generative speech synthesis design, such as https://r9y9.github.io/wavenet_vocoder/, and trains using something like CMU's ARCTIC corpus, how can one add a voice that sounds younger, older, less professional, or in some other way distinctive? Must the entire training begin from scratch, or is there a more resource- and time-friendly way?

",4302,,,,,10/31/2020 2:20,Adding voices to voice synthesis corpuses,,1,0,,,,CC BY-SA 4.0 8012,2,,8000,9/17/2018 2:22,,2,,"

The simplistic neural networks that have been given away for free after they prove insufficient by themselves in field use consist solely of two orthogonal dimensions.

  • Layer width — the number of ordinal or floating point numbers that represent the signal path through any given layer, which together comprise an array of layer elements
  • Network depth — the number of layers in the primary signal path, which is the number of activation function arrays, convolution kernels, or whatever other elements the layers are built from

However, in large corporations that have AI pipelines, this is not the case. We are beginning to see more interesting topologies in open source. We see this in generative systems for images, text, and speech. We see this in robotic control of robots. The truth is that these more sophisticated topologies have been in play for years, but were just not appearing in the open source community because they were company confidential. Enough academic work, releasing of portions of corporate IP, and the accumulation of independent OSS work has occurred to start to see these topologies in GIT repos.

Cyclic Not Acyclic

Artificial network topologies are generally cyclic, not acyclic in terms of their causality or their signal pathways, depending on how you depict them theoretically. These are three basic examples from among dozens in the literature and in the open source repositories.

  1. Back-propagation represents the introduction of a deliberate cycle in signal paths in a basic multilayer perceptron, making that topology a sequence of layers represented by vertices, connected sequentially by a set of directed edges representing forward propagation, and a set of directed edges in the reverse direction to distribute the corrective error determined at the network output according to the principle of gradient descent. For efficiency, the corrective signal is distributed recursively backward through the layers to the $N - 1$ matrices of parameters attenuating the signals between $N$ layers. Back propagation requires the formation of these $N - 1$ cycles for convergence to occur.

  2. In a generative adversarial network (GAN), we have the signal path of each of the two networks feeding the training criteria of the other. Such a topological arrangement is like negative feedback in a stable control system in that an equilibrium is formed between the generative network and discriminative network. The two directed edges, (a) the one that causally affects G with D's result, and (b) the one that causally affects D with G's result, create a cycle on top of the cycles in each of G and D.

  3. Attention based networks, being touted as theoretically advantageous over LSTM (which has been dominating over CNNs), have a much more complex topology, with more cycles in their supervisory layers than GANs have.

Analysis of Answer One of Three

It is true that every directed graph can be realized in an arbitrarily large RNN because they are Turing complete, but that doesn't mean they are a great topology for all finite algorithms.

Turing was aware that his punched tape model was not the best general purpose, high speed computing architecture. He was not intending to prove anything about computing speed but rather what could be computed. His Turing machine had a trivial topology deliberately. He wanted to illustrate his completeness theorem to others and resurrect the forward movement of rationalism after Gödel disturbed it with his two incompleteness theorems.

Similarly, John von Neumann proposed his computing architecture, with a central processing unit (CPU) and unified data and instruction bus, to reduce the number of relays or vacuum tubes, not to maximize parallel algorithm execution. That topology as a directed graph has the instruction controller and the arithmetic unit in the center and everything else branching out from the data and address bus leading from them.

That a topology can accomplish a task is no longer a justification for persisting in the use of that topology, which is why Intel acquired Nirvana, which deviates from traditional von Neumann architecture, DSP architecture, and the current CUDA core architecture that NVidia GPUs use and offer for artificial network realization through C libraries that can be called via integrated Java and Python adapters.

There is definitely sense in seeking more general topologies, if they are fit for the purpose, just as with Turing's or von Neumann's.

Analysis of Answer Two of Three

General topologies exist, the most economically viable of which are the CUDA cores begun by NVidia, which can be configured for MLPs, CNNs, RNNs, and general 2D and 3D video processing. They can be configured with or without cycles depending on the characteristics of the parallelism desired.

The realization of topologies unlike the Cartesian arrangements of activation functions in artificial networks or the kernel cells in convolution engines does have barriers to use, but these are not fundamental restrictions. The primary barrier is not one of hardware or software. It is one of linguistics. We don't think topologically because we don't talk topologically. That's what is great about this question's challenge.

FORTRAN began to dominate over LISP during the time when general purpose programming began to emerge in many corporations. That is not surprising because humans communicate in orthogonal ways. It is cultural. When a child scribbles, teachers are indoctrinated to say nice things but respond by drawing a shape. If the child draws a square, the teacher smiles. The child is given blocks. The books are rectangular. Text is justified into rectangles.

We can see this in building architecture dating back to Stonehenge. Ninety degree angles are clearly dominant in artificial things, whereas nature doesn't seem to have that bias.

Directed graphs were easy to implement and traverse in recursive structures and were commonplace in the LISP community, but FORTRAN, with its realization of vectors and matrices in one and two dimensional arrays respectively, was easier to grasp for those with less theoretical background in data structures.

The result is that, even when learning ECMAScript (JavaScript), which has its seed in the LISP community and is not biased toward orthogonal data structures, people tend to proceed from HelloWorld.js to something with a basic loop in it, with an underlying array through which the loop iterates.

There are three wonderfully inquisitive and insightful phrases in answer two of three.

  • Maybe learning is not converging in them — Interestingly an algorithm cannot learn without a cycle. Directly applying a formula or converging using a known convergent series of terms does not qualify as learning. Gradient descent relies entirely on the cyclical nature of a corrective action at the end of each sample processing or batch of them.
  • Maybe they generate chaotic [oscillations] — This gets into chaos theory and control theory's concept of stability. They can do so, but so can a basic multilayer perceptron if the learning rate is set too high.
  • Maybe they generate bifurcations — Now we have fully entered the realm of chaos, which is arguably closely related to creativity. Mandelbrot proposed the relationship between new forms of order and the apparent chaotic behavior arising from the appropriate level of feedback in a system with signal path components that cannot be modelled with a first degree equation. Since then, we find that most phenomena in nature are actually strange attractors. The plot of the training of a network from a continuous feed of consistently distributed data in phase space will reveal ... you guessed it ... a strange attractor. When perturbations are deliberately injected into a training epoch from a pseudo-random number generator, the specific purpose is to cause a bifurcation, so that the global optimum can be found when the training gets stuck in a local optimum.

Analysis of Answer Three of Three

General topologies exist and are promising and researchers are ready to work with them. It is enthusiasts that can have a dismissive attitude. They don't yet understand the demos they've downloaded and painstakingly tweaked to run on their computer, they're about to launch their AI career amidst the growing demand from all the media hype, and now someone is introducing something interesting and not yet implemented in code. The motivational direction is generally to either dismiss or resist the creative proposals.

In this case, Google, CalTech, IBM, MIT, U Toronto, Intel, Tesla, Japan, and a thousand other governments, institutions, corporations, and open source contributors will solve that problem, provided people keep talking about topology and the restrictions inherent in purely Cartesian thinking.

Misunderstanding Topology to Mean Dimensionality or Topography

There has been some confusion in terms. The SO reference in the question is an example of thinking that changing an array dimension is changing the topology. If such were so, then there would be no change one could make to the geometry of an AI system that would not be topological. Topology can only have meaning if there are features that are not topological. When one draws a layer, they don't need to increase the height of the rectangle representing it if the number of activations, the width of the layer, is changed from 100 to 120.

I've also seen academic papers that called the texture or roughness of an error surface its topology. That completely undermines the concept of topology. They meant to use the term topography. Unfortunately neither the publisher nor the editor noticed the error.

Software or Tools

Most programming languages support directed graphs in recursive hashmaps. LISP and its derivatives supported them at a more machine instruction efficient level, and that's still the case. Object oriented databases and graph libraries exist and are in use. Google uses them extensively in web indexing and lookup. Facebook's API is called the Graph API, because it is a query and insert API into the graph that is Facebook's user data store.
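
As a trivial example of the recursive hashmap idea, a small directed graph with typed edges can be written with plain Python dictionaries (the node and edge names here are invented for illustration):

# Each node maps to a dict of outgoing edges: target node -> edge type.
graph = {
    'input':    {'hidden_a': 'forward', 'hidden_b': 'forward'},
    'hidden_a': {'output': 'forward', 'hidden_a': 'recurrent'},  # self-loop, i.e. a cycle
    'hidden_b': {'output': 'forward'},
    'output':   {'hidden_a': 'feedback'},  # feedback edge closes a larger cycle
}

def successors(node):
    # All nodes reachable from the given node along one directed edge.
    return list(graph.get(node, {}))

# successors('hidden_a') -> ['output', 'hidden_a']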

The explosion is here in global software giants. There is open source for it. The revolution that is missing is among those who are not yet educated as to the meaning of topology, the difference between a hierarchy and a network or the role of feedback in any learning system.

Regarding Java and Python there are many barriers to the revolution in thinking, primarily these.

  • There are no keywords in either Java or Python to directly deal with directed graphs, other than the idea of a class with references to instances of other classes, which is quite limited. Neither of these languages can add edge types with a single simple language construct.
  • There is no mapping to hardware yet, although Nirvana allegedly developed one, and Intel acquired Nirvana, so that barrier may evaporate soon.
  • The bias still exists in preschool, kindergarten, and first grade
  • Hilbert spaces are not generally taught in calculus

Graphviz and other graphing software that auto-generates diagrams from unconstrained directed or bidirectional graph representations have done much to bust through the barriers because the generated images are visible across the web. It may be through visual representations of graphs that linguistic representations, thought, hardware, and software begin to emerge representing the paradigm shift the question investigates.

It is not that constraints are not useful. Only some patterns and paradigms produce results, but since the results from the human brain demand attention, and the human brain is

  • Not at all orthogonal,
  • Not implemented using Cartesian neural patterns, and
  • Not topologically a box,

one can all but conclude that those are not particularly well chosen constraints. Neither is the acyclic criterion. Nature is cyclic, and intelligence probably requires it in many ways and at many levels.

",4302,,4302,,10/15/2018 23:35,10/15/2018 23:35,,,,0,,,,CC BY-SA 4.0 8013,1,8020,,9/17/2018 4:44,,1,242,"

I'm training a language model with 5000 vocabularies using a single M60 GPU (w/ actually usable memory about 7.5G).
The number of tokens per batch is about 8000, and the hidden dimension to the softmax layer is 512. So, if I understand correctly, fully-connected (softmax) layer theoretically consumes 5000*8000*512*4=81.92GB for a forward pass (4 is for float32).
But the GPU performed the forward and backward passes without any problem, and it says the GPU memory usage is less than 7GB in total.

I used PyTorch. What's causing this?

EDIT: To be clearer, the input to the final fc layer (256x5000 matrix) is of size [256, 32, 256].

",18298,,18298,,9/17/2018 17:22,9/17/2018 18:35,Calculation of GPU memory consumption on softmax layer doesn't match with the empirical result,,1,1,,,,CC BY-SA 4.0 8015,2,,7993,9/17/2018 8:42,,2,,"

In the perceptron design generally used in Artificial Neural Networks, we know precisely what a single neuron is capable of computing. It can compute a function

$$f(x) = g(w^{\top} x),$$

where $x$ is a vector of inputs (it may also be the vector of activation levels in the previous layer), $w$ is a vector of learned parameters, and $g$ is an activation function. We know that a single node in such an ANN can compute precisely that, and nothing else. This observation could be interpreted as ""of course it is limited; it can do precisely this and nothing else"".
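
As a minimal illustration in Python/NumPy (the choice of a sigmoid for $g$ is just an example):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, g=sigmoid):
    # f(x) = g(w^T x): a weighted sum of the inputs followed by an activation function.
    return g(np.dot(w, x))

# Example: three inputs and three learned weights.
print(neuron(np.array([1.0, 0.5, -0.2]), np.array([0.3, -0.1, 0.8])))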

The universal function approximation theorem tells us (very informally here) that if a Neural Network is ""big enough"", has at least 1 hidden layer, and has non-linear activation functions, it may in theory learn to approximate any (continuous) function reasonably well. If we add recurrence (i.e. an RNN), we also get, in theory, Turing completeness. Based on this, we could say that they are not particularly limited in theory... but of course there are many complications in practice:

  • How big is ""big enough""?
  • How do we effectively learn our parameters? (SGD is the most common approach, but can get stuck in local minima; global optimization methods like evolutionary algorithms wouldn't get stuck... but I don't believe that they're famous for being fast either).
  • etc.

Just the observation that they may not be highly limited in theory of course doesn't mean that there wouldn't be anything else that works better in practice either. I can very well imagine that a more complex model (trying to simulate additional functionality that we also observe in the brain) may be more capable of learning more complex functions more easily.

An important caveat is that more complex function approximators tend to be more difficult to train in practice. We understand very well how to effectively train a linear function approximator. They also typically aren't very data-hungry. The downside is that they can only approximate linear functions.

We also understand quite well how to train, for example, Decision Trees. They're still quite easy models to understand intuitively, they can learn more complicated functions than just linear functions. I'd say we have a worse understanding of how to train them well than linear functions, but still a good understanding.

ANNs as they are used now... it looks like they are more powerful in practice than the two mentioned above, but there's also still more ""mystery"" surrounding them (in particular the deep variants). We can train them quite well, but we don't understand everything about them as well as we'd like.

Intuitively, I'd expect that trend to continue if we try to imitate the brain more faithfully. I wouldn't be surprised if there exist more powerful things out there, but they'll be more complex to understand, more difficult to train, maybe also more data-hungry (current ANNs already tend to be very data-hungry).

",1641,,,,,9/17/2018 8:42,,,,1,,,,CC BY-SA 4.0 8016,1,8017,,9/17/2018 11:36,,4,808,"

I have a very imbalanced dataset of two classes: 2% for the first class and 98% for the second. Such imbalance does not make training easy and so balancing the data set by undersampling class 2 seemed like a good idea.

However, as I think about it, should not the machine learning algorithm expect the same data distribution in nature as in its training set? I know, for sure, that the distribution of data in nature matches my imbalanced dataset. Does that mean that the balanced dataset will negatively affect the neural net's performance at test time, since it will have assumed a different distribution of data caused by my balanced training set?

",17582,,,,,9/17/2018 12:42,Does balancing the training data set distribution for a neural network affect its understanding of the original distribution of data?,,1,0,,,,CC BY-SA 4.0 8017,2,,8016,9/17/2018 12:34,,3,,"

This is a very good question. Your problem is the classic classification problem of Neural Networks. In this problem the main objective of the Neural Network is to transform the data by some non-linear (in general) transformation so that the data becomes linearly separable for the final layer to perform classification.

Point to note: this is not a regression problem, where you would be trying to fit a curve. Whenever there is a regression problem, you can logically use a PDF to state some kind of information about new data. You can express mathematically the probability of your data falling within a certain range of error, since this is a continuous-function optimisation problem (generally RMSE).

This is not the case for a classifier. Classifiers follow a Bernoulli probability (though we represent the cost function as continuous), so the current event is independent of past events. This makes a classifier harder to train on unbalanced classes. So if we write:

def foo(data):
    return True

It pretty much has 98% accuracy, but you can understand we do not want this type of classifier.

In general we want both classes to have good accuracy scores; sometimes this is measured by the $F_1$ score, but I like to think of it in terms of proportionality. If we have $n$ examples with $m$ in one class, and the classifier makes $a_1$ and $a_2$ correct predictions in the two classes respectively, then I would check both the metric $\frac{a_1}{m}$ and $\frac{a_2}{n-m}$, which gives you the general idea.
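
A minimal sketch of those two per-class metrics (essentially per-class recall), assuming the labels and predictions are 0/1 NumPy arrays:

import numpy as np

def per_class_recall(y_true, y_pred):
    # a1 / m for class 1 and a2 / (n - m) for class 0
    class1 = (y_true == 1)
    class0 = (y_true == 0)
    recall1 = np.mean(y_pred[class1] == 1)  # fraction of class-1 examples correctly found
    recall0 = np.mean(y_pred[class0] == 0)  # fraction of class-0 examples correctly found
    return recall1, recall0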

Also, in practice, identifying the 2% is sometimes far more important than identifying the remaining 98% (airplane defects, cancer detection). So we use a special class of ML algorithms called anomaly detectors for this type of problem.

",,user9947,,user9947,9/17/2018 12:42,9/17/2018 12:42,,,,1,,,,CC BY-SA 4.0 8019,2,,3298,9/17/2018 18:05,,0,,"

Debugging and validation of large software systems, such as a video software stack: if you take, as an example, the validation and debugging of a video software stack, it is very difficult for the naked eye to identify failures on the display. In this case, you can use a DNN-based image classifier to identify functional failures.

",18195,,,,,9/17/2018 18:05,,,,0,,,,CC BY-SA 4.0 8020,2,,8013,9/17/2018 18:35,,1,,"

GPU DRAM capacity is 7.5 GB. The first link below explains how NVIDIA's cuDNN does memory optimization: https://devblogs.nvidia.com/optimizing-recurrent-neural-networks-cudnn-5/. The second link has detailed steps to calculate the memory required by parameters and data: http://cs231n.github.io/convolutional-networks/#case. One data point missing in the question is the number of output classes in the softmax layer. These two links will help you to calculate the memory required and to see how the software handles large matrix multiplications.

",18195,,,,,9/17/2018 18:35,,,,1,,,,CC BY-SA 4.0 8021,2,,3262,9/17/2018 18:41,,1,,"

If the problem you are solving is linearly separable, one layer of 1000 neurons can do a better job than 10 layers of 100 neurons each. If the problem is non-linear and not convex, then you need deep neural nets.

",18195,,,,,9/17/2018 18:41,,,,1,,,,CC BY-SA 4.0 8022,2,,7927,9/18/2018 6:47,,2,,"

I was getting negative values out of my convolutional layers, and using ReLU on them resulted in the gradient of the activation being 0. Hence, my Q-values were not being updated. I've since updated my activations to be ELU. Thanks for the help.

",18076,,,,,9/18/2018 6:47,,,,0,,,,CC BY-SA 4.0 8026,1,,,9/18/2018 15:22,,5,112,"

AI algorithms involving neural networks can use tensor specific hardware. Are there any other artificial intelligence algorithms that could benefit from many tensor calculations in parallel? Are there any other computer science algorithms (not part of AI) that could benefit from many tensor calculations in parallel?

Have also a look at TensorApplications and Application Theory.

",18344,,2444,,5/5/2019 23:52,5/5/2019 23:52,Which artificial intelligence algorithms could use tensor specific hardware?,,1,1,,,,CC BY-SA 4.0 8027,1,,,9/18/2018 20:10,,0,214,"

Is there research work that uses a neural network as the (BDI) agent (or even a full-scale cognitive architecture like Soar or OpenCog) - one that continuously receives information from the environment, acts in the environment, and modifies its base of beliefs in parallel? Usually NNs are trained to do only one task, and TensorFlow/PyTorch support batch mode only out of the box. Also, NN algorithms and theory are constructed assuming that training and inference phases are clearly separated, each with its own algorithms. So completely new theory and software may be required for this - are there efforts in this direction? If not, then why not? It seems self-evident that such systems could be of benefit.

https://arxiv.org/abs/1802.07569 is a good review about incremental learning, and it contains chapters on implemented systems, but all of them still separate the learning phase from the inference phase. Symbolic systems and symbolic agents (like JSON AgentSpeak) can have an updating belief/knowledge base, and they can also act while receiving new information or while forming new beliefs. I am specifically seeking research about NNs which do learning and inference in parallel. As far as I have searched, this separation still persists in self-organizing incremental NNs, which are gaining some popularity.

I can imagine the construction of chained NNs in TensorFlow - there is some controller network that receives input (possibly preprocessed by hierarchically lower networks) and that decides what to do: so-called mental actions are the output of this controller, and these actions determine whether some subordinate network is required to undergo additional learning or whether it can temporarily be used for the processing of some information. The central network itself, of course, can decide to move into a temporary learning phase from time to time to improve its reasoning capabilities. Such a pipeline of master-slave networks is indeed possible in TensorFlow, but TensorFlow will still have one central clock, not distributed, loosely connected processing. I don't know whether the existence of a central clock places any restriction on the generality of the capabilities of such a system. This hierarchy of networks could perhaps also be realized inside one large network - maybe this large network can allow separate parts (subsets of neurons) to function in a somewhat independent and mutually controlling mode, and maybe such regions of a large neural network can indeed emerge. I am interested in this kind of research - are there any good papers available on this?

",8332,,8332,,9/18/2018 21:33,9/19/2018 11:06,Neural network as (BDI) agent - running in continuous mode (that do inference in parallel with learning)?,,1,0,,,,CC BY-SA 4.0 8028,2,,7298,9/18/2018 22:04,,6,,"

A closely related question and a minimal implementation written in Python.

That program implements the reinforcement learning technique 'Q-Learning'.

The idea is for the program to take in an observation of the environment (which could be a screenshot if learning a computer game, or sensor data for a robot) and output a decision in the form of a vector of values. Each cell in that output vector corresponds to a possible action (left, right, shoot, etc) and the highest valued cell shows the action that the agent/player should take. The values in that output vector are called the Q-Values, and it is the mapping from the input data to the vector of Q-Values that we are trying to learn. In your case, the function that takes in an observation of the environment and spits out a vector of decision choices is your SVM.

The question linked above contains a description of the training algorithm. It boils down to playing the game multiple times while storing the input vector, output vector and output decision for each step until you reach a termination condition (i.e. hitting an asteroid) and receive a score (in this case a negative score since you want to avoid the asteroids). Then, going backwards through the output vectors, you assign a slowly reducing amount of the output score to the particular cell in each output vector corresponding to the decision that was made at that step of the game. The array of input vectors from the game, and the array of output decision vectors that you have just created, become the training data for your machine learning system. Once you have completed a training run, play the game again with your trained system and keep repeating until it is as good as you need it to be, or it stops getting any better.
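
A minimal sketch of that backward credit-assignment step (the discount factor and data layout are assumptions for illustration):

import numpy as np

def assign_returns(q_vectors, actions, final_score, discount=0.9):
    # Walk backwards from the terminal step, giving the action chosen at
    # each step a slowly reducing share of the final score.
    targets = [np.array(q, dtype=float) for q in q_vectors]
    credit = final_score
    for t in reversed(range(len(targets))):
        targets[t][actions[t]] = credit
        credit *= discount
    return targets  # pair these with the stored input vectors as training data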

To address your question directly, I don't think that you are going to be able to train your system without giving it information about the position of the asteroids (assuming that they are positioned at random). However, you could try just having a very simple input vector such as 3 integer values where each value corresponds to the presence (1) or absence (0) of an asteroid in the 3 squares to the above left, above and above right of the player. That might be enough to encourage it to dodge away from falling rocks...

There are some Support Vector Machines implemented in JavaScript, although I have never used them, but since SVMs are basically two-class classifiers you would have to check that the library can easily support multi-class SVMs. In your case the SVM would be trying to classify each input into a particular player choice (left, right, shoot, etc).

",12509,,,,,9/18/2018 22:04,,,,0,,,,CC BY-SA 4.0 8030,1,,,9/19/2018 5:25,,0,269,"

Can a neural network make decisions about its own weights (updates of weights) during the training phase, or during a phase of parallel training and inference? The case in which one region of a hierarchical NN makes decisions about the weights of another region is a special case of my question.

I am very keen to understand the self-awareness, self-learning, and self-improvement capabilities of neural networks, because exactly those self-* capabilities are the key path to artificial general intelligence (e.g. the Gödel machine). Neural networks are usually mentioned as examples of special, single-purpose intelligence, but I cannot see the reason for such a limitation if NNs essentially try to mimic human brains, at least in purpose if not in mechanics.

Well - maybe this desired effect is already effectively achieved/emerges in the operation of recurrent ANNs as the effect of collective behavior?

",8332,,8332,,9/19/2018 5:34,10/15/2018 23:28,Can neural network take decision about its own weights (update of weights)?,,1,7,,,,CC BY-SA 4.0 8031,1,,,9/19/2018 5:46,,2,7039,"

Usually, using the Manhattan distance as a heuristic function is enough when we do an A* search with one target. However, it seems like for multiple goals, this is not the most useful way. Which heuristic do we have to use when we have multiple targets?

",18359,,2444,,11/19/2020 14:43,11/19/2020 14:43,What heuristic to use when doing A* search with multiple targets?,,3,3,,11/19/2020 14:42,,CC BY-SA 4.0 8032,2,,8030,9/19/2018 6:30,,1,,"

Establishing Names and Terms

The mathematician's name is Kurt Gödel, an Austrian-American, and the machine concept named after him is the Gödel machine. The idea of meta-learning (learning how to learn) can be extended recursively to learning how to learn how to learn how to learn and so on.

Taking a decision is when the decision is external to the system and the system either copes or executes that decision, whereas making a decision is when the decision is internal to the system and the system executes it, reports it, or delegates it.

Both of these have been tried in the laboratory or in real systems and various levels of recursion have achieved success, but not indefinite recursion.

A Few Points from AI History

Creating a learn how to learn recursion was the object of several early LISP efforts at MIT, and meta-learning is in active research in multiple universities and corporations as of this writing. The first successful application of a single level of recursion were the expert systems that could acquire knowledge using meta-rules, which were rules about creating new rules.

Current Artificial Networks Regarding Meta-learning

It is correct that back propagation is not meta-learning. It is a single level of learning in the above recursive learning concept. Artificial networks have been applied to learn to adjust the hyper-parameters of another network, but these strategies require large quantities of data, and they don't cognate.

They don't know how to adjust hyper-parameters. They don't build models of learning and then use those models to learn. The adjustment mechanism is the application of a learned function, not a conception of learning.

Requirements for Meta-learning

To establish a meta-learning paradigm, we must consider the elements of learning.

  • Existence of function abstraction
  • Ability to guess a parameterized function
  • Ability to mutate that function
  • Ability to tune the parameters
  • A target functional behavior
  • A way to detect whether the function is approaching the target behavior
  • A strategy for deciding when to mutate and when to tune
  • A way of performing experiments in isolation
  • A way to recurse in a way that incorporates ALL of the above

Once this has been achieved, it may be described as self-improvement and may qualify as what most theorists think of as a Gödel machine.

Specifics in the Question

Can [an artificial] network [make] decision about its own weights (update of weights) during training phase or during the phase of parallel training and inference?

Yes.

[Can] one [level] of [a] hierarchical [artificial network] [make] decision[s] about weights of other [levels]?

Yes, but in a limited way as of the time of this writing.

I am very keen to understand about self-awareness ...

Self-awareness is the ability of the system to analyze and either utilize the result of that analysis, report it, or delegate based on the result IN COMBINATION WITH the ability to use itself as the object of analysis.

This can be as simple as a program that parses Java and produces statistics using its own code as the code to parse. It can be as complex as an anthropologist studying humanity. It can be as personal as a human looking in the mirror and wondering about the qualities of person they are. It can be as deep as someone wondering their purpose in their current life.

In all cases, there are those two qualities.

  1. The ability to perform a type of analysis that could be applied toward a class of objects of which itself is a member
  2. The selection of itself for analysis

There is one other aspect of awareness beyond these two that are not required but usually associated with awareness: Some way of directing attention of the analysis capability to itself regularly. Here are a few examples from human experience.

  • Practice of daily meditation
  • Reading a book on purpose
  • Seeing a cognitive behavioral therapist
  • Writing an autobiography
  • The first, fourth, and tenth Step of anonymous programs
  • Watching a candid movie of one's self
  • A family member that likes to step back and evaluate the family
  • Religious accountability groups

Self learning is like self-awareness except that learning replaces awareness. Obviously, self-learning is dependent on some degree of self-awareness. We now have a more activity based list because of the additional element of adjustment or inception of action.

  • Decide, on the basis of personal abilities and interests, after evaluating options for living, to relocate
  • Realizing that self cannot remember commitments to meetings to buy a technology device that notifies the user before such commitments and using the calendar app rigorously
  • Finding that repeated attempts to eat greens rather than junk food is not working and joining a community of those with eating disorders with solutions to that issue
  • Deciding that childhood views of future were good ones and abandoning current paths for ones that were bolder and will likely lead to fulfillment

Self-improvement is a super-set of self-learning, since the only point of learning is improvement, however there are forms of improvement that don't strictly involve learning. A system (or person) can execute something already learned that results in an improvement.

Artificial general intelligence (e.g. Gödel machine)

There is no mathematical proof that a Gödel machine would exhibit general artificial intelligence. More importantly, there is no proof of, and a considerable body of evidence against, the proposition that humans are generally intelligent.

Lastly, there is no proof that general intelligence is achievable in a biological or artificial system. Gödel's second incompleteness theorem is strong evidence that such ideas may be naive.

[Artificial] networks are usually mentioned as examples of special, single-purpose intelligence, but I can not see the reason for such limitation if [they] essentially [try] to mimic human brains, at least in purpose if not in mechanics.

Artificial networks emerged out of a desire to mimic human brains, but the perceptron design is based on an old view of neuron functionality. Simulating a single neuron might require an entire rack of CPUs and GPUs.

Furthermore, we are not yet clear on what intelligence is. Definitions proposed vary widely. The fields of artificial networks, bioinformatics, ontology of ideas, semantics, and the relationships between stable adaptivity and algorithms are all in their early stages, both theoretically and experimentally.

Maybe this desired [capability] is already effectively achieved [or currently emerging] in the operation of recurrent ANNs as the effect of collective behavior?

No, but RNNs are a tiny step closer to biological neurons in that their layers maintain state beyond the attenuation matrix.

By attenuation matrix is meant the matrix of parameters used in vector-matrix multiplication to control the strength of signals from the activation functions of one layer to the activation functions of the next. Its common name in machine learning literature is simply, "The parameters." Learning occurs as they change to converge the network on an optimal state.

RNNs are also capable of being Turing complete, so they can theoretically realize arbitrary algorithms.

However, the Requirements for Meta-learning above are not an expected capability of any single instance of an artificial networks, either of the MLP (multilayer perceptron) type or the RNN type. Whether balancing two networks in symbiotic arrangement as in GANs, or a comprehensive recursive algorithm, or the simulation of neural and synaptic plasticity in silicon will lead to Gödel machines is unknown.

Let Science be Science

Whether ideas such as general intelligence are possible is unknown. Whether ideas about singularities in books and mass media are realistic is unknown. Whether the prophetic warnings of screenwriters about the emergence of artificial entities that become the dominant species on earth is unknown.

Jacques Ellul is perhaps the most scientifically accurate prophetic voice. In his The Technological Society, he presents heaps of evidence in support of the idea that humans are already serving an autonomous technology and have been since prior to industrialization.

A Swiss philosopher, Francis Schaeffer, once prophesied, "They will say all kinds of things in the name of science that have nothing to do with science." We should be careful to keep conjecture from reaching the status of scientific fact in technical conversation. If we have no carefully drawn theory or empirical evidence to support a conjecture, it should be stated as a proposal, not a conclusion.

",4302,,-1,,6/17/2020 9:57,10/15/2018 23:28,,,,0,,,,CC BY-SA 4.0 8034,1,,,9/19/2018 7:21,,1,84,"

In physics, there are a lot of graphs, such as 'velocity vs time' , 'time period vs length' and so on.

Let's say I have a sample set of points for a 'velocity vs time' graph. I draw it by hand, rather haphazardly, on a canvas. This drawn graph on the canvas is then provided to the computer. By computer I mean AI.

I want it to sort of beautify my drawn graph, such as straightening the lines, making the curves better, adding the digits on axes and so on. In other words, I want it to give me a better version of my drawn graph which I can readily use in, say, a word document for a report.

a) Is it possible/plausible to do this?

b) Are there any APIs available that can already do this? (Don't want to reinvent the wheel)

c) Any recommendations/suggestions to make the idea possible by altering it somehow?

",18361,,1581,,9/19/2018 18:53,9/19/2018 18:53,How can I use A.I/Image Processing to construct mathematical graphs from drawing?,,0,2,,,,CC BY-SA 4.0 8037,2,,8027,9/19/2018 11:06,,1,,"

You might be interested in the Clarion cognitive architecture, developed by Prof. Ron Sun and collaborators.

Full disclosure: I am a student in Ron Sun's Cognitive Architecture Lab.

Brief Description of Clarion

Clarion agents are composed of several subsystems, each of which may contain several neural networks. These subsystems include the Action-Centered Subsystem (ACS) and the Non-Action-Centered Subsystem (NACS). The ACS controls action decision making, while the NACS stores general knowledge. During an activation cycle, the ACS might request information from the NACS, setting the NACS activation cycle in motion.

Strictly speaking, Clarion does not adopt the BDI framework. But, various components of a Clarion agent can be put in correspondence with BDI concepts. BDI beliefs correspond roughly to knowledge in the ACS and NACS. The Motivational Subsystem (MS) contains agent drives and sets agent goals, which roughly correspond to BDI desires and intentions respectively.

Clarion agents do not directly control when learning happens, but may control which subsystems are active based on task demands, and may output mental actions as part of their processing. Learning generally happens at the end of a subsystem activation cycle based on feedback from the environment or from the agent's own motivational and metacognitive subsystems.

Links

A page on Prof. Ron Sun's site links to several resources on Clarion.

Currently the most up-to-date reference on Clarion is the book Anatomy of the Mind. A precis is available in the journal Cognitive Computation.

",18371,,,,,9/19/2018 11:06,,,,0,,,,CC BY-SA 4.0 8038,1,,,9/19/2018 11:22,,1,78,"

I'm trying to use a CNN to analyse statistical images. These images are not 'natural' images (cats, dogs, etc) but images generated by visualising a dataset. The idea is that these datasets hopefully contain patterns in them that can be used as part of a classification problem.

Most CNN examples I've seen have one or more pooling layers, and the explanation I've seen for them is to reduce the number of training elements, but also to allow for some locational independence of an element (e.g. I know this is an eye, and it can appear anywhere in the image).

In my case location is important and I want my CNN to be aware of that, i.e. the presence of a pattern at a specific location in the image means something very specific compared to when that feature or pattern appears elsewhere.

At the moment my network looks like this (taken from an example somewhere):

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 196, 178, 32)      896       
_________________________________________________________________
activation_1 (Activation)    (None, 196, 178, 32)      0         
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 98, 89, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 96, 87, 32)        9248      
_________________________________________________________________
activation_2 (Activation)    (None, 96, 87, 32)        0         
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 48, 43, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 46, 41, 64)        18496     
_________________________________________________________________
activation_3 (Activation)    (None, 46, 41, 64)        0         
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 23, 20, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 29440)             0         
_________________________________________________________________
dense_1 (Dense)              (None, 32)                942112    
_________________________________________________________________
activation_4 (Activation)    (None, 32)                0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 32)                0         
_________________________________________________________________
dense_2 (Dense)              (None, 3)                 99        
_________________________________________________________________
activation_5 (Activation)    (None, 3)                 0         
=================================================================
Total params: 970,851
Trainable params: 970,851
Non-trainable params: 0
_________________________________________________________________

The 'images' I'm training on are 180 x 180 x 3 pixels and each channel contains a different set of raw data.

What strategies are there to improve my CNN to deal with this? I have tried simply removing some of the pooling layers, but that greatly increased memory and training time and didn't seem to really help.

",18372,,,,,11/19/2018 0:01,CNN Pooling layers unhelpful when location important?,,2,2,,,,CC BY-SA 4.0 8039,1,8047,,9/19/2018 12:35,,1,250,"

I'm finding it hard to understand the relationship between chaotic behavior, the human brain, and artificial networks. There are a number of explanations on the web, but it would be very helpful if I get a very simple explanation or any references providing such simplifications.

",18375,,4302,,9/20/2018 19:04,12/8/2018 23:30,What is chaotic behavior and how it is achieved in non-linear regression and artificial networks?,,2,1,,,,CC BY-SA 4.0 8040,2,,8038,9/19/2018 13:14,,-1,,"

Pooling doesn't completely remove information about the location of features within your image. If you don't want to use pooling but want to reduce the size of your neural net layers, you should try a stride value greater than 1 in your convolutional layers, set via the strides argument of Conv2D (its full signature is shown below):

keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
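For instance, a minimal sketch (assuming the 180 x 180 x 3 inputs and 3 output classes mentioned in the question, and the Sequential Keras API; the layer sizes are only illustrative) that replaces pooling with stride-2 convolutions could look like this:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

model = Sequential()
# stride 2 halves the spatial dimensions, much like 2x2 max pooling,
# but the downsampling is learned rather than fixed
model.add(Conv2D(32, (3, 3), strides=(2, 2), padding='same',
                 activation='relu', input_shape=(180, 180, 3)))
model.add(Conv2D(64, (3, 3), strides=(2, 2), padding='same',
                 activation='relu'))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))
model.summary()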

",18138,,,,,9/19/2018 13:14,,,,2,,,,CC BY-SA 4.0 8044,2,,8039,9/19/2018 16:40,,2,,"

It looks like you have some common misconceptions about AI and neural networks.

First, AI programs generally do not try to imitate the low-level workings of a human brain. Instead, they try to imitate some higher-level behaviour. For example, they might imitate the reasoning process that you go through when you make a plan. In this context, the building-blocks (silicon or flesh) don't matter too much.

Second, artificial neural networks are also (mostly) not intended to imitate the human brain. Although they are inspired by the arrangement of neurons in a human brain, the networks used in most ANN systems have very little to do with real brains. The main similarity is that both systems have a lot of simple little computational units connected in patterns such that signals passed from one to another lead to interesting computations. However, real neurons produce lots of different kinds of signals, are connected in arbitrary ways, and randomization and transmission times play a significant role. Artificial neurons generally are deterministic, produce only one kind of signal (or sometimes a couple different kinds), are connected in extremely regular ways, and usually simulate instantaneous transmissions between neurons.

",16909,,,,,9/19/2018 16:40,,,,0,,,,CC BY-SA 4.0 8045,2,,8031,9/19/2018 16:52,,2,,"

If by ""visit multiple targets"", you mean ""visit several points in the fastest order"", you are no longer in a simple path-finding-style search problem, but instead in an optimization problem. This is roughly the difference between chapters 3 & 6 of Russell & Norvig's section on search.

To do this, you can't just change your heuristic, instead you need to reframe your problem:

  • Instead of states in your search being locations, they should be tours. Each state is a list of all the places you need to visit, in a specific order.

  • Instead of actions being movements from one location to another, they need to be transformations from one tour to another. For example, if you swap the order that you visit two adjacent locations, then you'll get a different tour. This gives you a way to ""move"" between tours.

  • A solution means visiting all the locations as fast as possible. If you know how to get between two locations, just store the distances, and then sum all the distances together to get the cost of a tour. If you don't know, you can just run A* from each place to each other place once, and then cache the distances afterwards.

  • A heuristic will depend on your domain. A reasonable start might be to assume that you can visit each location from the nearest other location you've already visited. Generally, heuristics based on the idea of minimum spanning trees are effective for this domain.

The real answer though, is to try a technique that is meant for this kind of problem, like a local search algorithm. Notice that if we know the cost of moving between any two points, we can just adopt a greedy approach: make the move that improves things the most each time. This is often faster than waiting for A* in practice if you just want a good solution, but it doesn't need to be the very best one.
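As a rough illustration of that greedy idea, here is a minimal sketch assuming you have already computed a pairwise distance table (e.g. from repeated A* runs); all names and the example distances are illustrative:

def greedy_tour(start, targets, dist):
    # repeatedly move to the nearest not-yet-visited target
    tour, current = [start], start
    remaining = set(targets)
    total_cost = 0.0
    while remaining:
        nxt = min(remaining, key=lambda t: dist[current][t])
        total_cost += dist[current][nxt]
        tour.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return tour, total_cost

dist = {0: {1: 2.0, 2: 9.0, 3: 10.0},
        1: {0: 2.0, 2: 6.0, 3: 4.0},
        2: {0: 9.0, 1: 6.0, 3: 3.0},
        3: {0: 10.0, 1: 4.0, 2: 3.0}}
print(greedy_tour(0, [1, 2, 3], dist))   # ([0, 1, 3, 2], 9.0)

A greedy tour like this is not guaranteed to be optimal, but it gives a quick baseline that local search can then improve on.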

",16909,,,,,9/19/2018 16:52,,,,0,,,,CC BY-SA 4.0 8046,2,,7983,9/19/2018 17:43,,0,,"

The disabled/enabled bit in a connection gene indicates whether or not it should be expressed in the calculation of the network.

Here's an example:

This is a neural network and its corresponding connection genes which represent the layout of the network among other things. The top connection gene going from 1 -> 3 that has a weight of 0.1 is expressed in the calculation of the network. The bottom connection gene going from 2 -> 3 that has a weight of 0.4 is not expressed in the calculation of the network.

Calculating the network:

1633 * 0.1 = 163.3

Given the weights in the example the output of this network is 163.3

Hypothetically, if both connection genes had their enabled bit set to true (which can happen in the future, to quote the paper: ""The disabled genes may become enabled again in future generations"") then the output of the network would be:

(1633 * 0.1) + (10 * 0.4) = 167.3

To answer your question, the connection between two nodes still exists in the network regardless of whether the bit in the gene is enabled or disabled, but, if the bit in the gene is disabled it is not used when the output of the network is being calculated as I've shown above.
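A minimal sketch of that calculation (the gene representation here is illustrative, not NEAT's exact data structure):

connection_genes = [
    {'in': 1, 'out': 3, 'weight': 0.1, 'enabled': True},
    {'in': 2, 'out': 3, 'weight': 0.4, 'enabled': False},
]
inputs = {1: 1633, 2: 10}

def node_output(node, genes, inputs):
    # only enabled (expressed) connections contribute to the weighted sum
    return sum(inputs[g['in']] * g['weight']
               for g in genes
               if g['out'] == node and g['enabled'])

print(node_output(3, connection_genes, inputs))   # 163.3
# flipping the second gene's enabled bit to True would give 167.3 instead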

",15356,,15356,,9/19/2018 18:03,9/19/2018 18:03,,,,0,,,,CC BY-SA 4.0 8047,2,,8039,9/19/2018 18:29,,0,,"

Regression for models more complex than $y = a x + b$ is a convergence strategy. Surface fitting algorithms, such as Levenberg–Marquardt, are often successful at achieving regression using a damped version of least squares as an optimization criterion. The marriage of regression and the multilayer perceptron, an early model artificial network, led to the use of a back propagation strategy to distribute corrective signals that drive regression.

Back propagation using gradient descent is now used in artificial networks with a variety of cell and connection designs, such as LSTM and CNN networks as a convergence strategy. Both surface fitting and artificial network convergence share the method of successive approximation. With each successive application of some test, the result is used to attempt to improve the next iteration. Proofs have developed around convergence for many algorithms. Actual successive approximation runs have five possible outcomes.

  • Convergence within the time allotted and within the accuracy required
  • Convergence within the time allotted but not within the accuracy required
  • Convergence appears that it would have occurred but time allotted was exceeded
  • Oscillation appeared by the end of time allotted
  • Chaos appeared by the end of time allotted

The following illustration from Chaos Theory Tamed (Garnett P. Williams, 1997, p 164), modified slightly for easy viewing, can explain how chaos arises when the learning rate or some other factor is set too aggressively. The graphs are of the behavior of the logistic equation $x_{i+1} = k x_i (1 - x_i)$, which plots as an inverted parabola in phase space. The one dimensional maps on the right of each of five cases show the relationship between adjacent values in the time series on the left of each of the five. Although the logistic equation is quite simple compared to regression algorithms and artificial nets, the principles involved are the same.

The right hand cases, with $k = 3.4$ and $k = 3.75$ correspond to the last two possible outcomes in the list above, oscillation and chaos respectively.
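A short sketch that reproduces those two regimes numerically (the initial value and iteration count are arbitrary choices):

def logistic_series(k, x0=0.2, n=60):
    xs = [x0]
    for _ in range(n):
        xs.append(k * xs[-1] * (1.0 - xs[-1]))
    return xs

for k in (3.4, 3.75):
    tail = logistic_series(k)[-6:]            # last few values of the series
    print(k, [round(x, 3) for x in tail])
# k = 3.4  -> the tail settles into an alternation between two values
# k = 3.75 -> the tail shows no repeating pattern (chaos)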

Care in Drawing Parallels

Care should be taken in drawing parallels between distinct things.

  • Surface fitting algorithms, like Levenberg–Marquardt
  • Algorithms that realize back propagation with gradient descent
  • Logical inference AI, such as production systems and fuzzy logic
  • Real time learning, such as Q-learning
  • Devotion of the human brain to a problem

Regression and artificial networks can be compared meaningfully because the math for each is fully defined and easy for those with the mathematical skill to analyze them for the comparison.

Comparing known mathematical systems with unknown biological ones is interesting, but to a large degree, grossly premature. The perceptron, on which MLPs (multilayer perceptrons) and their deep learning derivatives are based, is a simplified and flattened model of what was once thought to be how neurons in the brain work. By flattened is meant that perceptrons are placed in the time domain where they are convenient for looping in software and do not take into consideration these complexities.

  • Neuron behavior is sensitive to the timing of incoming signals — Incoming signals may overlap but not precisely align in time.
  • Neuron behavior is sensitive to the history of incoming signals (because of cell body and axon thermodynamics, synaptic chemistry, and other neuro-chemical and structural functions not yet understood)
  • Neuron structure changes in terms of its connectivity
  • New neurons appear
  • Neurons may die due to cell apoptosis

In summary, multilayer perceptrons are not a model of neural networks in the human brain. They are merely roughly inspired by obsolete knowledge of them.

Chaos in the Human Brain

That there is chaotic behavior in the brain is known; it has been observed in real time. How coupled it is with human intelligence is a matter of conjecture, but it is already clear that it may contribute to function in some cases and to dysfunction in others. This is also true in artificial systems.

  • When used to deliberately interfere with a stable condition that may not be the optimum state of stability to find a better one, chaos may be a source of noise that benefits learning. This is related to the difference between local minima and global minimum. The good is sometimes the enemy of the best. Improved learning speed has been documented for artificial network algorithms with deliberate injection of pseudo-random noise into the back propagation.
  • When chaos appears not as deliberately injected noise in a portion of the system that can benefit from it, but out of a basic instability in the system, it can be detrimental to overall system function. Chaotic behavior in the human brain is a likely cause of various disorders. There is much supporting data but not yet a proof.

In summary, chaos in a system is neither productive nor counterproductive in every case. It depends on where it is in the system (in detail) and what the system design is expected to perform.

",4302,,4302,,12/8/2018 23:30,12/8/2018 23:30,,,,0,,,,CC BY-SA 4.0 8048,1,8062,,9/19/2018 19:01,,2,61,"

I want to implement Sparse Extended information slam. There is four step to implement it. The algorithm is available in Probabilistic Robotics Book at page 310, Table 12.3.

In this algorithm, line 13 is not very clear to me. I have 15 landmarks, so $\mu_t$ will be a vector of dimension (48*1), of which (3*1) is for the pose. Now $H_t^i$ is a matrix whose number of columns is dynamic; as per the algorithm it depends on (3j-3) and 3j, where j ranges over the landmark indices 1 to 15. How could I multiply a dynamically sized quantity with a statically sized one? There must be an error, because I get a matrix dimension mismatch when I implement it in MATLAB.

Please help me to understand the algorithm better.

",18384,,,,,9/20/2018 12:19,SEIF motion update algorithm doubt,,1,0,,,,CC BY-SA 4.0 8051,2,,8031,9/19/2018 22:16,,0,,"

Genetic algorithms are giving promising results for problems with multiple objectives (goals).

The paper below, on NSGA-II, presents one of the best-known algorithms for multi-objective optimization: http://www.iitk.ac.in/kangal/Deb_NSGA-II.pdf

",18195,,,,,9/19/2018 22:16,,,,1,,,,CC BY-SA 4.0 8052,2,,8038,9/19/2018 22:46,,-1,,"

1x1 convolutions might improve accuracy, as they reduce dimensionality in filter space.

Google inception V3 architecture would be a good starting point.

https://arxiv.org/pdf/1512.00567.pdf

",18195,,,,,9/19/2018 22:46,,,,0,,,,CC BY-SA 4.0 8054,2,,5043,9/20/2018 0:28,,1,,"

The connections between ethics and artificial intelligence can be divided into five major categories, and other categories may form over time.

  1. Correlations between ethics and artificial intelligence
  2. Existential impacts of artificial intelligence on the human experience
  3. Threats to current ethical social, economic, and legal standards arising from artificial intelligence research and application
  4. Uses of artificial intelligence to breach ethical standards without detection
  5. Uses of artificial intelligence to detect breaches of ethical standards and assist in remedial action

Since the most important in the long term is the most likely to be dismissed by those with normal perspectives about ethics and AI, the five will be addressed in reverse order.

Automatic Detection and Remedial Action

The pattern recognition capabilities of existing AI systems and sub-systems are already employed to detect a variety of ethical breaches.

  • Securities misconduct, including insider trading
  • Breaches of anti-trust law, including conflicts of interest
  • Employer misconduct, including inequality in hiring
  • Tax evasion
  • Misuse of non-profit funds

Remedial actions may be the opening of a case with the automatic generation of a notice to those in potential breach.

Smart Organized Crime

Although much detail could be included here about detection avoidance in crime using AI, it may not be socially responsible to include such in a global public facing site.

Threats to Economies and Individuals

As with any high impact technology, disruption is a possibility. This was true of fire, irrigation, the wheel, bronze smelting, gun powder, typesetting, steel-working, engines, textile automation, alternating current power distribution, aeronautics, petroleum refining, electronics and radio transmission, pharmacology, terrestrial nuclear reaction, and the Internet. Genetic engineering and artificial intelligence are next in line.

What ethical conventions will likely be impacted?

  • Distribution of employment roles, the change of which may not match distribution of educational preparation
  • Distribution of wealth, favoring prowess in highly automated business
  • Mutual exclusivity of personal privacy and the use of technology
  • Obscurity of totalitarian control (such that common citizens may be more like cogs in a machine than during industrialization)
  • Changes in the balance of world power
  • New forms of asymmetric war, such as cyber-war and autonomous combatants

All of these have either a direct or an indirect impact on the viability of business options and how business may be conducted.

Existential impacts

Some may consider dominion over the earth as an ethical grant to humanity. Others may equate the soul with the cognitive and self-aware aspects of only one species on earth.

We already have a world that is sufficiently disconnected on an existential plane that many consider their cats or dogs as more important than any human. When people are more connected to their intelligent agents than their pets, family, and friends, that may qualify as an ethical impact. Others may see it as a psychological impact.

Realistically, it is ontological. What is a human to think of her or his purpose when it becomes questionable whether homo sapiens is simply a link between DNA based intelligence and some more capable species whose reproduction has been decoupled from DNA coding?

Replacement of jobs has caused changes in what families wish for their children. What will be the impact when few job roles (or eventually none) exist where artificial employees don't exceed their human counterparts in effectiveness?

If humans cannot adjust to the idea that the sole purpose of life has nothing to do with practical provision of water, food, shelter, clothing, and essential products and services, there may be systemic depression. Conversely, leisure may become the reality for all humans, leaving ethics and the finite nature of DNA based life the only two concerns of humans.

Correlations Between Ethics and AI

This is the most unpalatable of categories to examine when the examiner is human. It is possible that artificiality may be an ethics progenitor. The limitation of humans as ethical beings is well documented. It is possible that AI may be more ethical than its developers.

Will a group of AI systems be able to arrive at a method for distribution of power and a standard for global trade that is as good as or better than what humans have been able to negotiate, and then police each other in a way that leaves no possibility of an undetected breach of treaty?

",4302,,,,,9/20/2018 0:28,,,,0,,,,CC BY-SA 4.0 8055,1,,,9/20/2018 1:27,,2,18,"

I am trying to reproduce the model described in the paper DocUNet: Document Image Unwarping via A Stacked U-Net, i.e. stacking two U-Nets to yield one final prediction. The paper mentions that:

The deconvolution features of the first U-Net and the intermediate prediction y1 are concatenated together as the input of the second U-Net.

What is meant by concatenating deconvolution features and the prediction (which is an array? cm)?

The next paragraph says that:

The second U-Net finally gives a refined prediction y2, which we use as the final output of our network. We apply the same loss function to both y1 and y2 during training.

It leads to the next question: Does it mean that I have to train U-Net twice?

",18389,,2444,,6/13/2020 0:04,6/13/2020 0:04,How do we stack two U-Nets to yield one final prediction?,,0,0,,,,CC BY-SA 4.0 8057,1,,,9/20/2018 6:06,,2,146,"

I have created a game based on this game here. I am attempting to use Deep Q Learning to do this, and this is my first foray into Neural networks (please be gentle!!)

I am trying to create a NN that can play this game. Here are some relevant facts about the game:

  • Player 1 (the fox) has 1 piece that he can move diagonally 1 step in any direction

  • Player 2(The geese) has 4 pieces that they can move only forward diagonally (either diagonal left or diagonal right) 1 step.

  • The Fox wins if he reaches the other end of the board, the geese win if they trap the fox so it cannot move.

I am trying to work on the agent first for the geese as it seems to be the harder agent with more pieces and restrictions. Here is the important sections of code I have so far:

This is where I setup the game board, and set the total actions for the geese

def __init__(self):
    self.state_size = (LENGTH,LENGTH) ##LENGTH is 8 so (8,8)
    #...
    #other DQN variables that aren't important to question
    #...
    self.action_size = 8 ##4 geese, each can potentially make 2 moves
    self.model = self.build_model()

And here is where I create my model

def build_model(self):
    #builds the NN for Deep-Q Model
    model = Sequential() #establishes a feed forward NN
    model.add(Dense(64,input_shape = (LENGTH,), activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(self.action_size, activation = 'linear'))
    model.compile(loss='mse', optimizer='Adam')

This is where I perform an action

def act(self, state,env):
    #get the list of allowed actions for the geese
    actions_allowed = env.allowed_actions_geese_agent()

    if np.random.rand(0,1) <= self.epsilon: ##do a random move
        return actions_allowed[random.randint(0, len(actions_allowed)-1)]
    act_values = self.model.predict(state)
    print(act_values)
    return np.argmax(act_values)

My question: Since there are 4 geese and each can make 2 possible moves, am I correct in thinking that my action_size should be 8 (2 for each goose) or should it be maybe 2 (for diagonal left or right) or something else entirely?

The reason why I am at a loss is because on any given turn, some of the geese may have an invalid move, does that matter?

My next Question: Even if I have the right output layer for the geese agent, when I call model.predict(state) where I pick my action...how do I interpret the output? And how would I map that action it selects to a valid action that can be made?

Here is a picture of the result of using model.predict(state), as you can see it returns a ton of data and then when I call return np.argmax(act_values) I get 59 back...not sure how to utilize that (or if it's even correct based on my output layer)... and finally I included a drawing of the board. F is the fox and 1,2,3,4 are the different geese.

I apologize for the massive post, but I am just trying to provide as much information that is helpful.

",18244,,,,,9/20/2018 6:06,Mapping Actions to the Output Layer in Keras Model for a Board Game,,0,0,,,,CC BY-SA 4.0 8058,1,,,9/20/2018 6:46,,1,682,"

Problem Statement

I have 4 main input features.

This is a small snippet of the data for clearer understanding.

Gate name -> for example AND Gate

index_1 -> [0.001169, 0.005416, 0.01391, 0.03037, 0.06381, 0.1307, 0.2645, 0.532]

index_2 -> [7.906e-05, 0.001123, 0.00321, 0.007253, 0.01547, 0.03191, 0.06478, 0.1305]

values -> [[11.0081, 14.0303, 18.8622, 27.3426, 43.8661, 76.7538, 142.591, 274.499], [11.3461, 14.3634, 19.1985, 27.6827, 44.2106, 77.0954, 142.926, 274.879], [12.258, 15.2816, 20.1095, 28.5856, 45.1057, 77.9778, 143.8, 275.758], [13.665, 16.7457, 21.5835, 30.0545, 46.5581, 79.4212, 145.252, 277.192], [15.6636, 18.9526, 23.9051, 32.4281, 48.9011, 81.7052, 147.477, 279.371], [17.8838, 21.5839, 26.8957, 35.7103, 52.3901, 85.2132, 150.89, 282.714], [19.3338, 23.6933, 29.7184, 39.1212, 56.4053, 89.9721, 155.913, 287.637], [18.7856, 23.9999, 31.1794, 41.7549, 60.0043, 95.0488, 162.951, 295.005]]

My task is to predict this values matrix, given that I have index_1 and index_2. Originally this values matrix is propagation delay, calculated using a simulator called SPICE.

Where I am facing problem

  1. There is no known written relation between index_1, index_2, and values, since the simulator calculates these values using its own models.

  2. I have made a CSV file which contains the data in separate columns.

  3. Another approach I thought of: give index_1, index_2, and any 5*5 sub-matrix to the model, and have the model predict the values of the whole 8*8 matrix. But the problem again is which machine learning model to use.

Approaches Tried so Far

  1. I have tried a CNN model for this but it is giving me very low accuracy.

  2. Used one dense fully connected neural network, but it is over-fitting the data and not giving me usable values for the matrix.

I am still stuck on how to predict the matrix values given this data. What other strategies can be used?

",18392,,4302,,9/21/2018 3:31,12/31/2022 14:06,Machine learning to predict 8*8 matrix values using three independent matrices,,1,2,,,,CC BY-SA 4.0 8060,1,8075,,9/20/2018 8:14,,1,902,"

The Wumpus World proposed in book of Stuart Russel and Peter Norvig, is a game which happens on a 4x4 board and the objective is to grab the gold and avoiding the threats that can kill you. The rules of game are:

  • You move just one box for round

  • Start in position (1,1), bottom left

  • You have a vector of sensors for perceiving the world around you.

  • When you are next to another position of interest (including the gold), the sensor vector is 'activated'.

  • There is one wumpus (a monster), 2-3 pits (feel free to put more or less) and just one gold pot

  • You only have one arrow that flies in a straight line and can kill the wumpus

  • Entering the room with a pit, the wumpus or the gold finishes the game

Scoring is as follows: +1000 for grabbing the gold, -1000 for dying to the wumpus, -1 for each step, -10 for shooting an arrow. For more details about the rules, chapter 7 of the book explains them.

Well, now that the game has been explained, the question is: in the book, the solution is demonstrated by logic and searching; does there exist another way to solve that problem with neural networks? If yes, how to do that? What topology to use? What paradigm of learning and what algorithms to use?

1*: My English is horrible; if you can send grammar corrections, I'm grateful.

2*: I think this is a bit confusing and a bit complex. If you can help me clarify it better, please comment or edit!

",,user18391,4709,,9/20/2018 12:41,9/21/2018 13:28,Does a solution for Wumpus World with neural networks exist?,,1,0,,,,CC BY-SA 4.0 8061,1,,,9/20/2018 8:51,,1,1206,"

I am trying to implement this paper. In this paper, the author uses the forward derivative to compute the Jacobian matrix dF/dx using the chain rule, where F is the probability obtained from the last layer and X is the input image. My model is given below. Kindly let me know how to go about doing that.

import torch
import torch.nn as nn

class LeNet5(nn.Module):

    def __init__(self):
        super(LeNet5, self).__init__()

        self.derivative = None  # store derivative

        self.conv1 = nn.Conv2d(1, 6, 5)
        self.relu1 = nn.ReLU()
        self.maxpool1 = nn.MaxPool2d(2, 2)

        self.conv2 = nn.Conv2d(6, 16, 5)
        self.relu2 = nn.ReLU()
        self.maxpool2 = nn.MaxPool2d(2, 2)

        self.conv3 = nn.Conv2d(16, 120, 5)
        self.relu3 = nn.ReLU()

        self.fc1 = nn.Linear(120, 84)
        self.relu4 = nn.ReLU()

        self.fc2 = nn.Linear(84, 10)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, img, forward_derivative=False):
        output = self.conv1(img)
        output = self.relu1(output)
        output = self.maxpool1(output)

        output = self.conv2(output)
        output = self.relu2(output)
        output = self.maxpool2(output)

        output = self.conv3(output)
        output = self.relu3(output)

        output = output.view(-1, 120)
        output = self.fc1(output)
        output = self.relu4(output)

        output = self.fc2(output)
        F = self.softmax(output)

        # want to compute the jacobian dF/dimg
        jacobian = computeJacobian(F, img)  # how do I write this function?

        return F, jacobian
",17372,,,,,9/21/2018 19:59,Compute Jacobian matrix of Deep learning model?,,1,6,,2/14/2022 16:36,,CC BY-SA 4.0 8062,2,,8048,9/20/2018 12:19,,1,,"

You are right, that pseudocode is not correct. In particular, the definition of $H_t^i$ in line $11$ should be changed; all the way on the right-hand side, it should have $3N - 3j$ columns of $0$s, rather than $3j$ columns of $0$s.

With that change, every matrix $H_t^i$ will have the same number of columns:

$$6 + 3j - 3 + 3N - 3j = 3 + 3N,$$

which evaluates to a total of $48$ in your case (because you have $N = 15$ landmarks). That's precisely the correct dimensionality required for matrix multiplication with your $\mu_t$ vector.
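A quick dimensionality check of that fix (a minimal NumPy sketch; only the column counts matter here, so the nonzero blocks are left as zeros):

import numpy as np

N = 15                                          # number of landmarks
mu = np.zeros(3 + 3 * N)                        # pose (3) + landmarks (3N) = 48 entries

for j in range(1, N + 1):
    width = 6 + (3 * j - 3) + (3 * N - 3 * j)   # columns of H_t^i after the fix
    H = np.zeros((3, width))                    # nonzero blocks omitted in this check
    assert width == 3 + 3 * N                   # always 48, independent of j
    print(j, (H @ mu).shape)                    # (3,) -- the product is well defined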


The version of the book that you linked to appears to be a fairly old draft. This webpage contains errata for the third edition for the book, in which page 393 corresponds to what was page 310 in your version of the book. The errata for that third edition of the book can be downloaded at the following URL: http://probabilistic-robotics.informatik.uni-freiburg.de/corrections/pg393.pdf

There you'll find the fix that I described above, but also some other fixes (most of them are just notational, adding bars over the $\mu$ vectors, but it looks like a more serious issue was additionally fixed in line 13, where a minus was changed to a plus).

",1641,,,,,9/20/2018 12:19,,,,2,,,,CC BY-SA 4.0 8063,1,8073,,9/20/2018 13:14,,10,4101,"

I'm training an auto-encoder network with Adam optimizer (with amsgrad=True) and MSE loss for Single channel Audio Source Separation task. Whenever I decay the learning rate by a factor, the network loss jumps abruptly and then decreases until the next decay in learning rate.

I'm using Pytorch for network implementation and training.

Following are my experimental setups:

 Setup-1: NO learning rate decay, and 
          Using the same Adam optimizer for all epochs

 Setup-2: NO learning rate decay, and 
          Creating a new Adam optimizer with same initial values every epoch

 Setup-3: 0.25 decay in learning rate every 25 epochs, and
          Creating a new Adam optimizer every epoch

 Setup-4: 0.25 decay in learning rate every 25 epochs, and
          NOT creating a new Adam optimizer every time rather
          using PyTorch's ""multiStepLR"" and ""ExponentialLR"" decay scheduler 
          every 25 epochs

I am getting very surprising results for setups #2, #3, #4 and am unable to find any explanation for them. Following are my results:

Setup-1 Results:

Here I'm NOT decaying the learning rate and 
I'm using the same Adam optimizer. So my results are as expected.
My loss decreases with more epochs.
Below is the loss plot this setup.

Plot-1:

optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

for epoch in range(num_epochs):
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-2 Results:  

Here I'm NOT decaying the learning rate but every epoch I'm creating a new
Adam optimizer with the same initial parameters.
Here also results show similar behavior as Setup-1.

Because at every epoch a new Adam optimizer is created, the calculated gradients
for each parameter should be lost, but it seems that this does not affect the
network learning. Can anyone please help on this?

Plot-2:

for epoch in range(num_epochs):
    optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-3 Results: 

As can be seen from the results in below plot, 
my loss jumps every time I decay the learning rate. This is a weird behavior.

If it was happening due to the fact that I'm creating a new Adam
optimizer every epoch, then it should have happened in Setups #1 and #2 as well.
And if it is happening due to the creation of a new Adam optimizer with a new 
learning rate (alpha) every 25 epochs, then the results of Setup #4 below also 
denies such correlation.

Plot-3:

decay_rate = 0.25
for epoch in range(num_epochs):
    optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

    if epoch % 25 == 0  and epoch != 0:
        lr *= decay_rate   # decay the learning rate

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-4 Results:  

In this setup, I'm using Pytorch's learning-rate-decay scheduler (multiStepLR)
which decays the learning rate every 25 epochs by 0.25.
Here also, the loss jumps every time the learning rate is decayed.

As suggested by @Dennis in the comments below, I tried with both ReLU and 1e-02 leakyReLU nonlinearities. But the results behave similarly: the loss first decreases, then increases, and then saturates at a higher value than what I would achieve without learning rate decay.

Plot-4 shows the results.

Plot-4:

scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[25,50,75], gamma=0.25)

scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)

scheduler = ......... # defined above
optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

for epoch in range(num_epochs):

    scheduler.step()

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

EDITS:

  • As suggested in the comments and reply below, I've made changes to my code and trained the model. I've added the code and plots for the same.
  • I tried with various lr_scheduler in PyTorch (multiStepLR, ExponentialLR) and plots for the same are listed in Setup-4 as suggested by @Dennis in comments below.
  • Trying with leakyReLU as suggested by @Dennis in comments.

Any help is appreciated. Thanks.

",8720,,8720,,10/3/2018 7:35,10/10/2018 18:24,Loss jumps abruptly when I decay the learning rate with Adam optimizer in PyTorch,,1,1,,,,CC BY-SA 4.0 8065,2,,8061,9/20/2018 18:32,,2,,"

In the paper The Limitations of Deep Learning in Adversarial Settings, Papernot et al., 2016, IEEE, the chain rule is used ""To express $\nabla F(X∗)$ in terms of $X$ and constant values only.""

Earlier is stated, ""Our understanding of how changes made to inputs affect a DNN’s output stems from the evaluation of the forward derivative: a matrix we introduce and define as the Jacobian of the function learned by the DNN. The forward derivative is used to construct adversarial saliency maps indicating input features to include in perturbation $\partial X$ in order to produce adversarial samples inducing a certain behavior from the DNN;""

And later, ""We define the forward derivative as the Jacobian matrix of the function F learned by the neural network during training. For this example, the output of F is one dimensional, the matrix is therefore reduced to a vector (below). Both components of this vector are computable using the adversary’s knowledge, and later we show how to compute this term efficiently.""

$\nabla F(X) = \Big[ \frac {\partial F(X)} {\partial X_1}, \frac {\partial F(X)} {\partial X_2} \Big] \quad\quad (2)$

Between equation (2) and equation (6) your questeion is answered, resulting in an equation where the chain rule has already been applied.

$\frac {\partial F_j (X)} {\partial x_i} = \Big(W_{n+1,j} \, . \, \frac {\partial H_n} {\partial x_i} \Big) \times \frac {\partial f_{n+1,j}} {\partial x_i} \;(W_{n+1,j} \, . \, H_n + b_{n+1,j}) \quad\quad(6)$

It is (6) that you must implement in code, but that is done after the DNN has converged, as the paper states above.

Applying the Chain Rule

If $z = f(y)$ and $y = g(x)$, then $\frac {dz} {dx} = f'(g(x)) g'(x)$

Applied twice, if $w = e(z)$, $z = f(y)$, and $y = g(x)$, then $\frac {dw} {dx} = e'(f(g(x)) f'(g(x)) g'(x)$

Applied indefinitely, we multiply all the first order derivatives of the intermediate values, which can be stated as follows.

If $v_{n + 1} = f_{n + 1}(v_n)$ for $0 \le n \lt N$, $F_{n+1}(x) = f_{n+1}(F_n(x))$ with $F_0(x) = x$, and $g_n(a) = \dfrac {d} {d a} \, f_n(a)$, then $\dfrac {d} {d v_0} F_N(v_0) = \prod_{\,n = 1}^{\,N} {g_n} \big(F_{n-1}(v_0)\big)$

Derivatives of convolution kernel values are the values themselves because they represent attenuation of the form $y = ax + b$. This is true of first degree linear activation functions too. The derivative normally used for ReLU (although the x = 0 case is actually undefined) is as follows:

$\dfrac {d} {dx} R(x) = {\begin{cases}0 & x < 0\\1 & x = 0\\1 & x > 0 \end{cases}}$

The derivative of a max function used in max pool (although the u = v case is actually undefined) is as follows:

$\dfrac {d} {dt} M(u, v) = {\begin{cases}\frac {dv} {dt} & u < v\\\frac {dv} {dt} & u = v\\\frac {du} {dt} & u > v\end{cases}}$
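Separately, if the goal is simply to obtain the numerical Jacobian dF/dX rather than hand-coding equation (6), a minimal PyTorch sketch (assuming a model whose forward returns only the softmax output F, unlike the tuple-returning forward in the question) can let autograd apply the chain rule:

import torch

def compute_jacobian(model, img):
    # Returns a tensor of shape (num_classes, *img.shape[1:]): row j holds dF_j / d img,
    # obtained by back-propagating each class probability to the input in turn.
    img = img.clone().detach().requires_grad_(True)
    F = model(img)                                   # assumed shape (1, num_classes)
    rows = []
    for j in range(F.shape[1]):
        grad_j = torch.autograd.grad(F[0, j], img, retain_graph=True)[0]
        rows.append(grad_j.squeeze(0))
    return torch.stack(rows)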

",4302,,4302,,9/21/2018 19:59,9/21/2018 19:59,,,,3,,,,CC BY-SA 4.0 8067,2,,7897,9/20/2018 20:59,,4,,"

State of Rosehip Research

The Rosehip neuron is an important discovery, with vast implications for AI and its relationship to the dominant intelligence on earth for at least the last 50,000 years. The paper that has spawned other articles is Transcriptomic and morphophysiological evidence for a specialized human cortical GABAergic cell type, Boldog et al., September 2018, Nature Neuroscience.

The relationship between this neuron type and its DNA expression is just beginning to be studied. No data is available regarding the impact of the Rosehip distinctions on neural activity during learning or when leveraging what has been learned. Surely, research along those lines is indicated, but the discovery was just published.

Benefit of the Interdisciplinary Approach to AI

That those who reference papers like this can see value in the unification or at least alignment of knowledge across disciplines is most likely beneficial to AI progress and progress in the other fields of cognitive science, bioinformatics, business automation, manufacturing and consumer robotics, psychology, and even law, ethics and philosophy.

That such interest in aligning understanding along interdisciplinary lines is present in AI Stack Exchange is certainly beneficial to the community growth in both professional and social dimensions.

Disparity Between What Works

In the human brain, neurons work. Whether Rosehip neurons are a prerequisite to language, the building of and leveraging of complex models, or transcendent emotions such as love in homo sapiens is unknown and will remain so in the near future. However, we have a fifty millennia long proof of concept.

We also know that artificial networks work. We use them in business, finance, industry, consumer products, and a variety of web services today. When a pop-up asks whether the answer given was helpful, our answer becomes a label in a set of real data from which samples are extracted for machine learning.

Nonetheless, the cells that are working are offspring of the 1957 perceptron with the addition of the application of gradient descent using an efficient corrective signal distribution strategy we call back propagation. The comprehension of neuron function in 1957 was grossly short of what we now know to be functional features of mammalian brain neurons. The Rosehip discovery may widen that gap.

Spiking Networks

The spiking network research more realistically models neurons, and neuromorphic research and development has been placing improved models into VLSI chips. The joint venture between IBM and MIT is another example of such work.

Correlating Neural Function to Brain Function

The relationship between intelligence and the number of proteins or molecules may not be the most telling. The following are more likely candidates for relationships between measurable features and the intelligence of the system.

  • Genetic features that have been identified (22 of them) that directly affect intelligence testing results — For instance the correlation between polymorphisms of the oxytocin receptor genes OXTR rs53576, rs2254298, and rs2228485 and intelligence is known — See the question containing references to discovery of 22 genes that affect intelligence test results significantly
  • Neurochemical expression resulting from environmental factors varying the levels of oxytocin, dopamine, serotonin, neuropeptide Y, and cannabinoids, which is involved in global and regional functional behavior in the human brain
  • Signal topology (distinct from sizes and counts and distinct from the topology created by packaging neural nets in the cranial region) — Signal topology is now being identified. Scanning technology has developed to the point where signal paths can be identified by tracking pulses in temporal space to determine causality.
  • Synaptic plasticity, a type of neural plasticity
  • Total number of neurons applied to a particular brain function
  • Impact of axon and cell body thermodynamics on signal transmission, a key element in modelling a brain neuron

None of these are yet modelled in such a way that simulation accuracy has been confirmed, but the need to research along these lines is clearly indicated as this question implies.

",4302,,4302,,10/16/2018 0:03,10/16/2018 0:03,,,,0,,,,CC BY-SA 4.0 8068,1,8072,,9/20/2018 21:33,,2,116,"

Many have examined the idea of modifying learning rate at discrete times during the training of an artificial network using conventional back propagation. The goals of such work have been a balance of the goals of artificial network training in general.

  • Minimal convergence time given a specific set of computing resources
  • Maximal accuracy in convergence with regard to the training acceptance criteria
  • Maximal reliability in achieving acceptable test results after training is complete

The development of a surface involving these three measurements would require multiple training experiments, but may provide a relationship that itself could be approximated either by curve fitting or by a distinct deep artificial network using the experimental results as examples.

  • Epoch index
  • Learning rate hyper-parameter value
  • Observed rate of convergence

The goal of such work would be to develop, via manual application of analytic geometry experience or via deep network training, the following function, where

  • $\alpha$ is the ideal learning rate for any given epoch indexed by $i$,
  • $\epsilon$ is the loss function result, and
  • $\Psi$ is a function the result of which approximates the ideal learning rate for as large an array of learning scenarios possible within a clearly defined domain.

$\alpha_i = \Psi (\epsilon, i)$

Arriving at $\Psi$ in closed form (as a formula) would be of general academic and industrial value.

Has this been done?

",4302,,,,,9/21/2018 8:31,Is a calculus or ML approach to varying learning rate as a function of loss and epoch been investigated?,,1,0,,,,CC BY-SA 4.0 8069,2,,7891,9/21/2018 0:19,,1,,"

A generative adversarial network is probably not the best approach for generating the images desired. We can assume from the comments that the data is not collected. That's a good thing, because a set of rasterized images, labeled with student age or grade is an inferior input form.

It appears that access to a student population is planned or already negotiated, which is also good.

Although the drawing, as it is being drawn, is seen through each student's eyes, the primary features correlated with drawing skill development are motor control, shape formation, and color choice. If the sheet of paper is placed over a drawing tablet, the tablet's incoming USB stream events are captured to a file, and the color selection is somehow recorded or automatically determined by having students hold the pencil or crayon up to the computer's camera before using it, a much better in natura input stream can be developed.

Pre-processing can lead to an expression of each drawing experience as a sequence of events arranged in temporal order with the following dimensions for each event.

  • Relative time from the instruction to draw in seconds
  • Color
  • Nearest x grid
  • Nearest y grid
  • Pressure

Determining color from camera input may be developed using LSTM approaches.

The dimensions of the label for each of these sequences would be those demographics and rankings that would most closely correlate with developmental stages.

  • Student age
  • Student gender
  • Curriculum grade (-1, 0, 1, 2, ... 12, where -1 is preschool and 0 is kindergarten)
  • Identifier of the drawing instructions given to the class
  • Grade ranking of the student in the class

The micro-analysis attached to each ELEMENT in the sequence includes these additional dimensions.

  • Drawing rate of the utensil given by $r = \frac {\sqrt{(x - x_p)^2 + (y - y_p)^2}} {t - t_p}$ where the subscript p indicates the values are drawn from the previous event in the sequence.
  • Drawing direction given by $\theta = \arctan (x - x_p, \; y - y_p)$
  • Curvature $\kappa$ calculated using cubic splines or some other data fitting approach
  • FFT spectrum $\vec{a}$ and Lyapunov exponent $\lambda$ applied to auto-correlation results

This is a modification of the system Google uses to synthesize speech, based on the WaveNet design. In the diagram, the residual function is defined as follows.

$z = \tanh \, (W_{f,k} x + V_{f,k} y) \, \odot \, \sigma \, (W_{g,k} x + V_{g,k} y)$

The development required is that the $\vec{a}$ must now be accompanied by the scalars $r$, $\theta$, $\kappa$, and $\lambda$, but the resulting drawings are likely to have many of the hand-eye developmental features of the examples.

",4302,,,,,9/21/2018 0:19,,,,1,,,,CC BY-SA 4.0 8072,2,,8068,9/21/2018 7:29,,3,,"

Has this been done?

Difficult to prove a negative, but I suspect although plenty of research has been done into finding ideal learning rate values (the need for learning rate at all is an annoyance), it has not been done to the level of suggesting a global function worth approximating.

The problem is that learning rate tuning, like other hyperparameter tuning, is highly dependent on the problem at hand, plus the other hyperparamater values currently in use, such as size of layers, which optimiser is in use, what regularisation is being used, activation functions.

Although you may be hoping for $\Psi(\epsilon, i | P)$ to exist where P is the problem domain, it likely does not except as a mean value over all $\Psi(\epsilon, i | D, H)$ for the problem domain, where D is the dataset and H all the other hyperparameters.

It is likely that such a function exists, of ideal learning rate for best expected convergence per epoch. However, it would be incredibly expensive to sample it with enough detail to make approximating it useful. Coupled with limited applicability (not domain-specific, but linked to data and other hyperparameters), a search through all possible learning rate trajectories looks like it would give poor return on investment.

Instead, the usual pragmatic approaches are:

  • Include learning rate in hyperparameter searches, such as grid search, random search, genetic algorithms and other global optimisers.

  • Decay learning rate using one of a few approaches that have been successfully guessed and experiments have shown working. These have typically been validated by plotting learning curves of loss functions or other metrics, and the same tracking is usually required in new experiments to check that the approach is still beneficial.

  • Some optimisers use a dynamic learning rate parameter, which is similar to your idea but based on reacting to measurements during learning as opposed to changes based on an ideal function. They have a starting learning rate, then adjust it based on heuristics derived from measuring learning progress. These heuristics can be based on per-epoch measurements, such as whether a validation set result is improving or not. One such approach is to increase learning rate whilst results per epoch are improving, and reduce learning rate if results are not improving, or have got worse.

I have tried this last option, on a Kaggle competition, and it worked to some extent for me, but did not really improve results overall - I think it is one of many promising ideas in ML that can be made to work, but that has not stayed as a ""must have"", unlike say dropout, or using CNNs for images.
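A rough sketch of that reactive idea, e.g. in PyTorch, is the built-in ReduceLROnPlateau scheduler (note it only ever reduces the rate, never increases it; optimizer, num_epochs, train_one_epoch() and validate() are placeholders for your own code):

import torch

scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=3)

for epoch in range(num_epochs):
    train_one_epoch()            # placeholder for the usual training loop
    val_loss = validate()        # placeholder validation metric
    scheduler.step(val_loss)     # cuts the learning rate when val_loss stalls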

Some optimisers store a multiplier per layer or even per weight - RMSProp and Adam for example track rate of change of each parameter, and adjust the rate for each weight during updates. These can work very well in large networks, where the issue is not so much needing a specific learning rate at any time, but that a single learning rate is too crude to cover the large range of gradients and differences in gradients across the index space of all the connections. With RMSProp and Adam, the need to pick specific learning rates or explore them is much reduced, and often a library's default is fine.

",1847,,1847,,9/21/2018 8:31,9/21/2018 8:31,,,,0,,,,CC BY-SA 4.0 8073,2,,8063,9/21/2018 8:48,,11,,"

I see no reason why decaying learning rates should create the kinds of jumps in losses that you are observing. It should ""slow down"" how quickly you ""move"", which in the case of a loss that otherwise consistently shrinks really should, at worst, just lead to a plateau in your losses (rather than those jumps).

The first thing I observe in your code is that you re-create the optimizer from scratch every epoch. I have not yet worked enough with PyTorch to tell for sure, but doesn't this just destroy the internal state / memory of the optimizer every time? I think you should just create the optimizer once, before the loop through the epochs. If this is indeed a bug in your code, it should also actually still be a bug in the case where you do not use learning rate decay... but maybe you simply get lucky there and don't experience the same negative effects of the bug.

For learning rate decay, I'd recommend using the official API for that, rather than a manual solution. In your particular case, you'll want to instantiate a StepLR scheduler, with:

  • optimizer = the ADAM optimizer, which you probably should only instantiate once.
  • step_size = 25
  • gamma = 0.25

You can then simply call scheduler.step() at the start of every epoch (or maybe at the end? the example in the API link calls it at the start of every epoch).


If, after the changes above, you still experience the issue, it would also be useful to run each of your experiments multiple times and plot average results (or plot lines for all experiments). Your experiments should theoretically be identical during the first 25 epochs, but we still see huge differences between the two figures even during those first 25 epochs in which no learning rate decay occurs (e.g., one figure starts at a loss of ~28K, the other starts at a loss of ~40K). This may simply be due to different random initializations, so it'd be good to average that nondeterminism out of your plots.

",1641,,,,,9/21/2018 8:48,,,,1,,,,CC BY-SA 4.0 8075,2,,8060,9/21/2018 13:28,,2,,"

Yes! If you read ahead to the chapters on reinforcement learning in the same book, you'll see that the wumpus world appears again there. Techniques like Q-learning can be used to solve it, and since Q-learning involves learning the shape of a function, a neural network can be employed as a function approximator.

The basic idea is to treat this problem as an input/output mapping (states -> actions), and to learn which actions produce the greatest rewards.
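To make that concrete, here is a minimal sketch of a tabular Q-learning update (state and action encodings and the hyperparameter values are placeholders; a neural network would replace the Q table as the function approximator):

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1        # placeholder hyperparameters
Q = defaultdict(lambda: defaultdict(float))   # Q[state][action] -> estimated value

def choose_action(state, actions):
    # epsilon-greedy: mostly exploit the current estimates, occasionally explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state][a])

def q_update(state, action, reward, next_state, next_actions):
    best_next = max((Q[next_state][a] for a in next_actions), default=0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])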

Note however, that these approaches rely on trial and error. The logic based approach reasons about the rules of the game, and can play reasonably well right away. The learning approach will need to try and fail many times before playing well.

",16909,,,,,9/21/2018 13:28,,,,0,,,,CC BY-SA 4.0 8079,1,,,9/21/2018 21:03,,1,16,"

The idea that came to my mind is what I call a Value Based Model for an ANN. We use a simple DCF formula to calculate a kind of Q value: rewards / discount rate, where the discount rate is the risk of getting the reward given the information the agent knows about. Of course, if you have many factors, you just sum them. So we calculate a future value for every cell the agent has information about, and this is the predicted data. We take predicted minus actual and teach the model using a loss function. Rephrased: does increasing the output layer actually train the model to be better? The human logic is that if I took a course, I have a bigger value that helps me to live. What about a NN? Does it actually become more precise if we increase the output over time?

",18436,,,,,9/21/2018 21:03,Does inflation should occur in output layer when I do Artificial Neural Network to increase smartness of the model?,,0,0,,,,CC BY-SA 4.0 8080,1,8082,,9/21/2018 22:38,,5,123,"

Let’s say I have a neural net doing classification and I’m doing stochastic gradient descent to train it. If I know that my current approximation is a decent approximation, can I conclude that my gradient is a decent approximation of the gradient of the true classifier everywhere?

Specifically, suppose that I have a true loss function, $f$, and an estimation of it, $f_k$. Is it the case that there exists a $c$ (dependent on $f_k$) such that for all $x$ and $\epsilon > 0$ if $|f(x)-f_k(x)|<\epsilon$ then $|\nabla f(x) - \nabla f_k(x)|<c\epsilon$? This isn’t true for general functions, but it may be true for neural nets. If this exact statement isn’t true, is there something along these lines that is? What if we place some restrictions on the NN?

The goal I have in mind is that I’m trying to figure out how to calculate how long I can use a particular sample to estimate the gradient without the error getting too bad. If I am in a context where resampling is costly, it may be worth reusing the same sample many times as long as I’m not making my error too large. My long-term goal is to come up with a bound on how much error I have if I use the same sample $k$ times, which doesn’t seem to be something in the literature as far as I’ve found.

",12732,,12732,,9/25/2018 23:49,9/25/2018 23:49,Do good approximations produce good gradients?,,1,1,,,,CC BY-SA 4.0 8081,2,,7896,9/22/2018 1:05,,1,,"

It appears that the homework was due two days prior to this answer's writing, but in case it is still relevant in some way, the relevant class notes (which would have been useful if provided in the question along with the homework) are here.

The first instance of expectation placed on the student is, ""Please show equation 12 by using the law of iterated expectations, breaking $\mathbb{E}_{\tau \sim p_\theta(\tau)}$ by decoupling the state-action marginal from the rest of the trajectory."" Equation 12 is this.

$\sum_{t = 1}^{T} \mathbb{E}_{\tau \sim p_\theta(\tau)} [\nabla_\theta \log \pi_\theta(a_t|s_t)\, b(s_t)] = 0$

The class notes identify $\pi_\theta(a_t|s_t)$ as the state-action marginal. What is sought is not a proof, but a sequence of algebraic steps that perform the decoupling and show the degree to which independence of the state-action marginal can be achieved.

This exercise is a preparation for the next step in the homework and draws only on the review of CS189, Berkeley's Introduction to Machine Learning course, which does not contain the Law of Total Expectation in its syllabus or class notes.

All the relevant information is in the above link for class notes and requires only intermediate algebra.

",4302,,4302,,9/22/2018 4:30,9/22/2018 4:30,,,,0,,,,CC BY-SA 4.0 8082,2,,8080,9/22/2018 4:05,,5,,"

In general $|f(x) - f_k(x)| \leq \epsilon$ doesn't ensure $|\nabla f(x) - \nabla f_k(x)| \leq c\epsilon$. And for neural networks there is no reason to believe it will happen either.

You can also look at the Sobolev Training paper (https://arxiv.org/abs/1706.04859). In particular, note that Sobolev training was better than critic training, which indirectly indicates that approximating the function may not be the same as approximating both the gradient and the function. In Sobolev training, the network is trained to match the gradient and the function, whereas in critic training the network is trained to match only the function. They produce quite different results, which might give us some hints about the above problem.

In general, if two functions are arbitrarily close, they might not be close in gradients.

Edit (trying to come up with a negative example): Consider $f(x) = g(x) + \epsilon \sin \left(\frac {kx} {\epsilon}\right)$, where $g(x)$ is some neural network. Now, we train another neural network $h(x)$ to fit $f(x)$, and after training we get $h(x) = g(x)$ (i.e. $h(x)$ and $g(x)$ have precisely the same weights). However, $\nabla f(x) = \nabla g(x) + k\cos \left(\frac {kx} {\epsilon}\right)$ is not arbitrarily close to $\nabla g(x)$.
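A quick numerical check of this counterexample (the constants are arbitrary choices):

import numpy as np

eps, k = 0.01, 5.0
g  = lambda x: x**2                    # plays the role of g(x) = h(x)
f  = lambda x: g(x) + eps * np.sin(k * x / eps)
dg = lambda x: 2 * x
df = lambda x: 2 * x + k * np.cos(k * x / eps)

x = np.linspace(-1.0, 1.0, 100001)
print(np.max(np.abs(f(x) - g(x))))     # <= eps = 0.01: the functions are uniformly close
print(np.max(np.abs(df(x) - dg(x))))   # ~ k = 5: the gradients are far apart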

I hope this example is enough to show that a neural network that nicely approximates the function may not nicely approximate the gradients, and that no such result can be proved with mathematical rigour. However, considering the paper discussed above, you might find that for practical purposes it works. Still, if you have both function and gradient information available, using both is expected to work better.

",18443,,18443,,9/22/2018 17:43,9/22/2018 17:43,,,,1,,,,CC BY-SA 4.0 8083,1,8108,,9/22/2018 5:59,,3,79,"

While we train a CNN model we often experiment with the number of filters, the number of convolutional layers, FC layers, filter size, sometimes stride, activation function, etc. More often than not after training the model once, it is just a trial & error process.

  1. Is there a way that helps me to architect my model fundamentally before training?

  2. Once I train the model, how do I know which of these variables (number of filters, filter size, number of convolutional layers, FC layers) should be changed - increased or decreased?

P.S. This question assumes that data is sufficient in volume and annotated properly and still accuracy is not up to the mark. So, I've ruled out the possibility of non-architectural flaws for the question.

",17980,,2444,,5/19/2020 19:41,5/19/2020 20:12,Is there a way that helps me to architect my CNN fundamentally before training?,,1,3,,,,CC BY-SA 4.0 8085,2,,2008,9/22/2018 6:58,,12,,"

In NLP you have an inherent ordering of the inputs so RNNs are a natural choice.

For variable sized inputs where there is no particular ordering among the inputs, one can design networks which:

  1. use a repetition of the same subnetwork for each of the groups of inputs (i.e. with shared weights). This repeated subnetwork learns a representation of the (groups of) inputs.
  2. use an operation on the representation of the inputs which has the same symmetry as the inputs. For order invariant data, averaging the representations from the input networks is a possible choice.
  3. use an output network to minimize the loss function at the output based on the combination of the representations of the input.

The structure looks as follows:

Similar networks have been used to learn the relations between objects (arxiv:1702.05068).

A simple example of how to learn the sample variance of a variable-sized set of values is given here (disclaimer: I'm the author of the linked article).
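A hedged PyTorch sketch of this pattern (shared per-element encoder, averaging, output head); the layer sizes are arbitrary and not taken from the linked article:

import torch
import torch.nn as nn

class SetRegressor(nn.Module):
    # Permutation-invariant network: shared encoder per element, mean pooling, output head.
    def __init__(self, in_dim=1, hidden=32):
        super().__init__()
        # 1) the same subnetwork (shared weights) is applied to every element of the set
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        # 3) the output network acts on the pooled representation
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, x):              # x: (batch, set_size, in_dim); set_size may vary
        z = self.encoder(x)            # per-element representations
        z = z.mean(dim=1)              # 2) order-invariant pooling (averaging)
        return self.head(z)

model = SetRegressor()
print(model(torch.randn(4, 10, 1)).shape)   # torch.Size([4, 1])
print(model(torch.randn(4, 25, 1)).shape)   # same output shape for a different set size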

",18159,,,,,9/22/2018 6:58,,,,0,,,,CC BY-SA 4.0 8086,2,,7896,9/22/2018 9:32,,7,,"

Using the law of iterated expectations one has:

$\nabla_\theta \sum_{t=1}^T \mathbb{E}_{(s_t,a_t) \sim p(s_t,a_t)} [b(s_t)] = \nabla_\theta \sum_{t=1}^T \mathbb{E}_{s_t \sim p(s_t)} \left[ \mathbb{E}_{a_t \sim \pi_\theta(a_t | s_t)} \left[ b(s_t) \right]\right] =$

written with integrals and moving the gradient inside (linearity) you get

$= \sum_{t=1}^T \int_{s_t} p(s_t) \left(\int_{a_t} \nabla_\theta b(s_t) \pi_\theta(a_t | s_t) da_t \right)ds_t =$

you can now move $\nabla_\theta$ (due to linearity) and $b(s_t)$ (does not depend on $a_t$) from the inner integral to the outer one:

$= \sum_{t=1}^T \int_{s_t} p(s_t) b(s_t) \nabla_\theta \left(\int_{a_t} \pi_\theta(a_t | s_t) da_t \right)ds_t= $

$\pi_\theta(a_t | s_t)$ is a (conditional) probability density function, so integrating over all $a_t$ for a given fixed state $s_t$ equals $1$:

$= \sum_{t=1}^T \int_{s_t} p(s_t) b(s_t) \nabla_\theta 1 ds_t = $

Now $\nabla_\theta1 = 0$, which concludes the proof.

",18040,,2444,,6/10/2020 16:42,6/10/2020 16:42,,,,0,,,,CC BY-SA 4.0 8087,2,,3372,9/22/2018 10:23,,1,,"

Over the history of programming, the productivity of the programmer has increased. Early MS-DOS based games were programmed in Pascal and Assembly language directly for the CGA graphics card adapter. With the rise of the C/C++ language and standardized operating systems, it became possible to use software libraries. In the early MS-DOS period, a team of programmers was needed for a simple jump'n'run game, while nowadays a single programmer can create such a game with existing game engines in one weekend.

It is natural to extrapolate this development and imagine a much more advanced technique which supports automatic programming. Creating games in a point&click fashion is possible with so-called game construction sets. These are games which come with a level editor and a Lua scripting engine for modifying existing content. This is not directly related to Artificial Intelligence, but it goes in the same direction: the idea is to use computer programs to increase productivity. In the context of programming, such AI-related support is hard to implement, because programming and game design contain lots of domain-specific knowledge which has to be formalized. If an AI system is supposed to support the programming itself, the AI has to parse existing Stack Overflow threads and give hints to the programmer on how to implement things in their software. So-called “agent-based software engineering” has been researched since the 2000s.

The most advanced example in semi-autonomous game design is perhaps the RPG adventure generator described in Barros, Gabriella Alves Bulhoes, et al. ""Who Killed Albert Einstein? From Open Data to Murder Mystery Games."" IEEE Transactions on Games (2018). It can parse Wikipedia articles to generate a playable game from scratch.

",,user11571,,,,9/22/2018 10:23,,,,0,,,,CC BY-SA 4.0 8091,1,8092,,9/22/2018 18:18,,3,146,"

Depending on the source, I find people using different variations of the "squared error function". How come that be?

Here, it is defined as

$$ E_{\text {total }}=\sum \frac{1}{2}(\text {target}-\text {output})^{2} $$

OTOH, here, it's defined as

$$ \frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}\left(x^{(i)}\right)-y^{(i)}\right)^{2} $$

Notice that it is multiplied by $1/m$ (i.e. divided by $m$), as opposed to variation 1, where we multiply by $1/2$.

The stuff inside the $()^2$ is simply notation, I get that, but multiplying by $1/m$ versus $1/2$ will clearly give a different result. Which version is the "correct" one, or is there no such thing as a correct or "official" squared error function?

",11814,,2444,,1/2/2022 10:17,1/2/2022 10:17,"Why is the ""square error function"" sometimes defined with the constant 1/2 and sometimes with the constant 1/m?",,1,0,,,,CC BY-SA 4.0 8092,2,,8091,9/22/2018 18:55,,3,,"

The first variation is named ""$E_{total}$"". It contains a sum which is not very well-specified (has no index, no limits). Rewriting it using the notation of the second variation would lead to:

$$E_{total} = \sum_{i = 1}^m \frac{1}{2} \left( y^{(i)} - h_{\theta}(x^{(i)}) \right)^2,$$

where:

  • $x^{(i)}$ denotes the $i$th training example
  • $h_{\theta}(x^{(i)})$ denotes the model's output for that instance/example
  • $y^{(i)}$ denotes the ground truth / target / label for that instance
  • $m$ denotes the number of training examples

Because the term inside the large brackets is squared, the sign doesn't matter, so we can rewrite it (switch around the subtracted terms) to:

$$E_{total} = \sum_{i = 1}^m \frac{1}{2} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2.$$


Now it already looks quite a lot like your second variation.

The second variation does still have a $\frac{1}{m}$ term outside the sum. That is because your second variation computes the mean squared error over all the training examples, rather than the total error computed by the first variation.

Either error can be used for training. I'd personally lean towards using the mean error rather than the total error, mainly because the scale of the mean error is independent of the batch size $m$, whereas the scale of the total error is proportional to the batch size used for training. Either option is valid, but they'll likely require different hyperparameter values (especially for the learning rate), due to the difference in scale.


With that $\frac{1}{m}$ term explained, the only remaining difference is the $\frac{1}{2}$ term inside the sum (can also be pulled out of the sum), which is present in the first variation but not in the second. The reason for including that term is given in the page you linked to for the first variation:

The $\frac{1}{2}$ is included so that exponent is cancelled when we differentiate later on. The result is eventually multiplied by a learning rate anyway so it doesn’t matter that we introduce a constant here.
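To make that cancellation explicit, differentiating a single term of the first variation with respect to a parameter $\theta_j$ gives

$$\frac{\partial}{\partial \theta_j} \frac{1}{2}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^2 = \left(h_{\theta}(x^{(i)}) - y^{(i)}\right) \frac{\partial h_{\theta}(x^{(i)})}{\partial \theta_j},$$

whereas without the $\frac{1}{2}$ the same derivative would carry an extra factor of $2$, which would simply be absorbed into the learning rate.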

",1641,,,,,9/22/2018 18:55,,,,2,,,,CC BY-SA 4.0 8093,1,,,9/23/2018 1:14,,1,75,"

So I trained an AI to generate Shakespeare, which it did somewhat well. I used this 10,000 character sample.

Next I tried to get it to generate limericks using these 100,000 limericks. It generated garbage output.

When I limited it to 10,000 characters, it then started giving reasonable limerick output.

How could this happen? I thought more data was always better.

The AI was a neural network with some LSTM layers, implemented in keras.

",18006,,,,,9/25/2018 6:31,Why would giving my AI more data make it perform worse?,,1,1,,,,CC BY-SA 4.0 8094,1,8136,,9/23/2018 3:24,,5,1287,"

I'm learning logistic regression and $L_2$ regularization. The cost function looks like below.

$$J(w) = -\displaystyle\sum_{i=1}^{n} \left(y^{(i)}\log(\phi(z^{(i)}))+(1-y^{(i)})\log(1-\phi(z^{(i)}))\right)$$

And the regularization term is added. ($\lambda$ is a regularization strength)

$$J(w) = -\displaystyle\sum_{i=1}^{n} \left(y^{(i)}\log(\phi(z^{(i)}))+(1-y^{(i)})\log(1-\phi(z^{(i)}))\right) + \frac{\lambda}{2}\| w \|^2$$

Intuitively, I know that if $\lambda$ becomes bigger, extreme weights are penalized and weights become closer to zero. However, I'm having a hard time to prove this mathematically.

$$\Delta{w} = -\eta\nabla{J(w)}$$ $$\frac{\partial}{\partial{w_j}}J(w) = (-y+\phi(z))x_j + \lambda{w_j}$$ $$\Delta{w} = \eta(\displaystyle\sum_{i=1}^{n}(y^{(i)}-\phi(z^{(i)}))x^{(i)} - \lambda{w_j})$$

This doesn't show the reason why incrementing $\lambda$ makes weight become closer to zero. It is not intuitive.

",18427,,2444,,1/30/2021 17:34,1/30/2021 17:34,How does L2 regularization make weights smaller?,,1,0,,,,CC BY-SA 4.0 8097,1,8098,,9/23/2018 10:29,,1,692,"

I'm trying to implement an algorithm that would choose the optimal next move for the game of Connect 4. As I just want to make sure that the basic minimax works correctly, I am actually testing it like a Connect 3 on a 4x4 field. This way I don't need the alpha-beta pruning, and it's more obvious when the algorithm makes a stupid move.

The problem is that the algorithm always starts the game with the leftmost move, and also during the game it's just very stupid. It doesn't see the best moves.

I have thoroughly tested methods makeMove(), undoMove(), getAvailableColumns(), isWinningMove() and isLastSpot() so I am absolutely sure that the problem is not there.

Here is my algorithm.

NextMove.java

private static class NextMove {
    final int evaluation;
    final int moveIndex;

    public NextMove(int eval, int moveIndex) {
        this.evaluation = eval;
        this.moveIndex = moveIndex;
    }

    int getEvaluation() {
        return evaluation;
    }

    public int getMoveIndex() {
        return moveIndex;
    }
}

The Algorithm

private static NextMove max(C4Field field, int movePlayed) {
    // moveIndex previously validated
    
    // 1) check if moveIndex is a final move to make on a given field
    field.undoMove(movePlayed);
    
    // check
    if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
        field.playMove(movePlayed, C4Symbol.RED);
        return new NextMove(BLUE_WIN, movePlayed);
    }
    if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
        field.playMove(movePlayed, C4Symbol.RED);
        return new NextMove(RED_WIN, movePlayed);
    }
    if (field.isLastSpot()) {
        field.playMove(movePlayed, C4Symbol.RED);
        return new NextMove(DRAW, movePlayed);
    }
    
    field.playMove(movePlayed, C4Symbol.RED);
    
    // 2) moveIndex is not a final move
    // --> try all possible next moves
    final List<Integer> possibleMoves = field.getAvailableColumns();
    int bestEval = Integer.MIN_VALUE;
    int bestMove = 0;
    for (int moveIndex : possibleMoves) {           
        field.playMove(moveIndex, C4Symbol.BLUE);
        
        final int currentEval = min(field, moveIndex).getEvaluation();
        if (currentEval > bestEval) {
            bestEval = currentEval;
            bestMove = moveIndex;
        }

        field.undoMove(moveIndex);
    }
    
    return new NextMove(bestEval, bestMove);
}

private static NextMove min(C4Field field, int movePlayed) {
    // moveIndex previously validated
    
    // 1) check if moveIndex is a final move to make on a given field
    field.undoMove(movePlayed);
    
    // check
    if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
        field.playMove(movePlayed, C4Symbol.BLUE);
        return new NextMove(BLUE_WIN, movePlayed);
    }
    if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
        field.playMove(movePlayed, C4Symbol.BLUE);
        return new NextMove(RED_WIN, movePlayed);
    }
    if (field.isLastSpot()) {
        field.playMove(movePlayed, C4Symbol.BLUE);
        return new NextMove(DRAW, movePlayed);
    }
    
    field.playMove(movePlayed, C4Symbol.BLUE);
    
    // 2) moveIndex is not a final move
    // --> try all other moves
    final List<Integer> possibleMoves = field.getAvailableColumns();
    int bestEval = Integer.MAX_VALUE;
    int bestMove = 0;
    for (int moveIndex : possibleMoves) {
        field.playMove(moveIndex, C4Symbol.RED);
        
        final int currentEval = max(field, moveIndex).getEvaluation();
        if (currentEval < bestEval) {
            bestEval = currentEval;
            bestMove = moveIndex;
        }
        
        field.undoMove(moveIndex);
    }
    
    return new NextMove(bestEval, bestMove);
}

The idea is that the algorithm takes in the arguments of a currentField and the lastPlayedMove. Then it checks if the last move somehow finished the game. If it did, I just return that move, and otherwise I go in-depth with the subsequent moves.

Blue player is MAX, red player is MIN.

In each step I first undo the last move, because it's easier to check if the "next" move will finish the game than to check if the current field is finished (this would require analyzing all possible winning options in the field). After I check, I just redo the move.

For some reason this doesn't work. I have been stuck on this for days! I have no idea what's wrong... Any help greatly appreciated!

EDIT

I'm adding the code how I'm invoking the algorithm.

@Override
public int nextMove(C4Game game) {
    C4Field field = game.getCurrentField();
    C4Field tmp = C4Field.copyField(field);

    int moveIndex = tmp.getAvailableColumns().get(0);
    final C4Symbol symbol = game.getPlayerToMove().getSymbol().equals(C4Symbol.BLUE) ? C4Symbol.RED : C4Symbol.BLUE;
    tmp.dropToColumn(moveIndex, symbol);

    NextMove mv = symbol
            .equals(C4Symbol.BLUE) ? 
                    max(tmp, moveIndex) : 
                        min(tmp, moveIndex);

                    int move = mv.getMoveIndex();
                    return move;
}
",18470,,-1,,6/17/2020 9:57,9/23/2018 12:55,Connect 4 minimax does not make the best move,,1,0,,,,CC BY-SA 4.0 8098,2,,8097,9/23/2018 10:37,,3,,"

I suspect that you'll have to remove this code:

    if (field.isWinningMove(movePlayed, C4Symbol.BLUE)) {
        field.playMove(movePlayed, C4Symbol.RED);
        return new NextMove(BLUE_WIN, movePlayed);
    }

from the max() method, and remove this code:

    if (field.isWinningMove(movePlayed, C4Symbol.RED)) {
        field.playMove(movePlayed, C4Symbol.BLUE);
        return new NextMove(RED_WIN, movePlayed);
    }

from the min() method.


In the first case, you're checking whether the move that RED just made was a winning move. You don't want to check there whether it was a winning move for BLUE, because it wasn't BLUE who just made that move; it was RED. The same holds the other way around in the second case.


Additionally, the initial call into the algorithm seems overly complicated. I am not sure what the intended use of the tmp variable there is, or that dropToColumn() call. I would rewrite it to be more like:

@Override
public int nextMove(C4Game game) {
    C4Field field = game.getCurrentField();

    NextMove mv = null;

    if(game.getPlayerToMove().getSymbol().equals(C4Symbol.BLUE)){
        mv = max(field, -1);
    }
    else{
        mv = min(field, -1);
    }

    return mv.getMoveIndex();
}

This will require an adaptation of the max() and min() methods such that they skip the whole checking-for-wins thing if the previous movePlayed equals -1.

With the code you currently have there, you do not perform a minimax search for the optimal move in the current game state; instead you first arbitrarily modify the current game state using that tmp.dropToColumn() call, and perform the minimax search in that arbitrarily modified game state. The optimal move to play in such an arbitrarily-modified game state will tend not to be the optimal move in the game state that you really are in.

",1641,,1641,,9/23/2018 12:55,9/23/2018 12:55,,,,8,,,,CC BY-SA 4.0 8099,1,8101,,9/23/2018 13:54,,1,127,"

From what I understand, the value function estimates how 'good' it is for an agent to be in a state, and a policy is a mapping from states to actions.

If I have understood these concepts correctly, why does the value of a state change with the policy with which an agent gets there?

I guess I'm having difficulty grasping the concept that the goodness of a state changes depending on how an agent got there (different policies may have different ways, and hence different values, for getting to a particular state).

If there can be a concrete example (perhaps on a grid world or on a chessboard), that might make it clear why that might be the case.

",18468,,2444,,11/20/2020 1:50,11/20/2020 1:50,Why does the value of state change depending on the policy used to get to that state?,,1,0,,,,CC BY-SA 4.0 8100,1,,,9/23/2018 15:20,,2,1383,"

I want to understand what the gamma parameter does in an SVM. According to this page.

Intuitively, the gamma parameter defines how far the influence of a single training example reaches, with low values meaning ‘far’ and high values meaning ‘close’. The gamma parameters can be seen as the inverse of the radius of influence of samples selected by the model as support vectors.

I don't understand this part: "of a single training example reaches". Does it refer to the training dataset?

",18464,,2444,,9/12/2020 14:20,9/12/2020 14:20,"What is the purpose of the ""gamma"" parameter in SVMs?",,2,0,,,,CC BY-SA 4.0 8101,2,,8099,9/23/2018 15:32,,2,,"

I guess I'm having difficulty grasping the concept that the goodness of a state changes depending on how an agent got there

It doesn't.

The value of a state changes depending on what the agent will do next. That is where the dependency on the policy comes in, not in past behaviour, but expectations of future behaviour. The future behaviour depends on the state transitions and rewards presented by the environment, plus it depends on the distribution of actions chosen by the policy.

More formally, the value function of a state is not just a relative and arbitrary scoring system, but equals the expected (discounted) sum of rewards, assuming the MDP follows the given dynamics, including action selection:

$$v_{\pi}(s) = \mathbb{E}_{A \sim \pi}[\sum_{k=0}^\infty \gamma^k R_{t+k+1} | S_t = s]$$

Without identifying a policy, it is not possible to assess a value function. In value-based control methods, the policy to evaluate can be implied, somewhat self-referentially, as the policy that acts greedily (or maybe $\epsilon$-greedily) according to the current estimates of the value function.

If there can be a concrete example (perhaps on a GridWorld or on a chess board), that might make it clear why that might be the case

A very simple deterministic MDP with a start state and two terminal states illustrates this:

Start in state B. Taking the left action is followed by a transition (with $p=1$) to terminal state A, and a reward of $0$. Taking the right action is followed by a transition (with $p=1$) to terminal state C, and a reward of $3$.

What is the value of state B? It depends on what the policy chooses. A deterministic left policy $\pi_1$ has $v_{\pi_1}(B) = 0$, a random policy $\pi_2$ choosing left and right with $p=0.5$ has $v_{\pi_2}(B) = 1.5$. The optimal policy chooses action right always and has $v_{\pi_3}(B) = v^*(B) = 3.0$
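A tiny sketch that simply evaluates the expectation above for this toy MDP (both actions end the episode immediately, with rewards 0 and 3):

def value_of_B(p_left):
    reward_left, reward_right = 0.0, 3.0      # both actions lead to a terminal state
    return p_left * reward_left + (1 - p_left) * reward_right

print(value_of_B(1.0))   # deterministic left policy  -> 0.0
print(value_of_B(0.5))   # uniform random policy      -> 1.5
print(value_of_B(0.0))   # optimal right policy       -> 3.0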

",1847,,1847,,9/23/2018 16:20,9/23/2018 16:20,,,,5,,,,CC BY-SA 4.0 8103,2,,8100,9/23/2018 16:36,,1,,"

Roughly speaking, the higher the gamma, the more complex the model, and the higher the risk of overfitting.

In fact, as you can read on the page you linked:

If gamma is too large, the radius of the area of influence of the support vectors only includes the support vector itself[...]

When gamma is very small, the model is too constrained and cannot capture the complexity or “shape” of the data. The region of influence of any selected support vector would include the whole training set.

",18035,,,,,9/23/2018 16:36,,,,2,,,,CC BY-SA 4.0 8105,1,,,9/23/2018 18:06,,3,62,"

I am trying to think of some marketing-related classification challenges that a feed-forward neural network would be suited for.

Any ideas?

",18453,,2444,,8/22/2019 21:20,8/22/2019 21:20,Which marketing-related classification challenges is a feed forward neural network suited to solve?,,1,0,,,,CC BY-SA 4.0 8107,2,,8093,9/23/2018 19:32,,2,,"

Why would giving my AI more data make it perform worse?

A lot of possible reasons:

  1. In forecasting, you could have seasonality. If your data covers exactly 3 full seasonal cycles, that is fine; if it covers 3.5 cycles, performance can get worse because the model overfits the months which occur more often than the others.
  2. Data quality: In practice, improving the quality of data often yields better results (see Analysis and Optimization of Convolutional Neural Network Architectures, page 15 (Analysis Techniques) for 7 approaches to improve your results)
  3. Unstable training / bad luck: some architectures / problems are super unstable. You can execute the randomly initialized training 5 times and get vastly different results.
",3217,,3217,,9/25/2018 6:31,9/25/2018 6:31,,,,0,,,,CC BY-SA 4.0 8108,2,,8083,9/23/2018 19:39,,2,,"

See Analysis and Optimization of Convolutional Neural Network Architectures (my master's thesis).

  1. Is there a way that helps me to architect my model fundamentally before training?

Yes, but the architecture learning approaches are computationally intensive. When I wrote my master's thesis, running the Google experiment cost roughly 250,000 USD. Meanwhile, there seem to be more efficient methods, e.g. https://autokeras.com/

See Chapter 3.

  2. Once I train the model, how do I know which of these variables (number of filters, filter size, number of convolutional layers, FC layers) should be changed - increased or decreased?

See Chapter 2.5 for some approaches. But there is no silver bullet / no clear answer to this question.

",3217,,2444,,5/19/2020 20:12,5/19/2020 20:12,,,,0,,,,CC BY-SA 4.0 8109,1,8110,,9/24/2018 4:06,,0,78,"

I have a course named ""Evolutionary Algorithms"", but our teacher is always mentioning the word ""optimization"" in his lectures.

I am confused. Is he actually teaching optimization? If yes, why is the name of the course not ""Optimization""?

What is the difference between the study of evolutionary algorithms and optimization?

",,user3642,2444,,6/2/2020 16:32,9/25/2020 17:50,What is the difference between the study of evolutionary algorithms and optimization?,,1,0,,,,CC BY-SA 4.0 8110,2,,8109,9/24/2018 5:18,,2,,"

Optimization problems are formally defined by two things:

  1. Optimization objective: $$\min_{x \in \mathbb{R}} f(x)$$
  2. List of constraints: $\text{s.t. } x > 0$; $\text{s.t. } x^2 < 100$, ...

Optimization theory as a field deals with variations of such problems. The algorithms that are part of optimization include gradient descent and variants, the simplex algorithm, simulated annealing and many more. I would also include evolutionary algorithms as one of the algorithms in this field.

I am confused. Is he actually teaching optimization? If yes, why is the name of the course not "Optimization"?

Evolutionary algorithms are a subfield of optimization. The name of the course is evolutionary algorithms as you likely don't deal with the other aspects of optimization theory (e.g. the algorithms mentioned above).
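To make the relationship concrete, here is a minimal sketch of an evolutionary algorithm (a bare-bones (1+1) strategy) applied to a toy optimization problem; the objective and the constraint are arbitrary examples:

import random

def f(x):                                  # objective: minimize (x - 3)^2 subject to x > 0
    return (x - 3.0) ** 2

x = 10.0                                   # initial feasible solution
for _ in range(1000):
    child = x + random.gauss(0.0, 0.5)     # mutation
    if child > 0 and f(child) < f(x):      # selection: keep the better feasible solution
        x = child

print(x)                                   # ends up close to the optimum x = 3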

",3217,,2444,,9/25/2020 17:50,9/25/2020 17:50,,,,0,,,,CC BY-SA 4.0 8111,2,,8058,9/24/2018 5:39,,0,,"

In principle, you can use a fully connected neural network with reshaping for this kind of problem. The questions you should ask yourself are:

  1. Baselines: What are the simplest algorithms to approach the problem? How good would a human be?
  2. What do I know? Are there any properties of the 8x8 matrix that will always be true? For example, it seems as if the values from left to right strictly increase. Same for top to bottom. This can be used!
  3. Are the outputs independent? (e.g. having the (1,1) entry of the matrix, do I know something about the $(i, j)$ entry of it?)

Then, of course, there are more specific things I could imagine. If you were not clear about (2), you might have (wrongly) used softmax/tanh/sigmoid in the last layer. You might simply have too little training data for neural networks. Your neural network implementation might be broken.

",3217,,,,,9/24/2018 5:39,,,,0,,,,CC BY-SA 4.0 8112,2,,8100,9/24/2018 6:00,,2,,"

I've summarized the key ideas of SVMs. So this is how $\gamma$ is used with a gaussian Kernel:

$$K_{\text{Gauss}}(\mathbf{x}_i, \mathbf{x}_j) = e^{-\gamma\|\mathbf{x}_i - \mathbf{x}_j\|^2}, \qquad \gamma = \frac{1}{2 \sigma^2}$$

The closer $\gamma$ is to 0, the further the influence of each support vector reaches and the smoother/more ""linear"" the decision boundary will be; the bigger the $\gamma$, the more local the influence of each support vector and the more non-linear the boundary becomes (see interactive example)

You can also find it on udacity: SVM Gamma Parameter

In practice, you can use a grid search or random search to get good values.
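A minimal scikit-learn sketch of such a grid search (the toy dataset and the parameter grid are arbitrary examples, not recommendations):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {'gamma': [1e-3, 1e-2, 1e-1, 1, 10],
              'C': [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)     # the gamma / C combination with the best cross-validation score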

",3217,,,,,9/24/2018 6:00,,,,0,,,,CC BY-SA 4.0 8113,2,,7776,9/24/2018 6:21,,3,,"

Can one use an Artificial Neural Network to determine the size of an object in a photograph?

Yes: Learning Depth from Single Monocular Images

In the end, depth is just one special form of size.

Of course, you need something partially known, e.g. another car. You don't need to know the exact size of the car, but you know roughly what size cars in general are. If you have an image without any such reference, it is impossible.

",3217,,,,,9/24/2018 6:21,,,,4,,,,CC BY-SA 4.0 8114,2,,7695,9/24/2018 6:41,,0,,"

There are many different problems in computer vision. Four of them are well described by the top image here:

  • Classification: Given an image, say what is on it (a single thing)
  • Classification+Localization: Given an image, say what is on it and draw an axis-aligned bounding box (AABB) around it
  • Object detection: Given an image, draw AABBs around every object and classify those objects
  • Semantic segmentation: See Survey of semantic segmentation
  • Instance segmentation: Like semantic segmentation, but if there are multiple cats then they should be recognized as different objects.

Your question seems to be about object detection. The relevant papers here are:

If you actually already have the regions, then you can simply perform classification on them. When you pad / scale / crop them, you can batch-predict them.

",3217,,,,,9/24/2018 6:41,,,,0,,,,CC BY-SA 4.0 8117,1,8123,,9/24/2018 9:56,,3,121,"

Let's assume I want to teach a CNN some physics. Starting with a U-Net, I input images A and B as separate channels. I know that my target (produced by a very slow Monte-Carlo code) represents a signal such as f(g(A) * h(B)), where f, g and h are fairly ""convolutional"" operations -- meaning, involving mostly blurring and rescaling operations.

I feel safe to state that this problem would not be too difficult for the case of f(g(A) + h(B)) -- but what about f(g(A) * h(B))? Can I expect a basic CNN such as the U-Net to be able to represent the * (multiplication) operation?

Or should I expect to be forced to include a Multiply layer in my network, somewhere where I expect that the part before can learn the g and h parts, and the part after can learn the f part?

",18486,,,,,9/24/2018 20:22,"Can a basic CNN (Conv2D, MaxPooling2D, UpSampling2D) find a good approximation of a product of its input channels?",,1,0,,,,CC BY-SA 4.0 8121,1,,,9/24/2018 14:52,,3,99,"

I have trouble finding material (blog, papers) about this issue, so I'm posting here.

Taking a recent well-known example: Musk has tweeted and warned about the potential dangers of AI, saying it is ""potentially more dangerous than nukes"", referring to the issue of creating a superintelligence whose goals are not aligned with ours. This is often illustrated with the paperclip maximiser thought experiment. Let's call this first concern ""AI alignment"".

By contrast, in a recent podcast, his concerns seemed more related to getting politicians and decision makers to acknowledge and cooperate on the issue, to avoid potentially dangerous scenarios like an AI arms race. In a paper co-authored by Nick Bostrom: Racing to the Precipice: a Model of Artificial Intelligence Development, the authors argue that developing AGI in a competitive situation incentivises us to skim on safety precautions, so it is dangerous. Let's call this second concern ""AI governance"".

My question is about the relative importance between these two issues: AI alignment and AI governance.

It seems that most institutions trying to prevent such risks (MIRI, FHI, FLI, OpenAI, DeepMind and others) just state their mission without trying to argue about why one approach should be more pressing than the other.

How should one assess the relative importance of those two issues? And can you point me to any literature about this?

",1741,,4302,,9/25/2018 0:24,9/25/2018 6:59,Should we focus more on societal or technical issues with AI risk,,1,1,,,,CC BY-SA 4.0 8123,2,,8117,9/24/2018 20:22,,2,,"

I think U-Net is already quite a complex network that (in my experience) should probably be able to approximate this multiplication. However, this would still be an approximation that might not be accurate, and that maybe only behaves like a multiplication for the range of input samples defined by your dataset (therefore potentially overfitting on your training dataset).

So, in general, if you know that your target function does have this multiplication, then you should definitely enforce it explicitly in your network. If you know this much about the wanted function, it is always better to build a well-fitting architecture than to use a generic neural network architecture. That will ease the optimization and should generalize much better.
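As a sketch of what enforcing the multiplication explicitly could look like (using Keras here; the filter counts and kernel sizes are placeholders, not a tuned architecture):

from tensorflow.keras import layers, models

inp_a = layers.Input(shape=(64, 64, 1))
inp_b = layers.Input(shape=(64, 64, 1))

def branch(x):                                    # plays the role of g(.) or h(.)
    x = layers.Conv2D(16, 3, padding='same', activation='relu')(x)
    return layers.Conv2D(16, 3, padding='same', activation='relu')(x)

g_a = branch(inp_a)                               # g(A)
h_b = branch(inp_b)                               # h(B), with weights separate from g

prod = layers.Multiply()([g_a, h_b])              # the explicit g(A) * h(B)

f = layers.Conv2D(16, 3, padding='same', activation='relu')(prod)
out = layers.Conv2D(1, 1, padding='same')(f)      # f(g(A) * h(B))

model = models.Model([inp_a, inp_b], out)
model.summary()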

However, it's hard or impossible to tell you for sure how much depth or complexity you need to solve this task within your wanted accuracy. Eventually you should just try it out.

",13104,,,,,9/24/2018 20:22,,,,0,,,,CC BY-SA 4.0 8124,2,,7776,9/24/2018 21:06,,3,,"

In my thesis I actually solve the problem of depth estimation with a CNN based on a single monocular image so I can share my experiences for understanding that problem.

As you already stated, in general you have the problem that you cannot recover the scale of the scene in an image directly by geometrical approaches. This remains true even if you know the properties of your camera and lens, like the focal length, but don't know any absolute sizes in the scene. However, a neural network is still able to solve the task of depth estimation based on monocular images (at least for fixed camera properties), thanks to the known object sizes it learned through training on the dataset. That means it can use the learned sizes of specific objects, and the relative depth relations, to give a fairly good approximation of the depth in the scene.

However, in your special case this approach would not work, if I understand you correctly. If you just take a photo of a stone that can have an arbitrary size, and no depth cues or unique depth-related patterns are present in the image, there is no chance to ever estimate the absolute depth. A CNN would probably just learn some average depth values or recurring depth patterns of the dataset you used, or memorize the whole training set to minimize the training error, since it simply cannot solve this task. So you would not get a tool that generalizes to new scenes. A neural network is still just a function approximator, and not something magical that can solve the unsolvable.

For your use case there could be some (complex) solutions that could give you a more or less accurate depth estimation. For example, you could use a structure-from-motion approach where you somehow measure the absolute camera movement with the accelerometer of the phone. Best would be a stereo-camera based setup where you know the absolute displacement between the camera positions, which could solve this task if you have textures in your images. With that, you could find the absolute depth of specific points through classical stereo depth estimation or by using a CNN that estimates the depth from the stereo image pair. Another approach would be to let the user input the phone height above the ground, or approximate it through the accelerometer of the smartphone, and then approximate the stone size based on its size in the image and the known absolute height above the ground (probably inaccurate).

",13104,,13104,,9/25/2018 7:51,9/25/2018 7:51,,,,0,,,,CC BY-SA 4.0 8126,1,8134,,9/24/2018 21:35,,1,72,"

Within a piece of text, I'm trying to detect who did what to whom.

For instance, in the following sentences:

CV hit IV. CV was hit by IV.

I'd like to know who hit whom.

I can't remember what this technique is called.

",18501,,2444,,4/8/2022 10:18,4/8/2022 10:18,"What is the name of the NLP technique that determines ""who did what to whom"" given a sentence?",,1,0,,,,CC BY-SA 4.0 8128,1,,,9/25/2018 1:08,,1,959,"

I have difficulty understanding the following paragraph in the below excerpts from page 4 to page 5 from the paper Dueling Network Architectures for Deep Reinforcement Learning.

The author said "we can force the advantage function estimator to have zero advantage at the chosen action."

For the equation $(8)$ below, is it correct that $A - \max A$ is at most zero?

... lack of identifiability is mirrored by poor practical performance when this equation is used directly.

To address this issue of identifiability, we can force the advantage function estimator to have zero advantage at the chosen action. That is, we let the last module of the network implement the forward mapping

$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in | \mathcal{A} |} A(s, a'; \theta, \alpha) \right). \tag{8}$$

Now, for $a^∗ = \text{arg max}_{a' \in \mathcal{A}} Q(s, a'; \theta, \alpha, \beta) = \text{arg max}_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha)$, we obtain $Q(s, a^∗; \theta, \alpha, \beta) = V (s; \theta, \beta)$. Hence, the stream $V(s; \theta, \beta)$ provides an estimate of the value function, while the other stream produces an estimate of the advantage function.

I would like to request further explanation of Equation 9, in particular the sentence the author wrote after it (quoted below).

An alternative module replaces the max operator with an average:

$$Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \left( A(s, a; \theta, \alpha) − \frac {1} {|A|} \sum_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{9}$$

On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant, but on the other hand it increases the stability of the optimization: with (9) the advantages only need to change as fast as the mean, instead of having to compensate any change to the optimal action’s advantage in (8).

In the paper, to address the identifiability issue, there are two equations used. My understanding is both equations are trying to fix the advantage part - the last module.

For equation $(8)$, are we trying to make $V(s) = Q^*(s)$, as the last module is zero?

For equation $(9)$, the resulting $V(s)$ = true $V(s)$ + mean$(A)$? As the author said "On the one hand this loses the original semantics of $V$ and $A$ because they are now off-target by a constant". And the constant refers to mean$(A)$? Is my understanding correct?

",18504,,2444,,1/25/2023 22:08,1/25/2023 22:08,Questions on the identifiability issue and equations 8 and 9 in the D3QN paper,,2,1,,,,CC BY-SA 4.0 8129,2,,8128,9/25/2018 1:55,,0,,"

I believe that is explained on the prior page:

"Intuitively, the value function $V$ measures the how good it is to be in a particular state $s$. The $Q$ function, however, measures the the value of choosing a particular action when in this state. The advantage function subtracts the value of the state from the $Q$ function to obtain a relative measure of the importance of each action."

Then two paragraphs above were you started your quote:

"However, we need to keep in mind that $Q(s, a; \theta, \alpha, \beta)$ is only a parameterized estimate of the true $Q$-function. Moreover, it would be wrong to conclude that $V (s; \theta, \beta)$ is a good estimator of the state-value function, or likewise that $A(s, a; \theta, \alpha)$ provides a reasonable estimate of the advantage function.

Equation (7) is unidentifiable in the sense that given $Q$ we cannot recover $V$ and $A$ uniquely. To see this, add a constant to $V (s; \theta, \beta)$ and subtract the same constant from $A(s, a; \theta, \alpha)$. This constant cancels out resulting in the same $Q$ value. This lack of identifiability is mirrored by poor practical performance when this equation is used directly."

Another way of looking at it would be:

  • You receive answers to your question

  • Answers receive votes

  • Answerers have reputation

In a perfect world people could vote based on reputation, with a weighting based upon the correctness of the answer.

You could simply look at which answer received the most votes and choose it as correct.

In the real world things don't work that way: things are correct or incorrect whether they are measured or not (think quantum mechanics), and measurement doesn't always reveal the true answer.

See: Parameter Estimation.

The estimate of the advantage is only so good; sometimes it's useful to consider it, and in other instances it's useful to reject it - intelligently doing both maximizes its usefulness.

",17742,,-1,,6/17/2020 9:57,9/26/2018 10:58,,,,1,,,,CC BY-SA 4.0 8130,2,,8121,9/25/2018 5:04,,1,,"

Neither AI alignment nor AI governance is important yet. We are so far away from AGI that we don't even know what is missing.

We don't set up safety instructions for interstellar travel, so why should we do it for AGI? I can also come up with a lot of dangers of that...

There are real dangers of AI, though. Including societal issues:

  1. Blind trust: Trusting a machine, although it is not perfect. Just because it is good in most cases and seems to be rational/objective. Example: Machine Bias
  2. Unemployment: AI has the potential to replace a lot of low-skill jobs with very few super high-skilled jobs. For example, self-driving cars (< 100 people for one manufacturer, I guess) could replace all jobs in transportation (1,076,200 of 31,373,700 jobs in Germany (source) - that is 3.4%!). See the CGP Grey video.
  3. Weapon Systems: The threat of lethal autonomous weapons is real. There is the danger of efficient slaughterbots and the danger of errors - similar to the Wech Baghtu wedding party airstrike or that one

There are a couple of other problems, but I think they are less severe:

  • Better fakes: lyrebird.ai gives a good demo
  • Spam: I think it's easier to improve the spam filters, but humans might have a harder time
  • Data is dominance: Companies like Amazon will have a dominant position on the market, as they were the first to acquire important customer information
  • Bubbles / false hopes: AI (or let's better say Machine learning) is often seen as a golden hammer. It is not.
",3217,,3217,,9/25/2018 6:59,9/25/2018 6:59,,,,1,,,,CC BY-SA 4.0 8134,2,,8126,9/25/2018 9:03,,3,,"

You might be referring to Semantic role labeling. SRL is the task of assigning labels to words or phrases in a sentence that shows their semantic role in that sentence.

In your example ""CV hit IV"", the task is to identify the verb ""hit"", carried out by the actor ""CV"", with ""IV"" as the affected recipient.

Note: If you're only interested in the syntactic relationship among words or phrases in a sentence, not the semantic relationship between them, simple dependency parsing would do the job.

",7449,,,,,9/25/2018 9:03,,,,0,,,,CC BY-SA 4.0 8136,2,,8094,9/25/2018 15:13,,2,,"

Here is my take.

The larger the $\lambda$, the more the corresponding regularization term for a coefficient will be big, so when minimizing the cost function, the coefficient will be reduced by a bigger factor, you can see this effect in the derivation of the update rule for gradient descent for example: \begin{align*} \theta_j := \theta_j - \alpha\ \left[ \left( \frac{1}{m}\ \sum_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)} \right) + \frac{\lambda}{m}\theta_j \right] &\ \ \ \ \ \ \ \ \ \ j \in \lbrace 1,2...n\rbrace\newline & \end{align*}

\begin{align*} \theta_j := \theta_j \left(1- \alpha \frac{\lambda}{m}\right) - \left( \frac{\alpha}{m}\ \sum_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)} \right) &\ \ \ \ \ \ \ \ \ \ j \in \lbrace 1,2...n\rbrace\newline & \end{align*}

From this derivation, it is clear that at every update the coefficients get multiplied by a factor $(1 - \alpha\frac{\lambda}{m})$ that is usually a little less than 1, and whose distance from 1 is directly proportional to $\lambda$. So, as $\lambda$ gets bigger, the weights get shrunk more and more; eventually, for very big values of $\lambda$, we risk totally underfitting the data, since the regularization term dominates the cost function and all the weights are driven towards zero.
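A tiny numerical sketch of that shrink factor (the values are arbitrary, and the data-fit term is ignored to isolate the regularization effect):

alpha, m, steps = 0.1, 100, 500
for lam in [0.0, 1.0, 10.0, 100.0]:
    w = 1.0
    for _ in range(steps):
        w *= (1 - alpha * lam / m)   # repeated application of the shrink factor
    print(lam, w)                    # the larger lambda, the closer w is driven towards zero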

This is for linear regression, but it is essentially the same logic also for logistic regression.

This is taken from Andrew Ng's course on Coursera. A more mathematical precise (and complex) traction of the problem can be found in the Bloomberg machine learning course material.

PS: in the derivation of the update rule for gradient descent, $\lambda$ is divided by the number of training examples $m$. This is important when choosing the right $\lambda$, because the amount of shrinkage scales with $\lambda/m$; if $m$ is ignored, the chosen $\lambda$ may not decrease the coefficients as intended.

",18516,,2444,,1/30/2021 17:31,1/30/2021 17:31,,,,0,,,,CC BY-SA 4.0 8137,2,,1479,9/25/2018 16:56,,3,,"

Do scientists know what is happening inside artificial neural networks?

YES

Do scientists or research experts know from the kitchen what is happening inside complex ""deep"" neural network with at least millions of connections firing at an instant?

I guess ""to know from the kitchen"" means ""to know in detail""?

Let me give you a series of analogies:

  1. Does an airplane engineer know from the kitchen what happens inside the airplane?
  2. Does a chip designer know in detail what happens in the chip (s)he designed?
  3. Does a civil engineer know everything about the house he constructed?

The devil is in the detail, but a crucial point here is that it's about artificial structures. They don't randomly appear. You need a lot of knowledge to get anything useful. For Neural Networks, I would say it took roughly 40 years from the publication of the key idea (Rosenblatt perceptron, 1957) to the first application (US Postal Service, 1989). And from there again 13 years of active research to really impressive systems (ImageNet 2012).

What we know super well is how the training works. Because it needs to be implemented. So on a very small structure, we know it in detail.

Think of computers. The chip designers know very well how their chip works. But they will likely only have a very rough idea how the Linux operating system works.

Another example is physics and chemistry: physics describes the core forces of the universe. Does that mean physicists know everything about chemistry as well? Hell no! A ""perfect"" physicist could explain everything in chemistry ... but it would be pretty much useless. He would need a lot more information and would not be able to skip the irrelevant parts, simply because he ""zoomed in"" too much and considers details which are in practice neither interesting nor important. Please note that the knowledge of the physicist is not wrong. Maybe one could even deduce the chemist's knowledge from it. But the ""high-level"" understanding of molecule interactions is missing.

The key insight from those two examples are abstraction layers: You can build complexity from simple structures.

What else?

We know well what is in principle achievable with the neural networks we design:

  • A neural network designed to play Go - no matter how sophisticated - will never even be able to play chess. You can, of course, add another abstraction layer around it and combine things. But this approach needs humans.
  • A neural network designed for distinguishing dogs from cats which has only seen poodles and Persian cats will likely perform really badly when it has to decide on Yorkshire Terriers.

Oh, and of course we have analytical approaches for neural networks. I wrote my master's thesis about Analysis and Optimization of Convolutional Neural Network Architectures. In this context, LIME (Local Interpretable Model-Agnostic Explanations) is nice.

",3217,,,,,9/25/2018 16:56,,,,3,,,,CC BY-SA 4.0 8144,2,,8128,9/26/2018 10:47,,1,,"

Yes, you're correct, if Equation 8 is used it will only be possible to get estimates $\leq 0$ out of the term

$$\left( A(s, a; \theta, \alpha) - \max_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha) \right).$$

This matches the meaning that we intuitively assign to the $Q(s, a)$, $V(s)$, and $A(s, a)$ estimators (I'm leaving the parameters $\theta$, $\alpha$, and $\beta$ out of those parentheses for the sake of notational brevity). Intuitively, we want:

  • $Q(s, a)$ to estimate the value of being in state $s$ and executing action $a$ for the policy that we are learning about.
  • $V(s)$ to estimate the value of being in state $s$ for the policy that we are learning about.
  • $A(s, a)$ to estimate the advantage of executing action $a$ in state $s$ for the policy that we are learning about.

In the above three points, ""the policy that we are learning about"" is the greedy policy, the ""optimal"" policy given what we have learned so far (ideally this would be truly the optimal policy after a long period of training).

In the last point of the three points above, advantage can intuitively be understood as the gain in estimated value if we choose action $a$ over whatever the expected value would be if we were following the policy that we are learning about.

Since we are trying to learn about the greedy policy, we'll ideally (according to our intuition) want the maximum advantage $A(s, a)$ to be equal to $0$; intuitively, the best action is precisely the one we want to execute in our greedy policy, so that best action should not have any relative ""advantage"". Similarly, all non-optimal actions should have a negative advantage, because they are estimated to be worse than what we estimate to be the optimal action(s).

This intuition is mathematically enforced by using Equation 8 from the paper for training:

$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha) \right).$$

We can consider two cases to explain what this is doing:

  1. Suppose that action $a$ is the best action we could have selected in state $s$ according to our current estimates, i.e. $a = \arg \max_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha)$. Then, the two terms in the large brackets are equal to each other, so the subtraction yields $0$, and the state-action value estimate $Q(s, a)$ equals the state value estimate $V(s)$. This is exactly what we want because we are trying to learn about the greedy policy.

  2. Suppose that action $a$ is worse than the best action we could have selected in state $s$ according to our current estimates, i.e. $A(s, a; \theta, \alpha) < \max_{a' \in \vert \mathcal{A} \vert} A(s, a'; \theta, \alpha)$. Clearly, I've just stated here that the first term in our subtraction is less than the second term in our subtraction... so the subtraction yields a negative number. This means that the state-action value estimate $Q(s, a)$ becomes less than the estimated state value $V(s)$. This is also what we want intuitively, because we started with the assumption that action $a$ was a suboptimal action. Clearly, if we assume that the action $a$ is suboptimal, that should lead to a reduction in the estimated value.


Note that afterwards, when they start explaining Equation 9, they actually intentionally deviate from these standard, intuitive understandings that we have of what the three estimators should represent.


Concerning the additional question about Equation 9:

A major problem in the stability of training processes for Deep Reinforcement Learning algorithms (such as these DQN-based algorithms) is that the update targets contain components that are predictions made by the NN that is being trained. For example, the Dueling DQN architecture in this paper generates $V(s)$ and $A(s, a)$ predictions, which are combined into $Q(s, a)$ predictions, and those $Q(s, a)$ predictions of the network itself are also used (combined with some non-prediction reward observations $r$) in the loss function defined to train the Neural Network.

In other words, the Neural Network's own predictions are a part of its training signal. When these are used to update the Network, this will likely change its future predictions in similar situations, which means that its update target will also actually change when it reaches a similar situation again; this is a moving target problem. We do not have a consistent set of update targets as we would in a traditional supervised learning setting for example (where we have a dataset collected offline with fixed labels as prediction targets). Our targets are moving around during the training process, and this can destabilize learning.

Now, in that explanation following Equation 9, they essentially argue that this ""moving target"" problem is less bad with Equation 9 than it is with Equation 8, which can result in more stable training. I'm not sure if there is a formal proof of this, but intuitively it does make sense that this would happen in practice.

Suppose that you update your network once based on Equation 8. If your learning step changes the prediction of the advantage $A(s, a)$ of the best action $a$ by a magnitude of $1$ (kind of informal here, hopefully it makes sense what I'm trying to say), this will in turn move future targets for updates also roughly by a magnitude of $1$ (again, quite informal here).

Now, suppose that you update your network once based on Equation 9. It is unlikely that all of the different actions $a$ have their advantage $A(s, a)$ move by the same magnitude and in the same direction as a result of this update. It is more likely that some will move up, some will move down, etc. And even if they all move in the same direction, some will likely move by a smaller magnitude than others. In some sense, Equation 9 ""averages out"" the movements triggered by the learning update in all of these different advantage estimates, which causes the network's prediction targets overall to simply move more slowly, reducing the moving target problem. At least, that's the intuitive idea. Again, I don't think there is a formal proof that this happens, but it does turn out to often help in practice.
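For illustration, this is roughly how the last module implementing Equations 8 and 9 tends to look in code (a sketch, not the authors' implementation; the stream sizes are placeholders):

import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    # Combines a state-value stream V(s) and an advantage stream A(s, a) into Q(s, a),
    # using either the max aggregator (Equation 8) or the mean aggregator (Equation 9).
    def __init__(self, in_features, num_actions, use_mean=True):
        super().__init__()
        self.value = nn.Linear(in_features, 1)
        self.advantage = nn.Linear(in_features, num_actions)
        self.use_mean = use_mean

    def forward(self, x):
        v = self.value(x)                                    # shape (batch, 1)
        a = self.advantage(x)                                # shape (batch, num_actions)
        if self.use_mean:                                    # Equation 9
            return v + (a - a.mean(dim=1, keepdim=True))
        return v + (a - a.max(dim=1, keepdim=True)[0])       # Equation 8

head = DuelingHead(in_features=64, num_actions=4)
print(head(torch.randn(2, 64)).shape)                        # torch.Size([2, 4])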

",1641,,1641,,9/27/2018 8:37,9/27/2018 8:37,,,,12,,,,CC BY-SA 4.0 8147,1,,,9/26/2018 14:57,,1,1300,"

I have a simple question about the choice of activation function for the output layer in feed-forward neural networks.

I have seen several codes where the choice of the activation function for the output layer is linear.

Now, it might well be that I am wrong about this, but isn't that simply equivalent to a rescaling of the weights connecting the last hidden layer to the output layer? And following this point, aren't you just as well off with just using the identity function as your output activation function?

",18546,,2444,,12/5/2020 21:41,12/5/2020 21:41,Is a linear activation function (in the output layer) equivalent to an identity function?,,1,1,,,,CC BY-SA 4.0 8148,2,,8147,9/26/2018 15:54,,3,,"

And following this point, aren't you just as well off with just using the identity function as your output activation function?

When someone declares that the output of a neural network layer is linear, this is exactly what they mean. It can also be described as ""no activation function"".

Saying that a NN layer has linear activation is a kind of short-hand for saying ""the whole layer is a linear function of its inputs"", which is true without adding any activation function, just using the weights and bias.

There is usually no separate linear function applied, and libraries such as Keras include the term 'linear' only for completeness, or so that the choice can be made explicit in the code, as opposed to an unseen default.

Note that the link to Keras activation definition above says:

Linear (i.e. identity) activation function.
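
To make this concrete, here is a small Keras sketch (depending on your setup, the import may be keras.layers rather than tensorflow.keras.layers); all three layers below compute exactly the same thing:

```python
from tensorflow.keras.layers import Dense

# All three of these define the same linear output layer:
out_a = Dense(10)                       # no activation specified, defaults to linear
out_b = Dense(10, activation=None)      # explicitly no activation
out_c = Dense(10, activation='linear')  # 'linear' is just the identity function
```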

",1847,,1847,,9/26/2018 16:00,9/26/2018 16:00,,,,0,,,,CC BY-SA 4.0 8150,2,,5861,9/26/2018 19:21,,1,,"

Forward pass

The output of a layer can be calculated given the output of the previous layer. So the GPU can parallelize this computation for every layer and over the minibatch, which is done by calculating a big matrix. But it needs to be sequential from layer to layer (from earlier layers to higher layers). Regarding the layer type, convolutions and especially fully connected layers result in big matrix calculations.
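
As a rough sketch (not a full implementation), one forward step of a fully connected layer over a whole minibatch is a single matrix multiplication, which is exactly what GPUs are good at:

```python
import numpy as np

batch_size, n_in, n_out = 64, 784, 256
X = np.random.randn(batch_size, n_in)   # a whole minibatch of layer inputs
W = np.random.randn(n_in, n_out)        # layer weights
b = np.zeros(n_out)                     # layer bias

# One big matrix multiplication computes this layer's output for every example
# in the minibatch at once (this is what the GPU parallelizes), but the next
# layer can only start once this result is available.
H = np.maximum(0, X @ W + b)            # e.g. with a ReLU activation
```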

Backward pass

The gradient of a layer with respect to the layer input (and layer parameters) can only be calculated given the gradient of the layer output (input gradient of a subsequent layer) and input to the layer (output of the previous layer). This again can be parallelized over a layer and minibatch but is sequential from higher layers to earlier layers. Moreover, since the backward pass relies on the outputs of the forward pass all intermediate layer outputs of the forward pass have to be cached for the backward pass which results in a high (GPU) memory usage.

Forward and backward pass take most of the time

So, these two steps take a long time for one training iteration and (depending on your network) require a lot of GPU memory. But you should read and understand the backpropagation algorithm, which basically explains everything.

Moreover, training a network from scratch generally takes lots of iterations, because, especially in the earlier layers, the parameter updates are based on gradients that have passed through many other layers, which can result in noisy updates that do not always push the network parameters directly in the right direction. In contrast, fine-tuning a pre-trained network on some new task can, for example, already be done with far fewer training iterations.

",13104,,2444,,5/19/2020 19:55,5/19/2020 19:55,,,,0,,,,CC BY-SA 4.0 8151,2,,8105,9/26/2018 19:58,,-1,,"

Has a data set been chosen? There's a wide variety of data out there (see this, this, or this). Analytical possibilities are largely determined by the available data, so choosing your data would probably help to clarify your objectives.

",15223,,,,,9/26/2018 19:58,,,,0,,,,CC BY-SA 4.0 8152,2,,5414,9/27/2018 6:08,,2,,"

I haven't seen any dataset where some standard models worked and neural networks utterly failed.

For columnar data (e.g. Excel files, database dumps, CSV files) containing structured records, tree-based models like random forests and gradient boosting usually work better, but neural networks are also usually way better than random.

If you demand other things, e.g. explanations for the decisions, then Bayesian models might give you an easier time. For baselines or simple implementations, linear models. For real-time applications...

",3217,,,,,9/27/2018 6:08,,,,0,,,,CC BY-SA 4.0 8154,2,,35,9/27/2018 10:57,,2,,"

In simple words, Artificial Intelligence is a field of science that tries to mimic human or other animal behavior.

Machine Learning is one of the key tools/technologies behind Artificial Intelligence.

",18563,,18563,,9/27/2018 18:33,9/27/2018 18:33,,,,0,,,,CC BY-SA 4.0 8156,2,,7966,9/27/2018 13:49,,3,,"

I mostly studied HMMs and such models are called Infinite HMMs in that specific domain.

I believe that what you are looking for is called Infinite Neural Networks. Not having access to scientific publications, I cannot really reference specific work here. However, I found this GitHub repository: https://github.com/kutoga/going_deeper that provides some implementation and a document with multiple references.

",3576,,,,,9/27/2018 13:49,,,,0,,,,CC BY-SA 4.0 8159,2,,35,9/27/2018 20:32,,0,,"

First of all, I encountered the term Machine Learning much more in my Business Intelligence classes than in my AI classes.

My AI professor, Rolf Pfeifer, would have put it this way (after a long speech about what intelligence is, how it can be defined, different types of intelligence, etc.): ML is more static and "dumb", unaware of its physical environment and not made to interact with it, or only on an abstract basis. AI has a certain awareness of its environment and interacts with it autonomously, making autonomous decisions with feedback loops. From that point of view, Ugne's answer is probably the closest. Besides that, of course, ML is a subset of AI.

Machine Learning is not real intelligence (imho); it's mostly human intelligence reflected in logical algorithms, and, as my Business Intelligence prof would put it, it's about data and its analysis. Machine Learning has a lot of supervised algorithms which actually need humans to support the learning process by telling them what's right and what's wrong, so they're not independent. And once they're applied, algorithms are mostly static until humans readjust them. In ML you mostly have black-box designs and the main aspect is data. Data comes in, data gets analyzed ("intelligently"), data goes out, and learning mostly applies to a pre-implementation/learning phase. In most cases ML doesn't care about the environment a machine is in; it's about data.

AI instead is about mimicking human or animal intelligence. Following my Prof's approach, AI is not necessarily about self-consciousness but about interaction with the environment, so to build AI you need to give the machine sensors to perceive the environment, a sort of intelligence able to keep on learning, and elements to interact with the environment (arms, etc.). The interaction should happen in an autonomous way and ideally, as in humans, learning should be an autonomous, ongoing process.

So a drone that scans fields in a logical scheme for colour patterns to find weeds within crops would be more ML, especially if the data is later analyzed and verified by humans, or if the algorithm used is a static algorithm with built-in "intelligence" that is not capable of rearranging or adapting to its environment. A drone that flies autonomously, charges itself up when the battery is low, scans for weeds, learns to detect unknown ones, rips them out by itself and brings them back for verification, would be AI...

",15332,,2444,,4/1/2021 1:14,4/1/2021 1:14,,,,0,,,,CC BY-SA 4.0 8160,1,8175,,9/27/2018 22:17,,0,320,"

I am currently looking to use a neural network to classify gestures. I have a series of Dx, Dy, Dz readings that represent the differences across the three axes made during the gesture: about 10 movements for each example of the gesture, so basically a 10x3 matrix, and I then want to classify the training data into about 15 classes. I plan to use a CNN classifier to do this because, while the time domain is relevant to this problem, the differences between the movements can be distinguished when presented as a discrete matrix.

I'm used to using images with a neural net so I instinctively want to just convert the matrices into a 2D tensor and feed them into a CNN, but I was wondering if there was a better way to do this? For example, I have seen 1D tensors passed to a fully connected neural network for classification which seems like it could be more appropriate for this data input type?

Any tips on general architecture would be really appreciated as well!

Thanks!

",18577,,18577,,9/28/2018 19:54,9/29/2018 14:32,Using 3D Points as Inputs to a Neural Net,,1,2,,,,CC BY-SA 4.0 8161,1,,,9/28/2018 4:29,,3,1560,"

I know the implication symbol, $\rightarrow$, is used for conditions like

If $A$ is true, then $B$ will be true.

which can be written as

$$ A \rightarrow B $$

However, sometimes the implication symbol is also used in other contexts. For example, if we want to say that

All $A$'s are $B$.

We could write

$$\forall X (A(X) \rightarrow B(X))$$

I don't understand why the implication is used here. And if the implication is necessary here, then why isn't it used in the example written below?

Some $A$'s are $B$'s.

$$\exists X (A(X) \land B(X))$$

",18582,,2444,,1/24/2021 14:38,1/24/2021 14:38,What is the correct way to use the implication in first-order logic?,,1,0,,,,CC BY-SA 4.0 8163,2,,8161,9/28/2018 12:08,,5,,"

When we state in English that ""All As are Bs"", this means that we gain information as soon as we observe an A, we can immediately deduce that it must also be a B. These are the kinds of situations where we use an implication. So, this would be written in formal logic as:

$$\forall X \left( A(X) \rightarrow B(X) \right)$$

When we state in English that ""Some As are Bs"", we do not gain any new information just from observing that something is an A, we cannot deduce anything about that A. It might happen to be one of the As that simultaneously is a B, but it also might happen not to be one of those examples. So, it would be wrong to use an implication here. The only information that the English sentence gives us is that there is at least one thing somewhere that happens to be an A as well as a B, which is written formally as:

$$\exists X (A(X) \land B(X))$$


Suppose that we would have written the following in logic:

$$\exists X (A(X) \rightarrow B(X))$$

This would be translated to English as follows:

There exists some $X$ such that, if it is an $A$, it is also a $B$.

That conditional ""if"" part is very important there. Note that this logical statement is also true as soon as I find one example $X$ that is not an $A$. For example, the following statement is true in the real world:

There exists some human $X$ such that, if $X$ can fly, $X$ can also shoot fireballs from his or her hands.

(this is true in the real world, because I can come up with many examples of humans who cannot fly)

",1641,,,,,9/28/2018 12:08,,,,0,,,,CC BY-SA 4.0 8164,2,,7617,9/28/2018 20:16,,3,,"

Image Segmentation with Unsupervised Learning

Deep Learning is now widely used for image classification and segmentation. However, for segmentation, some classical (non-deep-learning) algorithms are still really effective. For example, they could also be used for the development of self-driving cars.

K-means for image segmentation

When you identify the pixels of an RGB image with vectors in $\mathbb{R}^3$, you can run the classic k-means algorithm to distinguish objects. Furthermore, you can do superpixel segmentation by adding to each pixel vector two components corresponding to its coordinates in the image (so it will be a vector in $\mathbb{R}^5$). You can run a k-means algorithm again to segment your image into superpixels. You can read about that in SLIC Superpixels Compared to State-of-the-Art Superpixel Methods (2012), Achanta et al.
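
As a minimal sketch of the plain pixel-level version (assuming scikit-learn and an image already loaded as an (H, W, 3) NumPy array):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3):
    # image: an (H, W, 3) RGB array, loaded e.g. with PIL or imageio
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)                  # each pixel as a vector in R^3
    labels = KMeans(n_clusters=n_clusters).fit_predict(pixels)
    return labels.reshape(h, w)                                  # one cluster index per pixel
```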

Example

Below is an example of the segmentation of a picture of a seagull on a roof. On the left, we have the original image. In the middle, 3 clusters. On the right, 12 clusters. While it easily distinguishes the roof from the sky, the seagull is still unclear with 12 centroids.

Similarity graph and normalized cut

The main idea is to build a graph of similarities between pixels and then to cut the graph into subgraphs. First, you need to define a similarity between pixels, for example, based on colour: $d(p_1, p_2) = \exp\left(-\sum_{i \in \{r, g, b\}}{(p_{1,i} - p_{2,i})^2}\right)$. Then, build the graph over the whole image, and divide it iteratively, using the Normalized Cut algorithm.

Morphological gradients

Here, image segmentation is done by computing the morphological gradient of the image, which is the difference between the dilation and the erosion of the input image. Erosion and dilation are equivalent to passing min() and max() filters, respectively, over all subwindows of a given size.
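
A minimal sketch with OpenCV (the file name here is just a placeholder for whatever grayscale image you load):

```python
import cv2
import numpy as np

gray = cv2.imread('seagull.png', cv2.IMREAD_GRAYSCALE)   # placeholder file name
kernel = np.ones((3, 3), np.uint8)

dilated = cv2.dilate(gray, kernel)        # max filter over each 3x3 window
eroded = cv2.erode(gray, kernel)          # min filter over each 3x3 window
gradient = cv2.subtract(dilated, eroded)  # morphological gradient

# Equivalently, in a single call:
gradient = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
```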

An example, still with the seagull (morphological gradient computed on a grayscale input). The seagull appears more clearly.

",17759,,2444,,1/18/2021 11:41,1/18/2021 11:41,,,,2,,,,CC BY-SA 4.0 8168,1,,,9/29/2018 6:36,,1,1342,"

I recently learned about genetic algorithms and I solved the 8 queens problem using a genetic algorithm, but I don't know how to optimize any functions using a genetic algorithm.

$$ \begin{array}{r} \text { maximize } f(x)=\frac{-x^{2}}{10}+3 x \\ 0 \leq x \leq 32 \end{array} $$

I want a guide on how to choose the chromosome representation and fitness function for such a function. (I don't want code.)

",18604,,2444,,11/5/2020 12:00,11/5/2020 14:58,How do I optimize a specific function using a genetic algorithm?,,2,1,,,,CC BY-SA 4.0 8175,2,,8160,9/29/2018 14:32,,0,,"

A few thoughts :

  • A 10x3 matrix for each example is really a small amount of data. A FCNN (fully connected neural network) could do a good job on that.

  • As a result, I'm not sure a CNN is appropriate. The smallest dimension is 3, so it'll force you to have really small kernels.

  • Have you thought about LSTMs? Since your data is sequential, an LSTM may be useful (see the sketch below). However, I'm not sure it would be really effective on such a small amount of data, but it could be nice to try.
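
For example, a minimal PyTorch sketch of an LSTM classifier for sequences of 10 steps with 3 features each (the layer sizes are just guesses, not tuned values):

```python
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, n_features=3, hidden_size=32, n_classes=15):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, 10, 3)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_size)
        return self.fc(h_n[-1])           # class logits: (batch, n_classes)

model = GestureLSTM()
logits = model(torch.randn(8, 10, 3))     # a fake batch of 8 gestures
```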

",17759,,,,,9/29/2018 14:32,,,,1,,,,CC BY-SA 4.0 8181,1,8183,,9/30/2018 14:20,,3,785,"

I am trying to design a neural network on Python.

Instead of the sigmoid function which has a limited range, I am thinking of using the cube root function which has the following graph:

Is this suitable?

",18640,,,,,9/30/2018 16:08,Is the cube root function suitable as a n activation function?,,1,0,0,,,CC BY-SA 4.0 8182,2,,1458,9/30/2018 15:49,,1,,"

I would completely agree with mindcrime and Cem Kalyoncu.

Take into account that passive aggressiveness, for example, is more difficult to detect (irony, black humour and sarcasm likewise).

Although another head start could be to think out of the box: it happens that I have a book lying around concerning violence-free communication. So probably your best start could be to talk with linguists about violence in language and start from there, or just review how linguists or psychologists detect violence in language (surprise: it's probably quite complex).

Nevertheless: I don't think you need real AI; a blacklist of words and expressions together with some pattern detection for expressions could be precise enough for the beginning.

Then for all the expressions, words, etc., you could add a Bayesian model for the learning part, which works with probabilities (like e.g. some email spam filters). Search for example for ""Naive Bayes spam filtering"".
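
As a minimal sketch of that idea with scikit-learn (the example texts and labels are made up, just to show the mechanics):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ['have a nice day', 'I will hurt you', 'great work, thanks', 'you deserve to suffer']
labels = [0, 1, 0, 1]   # 0 = harmless, 1 = violent (hand-labelled)

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(['I will make you suffer']))   # hopefully [1]
```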

This should be pretty much enough to have a good start, so in a strict sense, you don't need real AI here, just business intelligence and probability calculations.

",15332,,15332,,9/30/2018 15:59,9/30/2018 15:59,,,,0,,,,CC BY-SA 4.0 8183,2,,8181,9/30/2018 15:52,,3,,"

There are a few traits that you want the activation function to have, and cube roots rate as OK-ish:

  • Nonlinear – check.

  • Continuously differentiable – no. There is a problem at $x=0$. Unlike ReLU (which is also not differentiable at $x=0$, but has a bounded gradient everywhere else), here the gradient can be calculated near zero, but it becomes arbitrarily high as you approach $x=0$, because $\frac{d}{dx}x^{\frac{1}{3}} = \frac{1}{3x^{\frac{2}{3}}}$

  • Range considerations – limited range functions are more stable, large/infinite range functions are more efficient. You may need to reduce learning rates compared to e.g. tanh.

  • Monotonic – check.

  • Monotonic derivative - no.

  • Approximates identity near the origin – no, the approximation is bad near the origin.

If you look through the list of current, successful activation functions, you will see a few that also fail to provide one or more desirable traits, yet are still used routinely.

I would worry about the high gradients near $x=0$, but other than that I think the function could work OK. It may sometimes be unstable during learning, as small changes near zero will result in large changes to the output. You might be able to work around the high gradients simply in practice, by clipping them: if the raw calculation returns a value greater than $1$ (or less than $-1$), then treat the gradient as if it were $1$ (or $-1$) for the rest of backpropagation.
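
A small NumPy sketch of that idea (the clipping threshold of $1$ is just the suggestion above, not a tested value):

```python
import numpy as np

def cbrt_activation(x):
    return np.cbrt(x)

def cbrt_gradient(x, clip=1.0):
    # d/dx x^(1/3) = 1 / (3 * x^(2/3)), which blows up as x approaches 0
    grad = 1.0 / (3.0 * np.cbrt(x) ** 2)
    return np.minimum(grad, clip)          # clip to keep backpropagation stable

x = np.array([-8.0, -0.001, 0.001, 8.0])
print(cbrt_activation(x))   # [-2.  -0.1  0.1  2. ]
print(cbrt_gradient(x))     # the huge gradients near zero are clipped to 1
```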

The only way of finding out if the function is competitive with other more standard activation functions is to try it on some standard data sets and make comparisons.

",1847,,1847,,9/30/2018 16:08,9/30/2018 16:08,,,,0,,,,CC BY-SA 4.0 8184,1,,,9/30/2018 16:15,,1,65,"

How do Support Vector Machines (SVMs) differentiate between a glass and a bottle, or between a malignant and a benign tumor, when dealing with them for the first time?

What will be the analysis mechanism involved in this?

",18643,,3726,,10/1/2018 13:49,10/31/2018 15:00,How does an svm work? How does it perform comparisons between malignant and benign tumor,,1,2,,,,CC BY-SA 4.0 8188,1,8191,,9/30/2018 17:52,,5,5564,"

I made my first neural net in C++ without any libraries. It was a net to recognize numbers from the MNIST dataset. In a 784 - 784 - 10 net with sigmoid function and 5 epochs with every 60000 samples, it took about 2 hours to train. It was probably slow anyways, because I trained it on a laptop and I used classes for Neurons and Layers.

To be honest, I've never used TensorFlow, so I wanted to know how the performance of my net would be compared to the same in TensorFlow. Not too specific but just a rough approximation.

",17103,,2444,,2/6/2021 18:17,2/6/2021 18:17,How fast is TensorFlow compared to self written neural nets?,,2,0,,2/6/2021 18:16,,CC BY-SA 4.0 8189,2,,8188,9/30/2018 18:08,,3,,"

A lot. There are all these optimizations that we might not have thought of, like combining layers, functions, etc. I am a PyTorch guy though; it's clean and doesn't get in your way like TensorFlow does.

",18646,,,,,9/30/2018 18:08,,,,0,,,,CC BY-SA 4.0 8190,1,,,9/30/2018 18:55,,16,18087,"

I was able to find the original paper on LSTM, but I was not able to find the paper that introduced "vanilla" RNNs. Where can I find it?

",18649,,2444,,1/18/2021 23:19,8/17/2021 14:24,Where can I find the original paper that introduced RNNs?,,4,0,,,,CC BY-SA 4.0 8191,2,,8188,9/30/2018 20:49,,9,,"

I wanted to know how the performance of my net would be compared to the same in TensorFlow. Not too specific but just a rough approximation.

This is very hard to answer in specific terms because benchmarking is very hard and is often wrong.

The main point of TensorFlow, as I see it, is to make it easier for you to use a GPU, and it further allows you to use a large supply of programs written in Python/JavaScript that still give C++-level performance.

How fast is TensorFlow compared to self written neural nets?

This is answering the general question of using TensorFlow/PyTorch vs a custom solution, rather than your specific question of how much of a speed up you'd get.

There was a relatively recent MIT paper, Differentiable Programming for Image Processing and Deep Learning in Halide, that tries to compare the performance vs flexibility vs time spent coding a solution in 3 different languages.

Specifically they compared a solution in their language Halide vs PyTorch vs CUDA.

Consider the following example. A recent neural network-based image processing approximation algorithm was built around a new “bilateral slicing” layer based on the bilateral grid [Chen et al. 2007; Gharbi et al. 2017]. At the time it was published, neither PyTorch nor TensorFlow was even capable of practically expressing this computation. As a result, the authors had to define an entirely new operator, written by hand in about 100 lines of CUDA for the forward pass and 200 lines more for its manually-derived gradient (Fig. 2, right). This was a sizeable programming task which took significant time and expertise. While new operations—added in just the last six months before the submission of this paper—now make it possible to implement this operation in 42 lines of PyTorch, this yields less than 1/3rd the performance on small inputs and runs out of memory on realistically-sized images (Fig. 2, middle). The challenge of efficiently deriving and computing gradients for custom nodes remains a serious obstacle to deep learning.

So in general you'll probably get faster performance with TensorFlow/PyTorch than a custom C++ implementation, but for specific cases if you have CUDA knowledge on top of C++ then you will be able to write more performant programs.

",3726,,3726,,10/1/2018 9:51,10/1/2018 9:51,,,,0,,,,CC BY-SA 4.0 8192,2,,6161,9/30/2018 20:59,,0,,"

I would counter this question with another one: Is rationality all we strive for?

Intelligence is vast and vaguely defined ground, and we have not even defined which intelligence we're talking about: emotional intelligence (empathy), rational/intellectual intelligence (IQ), artistic intelligence, etc.

Something which distinguishes us as humans is that sometimes we're able to deal with two counteracting intelligences. Logic sometimes dictates one thing, but logic is bound to which parameters are taken into account, and mostly it's about quantifying. An intelligence which does qualitative logic sometimes behaves differently, and sometimes suggests other actions. As humans we weigh and counterweigh these factors.

Politics is a good example. In most countries there are 2 or 3 rational approaches to problem solving (to make it easy, let's call them left, center, right - though I know in fact there are more positions, and in an at least two-dimensional space, not a linear one). But what I mean is that all those positions are rational; nevertheless, it's other types of intelligence that define the way they use their rationality.

So in the end this fourth step is just a continuation, and I assume that with time, theoretically at least, the shift should be away from blue-collar jobs to more white-collar jobs, until there are no blue-collar jobs. So far the theory, I guess, but then again: would this be smart? Or intelligent? A world where machines do all the dirty work and we're just supervisors?

My thought on this is that we will still need mechanics, probably even more as technology advances, just that their focus will shift. 20 years ago mechanics were taught analogue technology; nowadays they learn more digital technology (the best example would be car mechanics), but there you see: they're still needed, and we also need people with analogue knowledge to maintain older systems which cannot or will not be converted to the digital era. And I think that's OK! So from my point of view, nobody has to fear losing their job; instead embrace that, when you keep your knowledge up to date, there will always be a job for you. Nowadays, for example, IT professionals who are able to code older computer languages are better paid than ever, because they're scarce...

",15332,,,,,9/30/2018 20:59,,,,0,,,,CC BY-SA 4.0 8193,1,,,9/30/2018 21:25,,1,223,"

To me it seems to be ill-defined, partially because of the absence of knowledge about which points are to be considered outliers in the first place.

The problem which I have in mind is ""bad market data"" detection. For example, a financial data provider may be good only most of the time, and about 7-10% of the data do not make any sense.

The action space is binary: either take an observation or reject it.

I am not sure about the reward, because the observations would be fed into an algorithm as inputs and the outputs of the algorithm would be outliers themselves. So the outlier detection should prevent the outputs of the algorithm from going rogue.

It is necessary to add that, if we are talking about market data (stocks, indices, FX), there's no guarantee that the distributions are stationary, and there might be trends and jumps. If a supervised classifier is trained based on historical data, how, and how often, should it be adjusted to be able to cope with different modes of the data?

",18653,,18653,,10/1/2018 14:50,10/1/2018 15:25,Is it possible to state an outliers detection problem as a reinforcement learning problem?,,1,3,,,,CC BY-SA 4.0 8194,1,,,9/30/2018 23:47,,3,2943,"

In the circumstances of two perfect AI's playing each other, will white have an inherent advantage? Or can black always play for a stalemate by countering every white strategy?

",18656,,10135,,10/18/2018 8:32,10/18/2018 8:32,"If two perfect chess AI's played each other, would it always be a stalemate or would white win for an inherent first-move advantage?",,4,0,,,,CC BY-SA 4.0 8195,2,,8194,10/1/2018 3:40,,-1,,"

I'm no expert chess player. Some specialist chess forums also discussed this issue and there's no clear answer.

Due to the large number of possible moves, I would suggest that the most common outcome would probably be a draw. White would have the advantage of starting, but since black, as a perfect AI, would know which possible strategies are in play, it could block every attempt at a clear winning strategy, probably always trying to avoid a loss and therefore probably reaching just a draw.

The basis for this thought is that black, as a reaction, can always make the choice with the smallest loss probability, so probably most of white's strategies would stall.

But now comes the interesting point: I am not quite sure how these AIs would look. They obviously would have to lean on a strategy to choose between all the possible moves. A purely statistical best-outcome algorithm would even then leave too many options open, especially in the beginning. So you would be forced to prefer certain strategy choices, and with this you would clearly be implementing a bias (what to do if two moves have the same win/loss outcome?). This would make the experiment more random or human, and thus no outcome could be predicted. So if both AIs are perfect (and thus identical), the outcome is most probably a draw, but I assume that statistically some games would be won/lost... I think...

EDIT: I just read about Google's AlphaZero (an AI which has taught itself Chess, Go and Shogi), which excels in most games against other AIs, but apparently by precalculating far fewer possible outcomes... So it could be that there still is no ""absolute perfect AI"" for these games.

",15332,,15332,,10/2/2018 17:34,10/2/2018 17:34,,,,0,,,,CC BY-SA 4.0 8198,2,,8194,10/1/2018 7:17,,7,,"

This relates to the concept of ""solved games"". In general, two player turn-based games with perfect information - of which chess is an example - can result in all three possible outcomes: a forced win for white, a forced win for black, or a forced draw.

The short, although unsatisfactory answer is that chess is not solved, and it is not clear whether it can be. There is generally thought to be an advantage to white for the first move, so likely results are considered to be a forced win for white, or a forced draw.

No current AI attempts to ""solve"" chess. Although some of the techniques, such as MCTS, might theoretically be adapted to find a solution, the available computing power to run that search to completion from the start position is too low by a few orders of magnitude.

",1847,,1847,,10/1/2018 7:28,10/1/2018 7:28,,,,3,,,,CC BY-SA 4.0 8201,2,,8193,10/1/2018 12:03,,1,,"

It is quite often possible to frame a problem as a Reinforcement Learning (RL) problem at some level. However, this may turn out to be for no benefit, or a net cost towards solving the problem. Casting parameter or hyperparameter searches as RL can be adding a layer of complexity and reduce efficiency.

One key thing to bear in mind is that any classification or regression that occurs within a RL framework will end up using effectively the same models and approaches that could solve the same problems directly. These models would either appear directly as the function approximators in RL that implement policies or value functions, or they would be an implied part of them. If you have labeled data for classification - even delayed until some time after you collect data, then you are usually going to be better off using supervised learning directly.

For hyperparameter searches (e.g. cutoffs for anomaly detection) then you may not need labelled data, but just need a good way to test the model offline.

The first point at which supervised learning or classic anomaly detection might fail for you is if you never receive any feedback about individual records, only a measure of overall performance. In other words, if you can measure consequences of good performance, but never measure or check correctness.

Your characterisation

about 7-10% of data do not make any sense.

does not appear to fit that. It looks like you could detect this, maybe manually labelling a few thousand records, and train a classifier using supervised learning techniques. That is likely a much better use of your time than trying to restructure the problem at a higher level and trusting a trial-and-error approach to discover the same rules.

Putting that to one side, assuming you do have a problem where

  • data to be classified is arriving as a stream, and needs to be processed online, item by item or in small batches
  • you have reason to think that an accept/reject stage before processing further would be useful
  • you have no way to label training data for accept/reject
  • you have a way to measure performance of the remaining system after the accept/reject phase

then you could use RL to frame the accept/reject phase as an action. There are some challenges there, but essentially you would use RL along with measurement feedback to sample errors or gradients - typically using TD Error or policy gradients. This could wrap almost any model that does classification or anomaly detection etc, provided it could be trained using those gradients.

From comments, if the underlying distributions for accept/reject are non-stationary, this may point you more towards a RL solution. However, that may come with a cost to performance - you will need to balance exploration rate (which will reduce the performance of the model against stationary data) versus speed of learning new distributions. This is a problem for all online learners; the main advantage of a RL approach here is that it will not require generating new labelled data. If you can use a recency-weighted anomaly detection algorithm instead, then you won't need the labeled data either - whether that is better requires testing, personally I'd take a working anomaly detection as the baseline and only use RL if it proved itself better.

The specific items that you turn into states and rewards are not clear to me from the question, and you would need to work on these things carefully. It is possible you will need more than the current data item in order to define state, and that will depend a lot on how the feedback loop works that establishes reward.

",1847,,1847,,10/1/2018 15:25,10/1/2018 15:25,,,,4,,,,CC BY-SA 4.0 8207,2,,8194,10/1/2018 14:21,,-1,,"

Since both AIs know the best possible moves at each step, black would never win, as white already knows all games that lead to black's victory and would easily avoid them. But since black is also a perfect AI, the optimal responses to white's moves are fixed. So the logical conclusion is that these two perfect AIs would not need to play at all if they know their opponent is a perfect AI as well: both of them know the outcome of the match even before it begins, which my gut feeling says is always a draw. The correct question is: does the n-th move matter?

",18646,,,,,10/1/2018 14:21,,,,0,,,,CC BY-SA 4.0 8208,2,,8184,10/1/2018 14:27,,1,,"

I will try to give you a simplified explanation of how SVMs work.

The data one works with can be of two types. Either it is very easily separable and there is a clear straight-line boundary between data points of different classes - we call such data linearly separable (image) - or the data points of the classes are mixed in a way that there is no clear straight-line boundary that separates them; in other words, the data is not linearly separable (image).

One way to make data points linearly separable is to map them into a higher dimension where they become separable. If we have a set of data points which are not linearly separable in 2 dimensions, we could map them into 3 dimensions where the data becomes linearly separable (like so).

SVMs work by converting data from lower to higher dimensions where it is linearly separable, and then trying to find the boundary that separates the data.
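
In practice you rarely construct this mapping explicitly; kernel functions (such as the RBF kernel) do it implicitly. A minimal scikit-learn sketch, with made-up feature vectors standing in for tumour measurements:

```python
from sklearn.svm import SVC

# Each row is a feature vector (say, two measurements of a tumour); 0 = benign, 1 = malignant
X_train = [[1.0, 0.5], [1.2, 0.4], [3.0, 2.5], [3.2, 2.8]]
y_train = [0, 0, 1, 1]

clf = SVC(kernel='rbf')     # the RBF kernel implicitly maps the data to a higher dimension
clf.fit(X_train, y_train)
print(clf.predict([[3.1, 2.6]]))   # expected output: [1]
```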

The same concept could be applied to images, where the images are converted into a very high dimension in which images of different types can be separated linearly.

This could mean that the trained SVM model is able to easily distinguish between a bottle and a glass, or between different tumour types, but that would depend entirely on the data used to train the model. The problem arises when the distributions of the training data and testing data are completely different. So, for example, if you trained the SVM to distinguish between glasses and cups of a certain shape and colour, and while testing you show it glasses of completely different shapes and colours, the SVM will not be able to distinguish between them.

",16569,,,,,10/1/2018 14:27,,,,0,,,,CC BY-SA 4.0 8212,1,,,10/2/2018 7:15,,1,926,"

I recently watched the video on Proximal Policy Optimization (PPO). Now, I want to upgrade my actor-critic algorithm written in PyTorch with PPO, but I'am not sure how the new parameters / thetas are calculated.

In the paper Proximal Policy Optimization Algorithms (at page 5), the pseudocode of the PPO algorithm is shown:

It says to run $\pi_{\theta_{\text{old}}}$, compute advantage estimates and optimize the objective. But how can we calculate $\pi_\theta$ for the objective ratio, since we have not updated the $\pi_{\theta_{\text{old}}}$ yet?

",18696,,2444,,2/16/2019 2:38,2/16/2019 2:38,How do I calculate the policy in the Proximal Policy Optimization algorithm?,,1,0,,,,CC BY-SA 4.0 8215,1,,,10/2/2018 8:07,,1,977,"

(Cross-posting here from the data science stack exchange, as my question didn't get any replies. I hope it's okay!)

I've been playing around with YOLOv3 and obtaining some good results on the ~20 custom classes I trained. However, one or two classes look like they can use some additional training data (not a lot, say about 10% more data), which I can provide.

What is the most efficient way to train my model now? Do I need to start training from scratch? Can I just throw in my additional data (with the appropriate changes to the config files etc.) and run the training based on the weight matrix I already acquired, but for a small number of iterations? (1000?) Or is this more like a transfer learning problem now?

Thanks for all tips!

",18699,,,,,12/13/2022 15:06,Add training data to YOLO post-training,,1,0,,,,CC BY-SA 4.0 8217,1,,,10/2/2018 8:42,,1,107,"

I implemented a LSTM neural network in Pytorch. It worked but I want to know if it worked the way I guessed how it worked.

Say there's a 2-layer LSTM network with 10 units in each layer. The inputs are some sequence data Xt1, Xt2, Xt3, Xt4, Xt5.

So when the inputs are entered into the network, Xt1 will be thrown into the network first and be connected to every unit in the first layer. And it will generate 10 hidden states/10 memory cell values/10 outputs. Then the 10 hidden states, 10 memory cell values and Xt2 will be connected to the 10 units again, and generate another 10 hidden states/10 memory cell values/10 outputs and so on.

After all 5 Xt's are entered into the network, the 10 outputs from Xt5 from the first layer are then used as the inputs for the second layer. The other outputs from Xt1 to Xt4 are not used. And the 10 outputs will be entered into the second layer one by one again. So the first of the 10 will be connected to every unit in the second layer and generate 10 hidden states/10 memory cell values/10 outputs. The 10 memory cell values/10 hidden states and the second value of the 10 will be connected, and so forth?

After all these are done, only the final 10 outputs from the layer 2 will be used. Is this how the LSTM network works? Thanks.

",18268,,,,,10/2/2018 8:42,Structure of a multilayered LSTM neural network?,,0,1,,,,CC BY-SA 4.0 8219,1,8346,,10/2/2018 11:15,,0,450,"

The dialog context

Turing proposed at the end of the description of his famous test, ""Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'""1

Turing effectively challenged the 1641 statement of René Descartes in his Discourse on Method and Meditations on First Philosophy:

""It never happens that [an automaton] arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.""

Descartes and Turing, when discussing automatons achieving human abilities, shared a single context through which they perceived intelligence. Those that have been either the actor or the administrator in actual Turing Tests understand the context: Dialog.

Other contexts2

The context of the dialog is distinct from other contexts such as writing a textbook, running a business, or raising children. If you apply the principle of comparing machine and human intelligence to automated vehicles (e.g. self-driving cars), an entirely different context becomes immediately apparent.

Question

Can a brain be intelligent without a body? More generally, does intelligence require a context?


References

[1] Section 1 (""The Imitation Game"") of Computing Machinery and Intelligence, 1950.

[2] Multiple Intelligences Theory

",4302,,2444,,3/14/2020 23:53,3/15/2020 0:08,Can a brain be intelligent without a body?,,4,0,,3/15/2020 0:10,,CC BY-SA 4.0 8220,1,8266,,10/2/2018 11:54,,2,880,"

I will be undertaking a project over the next year to create a self learning AI to play a racing game, currently the game will be Mario Kart 64.

I have a few questions which will hopefully help me get started:

  1. What aspects of AI would be most applicable to creating a self learning game AI for a racing game (Q-Learning, NEAT etc)
  2. Could an ANN or NEAT that has learned to play Mario Kart 64 be used to learn to play another racing game?
  3. What books/material should I read up on to undertake this project?
  4. What other considerations should I take into account throughout this project?

Thank you for your help!

",18209,,,,,10/5/2018 8:54,Creating a self learning Mario Kart game AI?,,1,3,,,,CC BY-SA 4.0 8223,1,8234,,10/2/2018 14:18,,3,223,"

I've been told this is how I should be preprocessing audio samples, but what information does this method actually give me? What are the alternatives, and why shouldn't I use them?

",18709,,2444,,12/17/2021 14:40,12/17/2021 14:40,Why is the short-time Fourier transform used for preprocessing audio samples?,,1,1,,,,CC BY-SA 4.0 8230,2,,8212,10/3/2018 2:51,,2,,"

You're right; the first time you run it, the two policies ($\pi_{\theta old}$ and $\pi_\theta$) will be the same. This means your loss is simply the (negative) advantage, since you multiply the ratio $r(\theta)={\pi_\theta(a|s)\over\pi_{\theta old}(a|s)}$ (which is $1$ at that point) by the advantage, so $loss=-r_t(\theta)A_t$.

However, with PPO you run multiple epochs of training on the same data. So after your first update you do the whole thing again (without exploring the environment any more) and this time $\pi_{\theta old}$ is different to $\pi_\theta$.
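
As a minimal PyTorch-style sketch of the clipped surrogate objective (variable names are my own, not taken from the paper's code):

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # ratio r(theta) = pi_theta(a|s) / pi_theta_old(a|s), computed in log space
    ratio = torch.exp(new_log_probs - old_log_probs.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # minus sign because we minimize the loss but want to maximize the surrogate objective
    return -torch.min(unclipped, clipped).mean()
```

In the first epoch on a batch, new_log_probs equals old_log_probs, so the ratio is 1 everywhere; in later epochs on the same batch the two diverge and the clipping starts to matter.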

Here's a great explanation of the algorithm: https://stackoverflow.com/questions/46422845/what-is-the-way-to-understand-proximal-policy-optimization-algorithm-in-rl

",16035,,,,,10/3/2018 2:51,,,,0,,,,CC BY-SA 4.0 8231,2,,8219,10/3/2018 2:56,,1,,"

Can a brain be intelligent without a body?

In my opinion, yes, if you give it the right inputs. The brain is like a machine and its behavior depends on its architecture and the interaction with the environment, whether it is the internet or anything else, so it all boils down to the actual architecture of the system.

Intelligence is just an information processing system. A human gets info from his/her eyes, ears, or other senses; then the brain does the processing and storage. One could potentially replace our senses with other sensors that acquire info from the world and send it to the brain.

",17018,,2444,,3/15/2020 0:00,3/15/2020 0:00,,,,0,,,,CC BY-SA 4.0 8232,2,,8194,10/3/2018 3:03,,0,,"

No, that doesn't have to be the case, because from each machine's point of view the game is different, so they will have different possibilities, i.e. different paths to follow. Something different would be if two machines with the same brain played against the same opponent with the same parameters; then we can say the outcome will probably be the same, if they are ""programmed in a deterministic way"".

",17018,,,,,10/3/2018 3:03,,,,0,,,,CC BY-SA 4.0 8233,2,,2563,10/3/2018 4:15,,0,,"

I kid you not when I tell you I just started with decoupled networks this morning, but the issue is terribly worded by most computer scientists. The way a decoupled neural interface works is not by sending the error backwards, but by storing it as a form of learned error inside a separate network. So not only do we have our nodes and weights, but also the Synths. The Synths modify the weight tables using past error. So instead of passing error back like the article suggests, it actually passes it forward in a kind of ""hey, don't do this"". What's really fun is that you never have to mix up your thinking to reverse a network for backprop if you're using a decoupled network. The amazing part is that it is actually easier to code this decoupled network than it is to write a ""simple"" ANN; even better, it functions a lot like an LSTM network without all the programming mumbo-jumbo. You can see for yourself two different forms of the same network here.

The code from the demo in the repository is from https://towardsdatascience.com/only-numpy-implementing-and-comparing-combination-of-google-brains-decoupled-neural-interfaces-6712e758c1af; it was a fundamental resource for me, and it may be for you. It also helps you with loss, or cost.

",3542,,,,,10/3/2018 4:15,,,,1,,,,CC BY-SA 4.0 8234,2,,8223,10/3/2018 5:39,,2,,"

Fourier transform is used to transform audio data to get more information (features).

For example, raw audio data is usually represented by a one-dimensional array, x[n], of length n (the number of samples). x[i] is the amplitude value of the i-th sample point.

Using the (short-time) Fourier transform, your audio data is represented as a two-dimensional array. Now, x[i] is not a single amplitude value, but a list of frequency components that compose the original signal in the i-th frame (a frame consists of a few samples).

See the image below (from Wikipedia): the red graph is the original signal of n samples before the transform, and the blue graph is the transformed value of one frame.
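
As a minimal sketch of this transformation with SciPy (the signal here is just a synthetic sine wave, not real audio):

```python
import numpy as np
from scipy.signal import stft

fs = 16000                                  # sample rate in Hz
t = np.arange(fs) / fs                      # one second of samples
x = np.sin(2 * np.pi * 440 * t)             # a 440 Hz tone, shape (16000,)

f, times, Z = stft(x, fs=fs, nperseg=512)   # short-time Fourier transform
print(Z.shape)  # (frequencies, frames): one spectrum per frame instead of one amplitude per sample
```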

",16565,,,,,10/3/2018 5:39,,,,0,,,,CC BY-SA 4.0 8236,2,,8219,10/3/2018 7:41,,2,,"

It depends what you mean by intelligence. A robot that acts has a different sort of intelligence than a neural net that merely maps inputs to outputs. Bit patterns within a robot's brain have meaning in themselves, whereas the inputs and outputs of a disembodied network gain meaning only through the larger system in which humans steer input data to it and act on the basis of the outputs.

In particular, a system that can act needs a causal model of the world that, at least in part, includes itself.

So a non-embodied system may be intelligent in some useful ways, but its intelligence will be radically different than a human intelligence. That's not necessarily a bad thing: we already have lots of humans, and can produce more fairly cheaply. The most cost-effective AIs are surely not human-like ones.

",12269,,,,,10/3/2018 7:41,,,,0,,,,CC BY-SA 4.0 8240,1,8247,,10/3/2018 11:45,,3,223,"

I am trying to dissect the paper Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.

Unfortunately, because my math is a little bit rusty, I got a little bit stuck with the proof. Could you provide me with some clarification about proof of the topic?

What I understand is that we introduce, instead of the weight vector $w$, a scalar $g$ (the magnitude of the original $w$?) and a vector $\frac{v}{\|v\|}$ (the direction of the original $w$?).

$$\nabla_{g} L=\frac{\nabla_{\mathbf{w}} L \cdot \mathbf{v}}{\|\mathbf{v}\|}$$

and

$$\nabla_{\mathbf{v}} L=\frac{g}{\|\mathbf{v}\|} \nabla_{\mathbf{w}} L-\frac{g \nabla_{g} L}{\|\mathbf{v}\|^{2}} \mathbf{v}$$

What I am not really sure about is:

If the gradients are noisy (does this mean that in some dimensions we have small and in some high curvature, or that the error noise differs for very similar values of $w$?), the norm $\|v\|$ will quickly increase, which effectively limits the speed of descent by decreasing the value of $\frac{g}{\|v\|}$. This means that we can choose larger learning rates and it will somehow adjust the effect of the learning rate during the training.

And what I completely miss is:

$$\nabla_{\mathbf{v}} L=\frac{g}{\|\mathbf{v}\|} M_{\mathbf{w}} \nabla_{\mathbf{w}} L$$

with

$$\mathrm{M}_{\mathrm{w}}=\mathrm{I}-\frac{\mathrm{w} \mathrm{w}^{\prime}}{\|\mathrm{w}\|^{2}}$$

It should somehow explain the reasoning behind the final effect. Unfortunately, I don't really understand this part of the paper; I probably lack some knowledge of linear algebra.

Can you verify that my understanding of the paper is correct?

Can you recommend some sources (books/videos) to help me understand the second part of the proof (related to the second set of formulas)?

",18729,,2444,,7/6/2020 23:11,7/6/2020 23:12,Can you help me understand how weight normalization works?,,1,0,,,,CC BY-SA 4.0 8241,1,8242,,10/3/2018 15:27,,2,55,"

Imagine a system that is trained to manipulate dampers to manage air flow. The training data includes damper state and flow characteristics through a complex system of ducts. The system is then given an objective (e.g. maintain even flow to all outputs) and set loose to manage the dampers. As it performs those functions there are anomalies in the results which the system is able to detect. The algorithm CONTINUES to learn from its own empirical data, the result of implemented damper configurations, and refines its algorithm to improve performance seeking the optimum goal of perfectly even flow at all outputs.

What is that kind of learning or AI system called?

",17061,,2444,,11/7/2020 17:24,11/7/2020 17:24,"What is the AI discipline where an algorithm learns from an initial training set, but then refines its learning as it uses that training?",,1,0,,,,CC BY-SA 4.0 8242,2,,8241,10/3/2018 16:37,,4,,"

I believe this can best be done with reinforcement learning via Deep Q Learning. That's where I would start. Steps are:

  • Initialize a Q table.

  • Choose an action.

  • Perform the action.

  • Measure the reward.

  • Update the Q.

A neural net will approximate the Q function. See: https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0

Also consider policy gradients, actor critic, and inverse reinforcement learning.

",3861,,,,,10/3/2018 16:37,,,,0,,,,CC BY-SA 4.0 8243,1,8252,,10/3/2018 16:44,,3,4988,"

Imagine a system that controls dampers in a complex vent system that has an objective to perfectly equalize the output from each vent. The system has sensors for damper position, flow at various locations and at each vent. The system is initially implemented using a rather small data set or even a formulaic algorithm to control the dampers. What if that algorithm were programmed to ""try"" different configurations of dampers to optimize the air flows, guided broadly by either the initial (weak) training or the formula? The system would try different configurations and learn what improved results, and what worsened results, in an effort to reduce error (differential outflow).

What is that kind of AI system called? What is that system of learning called? Are there systems that do that currently?

",17061,,2444,,11/7/2020 17:23,11/7/2020 17:23,What is the name of an AI system that learns by trial and error?,,3,1,,,,CC BY-SA 4.0 8244,2,,8219,10/3/2018 16:47,,1,,"

Can a brain be intelligent without a body?

If you define ""intelligence"" as ""doing the right thing at the right time"", then the statement itself implies some sort of embodied context, whether humanoid, networked or otherwise.

If you have a more existential definition where by fact that there are internal workings, or goings on but aren’t apparent in any embodied output then one could argue either way. Akin to the simulated universe theory: either way the outcome is only how we think about it, rather than having an experimental truth.

If one can refine the question so that the outcome can be used practically then I believe that may be more useful.

",11893,,2444,,3/15/2020 0:08,3/15/2020 0:08,,,,0,,,,CC BY-SA 4.0 8245,2,,8243,10/3/2018 16:57,,2,,"

A close match to your problem definition is reinforcement learning. You can define a reward using the objective function, define a possible state space for the machine, and finally solve the problem with reinforcement learning techniques (which are close to trial and error, learning preferences from experience).

",4446,,,,,10/3/2018 16:57,,,,0,,,,CC BY-SA 4.0 8246,2,,7926,10/3/2018 17:10,,0,,"

Love the question. Firstly, as pointed out in other answers, narrow AI today is mostly algorithms following their procedures given some inputs. No need for philosophy here, as they are following reproducible steps.

However, if you're referring to general AI, or AI akin to human-level intelligence or better, then maybe the question holds some weight. But again, as pointed out, it would come back to whether you believe you or I indeed have free will.

For me, I believe free will can be modelled as a sort of entropy. If you look at the macro level, things are blurry; agents are making decisions and moving around in an unpredictable way. On the micro level, however, given all the data in one state, one could predict the next state, shattering the idea of free will. I guess it's up to you to decide whether this fits with your definition of free will or not.

",11893,,,,,10/3/2018 17:10,,,,0,,,,CC BY-SA 4.0 8247,2,,8240,10/3/2018 18:03,,2,,"

If the gradients are noisy (does this mean that in some dimension we have small and in some high curvature or that error noise differs for very similar values of w?)

Gradients being noisy means that they are "inconsistent" across different epochs / training steps. With that I mean that they'll sometimes point in one direction, later in a different (maybe the opposite) direction, etc., that the gradients at different time steps give inconsistent/conflicting information, that we don't consistently keep following the same direction through gradient descent but keep jumping all over the place. Note that this is across epochs, it's not across different dimensions within the same epoch.

So, for example, we might have a vector of gradients that looks like $[1, 1, 1, \dots, 1]$ at one time step (i.e., positive gradients in all dimensions), and the next training step get a gradient more like $[-1, -1, -1, \dots, -1]$ (this example is pretty much the "most extreme" kind of noise you can have, completely conflicting directions, in reality it'd generally be less extreme).

Such "noisy gradients" can exist because, in practice, we almost always use estimates of the gradient of our loss function, rather than the true gradients (which we'll generally not even be able to compute). For example, if we have a very large training dataset in a supervised learning setting, we'd ideally use the gradient of the loss function computed across the complete dataset. That tends to be computationally expensive (requires forwards and backwards passes for every single instance in the dataset), so in practice we'll often only use a small minibatch to estimate the gradient; this can result in widely different estimates of the gradient for different minibatches. (note: there can also be different reasons for using minibatches rather than full dataset, such as avoiding overfitting, but that's not too important to consider for this specific question)

A different example of a setting where we can have noisy gradients is in Deep Reinforcement Learning. There, computing the loss function itself is often rather noisy (in the sense that our own predictions, which tend to still be incorrect during the training phase, are a component of the targets that we're updating towards), so our estimates of the gradients will also be noisy.


Derivation for your second question, about the gradient of the loss with respect to $\mathbf{v}$:

We start with the following, from Equation (3) in the paper (this gradient you understand, right?):

$$\nabla_{\mathbf{v}} L = \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g \nabla_g L}{\vert \vert \mathbf{v} \vert \vert^2} \mathbf{v}$$

Note that Equation (3) also gives:

$$\nabla_{g} L = \frac{\nabla_{\mathbf{w}} L \cdot \mathbf{v}}{\vert \vert \mathbf{v} \vert \vert}$$

If we plug that into the previous Equation we get:

\begin{aligned} \nabla_{\mathbf{v}} L &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g \frac{\nabla_{\mathbf{w}} L \cdot \mathbf{v}}{\vert \vert \mathbf{v} \vert \vert}}{\vert \vert \mathbf{v} \vert \vert^2} \mathbf{v} \\ &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g \nabla_{\mathbf{w}} L \cdot \mathbf{v}}{\vert \vert \mathbf{v} \vert \vert^3} \mathbf{v} \\ \end{aligned}

The $\frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L$ term that shows up before the minus can also be isolated in the term after the minus:

\begin{aligned} \nabla_{\mathbf{v}} L &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \frac{\mathbf{v}}{\vert \vert \mathbf{v} \vert \vert^2} \mathbf{v} \\ &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \frac{\mathbf{v}}{\vert \vert \mathbf{v} \vert \vert} \cdot \frac{\mathbf{v}}{\vert \vert \mathbf{v} \vert \vert} \\ \end{aligned}

Equation (2) tells us that $\mathbf{w} = \frac{g}{\vert \vert \mathbf{v} \vert \vert} \mathbf{v}$, which means that $\frac{\mathbf{w}}{g} = \frac{\mathbf{v}}{\vert \vert \mathbf{v} \vert \vert}$. Plugging this into the above leads to:

\begin{aligned} \nabla_{\mathbf{v}} L &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \frac{\mathbf{w}}{g} \cdot \frac{\mathbf{w}}{g} \\ \end{aligned}

Below Equation (2), they also explain that $\vert \vert \mathbf{w} \vert \vert = g$, so we can rewrite the above to:

\begin{aligned} \nabla_{\mathbf{v}} L &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \frac{\mathbf{w}}{\vert \vert \mathbf{w} \vert \vert} \cdot \frac{\mathbf{w}}{\vert \vert \mathbf{w} \vert \vert} \\ &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L - \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \frac{\mathbf{w} \mathbf{w}'}{\vert \vert \mathbf{w} \vert \vert^2} \\ \end{aligned}

In that last step, the $'$ symbol in $\mathbf{w}'$ denotes that we transpose that vector.

Finally, because we have that common $\frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L$ before and after the minus symbol, we can pull it out, multiplying it once by the identity matrix $\mathbf{I}$, and once by the remaining negated term. This is very similar to how you would simplify something like $a - ax$ to $a (1 - x)$ in "standard" algebra; the identity matrix $\mathbf{I}$ plays a very similar role here to the number $1$ in "standard" algebra:

\begin{aligned} \nabla_{\mathbf{v}} L &= \frac{g}{\vert \vert \mathbf{v} \vert \vert} \nabla_{\mathbf{w}} L \cdot \left( \mathbf{I} - \frac{\mathbf{w} \mathbf{w}'}{\vert \vert \mathbf{w} \vert \vert^2} \right) \\ \end{aligned}

Note: in all of the above, I kind of neglected to pay attention to order of matrix/vector multiplications and their dimensions. Probably the $\nabla_{\mathbf{w}} L$ term should by now already have been moved to after those large brackets, instead of being before those brackets. In this case, it should work out fine regardless because $\mathbf{I}$ as well as $\mathbf{w} \mathbf{w}'$ are square and symmetric matrices. The only resulting difference is in whether you get a row or a column vector out at the end.

This can now finally be rewritten as the Equation you mentioned that you didn't understand yet, Equation (4) in the paper.
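If you want to sanity-check the result, here is a small numpy sketch (my own toy loss and arbitrary values, not from the paper) that compares the final expression derived above against a finite-difference gradient of $L$ with respect to $\mathbf{v}$:

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.normal(size=5)
    g = 1.7
    t = rng.normal(size=5)                    # arbitrary target used by the toy loss

    def w_from_v(v, g):
        return g * v / np.linalg.norm(v)      # Equation (2)

    def loss(w):
        return 0.5 * np.sum((w - t) ** 2)     # toy loss, grad_w L = w - t

    w = w_from_v(v, g)
    grad_w = w - t

    # Final expression derived above (Equation (4) in the paper)
    I = np.eye(len(v))
    rhs = (g / np.linalg.norm(v)) * (I - np.outer(w, w) / np.dot(w, w)) @ grad_w

    # Finite-difference gradient of L(w(v)) with respect to v
    eps = 1e-6
    fd = np.array([(loss(w_from_v(v + eps * e, g)) - loss(w_from_v(v - eps * e, g))) / (2 * eps)
                   for e in I])

    print(np.max(np.abs(rhs - fd)))           # ~1e-9, the two gradients agree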

",1641,,2444,,7/6/2020 23:12,7/6/2020 23:12,,,,3,,,,CC BY-SA 4.0 8248,2,,8243,10/3/2018 20:05,,1,,"

I think any learning algorithm probably uses trial and error and analysis of the results with the ultimate goal of maximizing utility.

It seems that the recent milestones in AI fall under the general umbrella of machine learning, which includes all forms of reinforcement learning. Essentially, any learning algorithm is using some form of statistical analysis.

  • For an umbrella term, I've been using ""learning algorithm""

However, there is also a venerable history of less capable adaptive systems such as self-organizing networks. (See also optimal control.)

",1671,,1671,,10/3/2018 20:20,10/3/2018 20:20,,,,0,,,,CC BY-SA 4.0 8249,2,,80,10/3/2018 20:53,,0,,"

If you want to understand relativity, read Einstein [1][2], not a book about relativity authored by a professor who thinks he's got it. If you want to understand Alan Turing's test for intelligence in the context of human dialog, read Turing [3]. Interpretations can be worse than worthless. They are often misleading. If the principles seem too thick, read it over again until you get it.

In the case of Turing's test for intelligence in the context of human dialog, to understand it fully, the following background is assumed when Turing wrote, which, if you read his 1950 article, will become apparent.

  • How Turing's completeness theorem responds to Kurt Gödel's second incompleteness theorem
  • The strategy of a controlled test
  • The difference between (a) hearing and speaking and (b) listening and wittily responding — This is particularly pertinent today because the chat-bots do (a) and could be anywhere from 5 to 500 years away from doing (b). To reach (c) deeply comprehending and responding with inspiration, AI researchers must go beyond modelling the human mind and approach the challenge of modelling the minds of people like Gödel, Einstein, and Turing. Whether that will ever occur is yet to be revealed.

The specific requirements of the Imitation Game, Alan Turing's subtitle above the description of his thought experiment, are a matter of record.

Specific Requirements [Excerpt from Actual Article]

[The imitation game] is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:

"My hair is shingled, and the longest strands are about nine inches long."

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator.

The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

There have been thousands of critiques of both Einstein's relativity and Turing's test, none of which add much value. Study the thinking of great contributors through their own words and all the refuse that follows will be interesting primarily in its lack of greatness.

Secondary Questions in This Thread

What requirements if any must the evaluator fulfill in order to be qualified to give the test?

The interrogator (C) is not an evaluator. Evaluation would be an attempt to be objective; however, the premise of Turing's thought experiment is that the interrogator provides her or his subjective judgment. From a statistics point of view, the interrogator should be selected randomly from the population of the world that shares a spoken language with (A) and (B).

Must there always be two participants in the conversation (one human and one computer) or can there be more?

There must be exactly two to fit the scenario described by Alan Turing. (See below for more detail.)

Are placebo tests (where there is not actually a computer involved) allowed or encouraged?

One could test all kinds of things, and researchers do, however, that would be outside of the scope of Turing's thought experiment.4

Can there be multiple evaluators? If so does the decision need to be unanimous among all evaluators in order for the machine to have passed the test?

What would reveal the most information to those that sponsor an actual Imitation Game would be a double blind fully randomized test where (A), (B), and (C) are pulled from as random a sample of those men, women, or software systems of the type under test that can converse in a common language, and the test would be run many times with random selections from the samples.

Unanimity, evaluation, additional complexity, and communication other than that which was specified by the test would only frustrate the cause, if one sticks with Turing's original intention regarding the question, "Can computers think?"

Other Views of Intelligence

Turing, as did René Descartes, who stated that machines will never pass a less controlled version of Turing's Imitation Game, saw intelligence through the lens of dialog. Others have considered other kinds of dialog and other contexts than dialog. I addressed this in another question:

Can a brain be intelligent without a body?

References and Footnotes

[1] Relativity: The Special and the General Theory by Albert Einstein, 1916

[2] The Principle of Relativity by Albert Einstein and Francis A. Davis, 1923

[3] A. M. Turing (1950) Computing Machinery and Intelligence. Mind 49: 433-460. https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[4] Turing's 1950 article did not recommend that his thought experiment should be embodied and used in commercial validation of future AI systems. Alan Turing was, however, concerned with practical computing at one specific point in his career. That was when the Nazis had overrun France, were pulverizing his homeland from the air, and had sunk a significant portion of the English Navy from below, with the help of Enigma cryptography.

",4302,,-1,,6/17/2020 9:57,10/3/2018 20:53,,,,0,,,,CC BY-SA 4.0 8250,1,,,10/3/2018 22:29,,2,292,"

I've been reading about expert systems and started reading about MYCIN.

I was astonished to find that MYCIN diagnosed patients better than the infectious diseases physicians.

http://www.aaaipress.org/Classic/Buchanan/Buchanan33.pdf

Since it had such a good success rate, why did it fail?

",18748,,18748,,2/8/2021 22:35,2/8/2021 22:35,Why did MYCIN fail?,,0,0,,,,CC BY-SA 4.0 8251,1,,,10/3/2018 23:11,,2,268,"

I have created a game on an 8x8 grid and there are 4 pieces which can move essentially like checkers pieces (Forward left or Forward right only). I have implemented a DQN in order to pull this off.

Here is how I have mapped my moves:

self.actions = {""1fl"": 0, ""1fr"": 1,""2fl"": 2, 
  ""2fr"": 3,""3fl"": 4, ""3fr"": 5,""4fl"": 6, ""4fr"": 7}

essentially I assigned each move to an integer value from 0-7 (8 total moves).

My question is: during any given turn, not all 8 moves are valid, so how do I make sure that when I call model.predict(state) the resulting prediction is a valid move? Here is how I am currently handling it.

def act(self, state, env):
    #get the allowed list of actions
    actions_allowed = env.allowed_actions_for_agent()

    #Do a random move if random # greater than epsilon
    if np.random.rand(0,1) <= self.epsilon: 
        return actions_allowed[random.randint(0, len(actions_allowed)-1)]

    #get the prediction from the model by passing the current game board
    act_values = self.model.predict(state)

    #Check to see if prediction is in list of valid moves, if so return it
    if np.argmax(act_values[0]) in actions_allowed:
        return np.argmax(act_values[0])

    #If prediction is not valid do a random move instead....
    else:
        if len(actions_allowed) > 0:
            return actions_allowed[random.randint(0,len(actions_allowed)-1)]

I feel like if the agent predicts a move, and if that move is not in the actions_allowed set I should punish the agent.

But because it doesn't pick a valid move I make it do a random one instead, which I think is a problem, because its bad prediction may ultimately end up still winning the game since the random move may have a positive outcome. I am at a total loss. The agent trains....but it doesn't seem to learn.... I have been training it for over 100k games now, and it only seems to win 10% of its games.... ugh.

Other helpful information: - I am utilizing experience replay for the DQN which I have based on the code from here:

Here is where I build my model as well:

self.action_size = 8
LENGTH = 8
def build_model(self):
    #builds the NN for Deep-Q Model
    model = Sequential() #establishes a feed forward NN
    model.add(Dense(64,input_shape = (LENGTH,), activation='relu'))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(self.action_size, activation = 'linear'))
    model.compile(loss='mse', optimizer='Adam')
",18244,,16565,,10/4/2018 12:31,10/4/2018 12:31,Using a DQN with a variable amount of Valid Moves per turn for a Board Game,,1,0,,6/8/2022 15:08,,CC BY-SA 4.0 8252,2,,8243,10/3/2018 23:52,,1,,"

I believe ""Reinforcement Learning"" is the term you are looking for (as mentioned by others as well) but keep in mind that the scope of your problem falls under the section of AI that is called Search.

Search algorithms are based upon experimenting with different actions (decisions) and selecting the one that minimizes an arbitrary cost function (reward), given the current and past problem states.

",15919,,,,,10/3/2018 23:52,,,,0,,,,CC BY-SA 4.0 8253,2,,8251,10/4/2018 5:31,,2,,"

I think instead of:

if np.argmax(act_values[0]) in actions_allowed:
        return np.argmax(act_values[0])

you can use something like:

allowed_values = act_values[0][actions_allowed]
return actions_allowed[int(np.argmax(allowed_values))]

That way you don't pick a random action when the best overall action is not allowed; instead you choose the best action among the allowed ones. You can do that by restricting the predicted values to the valid actions and taking the best of those, or by changing the values of invalid actions to a very small (very negative) number before the argmax.
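For example, here is a minimal numpy sketch (not your exact code) of the second option: mask out the values of disallowed actions before taking the argmax.

    import numpy as np

    def best_allowed_action(q_values, actions_allowed):
        # q_values: the 8 predicted values from model.predict(state)[0]
        # actions_allowed: list of valid action indices for the current turn
        masked = np.full_like(q_values, -np.inf)
        masked[actions_allowed] = q_values[actions_allowed]
        return int(np.argmax(masked))

    # Example: only actions 2 and 5 are legal this turn
    q = np.array([0.9, 0.1, 0.3, 0.2, 0.8, 0.4, 0.0, 0.7])
    print(best_allowed_action(q, [2, 5]))   # -> 5, the best legal action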

",16565,,16565,,10/4/2018 5:40,10/4/2018 5:40,,,,0,,,,CC BY-SA 4.0 8254,1,,,10/4/2018 6:25,,1,141,"

As I'm beginner in image processing, I am having difficulty in segmenting all the parts in DICOM image.

Currently, I'm applying watershed algorithm, but it segments only that part that has tumour.

I have to segment all parts in the image. Which algorithm will be helpful to perform this task?

The image below contains the tumour.

This image is the actual DICOM image

",18661,,2444,,5/2/2019 13:33,1/11/2023 19:03,How do I segment each part of a DICOM image?,,1,0,,,,CC BY-SA 4.0 8255,2,,8254,10/4/2018 7:49,,0,,"

I looked at some pictures from DICOM, and it seems that these images usually don't have many local minima, so I guess that's why watershed isn't optimal. However, it seems that in DICOM the color gradients are often strong, so I suggest using techniques that work well with gradients:

  • Direct Gradient computations, either convolutional gradients or morphological gradients (selects shape of your different objects)

  • Normalized Cut on Similarity Graphs

If you want to know more about these you can read this post.
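As a small illustration of the first suggestion, here is a hedged sketch using scipy's morphological gradient. The random array is just a placeholder for a real slice; with pydicom you would use something like pydicom.dcmread(...).pixel_array on your file instead.

    import numpy as np
    from scipy import ndimage

    # Placeholder instead of a real DICOM slice
    image = np.random.rand(128, 128)

    # Morphological gradient = dilation minus erosion; it highlights strong intensity edges
    gradient = ndimage.morphological_gradient(image, size=(3, 3))

    # Rough boundary map by thresholding the gradient
    boundaries = gradient > gradient.mean() + 2 * gradient.std()
    print(boundaries.sum(), 'boundary pixels found')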

",17759,,2444,,5/2/2019 13:35,5/2/2019 13:35,,,,0,,,,CC BY-SA 4.0 8258,1,8544,,10/4/2018 14:41,,5,8089,"

I'm looking for annotated dataset of traffic signs. I was able to find Belgium, German and many more traffic signs datasets. The only problem is these datasets contain only cropped images, like this:

While I need uncropped images (for the YOLO -- You Only Look Once -- network architecture).

I've been looking for hours but didn't find a dataset like this. Does anybody know about this kind of annotated dataset?

EDIT:

I prefer European datasets.

",18760,,18760,,10/5/2018 8:22,12/27/2019 9:54,Traffic signs dataset,,4,0,,1/29/2021 0:08,,CC BY-SA 4.0 8259,1,8287,,10/4/2018 17:54,,2,811,"

I am using the following perceptron formula $\text{step}\left(\sum(w_ix_i)-\theta \right)$.

Is $\theta$ supposed to be updated in a perceptron, like the weights $w_i$? If so, what is the formula for this?

I'm trying to make the perceptron learn AND and OR, but without updating $\theta$, I don't feel like it's possible to learn the case where both inputs are $0$. They will, of course, be independent of the weights, and therefore the output will be $\text{step}(-\theta)$, meaning $\theta$ (which has a random value) alone will determine the output.

",17488,,2444,,3/10/2020 22:41,9/13/2022 16:36,Is the bias supposed to be updated in the perceptron learning algorithm?,,1,0,,,,CC BY-SA 4.0 8261,1,8285,,10/4/2018 18:07,,2,749,"

As an Electronics & Communication Engineering student, I've heard some stories and theories along the lines of ""The math we have is not enough to complete a thinker-learner AI.""

What is the truth? Is humankind waiting for another Newton to invent a new calculus, or another Einstein or Hawking to complete quantum mechanics?

If so, what exactly do we need? What will we call it?

",18764,,3217,,10/6/2018 23:01,12/30/2018 20:51,Is known math really enough for AI,,1,0,,,,CC BY-SA 4.0 8263,2,,8258,10/4/2018 18:51,,0,,"

I searched the web but there are no such dataset published but Check this out

",18764,,,,,10/4/2018 18:51,,,,0,,,,CC BY-SA 4.0 8264,2,,8215,10/5/2018 6:03,,1,,"

Assuming that you do have a dataset (images + labels/bounding boxes) in a format that's required by the training model, you can fine-tune your existing model. You can choose to unlock the final few layers or leave all the layers unlocked during the training process. When I had performed such an experiment with RetinaNet I chose to unlock final few conv layers and was able to achieve slightly higher accuracy.

",18772,,,,,10/5/2018 6:03,,,,0,,,,CC BY-SA 4.0 8265,2,,8258,10/5/2018 6:36,,1,,"

Check this one by UCSD. It contains both video as well as images related to traffic signs. The annotations are present in csv

",18772,,,,,10/5/2018 6:36,,,,3,,,,CC BY-SA 4.0 8266,2,,8220,10/5/2018 8:46,,4,,"

What aspects of AI would be most applicable to creating a self learning game AI for a racing game (Q-Learning, NEAT etc)

In general, you are looking at a problem that involves sequential decision making, in a machine learning context.

If you are wanting to build an agent that can learn by receiving screen images, then NEAT cannot scale to that complexity directly. Although there might be clever combinations of deep learning and evolutionary algorithms you could apply, the most heavily explored and likely successful solution will be found in Deep Reinforcement Learning. Algorithms like DQN, A3C, A2C, PPO . . . there are dozens to consider, but all are based around agents using samples of experience to update functions that measure either the ""value"" of acting in a certain way (a policy) or estimate the best policy directly.

Could a ANN or NEAT that has learned to play Mario Kart 64 be used to learn to play another racing game?

Within limits, yes. You will have built a system that takes pixel inputs, and outputs controller messages. If you re-start training from scratch on any compatible N64 game (with same screen resolution and same controller outputs) there is a chance it could learn to play that new game well. As other driving games on the N64 are a subset of all games, and more similar to each other than, say, a scrolling shooter or adventure game, then an agent that can successfully learn one will likely learn another too.

It is unlikely that a Mario Kart agent will immediately be good at another game without re-training. The visual and control differences will probably be too much. You could try though. An interesting experiment would be to take your trained agent, or some part of it, and see if starting with that improves learning time on a new game. This is called Transfer Learning.

What books/material should i read up on to undertake this project?

You will need at least the first parts of an introduction to Reinforcement Learning. If your goal is to aim to just make the agent, then you can skip theory-heavy parts, but within limits the more theory you understand, the easier it will be to change code features towards getting something working. I can suggest the following:

What other considerations should i take throughout this project?

Before you get started, you should know this could be an ambitious project, requiring a lot of compute time using currently-available toolkits. You need to think ahead a little, as you will be faced with these decisions:

  • Is it more important to you to make a working bot on this problem, or more important to understand the underlying theories so that you understand what is going on. You will need to do at least a little of both, but you can go strongly in either direction:
    • There are enough pre-built learning agents available on the internet that you might have a successful project by learning just enough to wire up a copy of the game, and then letting the learning agent do its job. Would this be enough for you, would you feel that you had solved your problem?
    • There is plenty of educational material about RL and how the maths behind it works. If you are interested in understanding that, and perhaps generating your own ideas for improvements to current algorithms, then you need to study that material harder. However, this may lead you away from your Mario Kart player goal, and you may never actually solve it, because far simpler problems that require less computing power are still actually interesting academically.

In addition, I can think of the following:

  • Building your Mario Kart player is a long-term goal. You will need to start with simpler agents in order to understand the techniques that you hope to apply to Mario Kart.

  • You may need more compute power to solve this game than you have available.

  • You will need to solve the issue of automating control of a N64 emulator. There is at least an existing emulator Mupen64Plus - I do not know whether it will be adequate for you, but at least one person has attempted to wrap this for automated learning, in the gym-mupen64plus project.

",1847,,1847,,10/5/2018 8:54,10/5/2018 8:54,,,,0,,,,CC BY-SA 4.0 8267,1,,,10/5/2018 13:09,,2,268,"

In reinforcement learning, the system sets some controllable variables, and then determines the quality of the result of the dependent variable(s); using that "quality" to update the algorithm.

In simple games, this works fine because for each setting there is a single result.

However, for the real world (e.g. an airflow system), the result takes some time to develop and there is no single precise "pair" result to the conditions set. The flow change takes time and even oscillates a bit as flow stabilizes to a steady-state.

In practical systems, how is this "lag" accounted for? How are the un-settled (false) results ignored? How is this noise distinguished from exogenous factors (un-controlled system inputs e.g. an open window exposed to wind)?

",17061,,2444,,6/20/2020 10:39,6/20/2020 10:39,"How are ""lags"" and ""exogenous factors"" accounted for in reinforcement learning?",,1,0,,,,CC BY-SA 4.0 8268,2,,8267,10/5/2018 13:34,,3,,"

One of Reinforcement Learning's core features is the ability to deal with delayed rewards/punishments - i.e. rewards that may occur as a consequence of a decision that occurred multiple time steps ago. That is because the value that it optimises is defined as a long-term sum of immediate rewards. This value is often called the return or utility and although it will take short term rewards into account, they do not have to be the dominant factors, and often are not.

All RL solvers are designed to solve this problem; it is core to the MDP formalism used in RL theory. You can decide to focus on immediate rewards or longer-term ones by adjusting the discount factor (often represented as $\gamma$ in RL equations). Low discount factors will cause an agent to prefer short-term rewards, while high discount factors will cause the agent to prefer longer-term sums of reward. In continuous problems you must use $\gamma \lt 1$ or instead use average reward as the objective to be maximised.
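As a concrete illustration (a minimal sketch, not tied to any particular RL library), the return is the discounted sum $G_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots$, and the choice of $\gamma$ determines how much a delayed reward is worth to the agent:

    def discounted_return(rewards, gamma):
        g = 0.0
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # A delayed payoff: nothing for 9 steps, then +10
    rewards = [0.0] * 9 + [10.0]
    print(discounted_return(rewards, gamma=0.5))    # ~0.02: a myopic agent barely values it
    print(discounted_return(rewards, gamma=0.99))   # ~9.14: a far-sighted agent values it highly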

One classic example of this kind of delay problem is the Mountain Car environment. Here, reward is only granted for reaching a certain point, and it is not possible to reach it by just taking the obvious action of moving directly towards the objective. Most RL algorithms can solve this problem, and get close to optimal solutions, purely from experiencing the final reward and associating it progressively with earlier and earlier decisions.

In practical systems how is this ""lag"" accounted for?

As above, this is inherent to RL, as opposed to simplified versions such as Contextual Bandits which don't have time steps and evolving state.

In practice, it may take some experimentation to find the best RL algorithm and solve a problem efficiently. Environments with very large delays and lots of noise are harder to solve.

It is also really important to set the reward mechanism for your true goals, and not some intermediate ones. If your goal is a stable state, you need to reward that, and not necessarily maximum throughput. If both throughput and stability are important to the end result, you may need to reward both - e.g. reward throughput, but penalise short-term fluctuations.

How are the un-settled (false) results ignored?

In general, handling short-term vs long-term effects of actions is known as the credit assignment problem.

This is most often solved statistically, over long term experience, as the agent associates not just immediate reward with an action, but the long term return after making each decision. Different RL solvers have different mechanisms for effectively moving this association back through time.

The key thing here is that an RL solver learns to maximise expected return or utility from any given state. Working with the expected value allows for smoothing out of variations, provided the agent gains enough experience to approach a statistical mean. Extremely rare and large variations can cause problems with this, and might need to be explicitly coded for as opposed to learned.

How is this noise distinguished from exogenous factors (un-controlled system inputs e.g. an open window exposed to wind)?

This is trickier, and depends more critically on the problem.

A RL-based system that is exposed to these variations whilst learning, and some way to represent their effects in the state variables should learn to pick the best action when these external factors impact it.

RL systems based on maintaining a stable state can sometimes do well when they are moved out of that stable state. A good example here might be pole balancing, where the agent learns enough variations of position and speed that if something knocks the pole out of balance, it will still right itself even though the agent was not trained to react to such events directly.

However, in some cases, if knowledge is not available to the agent, or it has not been trained in that particular scenario, it will fail to generalise and may behave in counter-productive ways.

Either way, if you are concerned about specific scenarios, you need to include them in training, or at least test for them after training.

",1847,,1847,,10/5/2018 16:45,10/5/2018 16:45,,,,0,,,,CC BY-SA 4.0 8270,1,,,10/5/2018 16:47,,1,1316,"

I'd like to implement a partially connected neural network with ~3 to 4 hidden layers (a sparse deep neural network?) where I can specify which node connects to which node from the previous/next layer. So I want the architecture to be highly specified/customized from the get-go and I want the neural network to optimize the weights of the specified connections, while keeping everything else 0 during the forward pass AND the backpropagation (connection does not ever exist).

I am a complete beginner in neural networks. I have been recently working with tensorflow & keras to construct fully connected deep networks. Is there anything in tensorflow (or something else) that I should look into that might allow me to do this? I think with tf, I should be able to specify the computational graph such that only certain connections exist but I really have no idea yet where to start from to do this...

I came across papers/posts on network pruning, but it doesn't seem really relevant to me. I don't want to go back and prune my network to make it less over-parameterized or eliminate insignificant connections.

I want the connections to be specified and the network to be relatively sparse from the initialisation and stay that way during the back-propagation.

",18788,,16920,,4/9/2019 10:07,11/10/2022 2:02,How to create Partially Connected NNs with prespecified connections using Tensorflow?,,3,0,,,,CC BY-SA 4.0 8271,2,,7528,10/5/2018 17:06,,2,,"

Fear of this kind is an irrational response (large negative incentive in response to a small risk). Modeling fear would need to model a ""grossness"" factor associated with, for example, spiders so that the normally un-proportional response would occur. The ""grossness"" factor could be manifested in many other forms to magnify a response to a previously unpleasant, though not particularly dangerous, experience. Such fear can also be inspired by hearsay (think hysteria caused by a sensational news story). A NN would normally only respond minimally to a minimal risk.

",17061,,,,,10/5/2018 17:06,,,,0,,,,CC BY-SA 4.0 8273,1,,,10/5/2018 19:59,,0,1443,"

We are currently working on developing a 3D modeling software that allows designers to set spatial constraints to models. The computer then should generate a 3D mesh conforming to these constraints.

Why should or shouldn't we use Lisp for the constraint satisfaction part? Will Prolog environment be any better? Or should we stick to C/C++ libraries?

One requirement we have is that we want to use the Unity Game Engine as it has a lot of 3D tools built in

",18726,,,,,12/19/2021 4:07,What are the advantages and disadvantages of using LISP for constraint satisfaction in 3D space,,1,1,,,,CC BY-SA 4.0 8274,1,,,10/5/2018 21:52,,3,816,"

I implemented an image segmentation pipeline and I trained it on the DICOM dataset. I compared the results of the model with manual segmentation to find the accuracy. Are there other methods for evaluation?

",18661,,2444,,5/4/2019 15:53,8/2/2020 12:51,Which evaluation methods can I use for image segmentation?,,2,0,,,,CC BY-SA 4.0 8276,2,,8273,10/5/2018 22:53,,1,,"

This is actually a question which will only receive opinion-based answers. A question you should ask yourself is whether the constraint part is really so complex that it is worth using a different programming language. Unity itself offers a C# API [1], and I would therefore stick with that.

[1] https://unity3d.com/programming-in-unity

",10228,,,,,10/5/2018 22:53,,,,0,,,,CC BY-SA 4.0 8277,2,,8274,10/5/2018 23:44,,2,,"

See:

Martin Thoma: A Survey of Semantic Segmentation, Section III

Subsection A is about metrics and B is about datasets.

Metrics include: accuracy, IoU, frequency weighted IoU, F-beta score, speed, ...
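For instance, a minimal sketch of the IoU (Jaccard) metric for a single binary class, computed from a predicted mask and the manual (ground-truth) mask, could look like this; for multiple classes you would average the per-class IoU:

    import numpy as np

    def iou(pred, target):
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return intersection / union if union > 0 else 1.0

    pred = np.array([[0, 1, 1], [0, 1, 0]])
    target = np.array([[0, 1, 0], [0, 1, 1]])
    print(iou(pred, target))   # 2 overlapping pixels / 4 pixels in the union = 0.5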

",3217,,,,,10/5/2018 23:44,,,,0,,,,CC BY-SA 4.0 8279,1,,,10/6/2018 8:46,,1,268,"

I was reading a machine learning book that uses probabilities like these:

$P(x;y), P(x;y,z), P(x,y;z)$

I couldn't find what they mean or how to read and understand them.

For context, here is one place where I saw one of these probabilities used:

",9941,,16565,,11/7/2018 15:57,5/6/2019 17:04,"What are the meanings of these (P(x;y), P(x;y,z),P(x,y;z))?",,1,2,,,,CC BY-SA 4.0 8281,1,,,10/6/2018 11:49,,3,4996,"

With reference to the research paper entitled Sentiment Embeddings with Applications to Sentiment Analysis, I am trying to implement its sentiment ranking model in Python, for which I am required to optimize the following hinge loss function:

$$\operatorname{loss}_{\text {sRank}}=\sum_{t}^{T} \max \left(0,1-\delta_{s}(t) f_{0}^{\text {rank}}(t)+\delta_{s}(t) f_{1}^{\text {rank}}(t)\right)$$

Unlike with the usual mean squared error, I cannot work out its gradient in order to perform backpropagation.

How do I calculate the gradient of this loss function?

",18804,,40434,,8/31/2022 16:17,11/1/2022 3:05,How do I calculate the gradient of the hinge loss function?,,2,1,,,,CC BY-SA 4.0 8283,1,8286,,10/6/2018 15:40,,1,81,"

Example: Texas Holdem poker vs Texas Holdem poker with the same rounds, just with no public cards dealt.

Would algorithms, like CFR, approximate the Nash equilibrium more easily? Could AI that does not look at public cards achieve similar performance in normal Texas Holdem as AI that looks at public state tree?

",18808,,2444,,12/31/2021 9:48,12/31/2021 9:48,What's the difference between poker with public cards and without them?,,1,0,,,,CC BY-SA 4.0 8284,1,8401,,10/6/2018 18:29,,6,1155,"

I have a neural network with the following structure:

I am expecting specific outputs from the neural network which are the target values for my training. Let's say the target values are 0.8 for the upper output node and -0.3 for the lower output node.

The activation functions used for the first 2 layers are ReLU or LeakyReLU, while the last layer uses atan as its activation function.

For backpropagation, instead of adjusting values to make the network's output approach 0.8 and -0.3, is it suitable to use the inverse function of atan -- which is tan itself -- to get ""the ideal input to the output layer multiplied by weights and adjusted by biases""?

The tan of 0.8 and -0.3 is 0.01396 and -0.00524 approximately.

My algorithm would then adjust weights and biases of the network so that the ""pre-activated output"" of the output layer -which is basically (sum(output_layer_weight*output_layer's inputs)+output_layer_biases)- approaches 0.01396 and -0.00524.

Is this suitable?

",18640,,,,,10/14/2018 5:10,Is it suitable to find inverse of last layer's activation function and apply it on the target output?,,4,6,,,,CC BY-SA 4.0 8285,2,,8261,10/6/2018 19:18,,4,,"

Maybe.

AI has a long history of encountering mathematical impossibilities and then working around them already. While the individuals who solved these problems don't get as much press as Newton, Einstein, or Hawking, a case could be made that their contributions to human knowledge are on a similar scale. Unfortunately, their results don't relate to physical systems, so they can be harder to explain to the layperson.

Something to keep in mind is that your question assumes the correctness of the ""Great Man Theory"" of science history, which holds that science advances by the efforts of exceptional people, and that we need to wait around for more such people to appear for science to advance. This view of scientific history is overly simplistic, and probably wrong. For example, most (all?) of Newton's discoveries were likely to have been made by someone else, around the same time, if he didn't make them (see, e.g. Leibnitz, who actually did discover calculus at the same time), so a better view might be of a large community of researchers who gradually develop more advanced models based on each others' work.

To answer your question, I've listed out some past examples of problems that were overcome, and some outstanding problems that might require the development of new mathematical tools to solve properly. Keep in mind though, that we can't know for sure whether new tools are needed: maybe existing tools are sufficient, but no one has applied them in the right way yet!

Some past problems that plagued AI and required the development of better mathematical approaches:

  1. The combinatorial explosion problem appeared to preclude having an AI system reason about probabilities and causation in a logically correct way. This problem was solved by the development of Bayesian networks and causal networks, with the work being led by Judea Pearl and his students. Pearl won a Turing award for this, but has received very little coverage in the popular press.

  2. Many optimization problems that AI systems need to solve are NP-hard. This means that no general purpose algorithms exist that can give exact solutions to these problems in a polynomial number of steps. Because of this, the problems were initially viewed as intractable. The development of PTAS algorithms, and a deep understanding of phase-changes in NP-Hard problems (pioneered by Cheeseman et al.) led to an ability to identify exactly what makes these problems hard, to identify which subparts cause the hardness, and to practical algorithms that solve all sorts of problems in these domains.

  3. Much effort in AI was spent designing new classification methods, and arguing about why one might or might not be better than another. The No Free Lunch Theorems, along with other work in computational learning theory provided a clear mathematical framework for understanding how systems can learn, and where their limits are.

  4. As a more recent example, games involving chance, like Poker, have state spaces so vast that they cannot be searched through effectively, even with heuristics. Phrasing these games in the language of Game Theory, and proving the convergence of the counterfactual regret minimization algorithm, provided solutions. Bowling et al. were at the center of this work. It got some press coverage, but most of the coverage focused on ""computers can play poker"", and not the exciting, more generalizable lessons AI researchers had learned in the process.

Some problems that are still outstanding, and might require new mathematical techniques:

  1. Are there efficient algorithms for solving problems in the complexity classes PPAD, NP, or #P? Many AI problems fall into these categories. We do not know the answer definitively, and although most researchers suspect that no such algorithms exist, understanding why not seems like it could also provide major research advantages. Existing proof techniques do not seem likely to crack this problem.

  2. We have no good mathematical models of subjective experience. While some researchers (like Churchland) think this is not likely to play a major role in the development of intelligent systems, a concrete model could either support or refute that view, and might provide solid frameworks for current problems in AI like the study of motivation.

  3. The original project of the social sciences was to provide mathematical models of individual humans, and then of human societies. This has mostly been abandoned (Psych is still at it, but Economics prefers to study rational agents instead of humans, and most of the others gave up mathematical modeling entirely to pursue the methodologies of the humanities instead of the sciences). Nonetheless, the lack of sound mathematical descriptions of human behaviours is becoming a major topic within AI, with work on norms, trust, emotion, and other topics. If a mathematical framework were developed to describe the actions of human societies or of individual humans, then AI would be advanced significantly (along with many other fields!).

",16909,,16909,,12/30/2018 20:51,12/30/2018 20:51,,,,0,,,,CC BY-SA 4.0 8286,2,,8283,10/6/2018 19:35,,1,,"

It depends a little on what you mean by ""the same rounds, just with no public cards dealt.""

If you mean that each player will just be dealt 2 cards, and no public cards exist, then really we're playing a sort of ""high card"" game. The best hand is just a pair of aces, CFR will solve this quickly, because the number of possible game states is extremely small compared to a full poker game (especially if we exploit the symmetry of suits, since flushes aren't possible).

If you mean that each player will be dealt 5 cards, with several rounds of betting as before, CFR will probably do less well. The state space will be larger, since there are more cards in play (10 instead of 9). Betting may become more complex, and more complex betting expands the state space enormously.

If you mean that the cards are dealt as before, but the program simply will not look at the cards, then you've kept the state space of the game the same size, but radically reduced the number of information sets. Playing against an opponent who can look at the cards on the table, your program would be at an enormous disadvantage. For instance, imagine the cards on the table are ""3 3 8 8 5"", and that you have a pair of 2's in your hand. You would want to play very differently from if the cards on the table were ""2 2 10 4 7"", but an AI without access to the table cards would have to act the same in both situations.

",16909,,,,,10/6/2018 19:35,,,,5,,,,CC BY-SA 4.0 8287,2,,8259,10/6/2018 23:11,,0,,"

I found the answer to my question.

Treat $\theta$ as a normal weight, associated with an input that always equals $-1$.
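A minimal numpy sketch of the trick (my own toy example, learning AND): append a constant $-1$ input, so that $\theta$ becomes just another weight updated by the usual perceptron rule.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])                    # AND
    X = np.hstack([X, -np.ones((4, 1))])          # extra input fixed at -1; its weight plays the role of theta

    w = np.random.rand(3)                         # [w1, w2, theta]
    lr = 0.1
    for _ in range(100):
        for xi, target in zip(X, y):
            out = 1 if np.dot(w, xi) >= 0 else 0  # step(sum(w_i x_i) - theta)
            w += lr * (target - out) * xi         # theta is updated like any other weight

    print([1 if np.dot(w, xi) >= 0 else 0 for xi in X])   # -> [0, 0, 0, 1]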

",17488,,2444,,3/10/2020 22:42,3/10/2020 22:42,,,,0,,,,CC BY-SA 4.0 8289,2,,8279,10/7/2018 1:16,,2,,"

This means ""Parameterized by"".

First, we all agree on the idea of conditional probabilities:

$$P(X | Y) = P(X,Y) / P(Y)$$

That is, the probability that X happens given that we've seen Y happen, is the fraction of worlds in which Y happened that also contain X. This is uncontroversial.

If you're a Bayesian, you might view parameters themselves as variables in a statistical model. So you might want to speak about the probability of a parameter taking on a certain value, or the probability of the data given that certain parameters have taken on certain values. In that case, you might write something like $P(D | \theta)$ to denote the probability of the data given a parameter $\theta$.

If you're a frequentist, you might find notation like this unsettling, because parameters don't have probabilities under the frequentist view, but instead have fixed values. You could talk about the probability of observing the data under a particular parameterization by defining a family of parameterized probability density functions, and then writing something like $P(D;\theta)$. You might also do this as a Bayesian if you wanted to clearly differentiate between model parameters and other things you might observe.

$P(x;y)$ then is read as ""The probability of x under probability density function P, parameterized by y.""

$P(x;y,z)$ is ""The probability of x under probability density function P, parameterized by y and z.""

$P(x,y;z)$ is ""The probability of x and y jointly, under probability density function P, parameterized by z.""

You could also write something like $P(X|Y; z)$, which would be ""The conditional probability of X given that we observe Y, under probability density function P, parameterized by z.""

An example of the latter would be in something like logistic regression. We might wish to know $P(L | D ; \theta)$, where L is the label of the data, D are the features we observe, and $\theta$ are the coefficients of the regression.
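A small illustration of the ""parameterized by"" reading: the same density function $p(x; \mu, \sigma)$ evaluated under two different parameter settings. The parameters are not random variables we condition on; they just select which member of the family we use.

    from scipy.stats import norm

    x = 1.0
    print(norm.pdf(x, loc=0.0, scale=1.0))   # p(x; mu=0, sigma=1)
    print(norm.pdf(x, loc=2.0, scale=0.5))   # p(x; mu=2, sigma=0.5)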

",16909,,,,,10/7/2018 1:16,,,,0,,,,CC BY-SA 4.0 8290,1,,,10/7/2018 2:03,,0,41,"

The subject matter is to count the number of people in a large room, wherein a camera is placed in a very high ceiling: an example would be Grand Central Station. Faces are not visible: the scalp (top of the head) is visible to the camera as shown in the link's video.

The goal: I would like to perform a Google literature search to assess the work that has been performed on overhead head recognition. However, I am not sure what the best keywords are to describe the object to be recognized (scalp? head? people?) or the camera viewpoint (overhead? bird's eye? satellite?). I'd like the search to return leading-edge (AI) techniques with benchmarked results.

",18819,,18819,,10/8/2018 20:56,10/8/2018 20:56,Keywords to describe people counting from a camera?,,1,2,,,,CC BY-SA 4.0 8291,1,8343,,10/7/2018 2:15,,0,68,"

I have written my own basic convolutional neural network in Java as a learning exercise. I am using it to analyze the MIT CBCL face database image set. They are a set of 19x19 pixel greyscale images.

Network specifications are:

  • Single convolution layer with 1 filter: filter size 4x4, stride 1

  • Single pooling layer: 2x2 max pooling

  • 3-layer MLP (input, 1 hidden, and output layer): input = 64 neurons, hidden = 15 neurons, output = 2 neurons, learning rate = 0.1

Now I am getting reasonable accuracy (92.85%), but my issue is that it is being achieved at very different points in the epoch count across network runs:

        Epochs  Training Accuracy  Test Accuracy  Validation Accuracy
Run 1   415     93.13              92.44          93.35
Run 2   515     92.44              93.18          92.84
Run 3   327     93.83              92.05          92.38

I am using the Java Random class with the same seed for every run to initialize the kernel and the MLP weights, and to break the input data into 3 sets (training is being done using the 33-33-33 method).

I am at a loss as to what is causing this variation in the epoch count needed to reach the highest validation accuracy. Can anybody explain this?

",18818,,10135,,10/18/2018 10:44,10/18/2018 10:44,Huge variations in epoch count for highest generalized accuracy in CNN,,1,0,,,,CC BY-SA 4.0 8293,1,8295,,10/7/2018 9:55,,16,5133,"

I've been looking at reinforcement learning, and specifically playing around with creating my own environments to use with the OpenAI Gym AI. I am using agents from the stable_baselines project to test with it.

One thing I've noticed in virtually all RL examples is that there never seem to be any dropout layers in any of the networks. Why is this?

I have created an environment that simulates currency prices and a simple agent, using DQN, that attempts to learn when to buy and sell. Training it over almost a million timesteps, taken from a specific set of data consisting of one month's worth of 5-minute price data, it seems to overfit a lot. If I then evaluate the agent and model against a different month's worth of data, it performs abysmally. So it sounds like classic overfitting.

But is there a reason why you don't see dropout layers in RL networks? Are there other mechanisms to try and deal with overfitting? Or in many RL examples does it not matter? E.g. there may only be one true way to reach the ultimate high score in the 'breakout' game, so you might as well learn that exactly, with no need to generalise?

Or is it deemed that the chaotic nature of the environment itself should provide enough different combinations of outcomes that you don't need to have dropout layers?

",18372,,,,,10/9/2018 17:32,Why do you not see dropout layers on reinforcement learning examples?,,1,3,,,,CC BY-SA 4.0 8295,2,,8293,10/7/2018 13:46,,15,,"

Dropout essentially introduces a bit more variance. In supervised learning settings, this indeed often helps to reduce overfitting (although I believe there dropout is also already becoming less.. fashionable in recent years than in the few years before that; I'm not 100% sure though, it's not my primary area of expertise).

In Reinforcement Learning, additional variance is not really what we're looking for. There already tends to be a large amount of variance in the learning signals that we get, and this variance already tends to be a major issue for learning stability and/or learning speed. For example:

  • Randomness in action selection leads to variance in returns that we observe
  • There may be randomness inherent to the environment itself, leading to extra variance in our observations (some environments are nondeterministic)
  • Unlike Supervised Learning settings, in Reinforcement Learning we often actually use our own predictions as a part of our loss function / training signal. For example, in temporal-difference learning (like Q-learning / DQN), the target that we update towards looks like $r + \gamma \max_{a'} Q(s', a')$. In that term, only the $r$ is a ground-truth observation (like we would use in supervised learning), and the other term is our own prediction. During a learning process, those latter parts (our own predictions) are changing over time. This is a ""moving target"" problem, which can be viewed as additional variance in our learning signals.

Many important parts of Deep RL algorithms (without which our training processes empirically turn out to destabilize and break down) are very much tailored towards reducing that variance. For example, Target Networks in DQN were introduced specifically to reduce the moving target problem. From this point of view, it's not surprising that if we were to add more artificial variance through other means again (such as dropout), that this would hurt performance / destabilize learning.
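To make the target network idea concrete, here is a minimal sketch (not taken from any particular DQN implementation) of computing the TD targets with a frozen copy of the network, which is only synced with the online network every so many steps:

    import numpy as np

    def td_targets(rewards, next_q_target, dones, gamma=0.99):
        # next_q_target: Q-values of the next states from the *frozen* target network
        # dones: 1.0 where the episode terminated, so no bootstrap term is added
        return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)

    rewards = np.array([1.0, 0.0])
    next_q_target = np.array([[0.2, 0.8], [0.5, 0.1]])
    dones = np.array([0.0, 1.0])
    print(td_targets(rewards, next_q_target, dones))   # [1.792, 0.0]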


Are there other mechanisms to try and deal with overfitting? Or in many RL examples does it not matter? E.g. there may only be one true way to reach the ultimate high score in the 'breakout' game, so you might as well learn that exactly, with no need to generalise?

In the majority of current (Deep) Reinforcement Learning research, overfitting is indeed not viewed as a problem. The vast majority of RL research consists of training in one environment (for example Cartpole, or Breakout, or one particular level in Pacman, or navigating in one specific maze, etc.), and either constantly evaluating performance during that learning process, or evaluating performance after such a learning process in the same environment.

If we were to compare that evaluation methodology to what happens in supervised learning... we are basically evaluating performance on the training set*. In supervised learning, this would be absolutely unacceptable, but in RL it is very much treated as acceptable and more rule than exception. Some say this is simply a problem in current RL research, something that needs to change. It could also be argued that it's not necessarily a problem; if we really are able to train the agent in precisely the same environment that we wish to deploy it in later... well, then what's the problem with it overfitting to that environment?

So, when we're using the evaluation methodology described above, indeed we are overfitting to one specific environment, but overfitting is good rather than bad according to our evaluation criteria. It is clear that this methodology does not lead to agents that can generalize well though; if you consistently train an agent to navigate in one particular maze, it will likely be unable to navigate a different maze after training.

*Note: the truth, in my opinion, is slightly more nuanced than that we are really ""evaluating on the training set"" in RL. See, for example, this nice thread of tweets: https://twitter.com/nanjiang_cs/status/1049682399980908544


I have created an environment that simulates currency prices and a simple agent, using DQN, that attempts to learn when to buy and sell. Training it over almost a million timesteps, taken from a specific set of data consisting of one month's worth of 5-minute price data, it seems to overfit a lot. If I then evaluate the agent and model against a different month's worth of data, it performs abysmally. So it sounds like classic overfitting.

Note that your evaluation methodology described here indeed no longer fits the more ""common"" evaluation methodology. You have a problem with concept drift, with nonstationarity in the environment. This means overfitting may be a problem for you.

Still, I'm not sure if dropout would help (it's still additional variance which may hurt). First and foremost, you'd want to make sure that there's some way to keep track of the time / month in your inputs, such that you'll at least have a chance of learning a policy that adapts itself over time. If you have a clear, solid boundary between ""training phase"" and ""evaluation phase"", and you know that concept drift occurs across that boundary (you know that your environment behaves differently in the training phase from the evaluation phase)... you really don't have much hope of learning a policy only from experience in the training phase that still performs well in the evaluation phase. I suspect you'll have to get rid of that clear, solid boundary. You'll want to keep learning throughout the evaluation phase as well. This enables your learning algorithm to actually collect experience in the changed environment, and adapt to it.

",1641,,1641,,10/9/2018 17:32,10/9/2018 17:32,,,,2,,,,CC BY-SA 4.0 8297,1,,,10/7/2018 17:37,,3,48,"

I'm programming on Connect6 with MCTS.

Monte Carlo Tree Search is based on random moves: it counts up the number of wins following certain moves, regardless of whether it wins in 3 turns or in 30 turns.

Is a move that wins in fewer turns more powerful than one that wins in more turns? (MCTS just sees whether the result is a win or not, not the number of turns it took to win.) And if so, is it meaningful to give a bigger weight to wins that take fewer turns?

",18844,,1641,,10/8/2018 19:35,11/7/2018 20:00,Is it meaningful to give more weight to the result of monte carlo search with less turn win?,,1,0,,,,CC BY-SA 4.0 8300,2,,8297,10/7/2018 19:46,,2,,"

Traditionally (when not considering your idea), the evaluation function for terminal game states would be implemented to return $1$, $0$, or $-1$ for wins, draws, or losses, respectively.

Changing that in a naive/straightforward way to make short-term wins more rewarding, long-term wins less rewarding, short-term losses more negative, and long-term losses less negative can be dangerous, it may change the objective that your agent is ultimately optimizing for (i.e. may lose the guarantee of converging towards optimal play given an infinite amount of time) if not done very carefully.

There is definitely value in considering the idea though, especially because in the Play-Out phase of MCTS, trajectories of (semi-)random moves introduce uncertainty in the evaluations at the end of those simulations, and this uncertainty increases as the length of the trajectories increases (due to increased number of uninformed decisions being made along the trajectory). Note that it is especially important to take into consideration here the number of moves played in the Play-Out phase, not necessarily including the number of moves made in the Selection phase (which are selected according to a much more informed strategy).
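One simple variant (my own sketch, not the scheme from the paper cited below) is to shrink the terminal evaluation towards $0$ as the play-out gets longer, reflecting the growing uncertainty of long random simulations; as noted above, this changes the objective, so it should be applied with care:

    def playout_value(result, playout_length, decay=0.99):
        # result: +1 win, 0 draw, -1 loss at the end of the play-out phase
        return result * (decay ** playout_length)

    print(playout_value(+1, 3))    # ~0.970: a quick win, trusted more
    print(playout_value(+1, 30))   # ~0.740: a long win, trusted less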

One paper I know of that investigates ideas along these lines is ""Quality-based Rewards for Monte-Carlo Tree Search Simulations"".

",1641,,,,,10/7/2018 19:46,,,,0,,,,CC BY-SA 4.0 8302,2,,8026,10/7/2018 21:50,,1,,"

There are a lot of other potential applications. It's a good idea to start with GPU related problems, since GPUs are essentially doing a slightly wider set of operations, slightly slower. Some possible problems where TPUs might be advantageous are:

  • Shaders are algorithms for rendering graphics in one style or another. Since computer graphics can be understood as mostly linear algebra, it is natural to view this as operations over tensors.

  • Physical simulations, which again involve multiplication of vectors by a series of matrices.

  • Options Pricing, which again involves the multiplication of vectors by a series of matrices, especially when more complex derivatives are prices and the lattice becomes multi-dimensional.

Within AI, there are many other algorithms optimized to work with GPUs, and that could be modified to work with tensor specific hardware. For example, we have:

There does not yet seem to be much work optimizing these other problems for Tensor Processing units, but TPUs are also not yet very old. It took several years following the availability of inexpensive consumer GPUs before we started to see widespread use in AI. I suspect we will see more TPU-tailored code for other problems soon.

",16909,,,,,10/7/2018 21:50,,,,2,,,,CC BY-SA 4.0 8303,1,8308,,10/7/2018 23:58,,1,70,"

I want to implement a neural network on a big dataset. But training time is long (~1h30 per epoch). I'm still in the development process, so I don't want to wait such long time just to have poor results at the end.

This and this suggest that overfitting the network on a very small dataset (1 ~ 20 samples) and reach a loss near 0 is a good start.

I did it and it works great. However, I am looking for the next step of validating my architecture. I tried to overfit my network over 100 samples, but I can't reach a loss near 0 in reasonable time.

How can I ensure the results given by my NN will be good (or not), without having to train it on the whole dataset?

",18852,,,,,10/8/2018 14:43,How to detect a Neural Network will work with the whole dataset?,,3,1,,,,CC BY-SA 4.0 8304,2,,8303,10/8/2018 5:37,,1,,"

How can I ensure the results given by my NN will be good (or not), without having to train it on the whole dataset ?

You can't.

If you're interested in diagnostic techniques for neural networks, read section 2.5 of my Master's thesis

",3217,,,,,10/8/2018 5:37,,,,0,,,,CC BY-SA 4.0 8305,2,,6994,10/8/2018 5:49,,0,,"

I would also like to be able to start with a predefined set of classes (or clusters/centroids) as I know for a fact what the types of those emails will be.

This is not a clustering problem, but a semi-supervised learning problem. If you don't have labeled data yet, then create some labels. You might also want to look into ""active learning"".

One approach is:

  1. For each category, create 5 labeled samples
  2. Train a classifier on them (e.g. tf-idf features and a small neural network; see the sketch after this list)
  3. Let the neural network label your dataset
  4. Check the labels where it was most confident for all classes and the ones where the probabilities for all classes were most evenly spread. Use this to quickly create more labels.
  5. Maybe Amazon mechanical Turk is an option to quickly generate more labels
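A minimal sketch of steps 2 and 3 (with made-up toy emails and labels; in practice you would start from your hand-labeled samples, and a logistic regression is used here as a stand-in for the small neural network):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    labeled_texts = ['invoice attached please pay', 'meeting moved to friday',
                     'your package has shipped', 'agenda for the next meeting']
    labeled_classes = ['billing', 'scheduling', 'shipping', 'scheduling']

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(labeled_texts)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labeled_classes)

    # Step 3: label the unlabeled pool, keeping the confidences for step 4
    unlabeled = ['please find the invoice for last month']
    probs = clf.predict_proba(vectorizer.transform(unlabeled))
    print(clf.predict(vectorizer.transform(unlabeled)), probs.max())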
",3217,,,,,10/8/2018 5:49,,,,0,,,,CC BY-SA 4.0 8306,2,,8290,10/8/2018 5:58,,2,,"

I usually start with some papers and look at the references:

  • Counting people using video cameras
  • Sheng-Fuu Lin, Jaw-Yeh Chen, Hung-Xin Chao, Estimation of Number of People in Crowded Scenes Using Perspective Transformation, IEEE Transactions on Systems, Man and Cybernetics, November 2001, Part A, Vol. 31, Issue 6, pp. 645-654.
  • A. C. Davies, J. H. Yin, and S. A. Velastin, Crowd monitoring using image processing, Electronics & Communication Engineering Journal, February 1995, Vol. 7, Issue 1, pp. 37-47.

By this technique you only find older work.

To find more recent articles, use Google scholar and find who cites the articles you investigated.

Always write them down (e.g. in your bibtex file) to keep track of what you looked at already.

Good luck!

",3217,,3217,,10/8/2018 7:48,10/8/2018 7:48,,,,0,,,,CC BY-SA 4.0 8307,2,,8303,10/8/2018 14:17,,1,,"

Don't know anything about your dataset, but maybe by using clustering* on it, you can get the N ""most distinct"" examples and train only on them. This obviously will not give you the same performance as if the network had seen all the examples, but this way at least you will show it ""diverse"" examples.

*That is, of course, if you have time for that.

",18866,,,,,10/8/2018 14:17,,,,0,,,,CC BY-SA 4.0 8308,2,,8303,10/8/2018 14:43,,2,,"

You can try to train it on 1% of the data, then on 2%, 3%, etc. Then plot the results and see whether increasing the amount of data improves performance and how it is changing. Not sure if that's the correct answer, but at least you can iterate this method pretty fast.
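A minimal sketch of that idea (the train_and_evaluate function is a hypothetical placeholder for your own training and validation code):

    import numpy as np

    def train_and_evaluate(subset):
        # placeholder: replace with 'fit the model on subset, return validation accuracy'
        return 0.5 + 0.4 * (1 - np.exp(-len(subset) / 500))

    dataset = list(range(10000))                  # stands in for your real samples
    for frac in (0.01, 0.02, 0.03, 0.05, 0.10):
        n = int(frac * len(dataset))
        print(frac, '->', round(train_and_evaluate(dataset[:n]), 3))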

",18808,,,,,10/8/2018 14:43,,,,0,,,,CC BY-SA 4.0 8309,1,8587,,10/8/2018 15:15,,4,185,"

I'm learning about multilayer perceptrons, and I have a quick theory question in regards to hidden layer neurons.

I know we can use two hidden layers to solve a non-linearly separable problem by allowing for a representation with two linear separators. However, we can solve a non-linearly separable problem using only one hidden layer.

This seems fine, but what kind of representation does one hidden layer add? And how is the output of the network affected?

I've drawn a diagram of a multilayer perceptron with one hidden layer neuron. I used this same layout to solve a non-linearly separable problem. The single hidden layer node is inside the red square. Forgive my poor MS-Paint skills.

",18870,,2444,,3/11/2020 0:33,3/11/2020 0:36,How does a single hidden layer affect output?,,1,0,,3/11/2020 0:42,,CC BY-SA 4.0 8310,1,,,10/8/2018 15:55,,1,1271,"

I'm trying to apply the World Models architecture to the Sonic game (using the gym-retro library).

My problem concerns the evolutionary algorithm part that I use as the controller (World Models = auto-encoder + RNN + controller). I'm using a genetic algorithm called NEAT (I use the neat-python library). I am looking for someone who can help me with the neat-python implementation.

Here is the call that runs a generation:

best_genome = pop.run(popEvaluator.evaluate_genomes, 1)

Currently, all the individuals of the population are evaluated on the first level of Sonic The Hedgehog. The ""run"" method should return the best genome of the population based on its performance on this level. Then, I use this best genome to re-create the associated neural network in order to run it on the same level. I expected to see the exact same run as the best individual, but this is not always the case. Sometimes it does, sometimes it doesn't.

There are not a lot of examples with NEAT and I based my code on this one from the official documentation.

Here is my own implementation, if you want to check.

If anybody has already used NEAT, help would be welcome!

",18872,,,,,10/8/2018 15:55,NEAT + Keras : reproducibility problem (World Models implementation),,0,0,,,,CC BY-SA 4.0 8319,2,,2126,10/9/2018 1:52,,0,,"

Autonomous vehicles are dependent upon AI technology in that, to be autonomous in their driving or piloting, they cannot be controlled by people. Therefore they must make complex decisions required of drivers and pilots at least as safely and reliably as human drivers or pilots.

  • They must recognize objects to the degree that both the value and the typical behavior can be assigned to those objects (i.e. people, pets, property, barriers, curbs, grass, trees, bridges)
  • They must map trajectories of a wide array of object types based on their object type, what is known about that type of object, detectable variations such as age or condition, and what the object appears to be involved in doing at the time.
  • They must be able to acquire publicly available representations of drive-able roads (route segments, connection points, and other data), match the representation with the current state of the roads, and track their progress along an intended route to the destination.
  • They must plan their course in light of these real-time and difficult-to-predict actions, traffic law, traffic conventions, traffic signs and signals, the given destination, known possible routes, discontinuities, and anomalies.
  • They must be able to alter the plan to reach the destination if at all possible regardless of changes and challenges encountered.

Driving or piloting a vehicle is an intelligence-intensive task. The only reason AVs will likely surpass human-driven vehicles on the road in the near future, in terms of the distributions of rates of fatalities and injuries per million meters of travel, is that humans have two key handicaps that offset their intelligence potential as drivers.

  • Carelessness, as defined as multitasking either mentally or physically at a time when hazards might appear
  • Selfishness, as defined as risking the life, health, or property of others to gain a transportation related or psychologically related advantage

Although the above two appear to be subjective, they can be easily proven empirically by taking a sample of traffic patterns at any point in time in any highly trafficked road in the world. This is less true of pilots.

We should not presume that artificial intelligence in AVs is achieved when the behavior of the human mind is copied. That is the criteria for Alan Turing's Imitation Game, a test that was intended to define intelligence in the context of natural language dialog. But words don't normally kill people directly. Vehicles often do.

It would be a very limited vision of the potential AV design space to consider human minds as the model of driving excellence. The tasks should not be performed in the same way by the AI system. The AI design objectives of AVs should be more consistent with these concerns and interests.

  • Road or sky safety laws
  • Ethics regarding right of way in normal and emergency situations
  • Civil rights concerns in terms of equal access to public resources
  • Balancing of spacial flow details to maximize transportation throughput
  • Collision aversion when difficult to predict risks emerge

These requirements on the cognitive and adaptive capabilities of the driving or piloting AI are not solely rule-based and mechanical. The vehicle itself is mostly mechanical in its operation, but it too presents surprises such as blowouts or other difficult-to-predict failures. Vehicle control is not at all like chess or a game with fixed rules of play and a fixed game-play environment.

Although the intelligence requirements do NOT include awareness of itself as an intelligent system, there are forms of self-awareness required.

  • The relative position of the exterior surface of the vehicle and its projected path relative to that of other objects
  • The condition of the operational parts of the vehicle
  • The mass and location of passengers and any other transported objects in the vehicle

The question ended with an interesting and challenging requirement.

Choose a good way to act in a never before experienced situation

That is perhaps the most challenging aspect of AV driving or piloting system design.

Returning to the question of, ""Why are autonomous cars categorized as AI?"", the meaning of AI is indeed a critical aspect of answering well. Taken literally, the term artificial intelligence specifies two things.

  • It is artificial, in that it does not naturally occur in nature
  • It is intelligent, in that it adapts in ways that, if those ways are mechanical, they are mechanical at a level of detail that is beyond obviousness without considerable study

As year dependent and culturally dependent as that definition of intelligence is, no other definition is quite as sustainable over decades from both scientific and linguistic perspectives. By narrower definitions, AVs may not require AI, but there is no compelling scientific reason to narrow the definition of AI to a subset of this previous definition.

",4302,,,,,10/9/2018 1:52,,,,0,,,,CC BY-SA 4.0 8320,1,,,10/9/2018 2:19,,2,1323,"

Can the recurrent neural network's input come from a short-time Fourier transform? I mean, the input is not from the time-series domain.

",18884,,2444,,12/17/2021 14:37,12/17/2021 14:37,Can the recurrent neural network's input come from a short-time Fourier transform?,,1,0,,,,CC BY-SA 4.0 8321,2,,8320,10/9/2018 8:28,,2,,"

Yes, you can apply an RNN to any sequence of items of the same data type. The sequence can be over space, time, or any arbitrarily ordered list. The items in the sequence can contain any data at all; the only requirement is that each represents the same kind of thing (if you have multiple types of things to process as a sequence, you just need to expand the definition so that the input features can represent all types unambiguously - essentially creating a ""base class"" that can represent them all).

The RNN will consume the sequence as a time-based sequence, one item per time step of the RNN. However, you can think of that as the same as a processor clock for a computer . . . an RNN is essentially a trainable Turing machine, and in principle can learn to accumulate any data about the sequence it has seen, and output any function of that accumulated data. Although in practice this learning process might be too hard for our current systems, require immense amounts of data etc . . .

In your case, STFT does create a time-based sequence. Each item in the sequence is a frequency analysis for a short period of time, and each time step of the sequence represents a fixed time difference between STFT frames (the windows usually overlap a little), where frequencies in the signal may change. Typically each STFT frame is a single time step input to an RNN. You could input the frequency-domain values in fixed order (e.g. low to high frequency) one at a time into an RNN too, but that would be unusual and would make most learning tasks harder.
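
For illustration, here is a minimal sketch (assuming SciPy and Keras; the signal, sizes and the single binary output are arbitrary) of feeding STFT frames to an LSTM, one frame per time step:

import numpy as np
from scipy.signal import stft
from tensorflow.keras import layers, models

fs = 16000
signal = np.random.randn(fs)                  # 1 second of placeholder audio
_, _, Z = stft(signal, fs=fs, nperseg=256)    # Z has shape (freq_bins, time_frames)
frames = np.abs(Z).T[np.newaxis, ...]         # (batch=1, time_frames, freq_bins)

model = models.Sequential([
    layers.LSTM(64, input_shape=(None, frames.shape[-1])),
    layers.Dense(1, activation='sigmoid'),    # e.g. one label per clip
])
model.compile(optimizer='adam', loss='binary_crossentropy')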

",1847,,1847,,10/9/2018 10:35,10/9/2018 10:35,,,,5,,,,CC BY-SA 4.0 8323,1,,,10/9/2018 12:43,,9,12530,"

Almost all the convolutional neural network architectures I have come across have a square input image size, like $32 \times 32$, $64 \times 64$ or $128 \times 128$. In practice, we might not have a square image in all kinds of scenarios. For example, we could have an image of size $384 \times 256$.

My question is: how do we handle such images during

  1. training,
  2. development, and
  3. testing

of a neural network?

Do we force the image to resize to the input of the neural network or just crop the image to the required input size?

",17763,,2444,,6/14/2020 10:41,6/14/2020 10:52,How to handle rectangular images in convolutional neural networks?,,2,0,,,,CC BY-SA 4.0 8324,2,,8323,10/9/2018 13:07,,9,,"

I think the square input image is more of a choice made for simplicity.

There are two types of convolutional neural networks

  • Traditional CNNs: CNNs that have fully connected layers at the end, and
  • fully convolutional networks (FCNs): they are only made of convolutional layers (and subsampling and upsampling layers), so they do not contain fully connected layers

With traditional CNNs, the inputs always need to have the same shape, because the last convolutional layer is flattened into a layer of fixed size. As the flatten layer has a fixed size, the shape of the feature map from the layer before it must always be the same, and so must the shape of the inputs (images).

However, in an FCN, you don't flatten the last convolutional layer, so you don't need a fixed feature map shape, and you therefore don't need an input with a fixed size.

In both cases, you don't need a square image. You just have to be careful, in the case where you use a CNN with a fully connected layer, to have the right shape for the flatten layer.

For instance, if you have an input of size $320 \times 160$ and 3 pooling layers, then your output in the last convolutional layer is $40 \times 20 \times c$ (with $c$ the number of filters/channels), and you just need the flatten layer to have $40 \cdot 20 \cdot c$ neurons.
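
For example, a small Keras model along those lines (a sketch, with arbitrary filter counts and 10 output classes) could look like this:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, padding='same', activation='relu', input_shape=(320, 160, 3)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(2),               # feature map is now 40 x 20 x 128
    layers.Flatten(),                     # 40 * 20 * 128 neurons
    layers.Dense(10, activation='softmax'),
])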

If you create a new network, just design it to handle a rectangle image.

If you want to use an already pre-trained one, I think the better choice is to resize the image.

If the information in the cropped parts is important, your prediction may be wrong (it depends on whether the object of interest is in the parts of the image that are cropped). Actually, in YOLO (an object recognition network), images are resized if they don't fit the input requirements. See figure 1 of the YOLO paper. This is because you don't need a high resolution to detect an object (for example, the CIFAR dataset has images of shape $32 \times 32$, but the network can still predict the correct label). So, I think that resizing your image may not affect the prediction much (unless the new size is very different from the original).

",17221,,2444,,6/14/2020 10:49,6/14/2020 10:49,,,,0,,,,CC BY-SA 4.0 8326,1,8342,,10/9/2018 14:59,,0,319,"

I am using a TensorFlow CNN to build an image classification/prediction model. Currently, all the images in the dataset are about 1 MB each in size.

Most examples out there use very small images.

The image size seems large, but I'm not too sure.

Any thoughts on the feasibility of 1 MB images? If they are too large, what can I do to compress them programmatically?

",18896,,,,,11/9/2018 11:01,Is 1mb an acceptable memory size for images being trained in a CNN?,,1,6,,,,CC BY-SA 4.0 8327,1,,,10/9/2018 15:20,,0,128,"

According to the original paper on page 4, $224 \times 224 \times 3$ image is reduced to $112 \times 112 \times 64$ using a filter $7 \times 7$ and stride $2$ after convolution.

  • $n \times n = 224 \times 224$
  • $f \times f = 7 \times 7$
  • stride: $s = 2$
  • padding: $p = 0$

The output of the convolution is $(((n+2p-f)/s)+1)$ (according to this), so we have $(n+2p-f)=(224+0-7)=217$; dividing by the stride gives $217/2=108.5$ (taking the lower value, $108$), and adding 1 gives $108+1=109$.

How do we get an output image of $112$ now?

",17763,,2444,,4/22/2022 10:03,4/22/2022 10:03,"In the inception neural network, how is an image of shape $224 \times 224 \times 3$ converted into one of shape $112 \times 112 \times 64$?",,1,0,,,,CC BY-SA 4.0 8328,1,,,10/9/2018 15:58,,0,76,"

I have a project, which is the keyboard biometrics of users.

Suppose I have 3 users. I do not know how to label them with the two classes (+1, -1).

If I want to verify the identity of user 1, my idea for the class assignment would be:

       TIMES                LABEL
user 1
9.4  9.2  1.0  3.4  0.5      1
9.4  9.2  1.0  3.4  0.5      1
9.4  9.2  1.0  3.4  0.5      1
9.4  9.2  1.0  3.4  0.5      1
9.4  9.2  1.0  3.4  0.5      1

user 2
0.1  3.2  1.0  1.2  1.7      -1
3.4  1.2  3.0  1.1  2.8      -1
2.4  2.2  3.0  1.6  2.9      -1
1.4  3.2  2.0  2.6  3.6      -1
3.4  0.2   3.0  2.7  3.5     -1

user N
0.2  1.4  4.5  3.7  2.9      -1
9.2  1.5  7.6  2.6  2.6      -1
9.3  1.6  7.5  2.9  3.4      -1
9.8  3.8  6.6  2.8  2.5      -1
9.8  2.8  1.7  3.8  1.6      -1

But as my system gets more and more users, class -1 samples will greatly outnumber class +1 samples. How should I label the classes?

",18464,,18464,,10/9/2018 16:04,10/11/2018 3:20,How should I label the classes in RNA?,,1,2,,,,CC BY-SA 4.0 8329,2,,8327,10/9/2018 16:03,,1,,"

The missing piece is the padding: it is not size zero* in this layer. It is deliberately chosen as $p=(f-1)/2$, which in some libraries is called ""same"" padding.

So, $p=3$.

The stride really is $s=2$ for this convolution, as the paper lists the first layer as $7 \times 7 / 2$; the later max-pooling layer is not needed to explain the halving.

Therefore, using $\lfloor (n+2p-f)/s \rfloor + 1$ with the correct values: $\lfloor (224 + 6 - 7)/2 \rfloor + 1 = 111 + 1 = 112$.

The $3 \times 3$ max-pooling layer that follows (also stride 2) then halves the spatial size again, to $56 \times 56$.
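
A tiny helper (plain Python) to check these numbers:

def conv_output_size(n, f, p, s):
    # n: input size, f: filter size, p: padding, s: stride
    return (n + 2 * p - f) // s + 1

print(conv_output_size(224, 7, 3, 2))  # 112: the 7x7 convolution, p=3, stride 2
print(conv_output_size(112, 3, 1, 2))  # 56:  the 3x3 max-pooling, p=1, stride 2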


* Not to be confused with ""zero padding"" which means pad using $0$ as the value to insert into the new area. So you can have ""zero padding with $p=3$""

",1847,,1847,,10/9/2018 16:20,10/9/2018 16:20,,,,3,,,,CC BY-SA 4.0 8331,1,,,10/9/2018 16:32,,1,66,"

I have a grid of rectangles acting as blocks. The robot traverses the inter-spaces between these consecutive blocks. Now I have sensor data streaming in, representing the right and left wheel speeds. Based on the differences between the speeds of the left and right wheels, I infer the robot's position and the path it has traced. I get the associated individual segments of the total distance when it travels straight, left, or right.

These distances are a function of the actual speed of the robot and the time interval elapsed before the end of that activity. These computed distances for the segments though don't map and fit-in well when projected on the grid layout of the environment. The segments are rather not adhering to the boundary limitations.

I wanted to know if I can use RL to force the calculated distances to fit in with the layout given certain knowledge (or conditions, if you will): the start and end position of the robot and the inter-space distances.

If not RL, do you know how can I solve this problem? I suspect my function computing the distances is off and wondering if RL can help me figure out the right mapping of sensor data to the path traveled adhering to the grid layout dimensions.

If you consider the illustration above you will notice S, D, and D' signifying the starting position, the true destination, and the destination location computed by adding together the calculated distances for each of the segments representing right(r), left(l) and straight(s) along the path towards the destination. Inter-space length is given 7m and dimensions of the blocks are (27m x 15m). If you look at the data presented on the left side you will notice 18m left and consecutive 24m right represents the activity, in the grid, as the passage through the blocks. Granted -- perhaps the car negotiates the edges and corners through this passage in a protracted left(l) and right(r) movements, without necessarily going straight(s) straddling and linking the turns as one would expect.

The question arises, however, when taken into account these individual segment lengths and stitch them together you end up in a destination, not in the ballpark range of the expected value. How can we design this problem so as to employ RL methods to, sort of, impose these grid dimensional constraints on this distance calculation methodology to yield better results? Or, probably best to re-imagine the whole problem so it is amenable to the application of RL.

Any advice/ insights would be appreciated.

",16551,,30725,,5/29/2020 13:47,5/29/2020 13:47,Reinforcement learning for segmenting the robot path to reflect the true distances,,0,5,,,,CC BY-SA 4.0 8333,1,,,10/9/2018 18:47,,3,814,"

I (mis?)understood the NEAT algorithm has the following steps:

  1. Create a genome pool with N random genomes
  2. Calculate each genome fitness
  3. Assign each genome to a species
  4. Calculate the adjusted fitness and the number of offspring of each species
  5. Breed each species through mutation/crossover from the stronger genomes
  6. go to step 2.

Step 3 is tricky: speciation is done by placing each genome G in the first species whose representative genome it is compatible with, or in a new species if G is not compatible with any existing species. Compatible here means having a compatibility distance below a certain threshold. Regarding the representative genome, the NEAT paper says:

Each existing species is represented by a random genome inside the species from the previous generation

Somewhere I've found that keeping the number of species stable is good, and this is achieved automatically with dynamic thresholding. However, dynamic thresholding makes it hard to evaluate species behaviour across generations.

Let me give one example: Assume that in Generation 20, Species 1 has Genome A as representative and Species 2 has Genome B as representative. Assume elitism is implemented.

As the representative genome is taken from the previous generation, assume that in Generation 21 Genomes A and B are still the representatives for Species 1 and 2, but the compatibility threshold has changed (i.e. become bigger) in order to reach the target number of species. With this change, A and B now have a compatibility distance lower than the threshold and should be placed in the same species, yet they are representatives of different species.

How to solve this issue?

More in general, with dynamic thresholding, how to make sure species management across generations is consistent? E.g. NEAT paper also says:

If the maximum fitness of a species did not improve in 15 generations, the networks in the stagnant species were not allowed to reproduce.

How to make sure that across all 15 generations, we are still considering that same single species and this has not drastically changed (so that they are actually different 'objects'?). E.g. in the example above, if A and B are both placed in Species 1 in Generation 21, Species 2 no longer represents what it represented in Generation 20.

",13087,,40434,,1/30/2022 16:35,10/27/2022 21:09,NEAT - Managing species across generations,,2,1,,,,CC BY-SA 4.0 8335,1,,,10/9/2018 23:56,,1,40,"

When a human looks at a page, he notices that sets of letters are grouped together, separated by white space. If the white space were replaced by another character, say 'z', it would be harder to distinguish words.

For a neural network, spaces are ""just another character"". How can we set up an RNN so it gives special importance to the difference between certain characters like white spaces and letters so that it will train faster? Assume the input is just a sequence of ASCII characters.

",4199,,,,,10/9/2018 23:56,Pre priming a network for white space,,0,0,,,,CC BY-SA 4.0 8336,2,,8323,10/10/2018 4:28,,1,,"

If you have a rectangular image and you are using existing models (or existing code), then you have to add an input pre-processing pipeline which transforms the image to standard dimensions. This is very common in computer vision, and both PyTorch and TensorFlow have support for easily adding an input pre-processing pipeline for such a transformation.
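
For example, with torchvision (a minimal sketch; the 224x224 target size is arbitrary):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # (height, width): forces a fixed input size
    transforms.ToTensor(),
])
# pass transform=preprocess to your Dataset / ImageFolder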

Also, if you have fixed-size rectangular image data, then you can design your own network architecture (or initial module) which takes the image's shape into account by using asymmetric pooling and convolutions.

",18907,,2444,,6/14/2020 10:52,6/14/2020 10:52,,,,0,,,,CC BY-SA 4.0 8338,1,8340,,10/10/2018 5:45,,1,113,"

As shown below, my deep neural network is overfitting:

where the blue lines are the metrics obtained on the training set and the red lines on the validation set.

Is there anything I can infer from the fact that the accuracy on the training set is really high (almost 1)?

From what I understand, it means that the complexity of my model is sufficient (or too big). But does it mean my model could theoretically reach such a score on the validation set with the same dataset and appropriate hyperparameters? Or with the same hyperparameters but a bigger dataset?

My question is not how to avoid overfitting.

",18852,,,,,10/10/2018 8:32,Interpretation of a good overfitting score,,1,3,,,,CC BY-SA 4.0 8339,1,,,10/10/2018 7:02,,4,144,"

I have a dataset of images belonging to $N$ classes, $A_1, A_2...A_n,B_1,B_2...B_m$ and I want to train a CNN to classify them. The classes can be considered as subclasses of two broader classes $A$ and $B$, therefore the confusion between $A_i$ and $A_j$ is much less problematic than the confusion between $A_i$ and $B_j$. Therefore I want the CNN to be trained in such a way that the difference between $A_i$ and $B_j$ is considered as more relevant.

1) Are there any loss functions that take this requirement into account? Could a weighted cross entropy work in this case?

2) How would this loss change if the classes were unbalanced?

",16671,,16671,,10/11/2018 7:46,10/11/2018 13:33,How to define a loss function for a classifier where the confusion between some classes is more important than the confusion between others?,,1,0,,,,CC BY-SA 4.0 8340,2,,8338,10/10/2018 8:32,,0,,"

It doesn't tell you very much, to be honest. It does mean that (assuming your training and validation distributions are similar) your model could get the same results on your validation set should you train on that, but that would still be overfitting.

Really, the only useful thing overfitting tells you is that you don't have enough regularisation.

",16035,,,,,10/10/2018 8:32,,,,2,,,,CC BY-SA 4.0 8342,2,,8326,10/10/2018 10:03,,1,,"

1 MB per image is too much. It means you have a lot of pixels to process in the inputs, and your images contain many features that are not very useful for the classification (we humans don't need high-resolution images to recognize objects, and it's the same for a model).

It also means you may need a deeper network, and therefore more computation.

In your particular case, you have a dataset of trees, so the images are somewhat similar to each other, and the classification may be harder. So the images need enough information in their pixels to allow a good prediction.

You should resize your whole dataset to a more common size, because 1 MB is too much (I don't know the resolution, but I guess it is around a thousand pixels in both dimensions). ImageNet has images of size 224x224 and 1000 classes, some of which are close to each other. For example, there are many dog classes (boxer, American terrier, ...) and cat classes (tiger, Egyptian, ...), so I think a size of a few hundred pixels would be enough. In your comment you said you will try 336x252; I think that can be a good start. Of course you will need some experimentation in this regard. Maybe you can train some models for a few epochs with different image sizes, to see which works best, and keep that one to train the model further!
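
A sketch of such a bulk resize with Pillow (the folder names are placeholders, and 336x252 is the size you mentioned):

from pathlib import Path
from PIL import Image

src, dst = Path('trees_raw'), Path('trees_small')
dst.mkdir(exist_ok=True)
for p in src.glob('*.jpg'):
    img = Image.open(p).convert('RGB')
    img.resize((336, 252)).save(dst / p.name, quality=90)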

",17221,,,,,10/10/2018 10:03,,,,0,,,,CC BY-SA 4.0 8343,2,,8291,10/10/2018 11:15,,1,,"

Fixed. It was an issue with the random number generator. In my class for the neuron layer, where I initialize the weights, I get new doubles from the generator for each of the initial weight values, but I found a bug where I was re-initializing the random generator, which was of course producing different values.

",18818,,18818,,10/17/2018 4:05,10/17/2018 4:05,,,,1,,,,CC BY-SA 4.0 8344,1,,,10/10/2018 18:00,,3,414,"

I'm trying to create a simple Dyna-Q agent to solve small mazes, in python. For the Q function, Q(s, a), I'm just using a matrix, where each row is for a state value, and each column is for one of the 4 actions (up, down, left, right).

I've implemented the ""real experience"" part, which is basically just straightforward SARSA. It solves a moderately hard maze (i.e., one where it has to go around a few obstacles) in 2000-8000 steps (in the first episode; this will no doubt decrease with more). So I know that part is working reliably.

Now, adding the part that simulates experience based on what it knows of the model to update the Q values more, I'm having trouble. The way I'm doing it is to keep an experiences list (a lot like experience replay), where each time I take real action, I add its (S, A, R, S') to that list.

Then, when I want to simulate an experience, I take a random (S, A, R, S') tuple from that list (David Silver mentions in his lecture (#8) on this that you can either update your transition probability matrix P and reward matrix R by changing their values or just sample from the experience list, which should be equivalent). In my case, with a given S and A, since it's deterministic, R and S' are also going to be the same as the ones I sampled from the tuple. Then I calculate Q(S, A) and max_A'(Q(S', A')), to get the TD error (same as above), and do stochastic gradient descent with it to change Q(S, A) in the right direction.

But it's not working. When I add simulated experiences, it never finds the goal. I've tried poking around to figure out why, and all I can see that's weird is that the Q values continually increase as time goes on (while, without experiences, they settle to correct values).

Does anyone have any advice about things I could try? I've looked at the sampled experiences, the Q values in the experience loop, the gradient, etc... and nothing really sticks out, aside from the Q values growing.

edit: here's the code. The first part (one step TD learning) is working great. Adding the planning loop part screws it up.

def dynaQ(self, N_steps=100, N_plan_steps=5):

    self.initEpisode()
    for i in range(N_steps):
        #Get current state, next action, reward, next state
        s = self.getStateVec()
        a = self.epsGreedyAction(s)
        r, s_next = self.iterate(a)
        #Get Q values, Q_next is detached so it doesn't get changed by the gradient
        Q_cur = self.Q[s, a]
        Q_next = torch.max(self.Q[s_next]).detach().item()
        TD0_error = (r + self.params['gamma']*Q_next - Q_cur).pow(2).sum()
        #SGD
        self.optimizer.zero_grad()
        TD0_error.backward()
        self.optimizer.step()
        #Add to experience buffer
        e = Experience(s, a, r, s_next)
        self.updateModel(e)

        for j in range(N_plan_steps):

            xp = self.experiences[randint(0,len(self.experiences)-1)]
            Q_cur0 = self.Q[xp.s, xp.a]
            Q_next0 = torch.max(self.Q[xp.s_next]).detach().item()
            TD0_error0 = (xp.r + self.params['gamma']*Q_next0 - Q_cur0).pow(2).sum()

            self.optimizer.zero_grad()
            TD0_error0.backward()
            self.optimizer.step()
",18920,,30725,,5/29/2020 13:47,5/29/2020 13:47,"Dyna-Q algorithm, having trouble when adding the simulated experiences",,0,7,,,,CC BY-SA 4.0 8346,2,,8219,10/10/2018 22:32,,3,,"

Can a brain be intelligent without a body?

No. Don't forget that the main function of the brain is to provide homeostasis between the body and the environment. Without the body, the utility of the brain is no longer relevant.

Alternatively, why consider intelligence only in the brain? How far does our body extend? Embodied cognitive science asks us to consider our entire body and its surrounding extensions as part of the intelligent faculty; the fact that our thumbs stick out in a direction different from our other four fingers, allowing us to grab things in an effortless way, is itself intelligent.

From this understanding, then, the segregation between the intelligent faculty and the ""non-intelligent"" seems murky at best. We might as well not consider it unless there is better motivation.

Does intelligence require a context?

Pragmatically speaking, yes. Intelligent behaviour is typically understood as being goal-directed and intentional. Intentionality implies some sort of agency. Being an agent implies some sort of agent/environment relationship, which implies an environment to act as context.

On the other hand, Karl Friston notes in his review that a general principle that the brain (and thus one would imagine, intelligent behaviour) entails is reduction of thermodynamic free energy while maintaining homeostasis with its environment.

This hints at a promise to describe intelligent behaviour purely in terms of thermodynamic processes, but interpretively what this means in a general language is still unclear.

Also, keep in mind that there is no guarantee that we'd recognize an agent as intelligent if its construction is radically different from ours.

",6779,,6779,,10/10/2018 22:49,10/10/2018 22:49,,,,0,,,,CC BY-SA 4.0 8347,2,,7379,10/10/2018 23:15,,0,,"

... the neural network has to learn the transition probability between two keyframes. And a second neural network can then produce the motion plan which is also be trained by a large corpus. Is that idea possible or is it the wrong direction?

That would be trivial.

More difficult, but not hard, would be to take a 14 hour video of someone and create a new video with them saying something they have never said previously. Here is a demonstration, from the BBC YouTube channel, titled: ""Fake Obama created using AI video tool - BBC News"".

The University of Washington demonstrates, using a variety of clips and only audio from an impersonator, that they can create realistic videos of the former president saying anything they want.

It's possible to sketch out an idea and let AI create a finished product of excellent quality.

The YouTube channel ACMSIGGRAPH has a video titled: ""Technical Papers Preview: SIGGRAPH 2018"", which is discussed on the webpage: ""Could These Be The Next High-Tech Tools That Animators Use Daily?"".

What you have asked about was possible long ago and not particularly difficult to do.

",17742,,,,,10/10/2018 23:15,,,,0,,,,CC BY-SA 4.0 8348,1,8472,,10/10/2018 23:45,,5,2409,"

When extending reinforcement learning to the continuous-state, continuous-action case, we must use function approximators (linear or non-linear) to approximate the Q-value. It is well known that non-linear function approximators, such as neural networks, can diverge aggressively. One way to help stabilize training is reward clipping. Because the temporal-difference Q-update is a bootstrapping method (i.e., it uses a previously calculated value to compute the current prediction), a very large previously calculated Q-value can make the current reward relatively minuscule, so that the current reward barely impacts the Q-update, eventually leading the agent to diverge.

To avoid this, we can try to avoid the large Q-value in the first place by clipping the reward to [-1, 1].

But I have seen some other people say that, instead of clipping the reward itself, we can clip the Q-value to an interval.
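
To make sure I am comparing the right things, here is a rough sketch of the two variants as I understand them (plain NumPy, tabular-style update; r, Q, s, a, s_next and q_max are placeholders):

import numpy as np

gamma, alpha = 0.99, 0.1

# Variant 1: clip the reward before it enters the TD target
target = np.clip(r, -1.0, 1.0) + gamma * np.max(Q[s_next])

# Variant 2: keep the raw reward, but clip the bootstrapped target / Q-value
target = np.clip(r + gamma * np.max(Q[s_next]), -q_max, q_max)

Q[s, a] += alpha * (target - Q[s, a])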

I was wondering which method is better for convergence, and under what assumptions / circumstances. I was also wondering if there are any theoretical proofs/explanations about reward/Q-value clipping and which one being better.

",17706,,,,,10/16/2018 19:14,Should the reward or the Q value be clipped for reinforcement learning,,1,1,,,,CC BY-SA 4.0 8349,2,,8328,10/11/2018 3:20,,2,,"

From a purely dataset-development perspective, I would just label the classes with numbers starting from 0 (ID of the first person) to N (ID of the last person).

During training, however, what you do with those classes will vary depending on the type of architecture you are training. For example, if you are building a neural-network classifier, you could simply have multiple output nodes. When running backprop with a label for the ith person, you could put a 1 on the ith output node and a -1 on the rest of them, as sketched below. This is similar to this answer as well.
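
A minimal sketch of building such a target vector (NumPy; the ids are placeholders):

import numpy as np

num_people = 3
person_id = 1                 # 0-based id of the person in this training sample

target = -np.ones(num_people)
target[person_id] = 1.0       # gives [-1.,  1., -1.]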

Whatever you end up doing, try not to label your data to the benefit of the algorithm you're trying to run unless you really know what you're doing. You don't want to accidentally lose information or make your training data less interpretable and harder to deal with in the long run.

",17408,,,,,10/11/2018 3:20,,,,0,,,,CC BY-SA 4.0 8350,1,8405,,10/11/2018 5:13,,3,66,"

Would this work at all?

The idea is to start training a neural net with some number of nodes. Then add some new nodes and more layers, and train only the new nodes (or modify the old nodes only very slightly). Ideally, we would connect all old nodes to the newly added layer, since we might have learned many useful things in the hidden layers. Then repeat this many times.

The intuition is that if the old nodes give bad information, the new layer of nodes will weight the activations of the old nodes close to zero and learn new/better concepts in the new nodes. The benefit is that we will keep old knowledge forever.

The caveat is that the network can still temporarily ""forget"" concepts if a new layer weights old information close to zero, but it can potentially remember them again too.

If this completely fails, I'm curious if there's some known way to prevent a neural network from forgetting concepts it learned.

",18936,,,,,10/14/2018 14:22,"Would this work to prevent forgetting: train a neural net with N nodes. Then, add more nodes and stop training the original nodes",,1,5,,,,CC BY-SA 4.0 8353,2,,8284,10/11/2018 10:21,,1,,"

I think your idea would work... fine, but I don't necessarily see any advantages to it. I haven't actually tried it (that'd be the best way for you to also see whether it works!), so I'm mostly going by first thoughts and intuition here.

Anyway, what you are essentially doing with your idea is ""cutting off"" the last layer of the Neural Network from the perspective of your learning algorithm (typically backpropagation). Whatever weights you have between the last hidden layer and the output layer will be fixed to their initial values. The last hidden layer can actually be viewed as an ""output"" layer, since you also have fixed targets that you want to converge towards for them.

Whether this makes your learning process better/faster/easier, or worse/slower/harder seems to be very much dependent on how the weights between your last hidden layer and your output layer are initialized. For example:

  • If those weights are initialized to all-zero, your ""real"" output layer is doomed to always predict zeros, so your problem becomes impossible to solve.
  • If those weights are initialized to implement the identity function, this becomes 100% equivalent to the case you would have if you'd simply cut off the last layer and train that in the traditional sense (i.e. you effectively have one layer less than you really do).
  • If those weights are initialized randomly, it looks to me like you have a post-processing step consisting of a random projection. Such a random projection may be beneficial for training (random projections can be useful for dimensionality reduction, or for, in combination with the subsequent non-linear function, turning an otherwise linear function into a non-linear function).

I don't think it'd very often be better than actually having an extra ""real"", trainable layer with a non-linear activation function though. I suspect such a ""non-trainable"" extra layer can sometimes be better than not having anything there, but I don't think it'd often be better than having a real, trainable layer.

",1641,,1641,,10/11/2018 10:28,10/11/2018 10:28,,,,0,,,,CC BY-SA 4.0 8355,2,,8284,10/11/2018 11:03,,2,,"

The main difference your change would have is to allow you to apply a loss function to a different part of the network. This may affect training.

If you keep the same loss function (e.g. MSE), but apply it to the pre-transformed values, then you will have changed the objective of the network, perhaps significantly. Whether or not this is a good thing depends on how much you needed the original loss function. However, the fact that it would result in a different training target is usually going to be a bad thing if your original training target was correct. This will also be true if you pick a new arbitrary loss function that seems to fit the pre-transform representation better.

If you engineer a ""correct"" loss function such that the objective of the network remains unchanged, then the behaviour of the network will not change much - probably not at all. However, in some cases this can lead to more stable and/or faster training - it is often used for classifiers to avoid the need for exponentiation; see tf.nn.softmax_cross_entropy_with_logits in TensorFlow, which does exactly this.

",1847,,1847,,10/11/2018 11:09,10/11/2018 11:09,,,,0,,,,CC BY-SA 4.0 8358,2,,8339,10/11/2018 13:33,,2,,"

Classic Question

Not only can the reliability or accuracy needs be asymmetrically distributed between category boundaries, but the asymmetry might not be describable in terms of a first degree polynomial translation of the error surface.

The question data set are images belonging to class $c \in \Big( A: \{A_1, \, ..., \, A_n\} \land B: \{B_1, \, ..., \, B_m\} \Big)$. The convolutional network must be trained with a sample of labeled images to categorize with a reliability of $1 - \delta$ and accuracy of $1 - \epsilon$, based on the PAC (probably approximately correct) framework.

How can the loss function reflect the conceptualization of optimal training result that the reliability of categorization between $A$ and $B$ be of greater value than categorization between $A_i$ and $A_j$ where $i \ne j$, and similarly with B.

Criteria X would state that the value of differentiation $D$ is not sub-category index dependent for differentiation between super-categories. That is, $\forall \; (i, j) \; \text{where} \; i \ne j$, $D(A_i, B_i) = D(A_i, B_j)$.

Criteria Y states that the categorization WITHIN a super-category is not unlike the typical symmetric case: $\forall \; (i, j, k) \; \text{where} \; i \ne j \land i \ne k \land j \ne k$, $D(A_i, A_j) = D(A_i, B_k)$.

  1. (a) Are there any loss functions that take this requirement into account?

Let $1_p$ be the bit value that is 1 if and only if the label matches the super-category, and let $1_b$ be the bit value that is 1 if and only if the label matches both the super-category and the sub-category. Because of Criteria X, a canonical, normalized loss function could then be defined as follows.

$\epsilon = \dfrac {(1 - 1_p) + \alpha (1 - 1_b)} {1 + \alpha}$

The proportional value of sub-category reliability in relation to super-category reliability is $\alpha < 1.0$.

Because of criteria Y, there is no meaning to concavity in this context. Applying mean squaring to the loss simply changes the curvature and meaning of $\alpha$ in relation to the loss as can be seen here.

$\epsilon = \dfrac {(1 - 1_p)^2 + \alpha^2 (1 - 1_b)^2} {1 + \alpha^2} = \dfrac {(1 - 1_p) + \tau (1 - 1_b)} {1 + \tau}\text{, where }\tau = \alpha^2$
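
A literal (non-differentiable) sketch of the first loss above as a plain Python function; for an actual training loss, the indicator bits would be replaced by predicted probabilities:

def hierarchical_loss(true_super, true_sub, pred_super, pred_sub, alpha=0.5):
    match_super = 1.0 if pred_super == true_super else 0.0            # 1_p
    match_sub = match_super * (1.0 if pred_sub == true_sub else 0.0)  # 1_b
    return ((1 - match_super) + alpha * (1 - match_sub)) / (1 + alpha)

print(hierarchical_loss('A', 1, 'A', 2))  # correct super, wrong sub -> 1/3
print(hierarchical_loss('A', 1, 'B', 1))  # wrong super              -> 1.0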

  1. (b) Could a weighted cross entropy work in this case?

Cross entropy between the two levels of categorization would be baseless unless there were correlative effects between super classification feature and sub classification feature inherent in the target concept of the learning.

  2. How would this loss change if the classes were unbalanced?

Although there may be schemes for modifying the loss function to compensate for skewed distribution of categories represented in the sample, this is primarily a data input problem best solved by improving data collection or choosing training algorithms, designs, architectures, treating input, and properly controlling hyper-parameters. The loss function represents the inverse of what is considered optimal, not how one achieves it.

",4302,,,,,10/11/2018 13:33,,,,1,,,,CC BY-SA 4.0 8361,2,,8284,10/11/2018 20:37,,1,,"

Be careful to specify exactly what you mean by adjust weights and biases of the network so that the ""pre-activated output"" ... approaches ....

When training a neural network, one minimizes a loss function. This loss function determines how important the deviation from 0.01396 is compared to the deviation of the other node from -0.00524. By transforming the target labels backwards, you should also express the original loss function in terms of the back-transformed labels.


What one can do in some cases is to combine the input to the last layer's activation function with the loss function and algebraically simplify the resulting expression.

This concept is for example implemented in Tensorflow's tf.nn.sigmoid_cross_entropy_with_logits. This function can be used for the case of a single output with sigmoid activation function and binary cross-entropy loss (a similar function also exists for the case of multiple output nodes with a softmax activation function).

Instead of first passing the values through a sigmoid activation and then calculating the binary cross-entropy with respect to the target label, it combines the two expressions and uses an equivalent but simpler expression.

If you look at the documentation of this function, you'll see that the number of transcendental functions (which are computationally expensive to calculate) can be reduced from

loss(x,z) = z * -log(1 / (1 + exp(-x))) + (1 - z) * -log(exp(-x) / (1 + exp(-x)))

to

loss(x,z) = max(x, 0) - x * z + log(1 + exp(-abs(x)))

where x is the input to the (sigmoid) nonlinearity (corresponding to the output of the green nodes in your diagram), also called 'logits' and z is the target label.
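
A usage sketch in TensorFlow (the numbers are just the values mentioned above, used as placeholder logits and labels):

import tensorflow as tf

logits = tf.constant([[0.01396], [-0.00524]])   # raw pre-activation outputs x
labels = tf.constant([[1.0], [0.0]])            # target labels z

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)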

",18159,,,,,10/11/2018 20:37,,,,2,,,,CC BY-SA 4.0 8362,1,,,10/11/2018 22:19,,1,36,"

I am interested in understanding how to choose data-acquisition parameters for the subject matter:

  • Frame Resolution
  • Frame rates (FPS)

The goal is to have 'enough' (preferably the minimal) resolution and frames to enable AI to identify people.

QUESTIONS

  1. Are there any published rules of thumb or processes to select video parameters?
  2. Is there a term or label for the selection of video parameters for AI projects?
",18819,,,,,10/11/2018 22:19,People Counting Video Analytics: data acquisition parameters,,0,0,,,,CC BY-SA 4.0 8363,2,,7975,10/11/2018 22:55,,0,,"

The Problem Statement

It does not appear from the wording in the question that the semantics of the language in the questionnaire will be processed, so no knowledge of the associations in questionnaire questions and answers will be derived prior to its administration.

There are $M$ questions, with $N_m$ undifferentiated answers for each question, presented to human users. The administration of this questionnaire builds a set of data that will later be used to train an independent learning system.

This question pertains to both some subsequent training and the learning that may need to occur to achieve stated goals in the acquisition of data for the subsequent training.

  1. Reduce the burden on the system users that answer the questions.
  2. Reduce the storage requirements.
  3. Remove redundancy from the data to be used as examples for subsequent training.
  4. Utilize in real time early answers within a single user session to achieve the above three objectives.

Analysis

The assumption in the question that the above four can and should be achieved through the same mechanism may or may not prove true during the design process. These are some alternative ways to achieve goals.

  • Goal 2. can be achieved by employing bit-wise operations to pack the answer data into an innocuous payload.
  • Goal 3. can be achieved by auto-encoding independently of 1.

Goal 1. is the key to this problem, and its solution, utilizing goal 4., should be addressed first as the predominant technical risk. Afterward, if byproducts of the solution to goal 1. can be employed to assist in the solution of goals 2. and 3., then so be it.

There are at least three meanings of models in machine learning.

  1. The model of the concept being learned
  2. The model through which learning occurs
  3. The model of what is considered sufficient

This last model defines how accuracy and reliability are measured and what quality acceptance criteria are given regarding them.

It appears that none of the three models for either the data acquisition learning or the later learning using the data acquired are yet defined, however some architectural considerations can be addressed and some conclusions can be reached.

Structure of Information

Consider these structures of data that can aid learning.

  • The $M$ features of user linguistic response can be saved within a user session as a time series of vectors $\sigma(m, n, t)$.
  • Those series can be collected across $U$ users, each with an user ID $i$ indexed sequentially for mathematical purposes by $u$, as a structure of elements $\gamma(m, n, u, t)$.

It is the second structure of $\gamma$ elements that contains information that can assist in the presentation of questions and answers to subsequent users, given the $\sigma$ time series in the current user's session state.

The Key Challenge

The challenge is not the application of probability and statistics to a single session, which is straightforward, outlined below. The difficulty is that once those of the $M$ questions are rated for their likely usefulness in later training, by reordering questions, defaulting them to the most likely response, or eliminating any answers, the system has changed the conditions driving the associations. This is the classic scenario when the measurement changes the item being measured. In such a feedback condition, the knowledge acquired about usefulness can produce unstable conditions, including oscillations or chaotic system behavior.

Important Early Design Choice

Real time learning is indicated, and the system must be stabilized using the same kind of stabilization techniques involved in electrical engineering, robotics, and balancing. Perhaps the best choice for current designs is Q-learning, pioneered by Watkins and developed further by many others.

Applicable Probability and Statistics

The straightforward application of probability and statistics mentioned above is this. Using Bayes' Theorem (not the naive categorization precisely), one can calculate the probability that any given $\sigma$ will be selected, based on the multi-user sequence of $\gamma(m, n, u)$. However the most effective solution can be achieved via the application of Q-learning to the ordering, defaulting, and elimination of $\sigma$, given both the multi-user sequence of $\gamma$ and current user sessions sequence of $\sigma$. Where naive Bayesian categorization or feature extraction may be most useful is in the profiling of users to further enhance user experience. Even this can be contained within the Q-learning process. The key to Q-learning is the inverse exponential decay of past knowledge.

Application Architecture

The most scalable architecture would be to use JavaScript in the web browser to acquire learned behavior via AJAX and adjust answer order, defaulting, and hiding in real time with the most updated learned state.

A more sophisticated approach would be to encapsulate the learned information in fuzzy logic rules to essentially remove redundancy and compress the learned information.

Further Requirements Analysis Indicated

This may be the extent to which design can proceed without more analysis of the requirements of the system.

What must be defined first is how the training data will be labeled, if at all.

  • Is the goal to provide example data for supervised learning, where expected results are provided to the training process?
  • Or is the goal to provide example data for unsupervised learning, where no expected results are given, and the training is to proceed toward some defined goal that is universal across the examples used in training?

Another consideration is to decide whether intelligent defaulting will be employed.

  • Should the system guess what the next answer is going to be and intelligently select it, allowing the user to change the selection if the guess was wrong?
",4302,,4302,,10/11/2018 23:17,10/11/2018 23:17,,,,0,,,,CC BY-SA 4.0 8364,1,8365,,10/11/2018 23:09,,1,72,"

I am a newbie in deep learning and am looking for advice on predicting traffic congestion events. I have a table of vehicle travel-time data and another table with the road length segmented based on stop locations. I am thinking of deriving the time-wise, route-specific speed details based on stop locations. After initial data cleansing and massaging, my input parameters are the time and stop location with actual speed details. I train my model with the training dataset and validate it as per the recommended deep learning approach.

So my questions are:

  1. Is this approach correct or how can I improve it? I am not sure if the number of inputs can be increased for better results.
  2. Which activation method will be best to utilize to get a range of conditions/event types rather than binary 1 or 0?
  3. This will require dealing with a bigger dataset of at least a few GB, which will grow to around 200 GB in the final product. Can I use my professional-grade laptop to process this data, or should I consider moving to big-data processing power?

Please advise. Thanks in advance for your help.

",18959,,30725,,5/29/2020 13:48,5/29/2020 13:48,Deep learning model training and processing requirement for Traffic data,,1,0,,1/1/2022 9:34,,CC BY-SA 4.0 8365,2,,8364,10/12/2018 0:04,,0,,"

Adjustments

There are a few considerations that may help with problem analysis:

  • Traffic data alone will produce limited predictive results. For instance, the onset of precipitation is critical input for any production-ready predictive model.
  • Traffic congestion is highly chaotic in that events cascade, thus what is colloquially called the butterfly effect, is a dominant challenge. What you can realistically achieve are what chaoticians call attractors. They are morphological features in phase spaces and auto-correlations that are predictable under specific conditions.
  • Modelling traffic requires further segmentation beyond stop locations to include turn options and merge points, both of which are critical factors in free flow and congestion.
  • Feedback from the field will DRASTICALLY improve prediction. Expecting the system to predict start to finish scenarios longer than a few minutes of potential congestion dynamics is unrealistic. Inputs will need to be continuous and the closer to real time the better.
  • The physical relationships between position, velocity, and acceleration must be built into the parameterized model in which parameters are continuously updated.

We can assume that the system design predicts congestion for some purpose, which was not stated in the question. It is reasonable to assume, because of the economics of transportation, that the objective is to reduce the area under the probability distribution curve for undesirable events resulting from congestion.

  • Lost time
  • Accident occurrence
  • Fatalities

The mechanism of control through which such objectives could be achieved can be any of these.

  • Notifying drivers of traffic conditions via cell towers or WiFi
  • Notifying drivers via dynamic sign display
  • Adjusting remotely configurable traffic signaling devices
  • Controlling gates

These controls can be to human drivers or pilots or to automated driving or piloting systems. The context of the system is key to designing it properly, since the use cases are entirely driven by this context.

The Questions

  • Is this approach correct, [and] how can I improve it? — suggestions above
  • I am not sure if the number of inputs can be increased for better results. — absolutely yes
  • Which activation method will be best to utilize to get a range of conditions/event types rather than binary 1 or 0? — A ReLU or one of its derivatives may produce the best results for offline training, however it may be advisable to look into real time approaches such as Q-learning. In either case, it is advisable to finish requirements definition, decide upon approach, and produce a general architectural diagram of the system before attending to such minutia as activation functions.
  • [Given the likely data set size of 200 GB] can I use my professional-grade laptop to process this data, or should I consider going to big data processing power? — Using VLSI hardware acceleration from vendors such as Intel or NVidia is advisable. Over the course of time, the initial expense and learning curve may be absorbed by not having to pay for additional bandwidth for PaaS or SaaS services.

Regarding computing resources, it is inadvisable to begin with some of the most congested places.

Cities Definitely Too Congested

It is likely that these cities would require a super-computing platform of significant size and a multinational corporation sized budget.

  • Beijing
  • Dubai
  • Tokyo
  • Los Angeles
  • Chicago
  • London
  • Hong Kong
  • Shanghai
  • Paris
  • Amsterdam
  • Dallas

Commuter locations such as Melbourne, Australia; Palm Beach, Florida; or New Haven, Connecticut are examples of good choices to start.

",4302,,4302,,10/12/2018 0:16,10/12/2018 0:16,,,,1,,,,CC BY-SA 4.0 8367,1,,,10/12/2018 6:04,,2,154,"

There are some predefined categories (Overview, Data Architecture, Technical Details, Applications, etc.). The requirement is to classify input paragraphs of text into their respective categories. I can't use any pre-trained word embeddings (Word2Vec, GloVe) because the data entered is not general English (talking about dogs, the environment, etc.) but purely technical (how a particular program works, steps to download Anaconda, etc.). I don't have any data available on the internet to train on either. Anything that understands a sentence at a surface-semantic level will work.

",18963,,30725,,5/29/2020 13:47,5/29/2020 13:47,How to find the category of a technical text on a surface-semantic-level,,1,1,,,,CC BY-SA 4.0 8368,1,,,10/12/2018 6:43,,1,128,"

I'm trying to train an RNN with a chunk of audio data, where X and Y are two audio channels loaded into numpy arrays. The objective is to experiment with different NN designs and train them to transform single-channel (mono) audio into two-channel (stereo) audio.

My questions are:

  1. Do I need a stateful network type, like LSTM? (I think yes.)
  2. How should I organize the data, considering that there are millions of samples and I can't load into memory a matrix of each window of data in a reasonable time-span?

For example, if I have an array [0, 0.5, 0.75, 1, -0.5, 0.22, -0.30, ...] and I want to take a window of 3 samples, I guess I need to create a matrix with every sample shift, like this, right?

[[0.00, 0.50, 0.75]
 [0.50, 0.75, 1.00]
 [0.75, 1.00,-0.50]
 [1.00,-0.50, 0.22]]

Where is my batch_size? Should I make a matrix like this for each sample shift? For each window? This may be very memory-consuming if I intend to load a 4-minute song.

Is this example matrix a single batch? A single sample?
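
For concreteness, here is a rough sketch (in NumPy) of the kind of overlapping-window generator I have in mind, so that the full shifted matrix never has to sit in memory; the array and sizes are just the toy example above:

import numpy as np

def window_generator(signal, window_size, batch_size):
    # Yield batches of overlapping windows without materializing them all at once.
    n_windows = len(signal) - window_size
    for start in range(0, n_windows, batch_size):
        stop = min(start + batch_size, n_windows)
        # Each row is one window of consecutive samples, shifted by one time step.
        batch = np.stack([signal[i:i + window_size] for i in range(start, stop)])
        yield batch  # shape: (batch, window_size)

mono = np.array([0.0, 0.5, 0.75, 1.0, -0.5, 0.22, -0.30])
for batch in window_generator(mono, window_size=3, batch_size=2):
    print(batch)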

",18964,,4302,,10/14/2018 1:15,12/16/2019 5:02,Difficulty understanding Keras LSTM fitting data,,1,0,,,,CC BY-SA 4.0 8369,2,,8367,10/12/2018 12:46,,1,,"

Problem Statement

Find the category of technical text on a surface-semantic-level. The requirement is to classify the input text of paragraphs into their respective categories.

The categories given as predefined are as follows.

  • Overview
  • Data Architecture
  • Technical Details
  • Applications
  • etc

Some document types that would be added in place of 'etc' might include these.

  • Requirements
  • High Level Design
  • System Architecture
  • Testing Plan
  • Deployment Plan
  • Disaster Recovery Plan

Semantic Structure of Technical Documents

It is correct that the terminology between a requirements document and a data architecture will likely have much in common. More precisely, the distributions of linguistic elements of all documents common to a given system or project are likely to contain domain commonalities. A system that pilots a drone will have the linguistic elements "drone(s)", "hover(ing|s)", "target(ing|s)", "flight plan(s)" in similar distributions throughout its documentation.

Indicators

It is likely that these five distinguishing characteristics can be exploited in categorization.

  • Sentence semantics
  • Inserted diagrammatic and pictorial conventions
  • Header conventions
  • Linguistic elements that appear commonly in only one type of technical document
  • Elements and structure from copying previous documents and modifying them or the use of boiler plates and templates in specific departments

Focusing entirely on text recognition and abandoning the other items in the list above would be unwise. Diagrams, such as network diagrams and UML diagrams, may be quite easy to discern using deep convolutional approaches and would clearly identify the category to which the hosting document belongs. That is also true of test case tables.

Recognizing that the relative weights of the five indicators above, and of the sections within each document, are variables in the model upon which training is applied will produce the best results. For instance, the final paragraph may be more telling than the rest of the body.

Also, be aware that the proportional appearance of language elements within all text documents on record can be coupled with the language elements identified in the examples, both during training and in later use of the trained model. Training is likely to progress faster and produce more accurate and reliable results if the features include an indication that the linguistic element "test case(s)" appearing in the input text is significantly less prevalent in the domain of technical documentation than the linguistic element "-ing" that indicates continuous tenses of verbs.

Avoid Static Grammars

Language parsers based on fixed language rules (grammars) have not had much success compared to association based semantic mapping, and linguistics has moved away from those static models for similar reasons. Avoid grammar based parsing.

Existing Work

For the textual categorization, the below academic publications are some of the recent work that have already gained some notoriety.

Faulty Approaches

The approach in the comment of comparing the semantics of sentences will only determine one aspect of information redundancy between two sentences.

The number of comparisons is also a consideration. Comparing $\chi$ documents, each containing $\sigma$ sentences, pairwise at the sentence level would require $\chi \, (\chi - 1) \, \sigma \, (\sigma - 1)$ comparisons. For $\chi = 10,000$ documents containing $\sigma = 1,000$ sentences each, that is 99,890,010,000,000 sentence comparisons, the totality of which provides no particularly useful information about the category of any of the documents.

The documents must be related to a concept class, not each other.

Visualizations of the semantics aren't particularly useful unless you are looking for something and the visualization is designed to present that which is sought.

A Better Plan

  • Determine the number of example training documents, $t$, needed to be categorized by experts to create a sufficiently large training data set. (Consider using the PAC learning framework designed for this purpose.)
  • Draw that number from the full set of documents, using an appropriate random or highly pseudo-random method, herein referred to as method $\mathbb{D}$. That is the training example set.
  • Draw that number from the full set of documents again, using method $\mathbb{D}$. That is the test example set.
  • Have experts label (categorize) both example sets according to technical document type. Use the same experts for both training and testing, and have them alternate periodically between the two sets so that their learning, fatigue, or boredom curves have a minimal effect on their categorizing of the two sets.
  • Profile each of the category-labeled documents in terms of the above five indicators. Note that to use the images embedded in the document as part of the profiling, which could drastically improve system reliability, the images must be run through a separate diagram categorizing network trained separately on images to find diagrams characteristic of one of the technical document types. That may sound like a lot of work, but consider that categorizing drawing types is a well-developed science and the labeling is only of the diagram types that are representative of particular document types.
  • Train an appropriately designed artificial network to categorize the documents, using the profiling results as example inputs and the labels from the experts' categorization (a minimal sketch of this step follows the list).
  • Test using the test set
  • Based on the results of the test, decide whether to use the current training or re-execute a previous step using information gained from the first training and train again
  • Run the trained network on the full document set
  • Pull a sample from the result using method $\mathbb{D}$ to validate effective completion of the run
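
As a rough illustration of the training step above (not a prescription, and with a plain linear classifier standing in for the artificial network), here is a minimal scikit-learn sketch where the expert-labeled documents are assumed to be available as plain-text strings:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Placeholder expert-labeled training and test sets drawn by method D.
train_texts = ["...overview of the drone piloting system...",
               "...test case table for the hover controller..."]
train_labels = ["Overview", "Testing Plan"]
test_texts = ["...overview of the targeting subsystem..."]
test_labels = ["Overview"]

# Word n-gram TF-IDF stands in for the textual part of the profiling; diagram-based
# indicators would come from a separate image model and be concatenated as extra features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)
print(classification_report(test_labels, model.predict(test_texts)))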
",4302,,-1,,6/17/2020 9:57,10/16/2018 12:24,,,,1,,,,CC BY-SA 4.0 8370,1,,,10/12/2018 18:14,,1,315,"

Sparse linear systems are normally solved by using solvers like MINRES, Conjugate gradient, GMRES.

Efficient preconditioning, i.e., finding a matrix P such that PAx = Pb is easier to solve than the original problem, can drastically reduce the computational effort to solve for x. However, preconditioning is normally problem-specific and there is not ONE preconditioner that works well for every problem.

I thought this would be an interesting problem to apply RL to, since there are certain norms (e.g., the condition number of the matrix PA) to measure whether P is a good preconditioner, but I could not find any research in this field.

Is there a specific problem why RL could not be applied?

",18975,,30725,,5/29/2020 13:47,5/29/2020 13:47,Using reinforcement learning to find a preconditioner for linear systems of the form Ax = b,,1,2,,,,CC BY-SA 4.0 8371,1,8376,,10/12/2018 19:45,,1,50,"

I have created a classifier for some simple gestures using an input layer, a hidden layer with tanh activation and an output softmax layer. I'm also using the Adam optimiser. The network classifies perfectly with validation data. However, I'd like it to be able to take in random noise that looks nothing like the shapes and not be able to classify it confidently. For example:

One gesture input looks like this and is correctly classified as gesture 'A':

However, when I pass this 'noise', which is clearly distinguishable to the human eye, as input, it still classifies it with 100% confidence as the same gesture 'A'.

I assume it's because the inputs are still very close to 0? My instinct is to scale up the inputs perhaps to increase the differentiation between the noise and the input. However, in real operation the noise will all be on a similar scale to the inputs and I won't know what is noise and what isn't so I will still have to apply the same scaling to that noise. Will I run into the same problem?

On a more general note, is there a training approach to prevent misclassifications, particularly if we know what they might look like? For example, in this case I thought I could perhaps generate some noise and use it at training time to create an extra noise class, or is it just best to come up with such a well-trained network that you can use some sort of confidence threshold? For example, if the network only produces 50% classification confidence for an input then I can discard it as noise. Any suggestions much appreciated!

",18577,,18577,,5/28/2020 22:18,5/28/2020 22:18,Recognising Noise in Simple Classification,,1,0,,,,CC BY-SA 4.0 8376,2,,8371,10/13/2018 7:52,,0,,"

The network classifies perfectly with validation data, however I'd like it to be able to take in random noise that looks nothing like the shapes and not be able to classify it confidently.

You need to train the network on noise as well if you want it to be able to recognize it.

or is it just best to come up with such a well-trained network that you can use some sort of confidence threshold?

You can use the approach in the reference below to get a confidence score for the classification. But training the network on a noise class will work way better.
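
A minimal sketch of the extra-noise-class idea in Keras; the input length, the noise statistics, and the model itself are placeholders here, not the asker's network:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

n_features, n_gestures = 64, 3            # assumed input length and number of gestures
n_classes = n_gestures + 1                # one extra class reserved for "noise"

# x_gestures / y_gestures stand in for the real recorded gesture data.
x_gestures = np.random.randn(300, n_features)
y_gestures = np.random.randint(0, n_gestures, size=300)

# Synthetic noise on the same scale as the inputs, labelled with the extra class.
x_noise = np.random.randn(100, n_features)
y_noise = np.full(100, n_gestures)

x = np.concatenate([x_gestures, x_noise])
y = to_categorical(np.concatenate([y_gestures, y_noise]), n_classes)

model = Sequential([
    Dense(32, activation='tanh', input_shape=(n_features,)),
    Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=10, batch_size=32, verbose=0)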

See:

Gal: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

",3217,,,,,10/13/2018 7:52,,,,0,,,,CC BY-SA 4.0 8378,2,,3321,10/13/2018 9:53,,1,,"

One key to the answer is in the question, ""Even for one specific conv layer."" It is not a good idea to build deep convolution networks on the assumption that a single kernel size most aptly applies to all layers. When perusing the configurations that proved successful in publications, it becomes apparent that configurations that vary through their layers are more commonly found to be optimal.

The other key is to understand that two layers of 11x11 kernels have a 21x21 reach, and ten layers of 5x5 kernels have a 41x41 reach. A mapping from one level of abstraction to the next need not be completed in one layer.
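
As a quick check of those reach figures, the receptive field of a stack of $n$ convolution layers with $k \times k$ kernels and stride 1 is $n(k - 1) + 1$. A small helper makes the arithmetic explicit:

def receptive_field(num_layers, kernel_size):
    # Receptive field of stacked stride-1 conv layers with equal kernel size.
    return num_layers * (kernel_size - 1) + 1

print(receptive_field(2, 11))   # 21 -> two 11x11 layers reach 21x21
print(receptive_field(10, 5))   # 41 -> ten 5x5 layers reach 41x41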

Generalities regarding kernel sizes exist, but they are functions of the typical input characteristics, the desired output of the network, the computing resources available, resolution, size of the data set, and whether they are still images or movies.

Regarding input characteristics, consider this case: the images are shot over a large range of subject distances under poor lighting conditions, such as in security scenarios, so the aperture of the lens is wide open, causing objects at some distances to be out of focus, or there can be motion blur.

Under such conditions a single 3x3 kernel will not detect many edges. If an edge may span five pixels, the choice exists as to how many layers are dedicated to its detection. Which factors affect that choice depends on what other characteristics exist in the input data.

Expect that, as acceleration hardware develops (in VLSI chips dedicated to this purpose), computing resource constraints will decrease in priority as a factor in kernel size selection. Currently, the computation time is significant and forces the decision about how to balance layer count and layer size to be mostly a matter of cost.

This question raises another question. Can an oversight machine learner learn how to automatically balance the configuration of deep convolution networks? It could then be re-executed whenever additional computing resources are provisioned. It would be surprising if there weren't at least a dozen labs working on exactly this capability.

",4302,,4302,,10/13/2018 10:02,10/13/2018 10:02,,,,0,,,,CC BY-SA 4.0 8379,2,,6488,10/13/2018 10:41,,0,,"

Problem Statement

These are the features of the runs.

  • CNN class prediction using two 2D convolutions with associating max pooling
  • mini-batch execution approach
  • fixed shuffling process used

These are the results.

  • shuffling obtained 80%/70% train/test accuracy
  • shuffling for full set obtained 77% accuracy
  • no shuffling for full set obtained 45% accuracy

Listed Causes

These are the potential causes for the apparent anomaly that were listed in the question.

  • model is learning incorrectly
  • learns that the order of the data points plays a role in their prediction
  • data point not predicted separately because of mini-batching

Other Causal Possibilities not Listed

Notice that both of these additional possibilities are related to an insufficiency in the simulation of randomness, just as can be the case in cryptographic protocols.

  • The CNN learns the shuffling system or some aspect of it so that when the shuffling is removed, the training no longer applies to the input patterns
  • How the training and testing samples are drawn is not sufficiently random

Additional Questions

Does the model learn from the average of all the data points in the mini-batch? — Yes.

Does it think one mini-batch is one data point? — No. It doesn't think, and the loop does not average the data points before the propagation. Mini-batch simply aggregates the results before back-propagating the correction signal to the parameter tensors.

Does order matter? — Order cannot matter in a stateless system, but often does if there is state remembered between discrete events. Mini-batch requires averaging, which requires statefulness to accumulate the addends. But that is not the likely cause. How the batches are selected from the sample is a more likely factor affecting accuracy.

Principles to Comprehend

The convergence of artificial networks in general is based on the statistical characteristics of the training scenario matching the statistical characteristics of the usage scenario. In other words, to use PAC (probably approximately correct) framework terminology, how the training sample is drawn from the total population must be identical to how the validation sample is drawn from the total population. Therefore, if the training sample is not drawn with sufficient randomness from the total population, convergence cannot be guaranteed.

Questions to Consider

  • How am I deciding the individual operations within the shuffling?
  • How am I drawing the train and test samples?
  • How am I deciding what samples go in what batch?
  • What natural order is in the data examples, and is it really a sequence rather than a set?
  • If a sequence, then is a classic CNN, not designed out of the box to handle temporal sequences, the correct network design to apply?

Answering these questions and gaining a full conceptual understanding of the probability and statistics aspects of the approach should occur prior to thinking about normalization, which could fix your problem accidentally, but cannot be the root cause of the anomaly.

",4302,,,,,10/13/2018 10:41,,,,0,,,,CC BY-SA 4.0 8383,2,,6488,10/13/2018 12:36,,1,,"

The example followed in the question uses a relatively straightforward Convolutional Neural Network. These are not stateful, so the order in which predictions for instances from a test set are queried should have no influence on those predictions.

In a comment written by the author of the question on their own question, it was mentioned that the use of Batch Normalization appears to have been confirmed to be the cause of the issue. Given this info, one possible cause of the issue described in the question is incorrect usage of the training flag of TensorFlow's Batch Normalization implementation. The official documentation contains the following info on this flag:

training: Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (normalized with statistics of the current batch) or in inference mode (normalized with moving statistics). NOTE: make sure to set this parameter correctly, or else your training/inference will not work properly.

If this is incorrectly set to True rather than False outside of the training phase (i.e. when evaluating performance), predictions can be expected to be poor. This alone doesn't explain why specifically the order of test data would matter though, if this alone were the issue then we'd expect test performance to be poor regardless of order.

A different possible explanation can be that there is a mistake in the code that still causes moving_mean and moving_variance ops of the Batch Normalization to be updated during the testing/evaluation. These should only be updated during the training phase, as explained in the documentation linked to above. If they are still getting updated during the test phase, and if there is a meaningful structure in the unshuffled ordering of the test set (i.e. unshuffled test set ordered by class, or ordered according to certain features, etc.), then we would expect precisely the issue described in the question to occur.
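
As an illustrative sketch (not the asker's code), the usual TensorFlow 1.x pattern that keeps both the training flag and the moving-statistics updates confined to the training phase looks roughly like this:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool, name="is_training")  # feed True only while training

h = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
h = tf.layers.batch_normalization(h, training=is_training)
logits = tf.layers.dense(tf.layers.flatten(h), 10)
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))

# moving_mean / moving_variance are updated through UPDATE_OPS, so tie them
# to the train op only; they then never run at evaluation time.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# sess.run(train_op, feed_dict={..., is_training: True})   # training
# sess.run(logits,   feed_dict={..., is_training: False})  # evaluation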

",1641,,,,,10/13/2018 12:36,,,,0,,,,CC BY-SA 4.0 8384,2,,6765,10/13/2018 13:08,,3,,"

Judea Pearl's 2018 comment on ACM.org, in his To Build Truly Intelligent Machines, Teach Them Cause and Effect, is piercing truth.

All the impressive achievements of deep learning amount to just curve fitting.

It may be less sensational and more technically correct to state that it is not, "Just curve fitting," but rather, "sophisticated surface fitting." Nonetheless, his general assessment indicates the need to look beyond tuning nonlinear functions to fit a surface in $\mathbb{R}^n$ and consider whether cognition is achievable with a deep network. The split in answers to this question is odd. We have two conflicting assertions, often strongly stated.

  1. Artificial networks cannot perform logic.
  2. Artificial networks are the best approach to AI.

How can rationality be excluded from the list of important human features of intelligence, which is what these two assertions, taken together, would imply?

Is the human brain a network of sophisticated curve fitters? Marvin Minsky's famous quote, "The brain happens to be a meat machine," was offered without proof, and neither a proof of his trivialization of the human brain nor a proof that the brain is beyond the reach of Turing computability has been offered since.

When you read these words, are your neural networks doing the following sequence of curve fits?

  • Edges from retinal rods and cones
  • Lines from edges
  • Shapes from lines
  • Letters from shapes
  • Linguistic elements from groups of letters
  • Linguistic structures from elements
  • Understanding from linguistic structures

The case is strong for the affirmation that the first five are convergence mechanisms onto a model, and that all the machine learning structure is just a method to fit the data to the model.

Those last two bullet items are where the paradigm breaks down and where many AI researchers and authors have correctly stated that machine learning has significant limitations when based solely on layers of multi-layer perceptrons and convolution kernels. Furthermore, the last bullet item is grossly oversimplified in its current state, probably by orders of magnitude. Even if Minsky is correct that a computer can perform what the brain does, the process of reading and understanding this paragraph could easily have a thousand different kinds of unique process components in patterns of internal workflow with massive parallelism. Imaging technology indicates this probability. We have computers modelling only the simplest peripheral layers.

Is there any scientific/mathematical argument that prevents deep learning from ever producing strong AI? — No. But there is no such argument that guarantees it either.

Other questions here investigate whether these sophisticated curve fitters can perform elements of cognition or reasoning.

The totem of three in the question's image, seeing, doing, and imagining, is not particularly complete, accurate, or insightful.

  • There are at least five sensory paradigms in humans, not one
  • Doing preceded human senses by billions of years — bacteria do
  • Imagining is not a significantly higher process than scenario replay from models of past experience with some method to apply set functions to combine them and inject random mutations
  • Creativity may just be imagining in the previous bullet item followed by weeding out useless imagination results with some market-oriented quality criteria, leaving the impressive creative products that sell

The higher forms are appreciation, a sense of realities beyond the scope of scientific measurement, legitimate doubt, love, sacrifice for the good of others or humanity.

Many recognize that the current state of AI technology is nowhere near the procurement of a system that can reliably answer, "How can I make Y happen?" or "If I had acted differently, would X still have occurred?"

There is no mathematical proof that some combination of small curve fitting elements can or cannot achieve the ability to answer those questions as well as a typical human being can, mostly because there is insufficient understanding of what intelligence is or how to define it in mathematical terms.

It is also possible that human intelligence doesn't exist at all, that references to it are based on a religious belief that we are higher as a species than other species. That we can populate, consume, and exterminate is not actually a very intelligent conception of intelligence.

The claim that human intelligence is an adaptation that differentiates us from other mammals presumes that we adapt well, and we have not been tested. Come the next meteoric global killer with a shock wave of the magnitude of that of the Chicxulub crater's meteor, followed by a few thousand years of solar winter, and we'll see whether it is our 160,000-year existence or bacteria's 4,000,000,000-year existence that proves more sustainable. In the timeline of life, human intelligence has yet to prove itself significant as an adaptive trait.

What is clear about AI development is that other kinds of systems are playing a role along with deep learners based on the multi-layer perceptron concept and convolution kernels which are strictly surface fitters.

Q-learning components, attention-based components, and long short-term memory components could also be called surface fitters, but only by stretching the definition of surface fitting considerably. They have real-time adaptive properties and state, so they can be Turing complete.

Fuzzy logic containers, rules-based systems, algorithms with Markovian properties, and many other component types also play their role and are not surface fitters at all.

In summary, there are points made that have a basis in more than plausibility or a pleasing intuitive quality; however, many of these authors do not provide a mathematical framework with definitions, applications, lemmas, theorems, proofs, or even thought experiments that can be scrutinized in a formal way.

",4302,,36737,,4/4/2021 15:14,4/4/2021 15:14,,,,0,,,,CC BY-SA 4.0 8389,2,,7693,10/13/2018 15:22,,1,,"

It is correct to say that a sigmoid activation function would only work well as a model if the desired output is close to the sigmoid function applied to the input. This is a trivial fact that applies to a single-layer perceptron. The same holds in the single-layer case for any activation function, which is equally trivial.

When the layer count is between one and infinity (two or more), the theory bifurcates. The identity function becomes a special case: any number of layers whose composite mapping conforms to a first-degree polynomial, $ax + b$, can be replaced with a single such layer. In the alternative case, where there are multiple layers that do not functionally conform to a first-degree polynomial, $ax + b$, they cannot be replaced by a single layer of some equally simple function. The complexity increases geometrically, which is the entire point of multilayer perceptrons.

Under particular constraints, the multilayer perceptron can produce a wide variety of functional behaviors that do not resemble the activation functions of the layers.

For instance, a properly trained network using sigmoid activation functions, with sufficient layer depth and sufficient massive allocation of computing resources, could theoretically approximate the topography of the Himalayas.

",4302,,4302,,1/7/2019 20:14,1/7/2019 20:14,,,,0,,,,CC BY-SA 4.0 8390,2,,7693,10/13/2018 17:14,,-1,,"

Because any number of linear layers can be represented by a single linear layer:

$A_2 (A_1 x + b_1) + b_2 = A_2 A_1 x + A_2 b_1 + b_2$

The same is not true if you have non-linear functions.
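
A quick numerical check of that identity (the matrix sizes are arbitrary):

import numpy as np

np.random.seed(0)
A1, b1 = np.random.randn(4, 3), np.random.randn(4)
A2, b2 = np.random.randn(2, 4), np.random.randn(2)
x = np.random.randn(3)

two_layers = A2 @ (A1 @ x + b1) + b2
one_layer = (A2 @ A1) @ x + (A2 @ b1 + b2)   # the collapsed single linear layer
print(np.allclose(two_layers, one_layer))     # True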

",3217,,,,,10/13/2018 17:14,,,,0,,,,CC BY-SA 4.0 8391,1,,,10/13/2018 17:48,,0,338,"

Max pooling is performed as one of the steps in an Inception module, and it yields the same output dimension as that of the input. Can anyone explain how this max pooling is performed?

",17763,,,,,10/14/2018 4:44,Maxpooling in inception?,,1,0,,1/1/2022 10:12,,CC BY-SA 4.0 8393,2,,8370,10/13/2018 20:09,,1,,"

Reinforcement Learning is a method for learning to perform beneficial actions in an environment. One way this is accomplished is by learning to predict useful actions as a function of the observed state of the environment. Another is by learning to predict the expected utility gain of doing an action in a particular observed state. Usually the fact that the agent executes a sequence of actions is exploited to learn more rapidly.

In contrast, the problem you describe is to solve a linear system of equations, which is to say, to learn some hidden values $x$ such that $Ax = b$ for known $A$ and $b$. Gradient descent is a natural way to solve this problem because it is easy to calculate the gradient, and, since the matrices and vectors have the same dimensionality, it is reasonable to expect that the optimization surface is smooth with a single global minimum.
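
For illustration, a minimal sketch of solving $Ax = b$ by gradient descent on $\frac{1}{2}\|Ax - b\|^2$; the matrix and step size here are arbitrary placeholders:

import numpy as np

np.random.seed(0)
A = np.random.randn(5, 5) + 5 * np.eye(5)   # well-conditioned example system
b = np.random.randn(5)

x = np.zeros(5)
lr = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size for this quadratic
for _ in range(5000):
    grad = A.T @ (A @ x - b)                # gradient of 0.5 * ||Ax - b||^2
    x -= lr * grad

print(np.allclose(A @ x, b, atol=1e-6))     # True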

While RL techniques could be applied to solve this problem (I guess by predicting values of $x$ in response to inputs $b$?), none of the usual features that privilege the RL approach over a standard supervised learning approach are present (there's no sequential relationship between data points; there's just a list of constraints, essentially).

",16909,,,,,10/13/2018 20:09,,,,0,,,,CC BY-SA 4.0 8397,2,,3472,10/14/2018 0:35,,0,,"

Question 1. I am wondering whether this field (using RNNs for email spam detection) worths more researches or it is a closed research field.

Use of RNNs to detect spam grew out of the use of artificial networks to detect fraud in telecommunications and the financial industry as a result of the rise of attacks on long distance lines, ATMs, banks, and credit card systems in online and at data centers supporting physical points of sale.

Although basic RNN design has given way to the newer LSTM and GRU approaches and its variants and extensions, artificial networks are now one of the primary fraud detection technologies. The dominance of this fraud detection strategy extends to SPAM detection, with its close ties to fraudulence. The spammers present the appearance of a relationship with their recipients that does not exist.

The improvement of computing designs that recognize patterns in time series data, the application of those designs for fraud detection and countermeasures, and the detection and routing or deletion of unwanted incoming information will be a stable area of research and development for the foreseeable future.

Question 2. What is the oldest published paper in this field?

There is no single oldest published paper on this specific topic. The first papers on RNNs are given in this answer: Where can I find the original paper that introduced RNNs?, but the move from pattern-based detection to artificial networks to stateful artificial networks was gradual. The earliest deployments of these networks in server-side or client-side solutions occurred before any papers were published on the specific topic of RNN use in spam detection.

Question 3. What are the pros and cons of using RNNs for email spam detection over other classification methods?

Spam also has a strong temporal element. What one considers undesirable spam in one year may be considered mission critical email a few years later, and vice versa. The performance in this space includes speed, accuracy, and reliability of classification, but also adaptation to changing user classification needs.

It is because of these four performance characteristics in tandem that stateful networks derived from RNNs are commonly used for spam detection. The need for gated learning and forgetting at the cell level to support the variable adaptivity makes the LSTM and GRU variants common choices.

Semantic document classification is riding on an emerging set of technologies, which are primarily artificial network designs that begin to broach the threshold of cognitive understanding of the text by storing linguistic structure in forms that allow analogy, comparison, and composition between them. Semantic algorithms that perform these operations on fuzzy associations in combination with recursive artificial networks may emerge as the dominant design as such designs are further developed.

References

Detecting Spam Blogs: A Machine Learning Approach, Pranam Kolari, 2006

Automated labeling of bugs and tickets using attention-based mechanisms in recurrent neural networks, Volodymyr Lyubinets et al., 2018

Spam Filter Through Deep Learning and Information Retrieval, Weicheng Zhang, 2018

An Unsupervised Neural Network Approach to Profiling the Behavior of Mobile Phone Users for Use in Fraud Detection, Peter Burge, John Shawe-Taylor, Journal of Parallel and Distributed Computing, Volume 61 Issue 7, July 2001, pp 915-925

Intelligent junk mail detection using neural networks, Michael Vinther, June 2002

Mining for fraud, Margaret Weatherford, IEEE Intelligent Systems, 2002

Discovering golden nuggets: data mining in financial application, D Zhang, L Zhou, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 34, No. 4, November 2004

A Comprehensive Survey of Data Mining-based Fraud Detection Research, C Phua, V Lee, K Smith, R Gayler - Arxiv preprint arXiv, 2007

",4302,,4302,,1/1/2019 21:23,1/1/2019 21:23,,,,0,,,,CC BY-SA 4.0 8398,1,,,10/14/2018 1:06,,1,48,"

A fixed video camera records people moving through its field of view.

The goal is to detect and track heads, in real time, as they move through the video. The norm is that there are many heads, which are often partially obscured. This example video boxes heads and provides a head count.

There seem to be many different models. Examples include:

Given the context of the video, what is the thought process that you would use to choose a model?

",18819,,10135,,10/18/2018 10:44,4/18/2019 21:05,Choosing Instance Semantic Detection,,1,0,,,,CC BY-SA 4.0 8399,2,,8398,10/14/2018 2:42,,1,,"

There may be a conceptual disconnect between the term Semantic Detection and the task of Head Tracking, since sequential recognition of an object in a set of visual samples representing continuous movement isn't technically a semantics problem, although a mapping strategy that works for semantic processing may, with appropriate modifications, apply to the mapping of obstructions and tracked objects and their relationship in space-time.

Tracking a head is primarily a surface fitting convergence problem, where the convergence goal and the process that achieved it must be sustainable, especially if the system is required to take actions during the tracking operation. Distinguishing heads from one another requires a high degree of reliability and accuracy in this convergent continuity. That is the primary challenge.

Drawing boxes and displaying counts are trivial operations once the head tracking works.

These are some models listed in the question or improvements over them.

One seasoned expert spoke to a particular detail in the final question.

The thought process experts use to choose a model has no equivalent point value in StackExchange reputation. As experts retire, some may be willing to sell the professional heuristics they acquired over decades of research and development work to a book publisher, if they haven't done so already.

Nonetheless, the above links are a good furtherance of the thought process begun in this question along a direction of continued effective thought. The following steps provide a road map for how to develop approaches in general.

  • Read the academic materials until you understand.
  • Determine your selection criteria.
  • Test the options against that criteria.
  • Pick the winner.
  • If something blocks progress with that one, pick the runner up.
",4302,,4709,,4/18/2019 21:05,4/18/2019 21:05,,,,0,,,,CC BY-SA 4.0 8400,2,,8391,10/14/2018 4:44,,1,,"

My bad, didn't look into the block diagram first

The max pooling filter/kernel is 3x3, i.e., f x f with f = 3.

Formula for the padding needed to keep the same dimension as the input (with stride 1):

p = (f - 1)/2

Here f = 3, so the padding to be applied before max pooling is (3 - 1)/2 = 1.
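
For reference, a minimal Keras check of this: a stride of 1 plus padding='same' keeps the spatial size, which is what the pooling branch of an Inception module relies on (the input shape below is arbitrary):

from keras.models import Sequential
from keras.layers import MaxPooling2D

model = Sequential([
    # 3x3 pooling window, stride 1, 'same' padding (a pad of 1 on each side)
    MaxPooling2D(pool_size=3, strides=1, padding='same', input_shape=(28, 28, 192)),
])
model.summary()   # output shape stays (28, 28, 192)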

",17763,,,,,10/14/2018 4:44,,,,0,,,,CC BY-SA 4.0 8401,2,,8284,10/14/2018 5:10,,1,,"

For the above stated artificial network, these two training scenarios are similar.

  • Training to converge to the ideal output vector at the point after the last layer's activation functions are applied
  • Training to converge to the vector formed by applying $tan$ functions to each component of the ideal output vector, when convergence occurs at the point just after the vector-matrix multiplication with the last layer's parameters, prior to the last layer's $atan$ activation functions

Distinctions between them include these.

  • The applications of gradients and associated code must be adapted to the modification of the starting point of back propagation to before the final $atan$ activation functions.
  • The slope and curvature of the loss function will differ if the same loss function is used for the two scenarios, so the accuracy, speed, and reliability of convergence will also be different.
",4302,,,,,10/14/2018 5:10,,,,0,,,,CC BY-SA 4.0 8403,1,8404,,10/14/2018 9:05,,4,951,"

I'm making a Connect Four game using the typical minimax + alpha-beta pruning algorithms. I just implemented a Transposition Table, but my tests tell me the TT only helps 17% of the time. By this I mean that 17% of the positions my engine comes across in its calculations can be automatically given a value (due to the position being calculated previously via a different move order).

For most games, is this figure expected? To me it seems very low, and I was optimistically hoping for the TT to speed up my engine by around 50%. It should be noted though that on each turn in the game, I reset my TT (since the evaluation previously assigned to each position is inaccurate due to lower depth back then).

I know that the effectiveness of TT's are largely dependent on the game they're being used for, but any ballparks of how much they speed up common games (chess, go, etc) would be helpful.

EDIT - After running some more tests and adjusting my code, I found that the TT sped up my engine to about 133% (so it took 75% as much time to calculate). This means those 17% nodes were probably fairly high up in the tree, since not having to calculate the evaluation of these 17% sped up things by 33%. This is definitely better, but my question still remains on whether this is roughly expected performance of a typical TT.

",16917,,16917,,10/14/2018 13:25,10/14/2018 13:29,Transposition table is only used for roughly 17% of the nodes - is this expected?,,1,0,,,,CC BY-SA 4.0 8404,2,,8403,10/14/2018 13:24,,4,,"

I don't think that's necessarily a strange number. It's impossible for anyone to really tell you whether that 17% is ""correct"" or not without reproducing it, which would require much more info (basically would have to know every single tiny detail of your implementation to be able to reproduce).

Some things to consider:

  1. The size of your transposition table / the number of bits you use for indexing into the TT. If you have a relatively small TT, meaning you use relatively few bits for indexing, you'll have bigger probabilities of collisions. That means you will have to replace existing entries more often, which means they might no longer be in the table anymore by the time you encounter transpositions during the search.

  2. Where in the search tree are the nodes located that are recognized as transpositions already in the table? If you detect transpositions very high up in the search tree, you save a lot more search time than if you detect a transposition somewhere deep down in the search tree; once you detect a transposition that has already been searched sufficiently deep for the value stored in the table to be valid, you can cut off the complete subtree below that node from the search. This becomes more valuable as it happens closer to the root. So, just the number ""17% of nodes"" doesn't really tell us much.

  3. Are you using iterative deepening? Since you mentioned only minimax + alpha-beta pruning in the question, I suspect you're not using iterative deepening. TTs become significantly more valuable once you do use iterative deepening, because then almost every state encountered becomes a ""transposition"". You'll already have seen all those states in a previous iteration with a lower search depth limit. Now, it is important to note with this combo of ID + TTs, that you can no longer completely cut off searches for all recognized transpositions. If an entry in the table holds a value that was computed with a search depth of $d$, that value will no longer be valid when performing a subsequent iteration of ID with a max search depth of $d + 1$ for example. However, that ""outdated"" value stored in the TT can still be used for move ordering, which can lead to significantly more prunings from alpha-beta pruning. (A rough sketch of this lookup-and-ordering pattern follows this list.)

  4. How efficient is the remainder of your engine? A TT is not 100% free, it takes a bit of additional time too (for example to compute the hash values for your game states). If the rest of your engine is relatively slow (i.e. inefficient implementation for playing moves, copying game states, etc.), the computational overhead of the TT won't matter much and even a low number of recognized transpositions will still be valuable. If the rest of your engine is very fast, it'll be more important to have a high number of transpositions for the TT to be really valuable.
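
To make point 3 concrete, here is a rough, heavily simplified sketch of the lookup-and-ordering pattern, using a toy Nim-like game in place of a real engine; real implementations also store whether a stored value is exact or only a bound, which is omitted here:

from collections import namedtuple

TTEntry = namedtuple("TTEntry", "depth value best_move")
tt = {}

# Toy game: a state is the number of stones left, a move removes 1-3 stones,
# and the player who takes the last stone wins. This stands in for the real engine.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def alpha_beta(stones, depth, alpha, beta):
    if stones == 0:
        return -1                     # previous player took the last stone: we lost
    if depth == 0:
        return 0                      # heuristic value at the depth limit

    entry = tt.get(stones)
    if entry is not None and entry.depth >= depth:
        return entry.value            # stored value searched deep enough: reuse it

    moves = legal_moves(stones)
    if entry is not None:
        # Outdated entry: its best move is still a good first guess for ordering.
        moves.sort(key=lambda m: m != entry.best_move)

    best_value, best_move = float("-inf"), None
    for move in moves:
        value = -alpha_beta(stones - move, depth - 1, -beta, -alpha)
        if value > best_value:
            best_value, best_move = value, move
        alpha = max(alpha, value)
        if alpha >= beta:
            break                     # beta cutoff

    tt[stones] = TTEntry(depth, best_value, best_move)
    return best_value

print(alpha_beta(10, depth=10, alpha=float("-inf"), beta=float("inf")))  # 1 (win)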


As an ""educated guess"", I'd say the number of 17% you describe is not necessarily strange. Especially given your edit to the question, where you indeed mention that it is likely that transpositions are found high up in the tree (close to the root). When this happens, you immediately remove the probability of recognizing all those states deeper down in the tree of getting recognized as transpositions yourself. So, the pool of states that could ""potentially"" be found in the TT is much less than 100% of the states stored in the TT.

It's really just that though, just an educated guess. It's going to be very difficult for anyone to give a conclusive ""yes"" or ""no"".

",1641,,1641,,10/14/2018 13:29,10/14/2018 13:29,,,,0,,,,CC BY-SA 4.0 8405,2,,8350,10/14/2018 14:22,,2,,"

Per Neil's line of questioning:

  • If you're doing offline, supervised or reinforcement learning, where you have the dataset and decide what to show to the network next, there's no reason to do this. It's better to just adjust your training to show rare examples to the network more often.

  • If you're doing online reinforcement learning, then the agent typically controls which examples it sees next through its choice of actions. Increasing the agent's exploration rate will cause it to see a more diverse set of examples, but these examples are also less likely to be useful for solving the task the agent is working on.

  • If you're doing supervised, online learning, and you don't control the order in which examples appear, it might be useful to freeze some of the weights, but it might be better, and would certainly be simpler, to increase the learning rate for rare classes instead (in effect, showing the network a given rare example repeatedly when it does show up, so that it takes longer to forget it).

",16909,,,,,10/14/2018 14:22,,,,0,,,,CC BY-SA 4.0 8407,1,,,10/14/2018 18:25,,4,209,"

So let's say you had a really nice day in a flight simulator and you are getting videos of this type of quality:

This is Full HD (1080p), but heavily compressed. You can literally see the pixels. Now I tried to use something like RAISR, and this python implementation, but it only scales the image up and does not 'fix the thicc pixels'.

So is there a type of AI that can fix this kind of video/photo into a reasonable quality video? I just want to get rid of those pixels and image artefacts that were generated during the compression.

",19013,,19013,,10/14/2018 18:40,2/1/2019 22:01,Can AI 'fix' heavily compessed videos/photos?,,2,5,,,,CC BY-SA 4.0 8412,1,8421,,10/15/2018 4:27,,2,726,"

In Keras, when we use an LSTM/RNN model, we need to specify the number of units [e.g., LSTM(128)]. I have a doubt regarding how this actually works. From the LSTM/RNN unfolding image or description, I found that each RNN cell takes one time step at a time. What if my sequence is longer than 128? How should I interpret this? Can anyone please explain? Thanks in advance.

",18795,,2444,,2/16/2019 2:32,2/16/2019 2:32,What should I do when I have a variable-length sequence when instantiating an LSTM in Keras?,,2,0,,,,CC BY-SA 4.0 8414,1,8507,,10/15/2018 6:04,,2,2867,"

Following up on my question about my over-fitting network

My deep neural network is over-fitting :

I have tried several things :

  • Simplify the architecture
  • Apply more (and more !) Dropout
  • Data augmentation

But I always reach similar results : training accuracy eventually goes up, while validation accuracy never exceeds ~70%.

I think I simplified the architecture enough / applied enough dropout, because at that point my network is even too dumb to learn anything and returns random results (3-class classifier => 33% is random accuracy), even on the training dataset :

My question is : Is this accuracy of 70% the best my model can reach ?

If yes :

  • Why does the training accuracy reach such high scores, and so fast, given that this architecture seems not to be capable of it ?
  • My only option to improve the accuracy is then to change my model, right ?

If no :

  • What are my options to improve this accuracy ?

I've tried a bunch of hyperparameters, and a lot of the time, depending on these parameters, the accuracy does not change much, always reaching ~70%. However, I can't exceed this limit, even though it seems easy for my network to reach it (short convergence time)

Edit

Here is the Confusion matrix :

I don't think the data or the balance of the classes is the problem here, because I used a well-known / explored dataset : SNLI Dataset

And here is the learning curve :

Note : I used accuracy instead of error rate, as suggested by the resource from Martin Thoma

It's a really ugly one. I guess there is some problem here. Maybe the problem is that I used the result after 25 epochs for every value. So with little data, the training accuracy doesn't really have time to converge to 100% accuracy. And for bigger training data, as pointed out in earlier graphs, the model overfits, so the accuracy is not the best one.

",18852,,18852,,10/15/2018 13:35,10/24/2018 0:12,How to improve testing accuracy when training accuracy is high?,,2,3,,,,CC BY-SA 4.0 8417,1,8420,,10/15/2018 7:46,,1,2459,"

The DenseNet architecture can be summarized with this figure:

Why are there transition layers between each block?

In the papers, they justify the use of transition layers as follow :

The concatenation operation used in Eq. (2) is not viable when the size of feature-maps changes. However, an essential part of convolutional networks is pooling layers that change the size of feature-maps. To facilitate pooling in our architecture we divide the network into multiple densely connected dense blocks

So, if I understood correctly, the problem is that the feature map size can change, thus we can't concatenate. But how does the addition of transition layers solve this problem?

And how can several dense blocks connected like this be more efficient than one single bigger dense block?

Furthermore, why are all standard DenseNets made of 4 dense blocks? I guess I will have the answer to this question if I understood better the previous questions.

",18852,,2444,,3/10/2020 20:35,3/10/2020 20:37,Why are there transition layers in DenseNet?,,1,0,,,,CC BY-SA 4.0 8420,2,,8417,10/15/2018 8:24,,1,,"

The point of DenseNet was to go as deep as ResNets, if not deeper, and keep multiple skip connections to preserve the backward gradient flow better, as well as to keep the earlier layers' context (which prevents overfitting). With networks as deep as 120 layers, having a single block in which every layer is concatenated to all the previous ones would mean a very large feature map, which, I guess, would be computationally very expensive and not feasible.

About transition layers (convolution + pooling), I think they are just a way of gradually downsampling the representations calculated by the dense blocks: after each transition layer the representations go from $56 \times 56$ to $28 \times 28$ to $14 \times 14$, and so on.

The authors state it this way

To further improve model compactness, we can reduce the number of feature-maps at transition layers
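
For reference, a minimal sketch of a transition layer as described in the paper (batch normalization, a $1 \times 1$ convolution that can compress the channel count, then $2 \times 2$ average pooling), written in Keras; the compression factor of 0.5 is the paper's default and the shapes are only illustrative:

from keras import layers, backend as K
from keras.models import Model
from keras.layers import Input

def transition_layer(x, compression=0.5):
    # DenseNet transition block: BN -> 1x1 conv (channel compression) -> 2x2 average pooling
    channels = int(K.int_shape(x)[-1] * compression)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(channels, kernel_size=1, use_bias=False)(x)
    x = layers.AveragePooling2D(pool_size=2, strides=2)(x)
    return x

inputs = Input(shape=(56, 56, 256))          # e.g., the output of a dense block
outputs = transition_layer(inputs)
Model(inputs, outputs).summary()             # -> (28, 28, 128): halved spatially and in channels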

",19027,,2444,,3/10/2020 20:37,3/10/2020 20:37,,,,2,,,,CC BY-SA 4.0 8421,2,,8412,10/15/2018 8:31,,1,,"

In Keras, what you specify is the hidden layer size. So :

LSTM(128)

gives you a Keras layer representing a LSTM with a hidden layer size of 128.

As you said :

From the LSTM/RNN unfolding image or description, I found that each RNN cell take one time step at a time

So if you picture your RNN for one time step, it will look like this :

And if you unfold it in time, it looks like this :

You are not limited in your sequence size; this is one of the features of RNNs: since you input your sequence element by element, the size of the sequence can be variable.

That number, 128, represents just the size of the hidden layer of your LSTM. You can see the hidden layer of the LSTM as the memory of the RNN.

Of course the goal is not for the LSTM to remember everything in the sequence, just the links between elements. That's why the size of the hidden layer can be smaller than the size of your sequence.
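
As a small illustrative sketch (the feature count is arbitrary), a Keras LSTM layer accepts sequences of any length when the time dimension is left as None; the 128 only fixes the size of the hidden state:

from keras.models import Sequential
from keras.layers import LSTM, Dense

n_features = 10     # values per time step (assumed)

model = Sequential([
    # input_shape=(None, n_features): any number of time steps, 10 features each
    LSTM(128, input_shape=(None, n_features)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()     # the LSTM output is (batch, 128) regardless of sequence length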

Sources :

Edit

From this blog :

The larger the network, the more powerful, but it’s also easier to overfit. Don’t want to try to learn a million parameters from 10,000 examples – parameters > examples = trouble.

So the consequence of reducing the size of the hidden state of the LSTM is that the model will be simpler; it might not be able to capture the links between the elements of the sequence. But if you make the size too big, your network will overfit! And you absolutely don't want that.

Another really good blog on LSTM : this link

",18852,,18852,,10/15/2018 23:32,10/15/2018 23:32,,,,2,,,,CC BY-SA 4.0 8424,1,,,10/15/2018 11:24,,3,806,"

I want to make a kind of robotic brain, i.e. a big neural network, which includes an NLP model (for understanding human voice), a real-time object recognition system (so that it can identify particular objects), a face recognition model (for identifying faces), etc.

Is it possible to build a huge neural network in which we can combine all these separate models together, so we can use all 3 models' capabilities at the same time, in parallel?

For example, if I ask the robot, using the microphone, "Can you see that table or that boy?", the robot would start recognizing the objects and faces, then answer me back by speaking whether it could identify them or not.

If this is possible, can you kindly share your idea of how I can implement this? Or is there any better way to make such an AI (e.g., in TensorFlow)?

",14527,,2444,,5/24/2021 12:24,5/24/2021 12:24,Can we combine multiple different neural networks in one?,,0,2,,,,CC BY-SA 4.0 8427,1,,,10/15/2018 12:38,,9,4535,"

What exactly are ontologies in AI? How should I write them and why are they important?

",18123,,2444,,4/30/2019 17:39,2/23/2021 13:46,What are ontologies in AI?,,1,1,,,,CC BY-SA 4.0 8429,1,,,10/15/2018 14:07,,4,135,"

The process revolves around a child's drawing. Each part of each drawing corresponds to a score as in the Draw a Person Test conceived by Dr. Florence Goodenough in 1926. The goal of the machine is to measure a child's mental age through a figure drawing task.

",18027,,2444,,8/21/2021 10:18,8/21/2021 10:18,"How to use Machine Learning to create a ""Draw-A-Person Test""",,1,0,,,,CC BY-SA 4.0 8430,1,,,10/15/2018 15:28,,0,221,"

I have coded an AI checkers game but would like to see how good it is. Some people have advised me to use the Chinook AI open-source code, but I am having trouble trying to integrate that software into my AI code. How do I integrate another game engine for checkers with the AI I have coded?

",16906,,,,,10/15/2018 15:28,Checkers AI game engines,,0,11,,,,CC BY-SA 4.0 8435,1,,,10/15/2018 19:47,,4,331,"

I am new to the object recognition community. Here I am asking about the broadly accepted ways to calculate the error rate of a deep CNN when the network produces different results using the same data.

1. Problem introduction

Recently I was trying to replicate some classic deep CNNs for object recognition tasks. The inputs are 2D image data containing objects, and the outputs are the identification/classification results for the object. The implementation involves the use of Python and Keras.

The problem I was facing is that I may get different validation results among multiple runs of the training, even using the same training/validation data sets. To me, that makes it hard to report the error rate of the model, since the validation result may be different every time.

I think this difference is because of the randomness involved in different aspects of a deep CNN, such as random initialization, the random ‘dropout’ used for regularization, the ‘shuffle’ process used when forming epochs, etc. But I do not yet know the “right” ways to deal with this difference when I want to calculate the error rate in the object recognition field.

2. My exploration – online search

I have found some answers online here. The author proposed two ways, and he/she recommended the first one shown below:

The traditional and practical way to address this problem is to run your network many times (30+) and use statistics to summarize the performance of your model, and compare your model to other models.

The second way he/she introduced is to go to every relevant aspect of the deep CNN and ""freeze"" its random nature on purpose. This kind of approach has also been introduced in the Keras Q&A here. They call this issue “making reproducible results”.
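
As far as I understand, that seed-fixing route amounts to something like the following sketch (assuming the TensorFlow backend; it must run before the model is built):

import os
import random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'   # fix Python's hash seed
random.seed(42)                      # Python's built-in RNG
np.random.seed(42)                   # NumPy RNG (weight init, shuffling, dropout masks)
tf.set_random_seed(42)               # TensorFlow graph-level seed (TF 1.x API)

# Note: full determinism may additionally require single-threaded execution
# and avoiding non-deterministic GPU kernels.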

3. My exploration – in academia community (no result yet, need your help!)

Since I was not sure whether the two ways mentioned above are the “right” ones broadly accepted, I was going further exploring in the object recognition academia community.

Now I am just beginning to read from the ImageNet website, but I have not found the answer yet. Maybe you could help me find the answer more easily. Thanks!

Daqi

",19042,,19042,,10/30/2018 2:39,9/16/2021 13:00,"What are the ways to calculate the error rate of a deep Convolutional Neural Network, when the network produces different results using the same data?",,0,0,,,,CC BY-SA 4.0 8440,2,,8414,10/15/2018 23:15,,0,,"

I think sometimes it can also help to examine your test and training sets. Fundamentally, your data was produced by an underlying process/system that has certain properties. The system can have many ""states"", and all the possible states form the state space. If you have really tried things like dropout and regularization, my guess would be that the test set is somehow different from your training set. It is possible that your training set only takes samples from one part of the state space (i.e., your samples might all be similar in the training set while the test set has different samples; imagine you are classifying humans and all of your training samples have a class label of 1, meaning all the training samples contain humans, while all your test samples contain no humans. Good luck with that!). Some questions to ask:

  1. Are you combining datasets from different sources? If so: If you have ""n"" sources of data, you need to make sure that your training set has many samples from each of the ""n"" sources of data and your test set has samples from each of the ""n"" sources.

  2. Are you shuffling your data enough and randomly putting samples in both the training and test sets? This relates to the human example I gave: make sure your training set has a little bit of everything (different combinations of inputs and/or outputs) and your testing set has a little bit of everything (different combinations of inputs and/or outputs). (A minimal split sketch follows this list.)
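
As a minimal sketch of that kind of shuffled, class-balanced split (scikit-learn here is just one convenient way to do it; x and y are placeholders for your data):

import numpy as np
from sklearn.model_selection import train_test_split

x = np.random.randn(1000, 20)            # placeholder features
y = np.random.randint(0, 3, size=1000)   # placeholder class labels

# shuffle=True mixes the samples; stratify=y keeps the class mix
# roughly identical in the training and test sets.
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, shuffle=True, stratify=y, random_state=0)

print(np.bincount(y_train) / len(y_train))
print(np.bincount(y_test) / len(y_test))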

",15428,,,,,10/15/2018 23:15,,,,1,,,,CC BY-SA 4.0 8458,1,8545,,10/16/2018 6:20,,2,78,"

This is a question about pattern recognition and feature extraction.

I am familiar with Hough transforms, the Fast Radial Transform and variants (e.g., GFRS), but these highlight circles, spheres, etc.

I need an image filter that will highlight the centroid of a series of spokes radiating from it, such as the center of an asterisk or the hub of a bicycle wheel's spokes (even if the round wheel itself is obscured). Does such a filter exist?

",19049,,,,,10/23/2018 5:40,How to recognize non-circular radial symmetry in images?,,2,3,,,,CC BY-SA 4.0 8459,2,,8412,10/16/2018 7:22,,0,,"

Although this question has been answered, I'd like to add a couple of remarks on general neural network design.

As you know, every NN has three types of layers: input, hidden, and output. Once the network is initialized, you can iteratively tune the configuration during training.

To optimize the network configuration we can use pruning.

Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look for weights very close to zero. It's the nodes on either end of those weights that are often removed during pruning.)

You can find more here: https://stats.stackexchange.com/questions/181/how-to-choose-the-number-of-hidden-layers-and-nodes-in-a-feedforward-neural-netw

",17925,,,,,10/16/2018 7:22,,,,0,,,,CC BY-SA 4.0 8461,1,8462,,10/16/2018 9:49,,2,85,"

I would like to get a simple example running in MATLAB that will use a neural net to learn an arbitrary function from input/output data (basically model identification) and then be able to approximate that function from just the input data. As a means of training this net I have implemented a simple backpropagation algorithm, but I was not able to get anywhere close to satisfactory results. I would like to know what I may be doing wrong and also what approach I may use instead.

The goal is to have the network represent an identified function f(x) which takes a series x as input and outputs the learned mapping from x -> y.

Here is the GNU octave code I have so far:

pkg load control signal

function r = sigmoid(z)
    r = 1 ./ (1 + exp(-z));
end

function r = linear(z)
    r = z;
end 

function r = grad_sigmoid(z)
    r = sigmoid(z) .* (1 - sigmoid(z));
end 

function r = grad_linear(z)
    r = 1;
end 

function r = grad_tanh(z)
    r = 1 - tanh(z) .^ 2;
end

function nn = nn_init(n_input, n_hidden1, n_hidden2, n_output)
    nn.W2 = (rand(n_input, n_hidden1) * 2 - 1)';
    nn.W3 = (rand(n_hidden1, n_hidden2) * 2 - 1)';
    nn.W4 = (rand(n_hidden2, n_output) * 2 - 1)';
    nn.lambda = 0.005;
end

function nn = nn_train(nn_in, state, action)
    nn = nn_in;

    [out, nn] = nn_eval(nn, state);

    d4 = (nn.a4 - action) .* grad_linear(nn.W4 * nn.a3); 
    d3 = (nn.W4' * d4) .* grad_tanh(nn.W3 * nn.a2);
    d2 = (nn.W3' * d3) .* grad_tanh(nn.W2 * nn.a1);

    nn.W4 -= nn.lambda * (d4 * nn.a3');
    nn.W3 -= nn.lambda * (d3 * nn.a2');
    nn.W2 -= nn.lambda * (d2 * nn.a1');
end

function [out,nn] = nn_eval(nn_in, state)
    nn = nn_in;

    nn.z1 = state;
    nn.a1 = nn.z1;

    nn.a2 = tanh(nn.W2 * nn.a1);
    nn.a3 = tanh(nn.W3 * nn.a2);
    nn.a4 = linear(nn.W4 * nn.a3);

    out = nn.a4;
end

nn = nn_init(1, 100, 100, 1);
t = 1:0.1:3.14*10;
input = t;
output = sin(input);
learned = zeros(1, length(output));

for j = 1:500
    for i = 1:length(input)
        nn = nn_train(nn, [input(i)], [output(i)]); 
    end
    j
end

for i = 1:length(input)
    learned(i) = nn_eval(nn, [input(i)]);    
end

plot(t, output, 'g', t, learned, 'b');

pause

Here is the result:

The result is not even close to where I want it to be. Has it got something to do with my implementation of back propagation?

What changes do I need to do to the code to get a better approximation going?

",19056,,,,,10/16/2018 10:58,Learning an arbitrary function using a feedforward net,,1,0,,,,CC BY-SA 4.0 8462,2,,8461,10/16/2018 10:58,,1,,"

You need to scale the input. Neural networks work best with a limited input domain, and train badly when it is exceeded.

For statistical data, you would typically scale your input to have mean 0, standard deviation 1.

Here, you will be better off fitting the input to roughly -1 to 1.

Up to you where you scale the values, but usually this is done outside of the NN code. So I would do something like:

nn_input = (t - 15)/15

And then use nn_input in the training and evaluation loops. As you are putting these directly into a sorted array for plotting, you won't need to do any further re-mapping back or maintain a conversion function. However, in the more common case of arbitrary inputs, you would need to store the conversion factors somewhere (in this case just hardcoded as a function perhaps) in order to make use of the trained NN.
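
The same pattern, written out in Python/NumPy just to make it explicit (the offset and scale of 15 are only an example matching the range above, and translate line for line to the Octave code):

import numpy as np

t = np.arange(1, 31.4, 0.1)          # raw inputs, roughly 1 to 31.4 as in the question
y = np.sin(t)                        # targets

offset, scale = 15.0, 15.0           # chosen so the scaled inputs land roughly in [-1, 1]
x = (t - offset) / scale             # use x, not t, for both training and evaluation

t_new = 20.0                         # a brand new raw input at prediction time
x_new = (t_new - offset) / scale     # apply the *same* stored conversion before evaluating the net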

Another thing that may help is shuffling your input/output data pairs during training, to remove correlation between successive input pairs.

",1847,,,,,10/16/2018 10:58,,,,0,,,,CC BY-SA 4.0 8465,1,,,10/16/2018 15:43,,0,441,"

I already know the basics of Machine Learning. E.g.: Backpropagation, Convolution, etc.

First of all, let me explain Reinforcement learning to make sure I grasped the concept correctly.

In Reinforcement learning a random-initialized network will first ""play""/""do"" a sequence of moves in an environment. (In this case a Game). After that, it will receive a reward $r$. Furthermore, a q-Value gets defined by the engineer/hobby coder. This reward times the q-Value $q$ to the power of the position $n$ of the action will be fed back using BP.

So how do I know how slight changes in $\vec{w}$ are changing $rq^n$?

",19062,,1581,,10/16/2018 19:10,10/16/2018 19:10,How do I know how changes in the weights are changing the reward in Reinforcement Learning,,1,0,,,,CC BY-SA 4.0 8466,2,,8465,10/16/2018 17:03,,1,,"

You have the concept slightly wrong.

This part is mostly correct:

In Reinforcement learning a random-initialized network will first ""play""/""do"" a sequence of moves in an environment. (In this case a Game). After that, it will receive a reward r.

Technically neural networks are not required in RL, and it is really worth studying some simple systems that don't need them. It will make everything much clearer.

A reward $r$ can be received on every time step. However, some environments will only have a single reward at the end for success or failure for a whole episode - e.g. an instance of a game like chess where a player wins or loses.

This part is where things go a bit off track:

Furthermore a q-Value gets defined by the [developer]. This reward times the q-Value q to the power of the position n of the action will be [fed] back using BP.

Q values are one type of data that can be calculated for an agent acting in a Markov Decision Process. They are also called ""action values"" and they are not usually defined by a developer. The q value, if correct should return the expected future sum of rewards from following a current policy. One way of writing this is:

$$q(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1}| S_t=s, A_t=a]$$

In natural language, the q value for state s and action a is the expected value (when following the given policy) of the discounted sum of rewards, starting from the given state and action. The discount factor, $\gamma$ can take any value from $0$ up to $1$, but only strictly episodic problems (which always terminate) should use the value $1$

A developer does not get to define that (except they might get to choose reward system and value of $\gamma$). Instead, they need to implement something that estimates the value of $q(s,a)$ based on what the agent has experienced. There are a few different algorithms that can do this. A popular one is called Q learning.
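
For reference, the core of tabular Q learning is a one-line update rule. A minimal sketch in Python (the table sizes and learning parameters are arbitrary placeholders):

import numpy as np

n_states, n_actions = 16, 4                  # example sizes for a small, discrete problem
Q = np.zeros((n_states, n_actions))          # the table of estimates q_hat(s, a)
alpha, gamma = 0.1, 0.99                     # learning rate and discount factor

def q_learning_update(s, a, r, s_next, done):
    # the target bootstraps from the best action in the next state (the "max" backup)
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])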

Regarding ""[fed] back using BP"", this is correct if you are using a neural network. Typically in DQN (Q learning with neural networks), this just consists of creating a small sample of training data from recent experience and training the neural network almost identically to supervised learning.

So how do I know how slight changes in $\vec{w}$ are changing $rq^n$?

Definitely don't use $rq^n$ - there is no purpose to that quantity in RL. Instead for value-based RL, you are mostly interested in your estimate for Q value. This might be written $\hat{q}(s,a,\vec{w}) \approx q(s,a)$.

However, in general your question stands. If you have implemented a neural network to learn q values, how do you know if it is working?

There are actually two parts to this problem:

  • How do you know whether the agent is getting better at its task?

  • How do you know whether the q values are getting more accurate?

What you need to do is measure, and maybe plot some relevant quantities.

For the first question, you would typically plot the total reward that the agent gets each episode. This will be noisy, so it is a good idea to smooth it out by taking some kind of moving average (e.g. average total reward over last 100 episodes).
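
For example, in Python with NumPy and matplotlib (episode_returns stands for whatever list of per-episode total rewards you have collected yourself):

import numpy as np
import matplotlib.pyplot as plt

episode_returns = np.random.rand(1000)       # placeholder: replace with your recorded totals
window = 100
smoothed = np.convolve(episode_returns, np.ones(window) / window, mode="valid")

plt.plot(smoothed)
plt.xlabel("episode")
plt.ylabel("mean total reward over last 100 episodes")
plt.show()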

For the second question, it is normal to plot some loss function of the network, just like supervised learning. Typically this is Mean Squared Error loss, as the network is learning a regression to predict q values given $s$ and $a$. You can compare observed sums of discounted reward (aka ""return"" or ""utility"") with the earlier predicted ones, and take the error function. You need to get some measure of a ""true"" value of q - usually a noisy sample taken during training or testing, and measure loss. For MSE that might be

$$J(\vec{w}) = \frac{1}{2|D|}\sum_{(s,a) \in D}(\hat{q}(s,a, \vec{w}) - q(s,a))^2$$

Where $D$ is some dataset you have put together of $s,a$ and $q(s,a)$ measurements to test with. If this looks familiar to you from supervised learning MSE loss, then that's correct - it is essentially the same thing, just different how you go about collecting the data.
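
In code, that loss is nothing special; a small sketch in Python/NumPy, where q_hat is whatever prediction function your network provides and the arrays stand for the dataset $D$ you collected:

import numpy as np

def mse_loss(q_hat, states, actions, q_targets):
    # q_hat(s, a) -> predicted action value; q_targets are the sampled returns
    preds = np.array([q_hat(s, a) for s, a in zip(states, actions)])
    return 0.5 * np.mean((preds - np.array(q_targets)) ** 2)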

You may expect the loss function for $\hat{q}$ in Q learning to be somewhat unstable as the agent learns. That's because in Q learning, the policy is updating at the same time as the estimates are improving. Which makes the estimates out-of-date. However, it should still be possible to see a reduction in error as learning progresses. If it becomes stable at a relatively low value compared to initially, then the agent has probably learned all that it can - although sometimes new discoveries by the agent can open up more improvements, even late in training, and throw the error function out again.

Note that a low value of the error function does not mean you have an optimal agent. It means that the value function estimate is good for how the agent is currently behaving. In turn that means the agent cannot make further improvements without new and different experience.

",1847,,1847,,10/16/2018 17:24,10/16/2018 17:24,,,,1,,,,CC BY-SA 4.0 8467,1,,,10/16/2018 17:14,,4,328,"

Given infinite resources/time, one could create AGIs by writing code to simulate infinite worlds. By doing that, in some of the worlds, AGIs would be created. Detecting them would be another issue.

Since we don't have infinite resources, the most probable way to create an AGI is to write some bootstrapping code that would reduce the resources/time to reasonable values.

In that AGI code (that would make it reasonable to create with finite resources/time) is it required to have a part that deals with time/space estimation of possible actions taken? Or should that be outside of the code and be something the AGI discovers by itself after it starts running?

Any example of projects targeting AGI that are using time/space estimation might be useful for reaching a conclusion.

Clarification, by time/space I mean time/space complexity analysis for algorithms, see: Measures of resource usage and Analysis of algorithms

I think the way I formulated the question might lead people to think that the time/space estimation can only apply to some class of actions called algorithms. To clarify my mistake, I mean the estimation to apply to any action plan.

Imagine you are an AGI and you have to make a choice between different sets of actions to pursue your goals. If you had 2 candidate plans and one of them used less space and less time, then you would always pick it over the other. So time/space estimation is very useful, since intelligence is about efficiency. There is at least 1 exception though: imagine in the example before that the goal of the AGI is to pick the set of actions with the most expensive time/space cost (or any non-minimal time/space cost); then, obviously, because of the goal constraint, you would pick the most time/space-expensive set of actions. In most other cases though, you would just pick the most time/space-efficient plan.

",10826,,2444,,12/13/2021 11:52,5/12/2022 15:07,Is time/space estimation of possible actions required for creating an AGI?,,1,0,,,,CC BY-SA 4.0 8472,2,,8348,10/16/2018 19:14,,5,,"

I'll start with the last question in your post:

I was also wondering if there are any theoretical proofs/explanations about reward/Q-value clipping and which one being better.

I highly doubt there will be any such theoretical work. The problem is that these variants of clipping (clipping rewards and clipping $Q$ values) fundamentally modify the task / the original objective. Once you clip either of those things, you fundamentally change what your agent is trying to optimize for from what the original goal was. I don't think it's ever going to be possible to get any rigorous, theoretical proofs about which one would be better in general. You'd likely have to start out with some very strong assumptions on the reward structure in the original task to have any hope of proving anything here, but such strong assumptions make you lose generality.


Intuitively... I think reward clipping feels ""safer"" to me more often than clipping $Q$-values. Clipping $Q$-values seems more aggressive; it could be viewed as some combination of clipping rewards (if you clip $Q$-values to $[-1, 1]$, you're still at the very least also clipping all rewards to that range), but additionally also putting a constraint on how far in the future you're looking (in some sense). This whole argument is very handwavy though.

I suppose, slightly less handwavy, you could say that reward clipping is definitely ""better"" (in the sense that you don't deviate as much from the original objective) in environments where rewards of similar magnitudes can be collected frequently. I struggle to really think of a situation where clipping $Q$-values would be a clear favorite based on intuition. I wouldn't be surprised if clipping $Q$-values may turn out to be better after empirical evaluation in some cases, but it's difficult to say where that would be. It will also very much depend on what range is chosen. Clipping rewards to a range of $[-1, 1]$ is very different from clipping $Q$-values to the same range.

",1641,,,,,10/16/2018 19:14,,,,1,,,,CC BY-SA 4.0 8475,2,,6102,10/17/2018 1:32,,1,,"

We can break down the problem as follows:

First, if you have two points on a plane and feed the coordinates of those points to a neural network (e.g., a vector $< x_0, y_0, x_1, y_1 >$) and train it on a label that is the actual distance (i.e., $ \sqrt{(x_0 - x_1)^2 + (y_0-y_1)^2} $), it should be able to learn this relationship with arbitrarily close accuracy.

Next, if you have an image similar to what you describe, and feed that through a different neural network (e.g., a CNN), and as labels you used the points of the two dots (once again $< x_0, y_0, x_1, y_1 >$), then it should be able to learn that relationship with arbitrarily close accuracy once again.

Of course, there's no reason to do this in two separate neural networks, so we can just combine the two end-to-end to have a model that takes the image as input and the distance as output.

This model would need to be trained on labeled data, however, so you'd either need to generate the data yourself or label images.
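
For the supervised version, generating that labelled data is cheap. A minimal sketch in Python/NumPy (sample count and coordinate range are arbitrary):

import numpy as np

n_samples = 10000
points = np.random.uniform(0.0, 1.0, size=(n_samples, 4))      # columns: x0, y0, x1, y1
distances = np.sqrt((points[:, 0] - points[:, 2]) ** 2 +
                    (points[:, 1] - points[:, 3]) ** 2)        # regression targets
# 'points' (or rendered images of the two dots) is the input, 'distances' the label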

But if you wanted it to learn the notion of closing a distance in a less supervised way, you'd need to use reinforcement learning. In this case, you'd have to setup an environment that incentivises the agent to reduce the distance. This could be as simple as gaining reward if an action reduces the distance.

Another approach would be to incentivise the agent using future reward. That is, its reward doesn't just come from the results of the next immediate state, but there are also contributions from the next possible state, and the one after that, and so on. This is the idea behind Deep Q-Learning, and I implement a simple example (very similar to what you're describing) in this notebook.

So, now the question is: has this implementation done something other than randomly moving around until it follows a path to success?

In your example, you talk about rewarding the agent when it lands on the goal. But in what I described, it gained reward by moving closer to the goal (either through the Q-Function or directly from the environment). It is able to do so by learning some abstract idea of distance (which can be illustrated in the supervised version).

When a human learns this, it's for the same exact reason: the human is gaining a reward for moving in that direction through a sense of future rewards.

I'd say that, given enough training and data, reinforcement learning could learn this concept with ease. As far as other rewards being present on the board (e.g., ""minimise the entropy of the board as well as try to get rewards""), you need to think about what it is you're asking. Would you rather the agent minimize distance or maximize reward? Cause, in general, it can't do both. If you're looking for some balance between the two, then really you're just redefining the reward to also consider the distance.

",19080,,,,,10/17/2018 1:32,,,,0,,,,CC BY-SA 4.0 8476,1,8477,,10/17/2018 7:42,,7,4896,"

What is an agent in reinforcement learning (RL)? I think it is not the neural network behind it. What does the agent in RL exactly do?

",19062,,2444,,11/19/2018 18:21,11/19/2018 18:21,What does the agent in reinforcement learning exactly do?,,1,0,,,,CC BY-SA 4.0 8477,2,,8476,10/17/2018 8:10,,5,,"

The agent in RL is the component that makes the decision of what action to take.

In order to make that decision, the agent is allowed to use any observation from the environment, and any internal rules that it has. Those internal rules can be anything, but typically in RL, it expects the current state to be provided by the environment, for that state to have the Markov property, and then it processes that state using a policy function $\pi(a|s)$ that decides what action to take.
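
A bare-bones sketch of that separation in Python (the reset/step interface mimics a common convention, but every name here is only illustrative):

class Agent:
    def act(self, state):
        # the policy pi(a|s): map the observed state to an action
        raise NotImplementedError

    def learn(self, state, action, reward, next_state, done):
        # update internal data (value estimates, policy parameters, ...)
        pass

def run_episode(env, agent):
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)                    # the decision is made by the agent
        next_state, reward, done = env.step(action)  # everything else belongs to the environment
        agent.learn(state, action, reward, next_state, done)
        state = next_state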

In addition, in RL we usually care about handling a reward signal (received from the environment) and optimising the agent towards maximising the expected reward in future. To do this, the agent will maintain some data which is influenced by the rewards it received in the past, and use that to construct a better policy.

One interesting thing about the definition of an agent, is that the agent/environment boundary is usually considered to be very close to the abstract decision making unit. For instance, for a robot, the agent is typically not the whole robot, but the specific program running on the robot's CPU that makes the decision on the action. All the relays/motors and other parts of the physical body of the robot are parts of the environment in RL terms. Although often loose language is used here, as the distinction might not matter in most descriptions - we would say that ""the robot moves its arm to achieve the goal"" when in stricter RL terms we should say that ""the agent running on the robot CPU instructs the arm motors to move to achieve the goal"".

I think it is not the Neural Net behind?

That is correct, the agent is more than the neural network. One or more neural networks might be part of an agent, and take the role of estimating the value of a state, or state/action pair, or even directly driving the policy function.

",1847,,1847,,10/17/2018 11:18,10/17/2018 11:18,,,,0,,,,CC BY-SA 4.0 8480,1,,,10/17/2018 10:54,,3,884,"

I have a collection of scanned documents (which come from newspapers, books, and magazines) with complex alignments for the text, i.e. the text could be at any angle w.r.t. the page. I can do a lot of processing for different features extraction. However, I want to know some robust methods that do not need many features.

Can machine learning be helpful for this purpose? How could I use machine learning to detect text and non-text regions in these scanned documents?

",18459,,2444,,7/25/2020 12:15,7/25/2020 12:15,How could I use machine learning to detect text and non-text regions in scanned documents?,,2,0,,,,CC BY-SA 4.0 8482,1,8486,,10/17/2018 12:00,,5,314,"

I implemented Q-learning to solve a specific maze. However, it doesn't solve other mazes. How could my Q-learning agent be able to generalize to other mazes?

",19094,,2444,,10/16/2021 23:20,10/16/2021 23:20,How can my Q-learning agent trained to solve a specific maze generalize to other mazes?,,1,1,,,,CC BY-SA 4.0 8483,2,,8480,10/17/2018 12:59,,2,,"

TextDetector, Tesseract and other open source packages implement text detection (object detection for text). There's also a pretrained Tensorflow model that does text detection. A text detector will give you the bounding boxes in your image for any text that it recognizes. In the case of Tesseract, it will also output the text (OCR is built in). So you can read the code in these packages to get ideas for your own machine learning pipeline. Basically you need both a regressor (for the bounding boxes) and a classifier (to detect whether the box contains text or not).
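
For example, with the pytesseract wrapper around Tesseract, you can get word-level bounding boxes roughly like this (a sketch; the exact output fields can vary between versions, so check the documentation):

from PIL import Image
import pytesseract
from pytesseract import Output

img = Image.open("scanned_page.png")              # the file name is just an example
data = pytesseract.image_to_data(img, output_type=Output.DICT)

for text, left, top, w, h, conf in zip(data["text"], data["left"], data["top"],
                                        data["width"], data["height"], data["conf"]):
    if text.strip():                               # keep only boxes that actually contain text
        print(text, (left, top, w, h), conf)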

",19095,,19095,,10/19/2018 1:05,10/19/2018 1:05,,,,0,,,,CC BY-SA 4.0 8484,2,,1963,10/17/2018 13:23,,3,,"

DuckDuckGo learns answers to technical questions from StackExchange. Type a technical question like ""ongoing projects use stackexchange"" into DuckDuckGo and it will provide a highlighted summary of the answer on the right-hand side. And the duck has an open API for many (100s) more question answering data sources. Or you can go directly to the stackexchange api.

Projects can use the data from the SE open API as long as they comply with their TOU. Basically just make sure your users can tell that the data came from Stack Exchange. The copyright license may also limit your ability to alter the contents of the text, with say a learned abstractive summarizer. Perhaps that is why the Duck.com just highlights keywords.

Data rights law is in flux, especially when it comes to the data you submitted to a site and the machine learning models derived from that data. New European data and privacy rules empower you to download or delete all data you submit to a site like stack exchange.

",19095,,19095,,5/10/2019 22:18,5/10/2019 22:18,,,,0,,,,CC BY-SA 4.0 8486,2,,8482,10/17/2018 19:30,,3,,"

I'm going to assume here that you're using the standard, basic, simple variant of $Q$-learning that can be described as tabular $Q$-learning, where all of your state-action pairs for which you're learning $Q(s, a)$ values are represented in a tabular fashion. For example, if you have 4 actions, your $Q(s, a)$ values are likely represented by 4 matrices (corresponding to the 4 actions), where every matrix has the same dimensionality as your maze (I'm assuming that your maze is a grid of discrete cells here).

With such an approach, you are learning $Q$ values separately for every single individual state (+ action). Such learned values will always only be valid for one particular maze (the one you have been training in), as you seem to have already noticed. This is a direct consequence of the fact that you're learning individual values for specific state-action pairs. The things you are learning ($Q$ values) can therefore not directly be transferred to a different maze; those particular states from the first maze do not even exist in the second maze!

Better results may be achievable with different state representations. For example, instead of representing states by their coordinates in a grid (as you would likely do with a tabular approach), you'd want to describe states in a more general way. For example, a state could be described by features such as:

  • Is there a wall right in front of me?
  • Is there a wall immediately to my right?
  • Is there a wall immediately to my left?
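
A tiny sketch of such a feature encoding in Python (the maze.is_wall helper and the heading representation are hypothetical; the point is only that the same feature vector is meaningful in any maze):

def encode_state(maze, x, y, heading):
    # heading is a unit step (dx, dy); left/right are 90-degree rotations of it
    dx, dy = heading
    left, right = (-dy, dx), (dy, -dx)
    def wall(d):
        return 1.0 if maze.is_wall(x + d[0], y + d[1]) else 0.0   # maze.is_wall is assumed
    return [wall(heading), wall(right), wall(left)]               # e.g. [1.0, 0.0, 1.0]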

An alternative that also may actually be able to better generalize to some extent could be pixel-based inputs if you have images (a top-down image or even a first-person-view).

When states are represented by such features, you can no longer use the tabular RL algorithms that you are likely familiar with though. You'd have to use function approximation instead. With good state-representations, those techniques may have a chance of generalizing. You'd probably want to make sure to actually also use a variety of different mazes during the training process though, otherwise they'd likely still overfit to only a single maze used in training.

",1641,,,,,10/17/2018 19:30,,,,0,,,,CC BY-SA 4.0 8487,1,,,10/17/2018 20:16,,0,160,"

I am reading the article https://allenai.org/paper-appendix/emnlp2017-wt/ http://ai2-website.s3.amazonaws.com/publications/wikitables.pdf about training a neural network, and the loss function is mentioned on page 6, chapter 3.4 - this loss function O(theta) is expressed as a marginal log-likelihood objective function. I simply do not understand this. The neural network generates a logical expression (query) from a question in natural language. The network is trained using question-answer pairs. One could expect that a simple sum of correct=1/incorrect=0 results could be a good loss function. But there is a strange expression that involves P(l|qi, Ti; theta), which is not explained in the article. What is meant by this P function? As I understand it, many logical forms l are generated externally for some question qi. But beyond that I cannot understand this. The mentioned article largely builds on another article, http://www.aclweb.org/anthology/P16-1003, from which it borrows some terms and ideas.

It is said that l is treated as a latent variable and P seems to be some kind of probability. Of course, we should assign the greatest probability to the right logical form l, but where can I find this assignment? Should the training/supervision data contain this probability function?

",8332,,8332,,10/17/2018 20:26,2/5/2021 3:18,How to understand marginal loglikelihood objective function as loss function (explanation of an article)?,,1,0,,,,CC BY-SA 4.0 8488,2,,8480,10/17/2018 20:20,,1,,"

Since the document is scanned, it will not be in an open document format so no associated API can be used.

Approach 1

Evaluate TextBridge Pro, FreeOCR, and other alternatives that claim to support page layout detection. If any of them work, drive them programmatically (preferably headless) to read the scanned document, detect the page layout, OCR the text, and export to a document with an open format, and then use the API.

With this approach, the object recognition AI is in the product and development time and resources are saved.

Approach 2

Do a 2D FFT windowing through the page in both directions. See the cosine, trapezoidal, Hamming, and Hanning windows and apply them in horizontal and vertical directions. Use Approach 1, assuming those products work with the scanned documents, to label the examples, and then train a DCNN (deep convolutional NN) to recognize from the 2D FFT output spectra where the pictures are. By interpolation, close to a perfect crop of the images and the text regions can be obtained with some hyper-parameters on the model obtained.
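
A minimal sketch of the windowed 2D FFT step in Python/NumPy (window type and patch size are arbitrary choices here):

import numpy as np

def windowed_spectrum(patch):
    # patch: a 2D grayscale array cut from the scanned page
    h, w = patch.shape
    window = np.outer(np.hanning(h), np.hanning(w))      # separable Hanning window
    spectrum = np.fft.fftshift(np.fft.fft2(patch * window))
    return np.log1p(np.abs(spectrum))                    # log-magnitude, e.g. as DCNN input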

Approach 3

This approach is just Approach 2 but preparing the labeled example data set by hand, which may be necessary because the existing software products may not handle the images being laid out at angles other than 0, 90, 180, or 270 degrees.

Approach 4

Create an architecture that is based on feature extraction, and use font rendering libraries to build the back half of an auto-encoder, allowing portions of image that do not auto-encode to be preserved as an x-y coordinate pair, which will allow the images to jump over the pictures if the convergence is set up correctly.

Final Note

One can offload some processing to a learning process, so that the actual document process runs faster, but sometimes the preparation of the example data set and the learning consumes more resources. That's why those who can assess which approach will cost less and can recommend the best approach with some reliability are highly paid.

",4302,,,,,10/17/2018 20:20,,,,0,,,,CC BY-SA 4.0 8489,2,,8458,10/18/2018 5:26,,0,,"

The first step would be getting the object out of the scene. This bit is not trivial in your case; however, there are many methods to choose from. I suggest reading about the watershed threshold algorithm.

The second part is easier. Once you have a single segmented object at hand, perform noise removal. The next step is to extract the contours. Find the center of gravity, transform the coordinates to polar, and represent these contours as a function whose x axis is the angle in degrees and whose y axis is the distance from the center. Take the Fourier transform of this function. If the shape is symmetrical, there will be few non-zero entries, and a large spike in the spectrum.
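
A rough sketch of those steps with OpenCV and NumPy (it assumes you already have a clean binary mask of a single object, uses the OpenCV 4 findContours signature, and glosses over resampling the angles evenly):

import cv2
import numpy as np

def radial_spectrum(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)

    m = cv2.moments(mask)                                # centre of gravity of the object
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    dx, dy = contour[:, 0] - cx, contour[:, 1] - cy
    angles = np.arctan2(dy, dx)
    radius = np.hypot(dx, dy)
    radius = radius[np.argsort(angles)]                  # distance from centre as a function of angle

    return np.abs(np.fft.fft(radius))                    # symmetry shows up as a few isolated peaks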

",210,,,,,10/18/2018 5:26,,,,0,,,,CC BY-SA 4.0 8490,1,,,10/18/2018 5:34,,5,67,"

In the abstract of the paper Network In Network, the authors write

We propose a novel deep network structure called "Network In Network"(NIN) to enhance model discriminability for local patches within the receptive field

What does the part in bold mean?

",18017,,2444,,12/19/2021 20:28,12/19/2021 20:28,"What is meant by ""model discriminability for local patches within the receptive field""?",,0,1,,,,CC BY-SA 4.0 8491,1,,,10/18/2018 7:30,,6,1240,"

While working through an example from GitHub, I found this network (it's for FashionMNIST, but it doesn't really matter).

PyTorch forward method (my query is in the upper-case comments, regarding applying Softmax on top of ReLU):

def forward(self, x):
    # two conv/relu + pool layers
    x = self.pool(F.relu(self.conv1(x)))
    x = self.pool(F.relu(self.conv2(x)))

    # prep for linear layer
    # flatten the inputs into a vector
    x = x.view(x.size(0), -1)

    # DOES IT MAKE SENSE TO APPLY RELU HERE
    x = F.relu(self.fc1(x))

    # AND THEN Softmax on top of it ?
    x = F.log_softmax(x, dim=1)

    # final output
    return x
",19116,,1847,,10/18/2018 12:16,10/18/2018 12:56,Does it make sense to apply softmax on top of relu?,,1,1,,,,CC BY-SA 4.0 8493,1,8494,,10/18/2018 10:38,,2,165,"

I am new to RL and I am trying to work through the book Reinforcement Learning: An Introduction (Sutton & Barto, 2018). In chapter 3 on Finite Markov Decision Processes, the authors write the expected reward as

$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}r\sum_{s'\in \mathcal{S}}p(s',r|s,a)$$

I am not sure if the authors mean

$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}\left[r\sum_{s'\in \mathcal{S}}p(s',r|s,a)\right]$$

or

$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\left[\sum_{r\in \mathcal{R}}r\right]\cdot\left[\sum_{s'\in \mathcal{S}}p(s',r|s,a)\right].$$

If the authors mean the first, is there any reason why it is not written like the following?

$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}\sum_{s'\in \mathcal{S}}\left[r\,p(s',r|s,a)\right]$$

",19123,,2444,,4/18/2022 9:24,4/18/2022 9:24,"Where are the parentheses in the definition of $r(s,a)$?",,1,0,,,,CC BY-SA 4.0 8494,2,,8493,10/18/2018 11:34,,2,,"

Your first option is correct:

$$r(s,a) = \mathbb{E}\left[R_t|S_{t-1}=s,A_{t-1}=a\right]=\sum_{r\in \mathcal{R}}\left[r\sum_{s'\in \mathcal{S}}p(s',r|s,a)\right]$$

It's partly a matter of taste, but I prefer not moving the $r$ into the double sum, because its value does not change in the ""inner loop"". There is a small amount of intuition to be had that way around, especially when it comes to implementation (it is one multiplication after the sum, as opposed to many within the sum).

There are a lot of sums containing sums in Sutton & Barto, and they mainly follow the convention of not using any parentheses or brackets to show the one sum containing the other explicitly.

In this case, the formulae help link to other treatments of RL, which work with the expected reward functions $r(s,a)$ or $r(s,a,s')$, or reward matrices $R_s^a$, $R_{ss'}^a$ such as the first edition of Sutton & Barto's book. The second edition of the book uses $p(s', r|s, a)$ almost everywhere though, and you won't see $r(s,a)$ mentioned much again. So it's not worth getting too concerned about how it is presented and what the author might be saying with the presentation.

Generally you don't need to know the distribution of reward, just its expectation (and how that depends on $s, a, s'$), in order to derive and explain most of the results in RL. So using $r(s,a)$ and similar functions is fine, in places like the Bellman equations. However, the use of $p(s', r|s, a)$ is general without needing to bring in more functions describing the MDP.

",1847,,,,,10/18/2018 11:34,,,,0,,,,CC BY-SA 4.0 8495,2,,8491,10/18/2018 12:37,,5,,"

Does it make sense?

In general, yes it is interpretable, back propagation will work, and the NN can be optimised.

By using ReLU, the default network has a minimum logit of $0$ for the softmax input, which means at least initially that there will be higher minimum probabilities associated with all classes (compared to allowing negative logits which would happen randomly with usual weight initialisation). The network will need to learn to produce higher logit values for correct answers, because it has no ability to produce lower logit values for incorrect answers. This is like training a network to produce the highest regression value on one output, whilst clipping all values to be 0 or above, so it does not have the option of making one output e.g. $-1.0$ and the rest $-100.0$

It can probably be thought of as a type of regularisation, as it puts constraints on activation values that will work.

Is it needed?

That is less clear. You can try training with and without the line, and using cross-validation or a test set to see if there is a significant difference.
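
Concretely, the variant to compare against just drops the extra activation:

    # variant without the extra non-linearity before the softmax
    x = self.fc1(x)              # raw logits, which may be negative
    x = F.log_softmax(x, dim=1)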

If the network has been designed well, then I'd expect to see a slight improvement with the added ReLU.

If it is a mistake, then I'd expect to see no difference, or better performance without the ReLU.

",1847,,1847,,10/18/2018 12:56,10/18/2018 12:56,,,,2,,,,CC BY-SA 4.0 8496,1,8525,,10/18/2018 13:36,,6,1583,"

In the book Reinforcement Learning: An Introduction (Sutton and Barto, 2018), the authors ask

Exercise 3.2: Is the MDP framework adequate to usefully represent all goal-directed learning tasks? Can you think of any clear exceptions?

I thought maybe a card game would be an example if the state does not contain any information on previously played cards. But that would mean that the chosen state leads to a system that is not fully observable. Hence, if I track all cards and append them to the state (a state vector with changing dimension), the problem should have the Markov Property (no information on the past states is needed). This would not be possible if the state is postulated as invariant in an MDP.

If the previous procedure is allowed, then it seems to me that there are no examples where the MDP is not appropriate.

I would be glad if someone could say if my reasoning is right or wrong. What would be an appropriate answer to this question?

",19123,,2444,,3/13/2020 22:49,8/8/2020 6:17,When is the Markov decision process not adequate for goal-directed learning tasks?,,2,3,,,,CC BY-SA 4.0 8499,1,,,10/18/2018 18:54,,1,509,"

Introduction

Exhaustive search is a method in AI planning to find a solution for so-called Constraint Satisfaction Problems (CSPs). Those are problems that have some conditions to fulfill, and the solver tries out all the alternatives. An example CSP is the 8-queens problem, which has geometrical constraints. The standard method for finding a solution to the 8-queens problem is a backtracking solver, i.e. an algorithm that generates a tree for the state space and searches inside that graph.

Apart from practical applications of backtracking search, there are some logic-oriented discussions that ask, on a formal level, which kinds of problems have a solution and which do not. For example, to find a solution for the 8-queens problem, many millions of iterations of the algorithm are needed. The question now is: which problems are too complex for a solution to be found? The second problem is that sometimes the problem itself has no solution, even after the complete state space has been searched.

Let us take an example. First, we construct a problem in which the constraints are so strict that even a backtracking search won't find a solution. One example would be to prove that “1+1=3”; another would be to find a winning chess sequence in a game that is already lost; it is also amusing to think about how to arrange nine (!) queens on a chessboard so that they don't attack each other.

Is there any literature available that describes, on a theoretical basis, Constraint Satisfaction Problems in which the constraints of the problem are too strict?

Original posting

Just wondering - like with an 8-queens problem. If we change it to a 9-queens problem and do an exhaustive search, we will see that there is no solution. Is there a problem in which the search fails to show that a solution does not exist?

",19137,,-1,,6/17/2020 9:57,5/29/2020 13:47,Any problems/games/puzzles in which exhaustive search cannot show that a solution does not exist?,,2,0,,,,CC BY-SA 4.0 8504,2,,2870,10/18/2018 23:09,,1,,"

(I'd leave this in the comments but sadly I can't.)

Here's a great paper (leveraging formal game theory) in which ""players"" make strategic choices based on each event (""move"") and an algorithm calculates the best strategy for all players based on each player's abilities throughout the multi-stage ""game"". So instead of learning from winning and predicting what the next perfect move the player thinks (based off of learning) will lead to a ""win"", the player calculates an optimal strategy and moves to win the game itself over multiple moves.

https://users.ece.cmu.edu/~youzhib/paper/bao2017csf.pdf

Note: This paper deals with cyber war games so is heavily tilted towards that domain, but if you take away the game theoretics and concept of strategy over a multi-stage game like ti-tac-toe, you should be able to improve your overall outcomes. That being said, how the model would accommodate a random ""player"" with random moves would be interesting to see.

",15337,,,,,10/18/2018 23:09,,,,0,,,,CC BY-SA 4.0 8506,2,,4764,10/19/2018 2:45,,0,,"

They can definitely be made cheaper, using a combination of techniques. But I doubt if they will be free in the near term. Once the passenger is inside a car, they are bound to be in the car for the duration of the travel. Any company who finds an efficient way to sell things or make them their product during this time creates a business opportunity. Some scenarios that companies will employ to reduce cost of travel are:

  • Advertisements
  • Sale of items on board (snacks, soft drinks, liquor etc)
  • Usage based discount (The ride is free if you purchase $100 on amazon when you are in the car)
  • Discount for services by passenger (Give us feedback every minute of the car operation, we will discount 50% of cost)

Once self driving is popular, there will be a plethora of companies trying to exploit the passenger; the passenger is bound to be inside the car for the duration.

",6173,,,,,10/19/2018 2:45,,,,0,,,,CC BY-SA 4.0 8507,2,,8414,10/19/2018 5:33,,2,,"

I identified the origin of this overfitting.

Origins

I tried a lot of models, putting more and more dropout, simplifying as much as I could.

No matter what I did, after a few epochs of good learning, invariably my loss function was going up. I tried simpler and simpler models, always with the same overfitting behavior. What bugged me at that moment is that no matter what kind of model I used, how deep or how complex, the accuracy was always fine, stabilized at some nice level.

So I tried the simplest model I could imagine: Input => Dense with 3 hidden units => Output. Finally I got random results, with a 33% accuracy! From here, I built my network again, layer by layer, to see which one was causing the overfitting.

And it was the Embedding layer.

Even with a simple network like Input => Embeddings => Dense with 3 hidden units => Output, the model was overfitting.

How to solve it

In Keras, simply instantiate the Embeddings layer with trainable=False. After doing this, there is no more overfitting.

In my opinion, this is quite counter-intuitive: I want my embeddings to evolve with the data I show to the network. But it looks like I can't...

",18852,,,,,10/19/2018 5:33,,,,0,,,,CC BY-SA 4.0 8508,2,,8499,10/19/2018 6:17,,1,,"

This should be a comment, but I don't have enough reputation to comment. I will remove this answer if question is updated

Your question is not really clear. As I understand it, the definition itself of exhaustive search shows that it's always possible to determine whether a solution is valid or not.

Exhaustive search is defined as:

  • For a given potential solution, we can determine whether this solution solves the problem.
  • Test every possible potential solution.

From this, there is no problem where every potential solution is tested but the search cannot show whether a solution exists: either it exists and the search found the candidate, or it does not exist because all possible candidates were tested.


Maybe what you meant when asking your question is : ""Any problems in which exhaustive search cannot be applied ?""

And the answer is yes, there are plenty of problems where the search space is way too big to be searched exhaustively: for example, the 3*3*3 Rubik's cube has 43 252 003 274 489 856 000 combinations (source).

My answer needs more sources, specifically about the exhaustive search definition. I would be happy to add them if you could share :)

",18852,,,,,10/19/2018 6:17,,,,0,,,,CC BY-SA 4.0 8509,1,21451,,10/19/2018 7:15,,4,1323,"

I've created a neural net using the ConvNetSharp library which has 3 fully connected hidden layers. The first has 35 neurons and the other two have 25 neurons each, each layer followed by a ReLU layer as the activation function layer.

I'm using this network for image classification - kinda. Basically it takes inputs as raw grayscale pixel values of the input image and guesses an output. I used stochastic gradient descent for the training of the model and a learning rate of 0.01. The input image is a row or column of OMR "bubbles" and the network has to guess which of the "bubbles" is marked, i.e. filled, and show the index of that bubble.

I think it is because it's very hard for the network to recognize the single filled bubble among many.

Here is an example image of OMR sections:

Using image-preprocessing, the network is given a single row or column of the above image to evaluate the marked one.

Here is an example of a preprocessed image which the network sees:

Here is an example of a marked input:

I've tried to use Convolutional networks but I'm not able to get them working with this.

What type of neural network and network architecture should I use for this kind of task? An example of such a network with code would be greatly appreciated.

I have tried many preprocessing techniques, such as background subtraction using the AbsDiff function in EmguCv and also using the MOG2 Algorithm, and I've also tried threshold binary function, but there still remains enough noise in the images which makes it difficult for the neural net to learn.

I think this problem is not specific to using neural nets for OMR but for others too. It would be great if there could be a solution out there that could store a background/template using a camera and then when the camera sees that image again, it perspective transforms it to match exactly to the template

I'm able to achieve this much - and then find their difference or do some kind of preprocessing so that a neural net could learn from it. If this is not quite possible, then is there a type of neural network out there which could detect very small features from an image and learn from them? I have tried Convolutional Neural Networks but they also aren't working very well, or I'm not applying them efficiently.

",19144,,2444,,6/4/2022 22:24,6/4/2022 22:24,Which neural network to use for optical mark recognition?,,3,8,,,,CC BY-SA 4.0 8510,1,,,10/19/2018 7:59,,1,25,"

While dealing with image data at a very large scale, there are different sources the data is coming from. Often, we do not have any control over the quality of the labels/annotations. I already use a sampling quality check method to manually check the quality of annotations, but as the volume of data has increased, even sampling QC has become an inefficient job. Are there other methods to automate/simplify this task for data at a large scale?

",17980,,17980,,10/19/2018 9:47,10/19/2018 9:47,input annotations quality check for large scale image data,,0,0,,,,CC BY-SA 4.0 8511,1,8513,,10/19/2018 8:04,,2,123,"

According to Wikipedia

Dropout is a regularization technique for reducing overfitting in neural networks

My neural network is simple enough and does not overfit.

Can the addition of dropout, in a non-overfitting neural network, increase accuracy? Even if I increase the complexity of the neural network?

",18852,,2444,,3/10/2020 20:27,3/10/2020 20:28,Can the addition of dropout in a non-overfitting neural network increase accuracy?,,1,0,,,,CC BY-SA 4.0 8512,2,,8509,10/19/2018 8:31,,1,,"

I'm not familiar with the ConvNetSharp library, and the tag convolutional-neural-networks is confusing me a bit, but from:

So I've created a neural net using the ConvNetSharp library which has 3 fully connected hidden layers. The first having 35 neurons and the other two having 25 neurons, each with a ReLU layer as the activation function layer.

I assume you are building just a densely connected neural network. Correct me if I'm wrong.


The type of neural network you need is a Convolutional Neural Network (CNN).

For image recognition (which is your case), convolutional networks are almost always the answer.

There are plenty of types of CNNs; just pick one that seems appropriate and try it.

In my opinion, your task seems quite simple; you won't need a really deep / complex architecture.
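
Just as an illustration (in Keras/Python rather than ConvNetSharp, with made-up input size and number of bubble positions), a small CNN for this kind of single-channel strip could look like:

from tensorflow import keras
from tensorflow.keras import layers

num_positions = 10   # e.g. the number of bubbles in one row; adjust to your layout

model = keras.Sequential([
    keras.Input(shape=(64, 256, 1)),                   # grayscale strip, example size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_positions, activation="softmax")  # index of the marked bubble
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])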


It would be great if there could be a solution out there that could store a background/template using a camera and then when the camera sees that image again

I am not aware of a model that could do what you are asking.

But what you are asking is not really in the 'neural network' mindset. The goal of building a neural network is that you don't specify anything. The neural network will learn and find the features for you. So you just have to feed it a lot of data, and it will be able to recognize your pattern.

Take a look at this visualization of CNN filters:

Here, no one gave the neural network the template of a nose or the template of an eye or the template of a face. The CNN learned it over a lot of images.

",18852,,,,,10/19/2018 8:31,,,,2,,,,CC BY-SA 4.0 8513,2,,8511,10/19/2018 8:36,,1,,"

Can the addition of dropout, in a non-overfitting neural network, increase accuracy?

Yes, maybe.

Even if I increase the complexity of the neural network?

Yes, maybe.

As always when making changes to ML algorithms, you need to test carefully to see if your changes have made an improvement. There are very few theories in non-linear machine learning models that make solid guarantees of results. One general difference you should note is that training a network with dropout will take longer (more epochs) than training a similar network without dropout, to reach the same levels of accuracy.

However, as well as the regularisation effect of dropout, it shares some behaviour with ensemble techniques such as bagging. Dropout effectively trains many sub-networks (that share weights) on different samples of the training data. This pseudo-ensemble effect can boost accuracy, and other success metrics. This is not a guaranteed effect, but it does happen in practice.

",1847,,2444,,3/10/2020 20:28,3/10/2020 20:28,,,,0,,,,CC BY-SA 4.0 8514,1,8520,,10/19/2018 9:34,,2,133,"

I am referring to Equation 3.6 (page 49) of Sutton's online book, which can be found in the image below.

I could not make sense of the final derivation of the equation $r(s, a, s')$. My question is actually how do we come to that final derivation?

Surprisingly, the denominator $p(s'|s, a)$ can literally be replaced by $p(s', r|s, a)$, as Equation 3.4 suggests; then it will end up with only the "$r$" term, due to cancellation of the numerator $p(s', r|s, a)$ and the denominator $p(s'|s, a)$.

Any explanation on that would be appreciated.

",19147,,2444,,1/3/2021 23:08,1/3/2021 23:08,"Why is the equation $r(s', a, s') =\sum_{r \in \mathcal{R}} r \frac{p\left(s^{\prime}, r \mid s, a\right)}{p\left(s^{\prime} \mid s, a\right)}$true?",,1,0,,,,CC BY-SA 4.0 8515,2,,8168,10/19/2018 10:12,,1,,"

Yours is a really nice and easy question: you've seen how to use GAs in a complex problem but you're missing how to apply them to the most basic of all. I'll show you:

In a real-world implementation we have to structure our problem to be solved with GAs; we need to modify it, finding an equivalent representation that accepts individuals, each one of these is built from a genome and must be evaluated using a fitness function.

If you see this graphically, you'll discover that the fitness function is just describing an area (or line or volume, depending on the dimensionality) in our space, and we are randomly dropping individuals on it, awarding the ones that fell higher than the others. We then try to modify the genomes of these guys to move them towards peaks of this fitness function.

In practice the fitness function is our world, our ground truth and we are exploring it.

Now to the function optimization part: well, we do not need any abstraction here, any strange individual representation or transformation of the problem; we just want to find a maximum, and this is exactly what GAs are for!

So, let's have a look at the elements you need to solve your problem with GAs:

  • Fitness function => you have it already! It's the function you want to find the max of.
  • Individual => as finding the maximum is enough, the individual will just be a point in space.
  • Genome => like every respective point in space, the genome will just be a collection of real numbers (one for each dimension).

Now, like in any GA, you will randomly instantiate an initial population (random points in the domain). The evaluation is of course the easy part: just put them into the function and you'll see the value of the individuals.

What about crossover and mutation though? You'll have to use techniques that work with real-number genes, like BLX-alpha and BLX-alpha-beta. These just define ranges between values and pick random values inside these ranges. I wrote a pretty detailed answer about this; you can check it out at: https://ai.stackexchange.com/a/6323/15530
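
Putting those pieces together, a bare-bones sketch in Python (the fitness function and all constants are placeholders, and BLX-alpha is reduced to its essential idea):

import random

def fitness(point):                        # the function to maximise (placeholder)
    x, y = point
    return -(x - 3) ** 2 - (y + 1) ** 2

def blx_alpha(a, b, alpha=0.5):            # crossover for real-valued genes
    child = []
    for ga, gb in zip(a, b):
        lo, hi = min(ga, gb), max(ga, gb)
        d = (hi - lo) * alpha
        child.append(random.uniform(lo - d, hi + d))
    return child

population = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # simple truncation selection
    population = parents + [blx_alpha(random.choice(parents), random.choice(parents))
                            for _ in range(40)]
best = max(population, key=fitness)        # should end up near the peak at (3, -1)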

",15530,,15530,,7/1/2019 2:59,7/1/2019 2:59,,,,1,,,,CC BY-SA 4.0 8517,2,,4779,10/19/2018 10:32,,2,,"

It is not possible to have a negative IG; your outcome is negative because of a computational mistake: initially you have 69 positive instances, and after creating 2 children the sum drops to 56+11=67. The set therefore appears less pure -> the IG comes out negative.

",19149,,,,,10/19/2018 10:32,,,,0,,,,CC BY-SA 4.0 8518,1,,,10/19/2018 11:55,,4,1406,"

What are some ways to design a neural network with the restriction that the $L_1$ norm of the output values must be less than or equal to 1? In particular, how would I go about performing back-propagation for this net?

I was thinking there must be some "penalty" method, just as, in mathematical optimization problems, you can introduce a log barrier function as the "penalty function".

",19150,,2444,,10/29/2020 9:13,10/29/2020 9:14,How do we design a neural network such that the $L_1$ norm of the outputs is less than or equal to 1?,,2,0,,,,CC BY-SA 4.0 8519,1,8521,,10/19/2018 14:50,,1,20,"

As you can see in the title, I'm trying to program an AI in Java that would help someone optimize his storage.

The user has to enter the size of his storage space (a box, a room, a warehouse, etc...) and then enter the size of the items he has to store in this space. (note that everything must be a rectangular parallelepiped) And the AI should find the best position for each item such that space is optimized.

Here is a list of what I started to do :

  • I asked the user to enter the size of the storage space (units are trivial here except for the computing cost of the AI, later on, I'm guessing), telling him that the values will be rounded down to the unit
  • I started by creating a 3-dimensional array of integers representing the storage space's volume, using the 3 values taken earlier. Filling it with 0s, where 0s would later represent free space and 1s occupied space.
  • Then, store in another multidimensional array the sizes of the items he has to store. And that's where the AI part should start. The first thing the AI should do is check whether the sum of all the items' volumes doesn't surpass the storage space's volume. But then there are so many things to do and so many possibilities that I get lost in my thoughts and don't know where to start...

In conclusion, can anyone give me the proper terms of this problem in AI literature, as well as a link to an existing work of this kind? Thanks

",19155,,30725,,5/29/2020 13:48,5/29/2020 13:48,AI that maximizes the storage of rectangular parallelepipeds in a bigger parallelepiped,,1,0,,,,CC BY-SA 4.0 8520,2,,8514,10/19/2018 15:23,,4,,"

No, the substitution you suggest based on Equation (3.4) is not correct because you forgot about the $\sum_{r \in \mathcal{R}}$ in the right-hand side Equation (3.4).

Equation (3.4) says (leaving out the middle part):

$$p(s' \vert s, a) \doteq \sum_{r \in \mathcal{R}} p(s', r \vert s, a).$$

If you plug this into Equation (3.6) to substitute the denominator, you can't forget about that sum, you have to include the complete sum in the denominator. Because we have two different sums summing over all rewards in $\mathcal{R}$, I'll change the symbol used in the second sum to $r'$ rather than $r$. This yields:

$$r(s, a, s') \doteq \sum_{r \in \mathcal{R}} r \frac{p(s', r \vert s, a)}{\sum_{r' \in \mathcal{R}} p(s', r' \vert s, a)}.$$

The numerator and denominator are different and will not cancel out.


Intuitively, the numerator is just a single probability; the probability of observing a specific next state $s'$ and a reward $r$ given a specific current state $s$ and action $a$.

The denominator is a sum of many such probabilities, for all possible rewards $r'$ rather than just one specific reward $r$. Because this sum ""covers"" all possible events for the rewards, it essentially represents simply the probability of observing state $s'$ given state $s$ and action $a$, paired with any arbitrary reward. That can more simply be denoted as $p(s' \vert s, a)$.... which makes sense because that's the very thing we started with :D


Equation (3.6) is as follows (again leaving the middle part out):

$$r(s, a, s') \doteq \sum_{r \in \mathcal{R}} r \frac{p(s', r \vert s, a)}{p(s' \vert s, a)}.$$

In normal English, the left-hand side says ""what reward do we expect to get (on average) if we transition from state $s$ to state $s'$ by executing action $a$?""

Such an $\color{red}{\text{expectation of a quantity }r}$ can always be computed by multiplying $\color{blue}{\text{the possible values that }r\text{ can take}}$ by $\color{orange}{\text{the probability of each particular value occurring}}$, and summing up those multiplications. In math, this looks like:

$$\color{red}{r(s, a, s')} = \sum_{\color{blue}{r \in \mathcal{R}}} \color{blue}{r} \times \color{orange}{p(r \vert s, a, s')}.$$

Now, there is a rule in probability that says (see Rule of Multiplication on https://stattrek.com/probability/probability-rules.aspx):

$$p(A, B) = p(A) \times p(B \vert A),$$

so, if we take $A = s'$, $B = r$, and $s, a$ as additional givens for all probabilities, we get:

$$p(s', r \vert s, a) = p(s' \vert s, a) \times p(r \vert s, a, s').$$

This can be rewritten (dividing both sides of the equation by $p(s' \vert s, a)$ and swapping the left-hand and right-hand sides) to:

$$p(r \vert s, a, s') = \frac{p(s', r \vert s, a)}{p(s' \vert s, a)}.$$

Plugging this in for the orange term in the coloured equation above (which itself is hopefully fairly easy to understand intuitively) yields Equation (3.6).

",1641,,1641,,10/19/2018 17:36,10/19/2018 17:36,,,,1,,,,CC BY-SA 4.0 8521,2,,8519,10/19/2018 16:59,,0,,"

A simple approach that gives a good baseline for such problems is simulated annealing. The idea is that you do something random. If it improves things, then it is good. If it makes things worse, you still take it with some probability $p$, where $p$ shrinks over time.

The more bad solutions you can rule out beforehand / the smarter you can encode your problem, the better solutions simulated annealing will give.
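
A skeleton of that idea in Python (the cost and neighbour functions are placeholders you would replace with your own packing representation):

import math
import random

def simulated_annealing(initial, cost, neighbour, t0=1.0, cooling=0.995, steps=10000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)                 # e.g. move or rotate one item
        c = cost(candidate)
        # always accept improvements; accept worse solutions with shrinking probability
        if c < current_cost or random.random() < math.exp((current_cost - c) / t):
            current, current_cost = candidate, c
            if c < best_cost:
                best, best_cost = candidate, c
        t *= cooling                                   # the acceptance probability shrinks over time
    return best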

",3217,,,,,10/19/2018 16:59,,,,0,,,,CC BY-SA 4.0 8522,1,,,10/19/2018 18:13,,3,118,"

For example, consider an agent concerned with predicting the weather, with variable R indicating whether or not it is likely to rain, variable C indicating whether or not it is cloudy, and variable L indicating low pressure. Given knowledge base K:

L (Pressure is low)

C (It is cloudy)

C ∧ L ⇒ R, (Clouds and low pressure imply rain)

the agent may conclude R; thus, the agent’s knowledge implies that R is true, because K |= R.

Similarly, given knowledge base L:

¬L (Pressure is high)

C (It is cloudy)

C ∧ L ⇒ R, (Clouds and low pressure imply rain)

the agent cannot conclude that R is true; L ⊭ R

Deriving a truth table:

L  C  r  ((L ∧ C) → r)
F  F  F  T
F  F  T  T
F  T  F  T
F  T  T  T
T  F  F  T
T  F  T  T
T  T  F  F
T  T  T  T

but this does not make sense.

",19164,,1671,,10/15/2019 19:10,10/15/2019 19:10,How do I use truth tables to prove Entailment?,,0,1,,,,CC BY-SA 4.0 8525,2,,8496,10/19/2018 19:06,,2,,"

Background

The Markov Decision Process is an extension of Andrey Markov's action sequence that visualizes action-result sequence possibilities as a directed acyclic graph. One path through the acyclic graph, if it satisfies the Markov Property, is called a Markov Chain.

The Markov Property requires that the probability distribution of future states at any point within the acyclic graph be evaluated solely on the basis of the present state.

Markov Chains are thus a stochastic model theoretically representing one of the set of possible paths. And the action-result sequence is a list of state transitions corresponding to actions chosen solely by each action's previous state and the expectations that the expected subsequent state will most probably lead to the desired outcome.

Andrey Markov based his work on Gustav Kirchhoff's work on spanning trees, which is based on Euler's initial directed graph work.

The Exercise

Exercise 3.2 was given with two parts.

Is the MDP framework adequate to usefully represent all goal-directed learning tasks?

Can you think of any clear exceptions?

The first question is subjective in that it inquires about usefulness but does not define what it means. If "useful" means the MDP will improve the chances of achieving a goal over a random selection of action at each state, then, except in no-win scenarios or the most contrived case where all actions have an equal distribution of probable results, the MDP is useful.

If "useful" means optimal, then there are other approaches, with additional complexity and requiring additional computing resources that will improve odds of goal achievement. These other approaches overcome one or more of the limitations of pure MDP.

Advancements and Alternatives

Advancements made to MDP and alternatives to MDP, which number in the hundreds, include these.

  • Logical detection of the infeasibility of goal achievement (no win scenario)
  • Calculation of probabilities when only partial information is available about the current state
  • Invocation of the decision at any point (continuous MDP used in real time systems)
  • Probabilities are not known and must be learned from past experience where simple Q-learning is employed
  • Past experience is used by statistically relating action-state details to generalizations derived from past action-result sequences or such information acquired or shared
  • The action-state decisions, made within the context of an unknown system of changing or not reliably applied rules, can be used to tune a set of fuzzy rules in a fuzzy logic container and utilize fuzzy inference in the decisions
  • Bluff and fraud detection

Card Games

Game play for a typical card game could make use of MDP, so MDP would be strictly useful, though not optimal. Some of the above decisioning features would be closer to optimal, particularly those that deal with unknowns and employ rules, since the card game has both.

Random or Decoupled

Two obvious cases are (a) a truly randomized action-result world where goal achievement has equal probability no matter the sequence of moves or (b) a scenario where goal achievement is entirely decoupled from actions the actor can take. In those cases, nothing will be useful with regard to the particular objective chosen.

Challenge

The way to best learn from the exercise, though, is to find a scenario where MDP would be useless and one of the above listed Advancements and Alternatives would be required rather than simply preferred. If you look at the list, there will be some cases that will eventually come to mind. I suggest you think it through, since the goal is to learn from the book.

",4302,,-1,,6/17/2020 9:57,10/19/2018 19:06,,,,0,,,,CC BY-SA 4.0 8528,1,8538,,10/19/2018 22:18,,1,751,"

I am reviewing a statement on the website for ES regarding structured exploration.

https://blog.openai.com/evolution-strategies/

Structured exploration. Some RL algorithms (especially policy gradients) initialize with random policies, which often manifests as random jitter on spot for a long time. This effect is mitigated in Q-Learning due to epsilon-greedy policies, where the max operation can cause the agents to perform some consistent action for a while (e.g. holding down a left arrow). This is more likely to do something in a game than if the agent jitters on spot, as is the case with policy gradients. Similar to Q-learning, ES does not suffer from these problems because we can use deterministic policies and achieve consistent exploration.

Where can I find sources showing that policy gradients initialize with random policies, whereas Q-Learning uses epsilon-greedy policies?

Also, what does ""max operation"" have to do with epsilon-greedy policies?

",19167,,,,,10/20/2018 20:56,"Some RL algorithms (especially policy gradients) initialize with random policies, which often manifests as random jitter on spot for a long time?",,1,1,,,,CC BY-SA 4.0 8533,1,,,10/20/2018 11:16,,1,28,"

There are several levels of abstraction involved in piloting and driving.

  • Signals representing the state of the vehicle and its environment originating from multiple transducers1
  • Latched sample vectors/matrices
  • Boundary events (locations, spectral features, movement, appearance and disappearance of edges, lines, and sounds)
  • Objects
  • Object movements
  • Object types (runways, roads, aircraft, birds, cars, people, pets, screeches, horns, bells, blinking lights, gates, signals, clouds, bridges, trains, buses, towers, antennas, buildings, curbs)
  • Trajectory probabilities based on object movements and types
  • Behaviors based on all the above hints
  • Intentions based on behavior sequences and specific object recognition
  • Collision risk detection

Moving from interpretation to control execution ...

  • Preemptive collision avoidance reaction
  • Horn sounding
  • Plan adjustment
  • Alignment of plan to state
  • Trajectory control
  • Skid avoidance
  • Skid avoidance reaction
  • Steering, braking, and signalling
  • Notifications to passengers

What, if any, levels of higher abstraction can be sacrificed? Humans, if they are excellent pilots or drivers, can use all of these levels to improve pedestrian and passenger safety and minimize expense in time and money.

Footnotes

[1] Optical detectors, microphones, strain gauge bridges, temperature and pressure gauges, triangulation reply signals, voltmeters, position encoders, key depression switches, flow detectors, altimeters, radar transducers, tachometers, accelerometers

",4302,,,,,10/20/2018 11:16,To what level of abstraction must fully automated vehicles build their driving model before safety can be maximized?,,0,0,,,,CC BY-SA 4.0 8534,1,18547,,10/20/2018 12:58,,7,1240,"

I struggle to find Rosenblatt's perceptron training algorithm in any of his publications from 1957 - 1961, namely:

  1. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms

  2. The perceptron: A probabilistic model for information storage and organization in the brain

  3. The Perceptron — A Perceiving and Recognizing Automaton

Does anyone know where to find the original learning formula?

",19177,,2444,,1/14/2020 15:42,12/5/2020 19:04,Which Rosenblatt's paper describes Rosenblatt's perceptron training algorithm?,,1,0,,,,CC BY-SA 4.0 8536,2,,4953,10/20/2018 17:00,,2,,"

An introduction to the Haar features is provided in the YouTube video. The video indicates the VJ face detector leverages a selected combination of Haar features (convolutional kernels) to detect facial features (weak classifiers), such as the nose bridge. The binary outputs of the weak classifiers are summed to determine if the window contains a face.

The ability for a VJ algorithm to detect emotion would rely on the ability to assign a set of Haar features (kernels) to recognize features associated with a particular emotion label (surprise, anger, content, fear).

It is conceivable that the initial stage of an emotion classifier could use the VJ algorithm to identify a face for additional stages to classify emotion.
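
As a rough sketch of what a single rectangle (Haar-like) feature looks like in code (the window coordinates and the choice of a two-rectangle feature are illustrative assumptions, not the exact VJ feature set):

    import numpy as np

    def integral_image(img):
        # Cumulative sums over rows then columns: entry (y, x) holds the sum
        # of all pixels above and to the left, so any rectangle sum needs
        # only four lookups.
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, y0, x0, y1, x1):
        # Sum of pixels in the rectangle [y0, y1) x [x0, x1).
        total = ii[y1 - 1, x1 - 1]
        if y0 > 0:
            total -= ii[y0 - 1, x1 - 1]
        if x0 > 0:
            total -= ii[y1 - 1, x0 - 1]
        if y0 > 0 and x0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total

    def two_rect_feature(ii, y, x, h, w):
        # Left half minus right half: large values indicate a bright region
        # next to a dark one, e.g. a nose bridge next to an eye region.
        left = rect_sum(ii, y, x, y + h, x + w // 2)
        right = rect_sum(ii, y, x + w // 2, y + h, x + w)
        return left - right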

",18819,,18819,,10/20/2018 17:15,10/20/2018 17:15,,,,0,,,,CC BY-SA 4.0 8538,2,,8528,10/20/2018 20:34,,0,,"

Where can I find sources showing that policy gradients initialize with random policies, whereas Q-Learning uses epsilon-greedy policies?

You can find example algorithms for Q learning and policy gradients in Sutton & Barto's Reinforcement Learning: An Introduction - Q learning is in chapter 6, and policy gradients explained in chapter 13.

Neither of these things are strictly true in all cases. However, both are very common situations for the two kinds of learning agent:

  • Policy gradient solvers learn a policy function $\pi(a|s)$ that gives the probability of taking action $a$ given observed state $s$. Typically this is implemented as a neural network. Neural networks are initialised randomly. For discrete action selection using a softmax output layer, the initial function will roughly be choosing evenly from all possible actions. So if some actions oppose and undo each other, e.g. move left and move right are options - then the situation as described in your quote can easily happen. Nowadays there are many other solutions using policy gradients that don't suffer as much from this effect - for instance, deterministic policy gradient methods such as DDPG, or actor-critic methods such as A2C.

  • For Q learning, and many variants of it, $\epsilon$-greedy is a very commonly used action selection mechanism. It is convenient and simple, and allows a simple parameter to control balance between exploration and exploitation whilst learning. However, Q learning can work with any action selection mechanism that has some possibility of acting optimally at least occasionally.

The best approach to action selection during the learning process in both policy-based methods and value-based methods is an active area of research. So if you read the RL literature you may find a lot of variation. The blog you quoted from has identified two quite common choices.

Also, what does ""max operation"" have to do with epsilon-greedy policies?

The ""max operation"" is finding the maximum value of some function produced when varying a parameter. The related ""argmax operation"" is finding the value of the parameter that produces the maximum value. Q learning can use both types of operation, but specifically uses $\text{argmax}$ for $\epsilon$-greedy action selection.

An $\epsilon$-greedy policy requires acting optimally (according to current estimates of value function) with probability $p=1-\epsilon$, and randomly with equal probability of each action with probability $p=\epsilon$

In order to do this, the algorithm usually (with probability $p=1-\epsilon$) needs to know what the current best guess at an optimal solution is. That is the greedy action with respect to the current Q values, i.e. $\text{argmax}_a Q(s,a)$.
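
As a minimal sketch of $\epsilon$-greedy action selection (assuming q_values is simply an array holding the current estimates of $Q(s,a)$ for each action in the current state):

    import numpy as np

    def epsilon_greedy_action(q_values, epsilon):
        # Explore with probability epsilon, otherwise exploit the greedy
        # (argmax) action under the current Q estimates.
        if np.random.rand() < epsilon:
            return np.random.randint(len(q_values))
        return int(np.argmax(q_values))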

",1847,,1847,,10/20/2018 20:56,10/20/2018 20:56,,,,0,,,,CC BY-SA 4.0 8544,2,,8258,10/21/2018 6:42,,2,,"

Direct Answer

The Belgium TS Dataset may be helpful, as well as The German Traffic Sign Detection Benchmark.

Additional Notes Based on Question Author's Idea

The idea in the question author's addendum of placing signs onto street sides and corners is a good one, but to do it repeatably and in a way that doesn't bias the training is its own research project. However, it is a good research direction. What would be of benefit to AV researchers worldwide is a multi-network topology and equilibrium strategy with the objective of creating the following data generation features.

  • Street sign symbol inputs in image form, with or without cropping, as movie frame sequences or single still shots, or from SVG files.
  • Annotation generation using partially human-labelled data.
  • 3D analysis of sign angle and perspective setting so that the images appear exactly as they would from a vehicle's imaging system.
  • Matching of lighting between the superimposed sign and the background scene.
  • Automatic blue-screening for the sign image.

This is obviously not a basic data hygiene problem. It is its own AI project, but the return on this research project in terms of furthering the AV technology is immense and may have significant data set statistical advantages over collecting data from the vendors that supply images to Google maps and other Big Data aggregators.
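
As a very rough sketch of the perspective and superimposition steps listed above (the corner coordinates are placeholders that would come from the desired 3D pose, and lighting matching is not addressed here):

    import cv2
    import numpy as np

    def paste_sign(background, sign, dst_corners):
        # Warp the sign onto the background at the quadrilateral dst_corners,
        # given as a 4x2 array of (x, y) points ordered top-left, top-right,
        # bottom-right, bottom-left.
        h, w = sign.shape[:2]
        src_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(src_corners, np.float32(dst_corners))
        bh, bw = background.shape[:2]
        warped = cv2.warpPerspective(sign, M, (bw, bh))
        # Crude mask-based compositing in place of real blue-screening.
        mask = cv2.warpPerspective(np.full((h, w), 255, dtype=np.uint8), M, (bw, bh))
        out = background.copy()
        out[mask > 0] = warped[mask > 0]
        return out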

",4302,,4302,,10/21/2018 14:19,10/21/2018 14:19,,,,2,,,,CC BY-SA 4.0 8545,2,,8458,10/21/2018 7:13,,2,,"

The Hough Transform extended to orthogonal ellipses uses this model, accumulating on $\theta$ for all $\{x, y\}$ with parameter matrix

$$\begin{Bmatrix} c_x & c_y \\ r_x & r_y \end{Bmatrix}$$

where

$$1 = \dfrac {(x - c_x) \, \cos \theta} {r_x} + \dfrac {(y - c_y) \, \sin \theta} {r_y}$$

The question is looking to detect the normal lines, so any of the several algorithms for the above model can be modified to accumulate on $r$ for all $\{x, y\}$ with parameter matrix

$$\begin{Bmatrix} c_x & c_y \\ r_x & r_y \end{Bmatrix}$$

where

$$0 = \dfrac {x - c_x} {r_x} + \dfrac {y - c_y} {r_y}$$

Lines that intersect $(c_x, c_y)$ don't rely on $r_x$ or $r_y$. However, it may be useful to recognize that, if radially equally spaced, viewing the lines from a position other than one that projects into the plane of the lines at $(c_x, c_y)$ will present a line density that is a function of $\arctan (r_x, r_y)$.
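
A minimal sketch of that accumulation for lines through an assumed centre $(c_x, c_y)$ (the edge points and the bin count are illustrative; peaks in the returned accumulator correspond to radial lines):

    import numpy as np

    def radial_line_accumulator(edge_points, c_x, c_y, n_bins=180):
        # Accumulate the angle of each edge point about the centre; opposite
        # rays are folded together so each line occupies a single bin.
        acc = np.zeros(n_bins)
        for x, y in edge_points:
            theta = np.arctan2(y - c_y, x - c_x) % np.pi
            acc[int(theta / np.pi * n_bins) % n_bins] += 1
        return acc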

",4302,,4302,,10/23/2018 5:40,10/23/2018 5:40,,,,1,,,,CC BY-SA 4.0 8546,1,,,10/21/2018 7:24,,4,79,"

Suppose that I have a model M that overfits a large dataset S such that the test error is 30%. Does that mean that there will always exist a model that is smaller and less complex than M that will have a test error less than 30% on S (and does not overfit S)?

",19190,,,,,10/21/2018 16:08,Does overfitting imply an upper bound on model size/complexity?,,0,4,,,,CC BY-SA 4.0 8548,1,8552,,10/21/2018 8:17,,1,599,"

What is the best and easiest programming language to learn to implement genetic algorithms? C++ or Python, or any other?

",19192,,2444,,2/9/2020 17:39,2/9/2020 17:39,What is the best programming language to learn to implement genetic algorithms?,,2,0,,8/23/2021 11:40,,CC BY-SA 4.0 8549,2,,8548,10/21/2018 9:00,,5,,"

There is no ""best language"" for any problem. There are too many variables to consider, even when advising a single person with a single project plan.

If the choice is between Python and C++, I would generally advise:

  • If you want to implement from scratch and learn how the algorithm works, use Python with numeric/accelerated libraries such as NumPy or PyTorch. Python script is quicker to prototype and try different ideas, due to loose variable typing and built-in high-level structures such as dict and list.

  • If you want to write core, efficient libraries, then C++ will out-perform Python, but writing these will take longer. There are plenty of C++ libraries available (e.g. TensorFlow is available with a C++ API), but the community around them is less focused than with Python toolkits.

You can also combine both approaches and write specific libraries in C or C++ to improve performance at any time.

With Genetic Algorithms, the speed bottleneck is most often population assessment - e.g. running the environment simulation to get a fitness measure for each individual - and not the GA itself. So if you have a specific problem or problem domain in mind, you may want to orient your choice around a language that already has support for the kind of environments where you want to run your GA. GAs usually benefit greatly from parallelisation, so if you are aiming for something ambitious you will want to look into GPU support and/or distributed computing toolkits.
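
To illustrate why the fitness evaluation dominates, here is a bare-bones GA loop; evaluate, crossover and mutate are placeholders for the problem-specific parts, and the selection scheme is only one of many possible choices:

    import random

    def run_ga(population, evaluate, crossover, mutate, generations=100, elite=2):
        for _ in range(generations):
            # Ranking the population calls evaluate once per individual per
            # generation - usually the expensive part.
            scored = sorted(population, key=evaluate, reverse=True)
            next_gen = scored[:elite]  # keep the best individuals unchanged
            while len(next_gen) < len(population):
                a, b = random.sample(scored[:len(scored) // 2], 2)
                next_gen.append(mutate(crossover(a, b)))
            population = next_gen
        return max(population, key=evaluate)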

Most researchers/hobbyists working in AI-related fields end up using multiple languages over time. You may end up with a favourite language environment, which might be Python, Julia, Java, C++, C, C#, Lua, LISP, Prolog, Matlab/Octave, R . . . but you will end up needing a smattering of other languages, and usually skills with specific toolkits such as TensorFlow, Scikit-Learn, Hadoop etc in order to complete projects.

Don't be afraid that you will ""waste time"" learning in one language initially then needing to transfer to another. Your learning of algorithms will be transferable, and your first attempts will most likely not be that re-usable as library code anyway, so you are going to re-implement your ideas, perhaps many times. My first simulated annealing project was written in Fortran 77 . . . 20 years later I dug that knowledge up again and implemented in Ruby/C - nowadays I work in Python for the AI/hobby stuff even though my professional career sees me working mostly in Ruby.

",1847,,1847,,10/21/2018 9:16,10/21/2018 9:16,,,,0,,,,CC BY-SA 4.0 8550,1,,,10/21/2018 9:15,,6,101,"

What are the near-future prospects of a theoretical investigation of description logics and modal logics in the context of artificial intelligence research?

",19193,,1581,,10/30/2018 17:00,4/28/2019 18:02,What are the current trends/open questions in logics for knowledge representation?,,0,0,,,,CC BY-SA 4.0 8551,1,,,10/21/2018 9:34,,3,110,"

I have a data input vector (no image classification) whose size varies from 2 to 7 entries. Every one of them belongs to a class out of 7. So I have a variable input size and a variable output size. How can I deal with the variable input sizes? I know zero padding is an option, but maybe there are better ways?

Second: Is multi-label classification possible in one network? What I mean: the first entry has to be classified into one of the seven classes, the second entry... and so on.

I am also open to other classification techniques, if there is a better one that suits the problem.

Best regards, Gesetzt

",19195,,,,,10/21/2018 9:34,Variable sized input-Multi Label Classification with Neural Network,,0,0,,,,CC BY-SA 4.0 8552,2,,8548,10/21/2018 11:00,,-2,,"

Matlab may be a good option to get started with the implementation of genetic algorithms, given that there are a lot of pre-defined functions.

See e.g. https://www.mathworks.com/discovery/genetic-algorithm.html and https://www.mathworks.com/help/gads/examples/coding-and-minimizing-a-fitness-function-using-the-genetic-algorithm.html.

",14749,,2444,,2/9/2020 14:51,2/9/2020 14:51,,,,0,,,,CC BY-SA 4.0 8553,1,,,10/21/2018 12:44,,2,42,"

I have read quite a lot about capsule networks, but I cannot understand how the squashed vector would also rotate in response to rotation or translation of the image. A simple example would be helpful. I understand how routing by agreement works.

",19201,,2444,,6/9/2020 11:38,6/9/2020 11:38,How exactly is equivariance achieved in capsule neural networks?,,0,0,,,,CC BY-SA 4.0 8554,1,,,10/21/2018 13:35,,6,1032,"

I am trying to study the book Reinforcement Learning: An Introduction (Sutton & Barto, 2018). In chapter 3.1 the authors state the following exercise

Exercise 3.5 Give a table analogous to that in Example 3.3, but for $p(s',r|s,a)$. It should have columns for $s$, $a$, $s'$, $r$, and $p(s',r|s,a)$, and a row for every 4-tuple for which $p(s',r|s,a)>0$.

The following table and graphical representation of the Markov Decision Process is given on the next page.

I tried to use $p(s'\cup r|s,a)=p(s'|s,a)+p(r|s,a)-p(s' \cap r|s,a)$, but without significant progress, because I think this formula does not make any sense, as $s'$ and $r$ are not from the same set. How is this exercise supposed to be solved?

Edit

Maybe this exercise intends to be solved by using

$$p(s'|s,a)=\sum_{r\in \mathcal{R}}p(s',r|s,a)$$

and

$$r(s,a,s')=\sum_{r\in \mathcal{R}}r\dfrac{p(s',r|s,a)}{p(s'|s,a)}$$

and

$$\sum_{s'\in\mathcal{S}}\sum_{r\in\mathcal{R}}p(s',r|s,a)=1$$

the resulting system is a linear system of 30 equations with 48 unknowns. I think I am missing some equations...

",19123,,2444,,4/18/2022 9:26,4/18/2022 9:26,"How do compute the table for $p(s',r|s,a)$ (exercise 3.5 in Sutton & Barto's book)?",,5,0,,,,CC BY-SA 4.0 8555,2,,8554,10/21/2018 15:06,,3,,"

The function $r(s,a,s')$ gives the expected reward in each scenario, but not the distribution of rewards that lead to values $r_{search}$ and $r_{wait}$

The text explains that reward is $+1$ for each can found, and that different distributions of numbers of cans are expected when waiting as opposed to searching. However, it does not give any description of the actual distributions, just summarises them as the two expected rewards, and suggests $r_{search} \gt r_{wait}$

You have two main ways to answer the exercise:

  1. Invent some parameters for the distributions of $r_{search}$ and $r_{wait}$ in order to split up single values of $p(s'|s,a)$ into multiple values of $p(s', r|s,a)$. E.g you could decide that $r_{search}$ consists of $0 \eta_0 + 1 \eta_1 + 2 \eta_2 + 3 \eta_3$ where $\eta_0, \eta_1, \eta_2, \eta_3$ are probabilities that sum to $1$ - each row that currently has $r_{search}$ as the output of $r(s,a,s')$ would then split into 4 rows with reward 0, 1, 2, 3 to complete the new table . . . $r_{wait}$ would need a different set of parameters.

  2. Ignore the details of the distribution, move column $r(s,a,s')$ to the left and call it $r$, changing $p(s'|s,a)$ to $p(s', r|s,a)$. It might be all that's expected given the lack of information.

My personal opinion is that the authors want you to think about solution 1 - the only issue is that it requires you to invent some new parameters that were not provided. The ones I name are only a suggestion, they do not represent a specific ""correct"" answer in terms provided by the book, because the book omits those details.

As an example to start with, if you start solution 1, and use parameters as I have labelled them, you will end up with a first row looking like this:

$$\begin{array}{ccccc} s & a & s' & r & p(s', r \vert s, a) \\ high & search & high & 0 & \alpha \eta_0 \end{array}$$

",1847,,1847,,10/21/2018 16:19,10/21/2018 16:19,,,,11,,,,CC BY-SA 4.0 8560,1,,,10/21/2018 17:09,,20,59974,"

Batch size is a term used in machine learning and refers to the number of training examples utilised in one iteration. The batch size can be one of three options:

  1. batch mode: where the batch size is equal to the total dataset thus making the iteration and epoch values equivalent
  2. mini-batch mode: where the batch size is greater than one but less than the total dataset size. Usually, a number that can be divided into the total dataset size.
  3. stochastic mode: where the batch size is equal to one. Therefore the gradient and the neural network parameters are updated after each sample.

How do I choose the optimal batch size, for a given task, neural network or optimization problem?

If you hypothetically didn't have to worry about computational issues, what would the optimal batch size be?

",11814,,2444,,10/31/2020 10:07,5/31/2021 16:50,How do I choose the optimal batch size?,,3,0,,,,CC BY-SA 4.0 8564,2,,7755,10/21/2018 19:08,,7,,"

The most straightforward solution is to simply make every action ""legal"", but implementing a consistent, deterministic mapping from potentially illegal actions to different legal actions. Whenever the PPO implementation you are using selects an illegal action, you simply replace it with the legal action that it maps to. Your PPO algorithm can then still update itself as if the illegal action were selected (the illegal action simply becomes like... a ""nickname"" for the legal action instead).

For example, in the situation you describe:

  • 2 actions (0 and 1) are always available
  • 2 actions (2 and 3) are only available when the internal_state == 0
  • 1 action (4) is only available when the internal_state == 1

In cases where internal_state == 0, if action 4 was selected (an illegal action), you can always swap it out for one of the other actions and play that one instead. It doesn't really matter (theoretically) which one you pick, as long as you're consistent about it. The algorithm doesn't have to know that it picked an illegal action, whenever it picks that same illegal action in the future again in similar states it will consistently get mapped to the same legal action instead, so you just reinforce according to that behaviour.


The solution described above is very straightforward, probably the most simple to implement, but of course it... ""smells"" a bit ""hacky"". A cleaner solution would involve a step in the Network that sets the probability outputs of illegal actions to $0$, and re-normalizes the rest to sum up to $1$ again. This requires much more care to make sure that your learning updates are still performed correctly though, and is likely a lot more complex to implement on top of an existing framework like Tensorforce (if not already somehow supported in there out of the box).
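
A rough sketch of that masking-and-renormalising step on the policy's output probabilities (whether gradients flow correctly depends on where in the network and loss you apply it, which is exactly the part needing care; this is not presented as a built-in Tensorforce feature):

    import numpy as np

    def mask_action_probs(probs, legal_mask):
        # Zero out illegal actions and renormalise so the legal ones sum to 1.
        masked = np.where(legal_mask, probs, 0.0)
        total = masked.sum()
        if total == 0.0:
            # Degenerate case: all probability mass was on illegal actions,
            # so fall back to a uniform distribution over the legal ones.
            masked = legal_mask.astype(float)
            total = masked.sum()
        return masked / total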


For the first ""solution"", I wrote above that it does not matter ""theoretically"" how you choose you mapping. I absolutely do expect your choices here will have an impact on learning speed in practice though. This is because, in the initial stages of your learning process, you'll likely have close-to-random action selection. If some actions ""appear multiple times"" in the outputs, they will have a greater probability of being selected with the initial close-tor-andom action selection. So, there will be an impact on your initial behaviour, which has an impact on the experience that you collect, which in turn also has an impact on what you learn.

I certainly expect it will be beneficial for performance if you can include input feature(s) for the internal_state variable.

If some legal actions can be identified that are somehow ""semantically close"" to certain illegal actions, it could also be beneficial for performance to specifically connect those ""similar"" actions in the ""mapping"" from illegal to legal actions if you choose to go with that solution. For example, if you have a ""jump forwards"" action that becomes illegal in states where the ceiling is very low (because you'd bump your head), it may be better to map that action to a ""move forwards"" action (which is still kind of similar, they're both going forwards), than it would be to map it to a ""move backwards"" action. This idea of ""similar"" actions will only be applicable to certain domains though, in some domains there may be no such similarities between actions.

",1641,,1641,,10/21/2018 19:30,10/21/2018 19:30,,,,3,,,,CC BY-SA 4.0 8565,2,,8518,10/21/2018 19:37,,-3,,"

The term 'size' isn't applicable to the tensor output of a network. These are the relevant qualities.

  • Rank $N$ that defines the rank of each tensor instance in $\mathbb{R}^N$
  • Ranges of the indices to the dimensions from $1$ to $N$
  • Ranges of the scalar values that comprise the tensor instance — If they are discrete rather than real (approximated by floating point or fixed point numbers), then the range is the description of the permissible ordinal values.

The question may be referring to this last quality.

The imposition of a penalty for values that are in the range of the numeric type used as the output of the last activation function but not in the allowable range of output for the desired trained function works in a limited way. It skews the output distribution with respect to the natural distribution of possible learning states and therefore can easily interfere with convergence quality or speed or both.

There are a number of techniques that map natural output distribution with constrained ranges, but it must be done without skewing the distribution upstream from the technique used, to avoid negatively impacting favorable convergence properties of the artificial network.

One simple case that can be described here is when the number of possible output states is $2^i$ for some positive integer $i$. In such a case, the final layer of the network can be $i$ threshold activation functions with 1 or -1 as possible output values.

In that case, the ordinal then becomes

$$o = \sum_{n=0}^{i-1} 2^{n} \, \frac {y_n + 1} {2},$$

where $y_n$ is the output from activation function index $n$.

",4302,,,,,10/21/2018 19:37,,,,1,,,,CC BY-SA 4.0 8566,2,,8560,10/21/2018 22:49,,6,,"

Here are a few guidelines, inspired by the deep learning specialization course, to choose the size of the mini-batch:

  • If you have a small training set, use batch gradient descent (m < 200)

In practice:

  • Batch mode: long iteration times
  • Mini-batch mode: faster learning
  • Stochastic mode: lose speed up from vectorization

The typically mini-batch sizes are 64, 128, 256 or 512.

And, in the end, make sure the minibatch fits in the CPU/GPU.

Have also a look at the paper Practical Recommendations for Gradient-Based Training of Deep Architectures (2012) by Yoshua Bengio.

",14749,,2444,,7/13/2019 13:35,7/13/2019 13:35,,,,2,,,,CC BY-SA 4.0 8567,1,,,10/22/2018 0:26,,1,43,"

Suppose there are sensors which supply numerical metrics. If a metric goes above or below a healthy threshold, an event (alert) is raised. Metrics depend on each other in one way or another (we can learn the dependencies via ML algorithms) so when the system is in alerting state only one or a few metrics will be a root cause and all others will be simply consequences.

We can assume there is enough historical metric data available, to learn dependencies but there are just a few historical malfunctions. Also, when malfunction happens there is no one to tell what was the root cause, the algorithm should learn how to detect root causes by itself.

Which algorithms can be used to detect the root cause events in the situation above? Are there any papers available on the subject?

",19218,,30725,,5/29/2020 13:47,5/29/2020 13:47,Detect root cause across many event occurrences,,0,1,,,,CC BY-SA 4.0 8568,2,,8368,10/22/2018 1:14,,1,,"
  1. Yes, intuition says that RNNs like LSTM or GRU will work better in your case, because predicted values might depend on input patterns corresponding to much earlier time intervals.
  2. There is no reason to create samples shifted by a single measurement because many of the samples will contain pretty much the same information for your model. A viable approach is to shift by sample size. Keeping some overlap between samples is feasible as well.

Keep in mind that as a general rule for processing audio data, it makes sense to convert raw audio data into vectors representing audio spectrum before feeding it into LSTM RNN (see this video for e.g. https://www.coursera.org/learn/nlp-sequence-models/lecture/sjiUm/speech-recognition).

Batch size is different from sample or window size in your case.

",19218,,,,,10/22/2018 1:14,,,,0,,,,CC BY-SA 4.0 8570,1,8574,,10/22/2018 5:15,,3,205,"

Why are we now considering neural networks to be artificial intelligence?

",19220,,2444,,12/9/2021 22:38,12/10/2021 7:05,Why are neural networks considered to be artificial intelligence?,,2,0,,,,CC BY-SA 4.0 8571,2,,8560,10/22/2018 5:39,,6,,"

From the blog A Gentle Introduction to Mini-Batch Gradient Descent and How to Configure Batch Size (2017) by Jason Brownlee.

How to Configure Mini-Batch Gradient Descent

Mini-batch gradient descent is the recommended variant of gradient descent for most applications, especially in deep learning.

Mini-batch sizes, commonly called “batch sizes” for brevity, are often tuned to an aspect of the computational architecture on which the implementation is being executed. Such as a power of two that fits the memory requirements of the GPU or CPU hardware like 32, 64, 128, 256, and so on.

Batch size is a slider on the learning process.

  • Small values give a learning process that converges quickly at the cost of noise in the training process.
  • Large values give a learning process that converges slowly with accurate estimates of the error gradient.

Tip 1: A good default for batch size might be 32.

",18852,,2444,,10/31/2020 10:02,10/31/2020 10:02,,,,1,,,3/23/2021 10:00,CC BY-SA 4.0 8574,2,,8570,10/22/2018 6:10,,7,,"

Why are we now considering neural networks to be artificial intelligence?

"We" aren't. It is generally due to reporting by media sources that simplify science and technology news.

The definition of AI is somewhat fluid, and also contentious at times, but in research and scientific circles it has not changed to the degree that AI=NN.

What has happened is that research into neural networks has produced some real advances in the last decade. These advances have taken research-only issues such as very basic computer vision, and made them good enough to turn into technology products that can be used in the real world on commodity hardware.

These are game-changing technology advances, and they use neural networks internally. Research and development using neural networks is still turning out new and improved ideas, so has become a very popular area to learn.

A lot of research using neural networks is also research into AI. Aspects such as computer vision, natural language processing, control of autonomous agents are generally considered parts of AI. This has been simplified in reporting, and used by hyped-up marketing campaigns to label pretty much any product with a NN in it as "Artificial Intelligence". When often it is more correctly statistics or Data Science. Data Science is another term which has been somewhat abused by media and technology companies - the main difference between use of AI and Data Science is that Data Science was a new term, so did not clash with pre-existing uses of it.

The rest of AI as a subject and area of study has not gone away. Some of it may well use neural networks as part of a toolkit to build or study things. But not all of it, and even with the use of NNs, the AI part is not necessarily the neural network.

",1847,,2444,,12/9/2021 22:38,12/9/2021 22:38,,,,0,,,,CC BY-SA 4.0 8576,2,,3101,10/22/2018 8:30,,0,,"

Thomas Cover and David MacKay proved the capacity of a perceptron. This proof was recently extended to neural networks. All of these results provide upper bounds for the number of parameters needed to learn something.

",19227,,,,,10/22/2018 8:30,,,,0,,,,CC BY-SA 4.0 8578,1,,,10/22/2018 9:31,,4,113,"

The Levenshtein algorithm and some ratio and proportion may handle this use case.

Based on a pre-defined sequence of statements, such as ""I have a dog"", ""I own a car"" and many more, I must determine whether another input statement, such as ""I have a cat"", is the same, or what percentage of a match the input statement most likely has with the pre-defined statements.

For Example:

Predefined statements: ""I have a dog"", ""I own a car"", ""You think you are smart""

Input statements and results:

I have a dog - 100% (because it has an exact match)

I have a cat - ~75% (because it was almost the same except for the animal)

think - ~10% (because it was just a small part of the third statement)

bottle - 0% (because it has no match at all)

The requirement is that TensorFlow be used rather than Java, which is the language I know, so any help with what to look at to get started would be helpful.

My plan was to use the predefined statements as the train_data, and to output only the accuracy during the prediction, but I don't know what model to use. Please, guide me with the architecture and I will try to implement it.

",19229,,2444,,4/25/2019 10:21,4/25/2019 10:21,Which model should I use to determine the similarity between predefined sentences and new sentences?,,1,0,,,,CC BY-SA 4.0 8580,1,8581,,10/22/2018 13:29,,1,52,"

Consider I have a 3 layers neural network.

  • Input Layer containing 784 neurons.
  • Hidden layer containing 100 neurons.
  • Output layer containing 10 neurons.

My objective is to make an OCR and I used MNIST data to train my network.

Suppose I gave the network an input taken from an image, and the values from the output neurons are the next:

  • $0: 0.0001$
  • $1: 0.0001$
  • $2: 0.0001$
  • $3: 0.1015$
  • $4: 0.0001$
  • $5: 0.0002$
  • $6: 0.0001$
  • $7: 0.0009$
  • $8: 0.001$
  • $9: 0.051$

When the network returns this output, my program will tell me that it identified the image as number 3.

Now, by looking at the values, even though the network recognized the image as 3, the output value of number 3 was actually very low: $0.1015$. I am saying very low because usually the highest value of the classified index is close to 1.0, so we get a value like 0.99xxx.

May I assume that the network failed to classify the image, or may I say that the network classified the image as 3, but due to the low value, the network is not certain?

Am I right thinking like this, or did I misunderstand how the output actually works?

",7983,,7983,,10/22/2018 13:34,12/21/2018 19:02,How much extra information can we conclude from a neural network output values?,,1,0,,,,CC BY-SA 4.0 8581,2,,8580,10/22/2018 16:27,,0,,"

From the values you have provided, I can easily guess your output layer has the sigmoid (do clarify!) activation function. For a sigmoid activation function, this can be quite a normal occurrence. Also, maybe the number of training epochs is not high enough.

The 0.99 case you have mentioned generally arises when the output is subjected to a softmax probability function. Although 0.99 is still achievable using sigmoid activation, it will depend on your network hyper-parameters and, in general, the data. If the data is very easily separable, the sigmoid will generally give a very high contrasting difference among classes. Also, if the hidden layers use ReLU, it becomes easier to provide contrasting class scores due to its huge scale difference compared to a sigmoid activation in the hidden layer.

The point to note here is the contrasting difference among classes, because your sigmoid might give 0.99 for the correct class while also giving 0.9 for other classes, which is undesirable.
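
A small numeric illustration (the logits are made up) of the difference: a softmax forces the outputs to compete and sum to 1, while independent sigmoids can give several classes high values at once:

    import numpy as np

    logits = np.array([2.0, 1.8, -1.0])          # made-up pre-activation scores

    sigmoid = 1.0 / (1.0 + np.exp(-logits))      # elementwise, classes do not compete
    softmax = np.exp(logits) / np.exp(logits).sum()

    print(sigmoid)   # roughly [0.88, 0.86, 0.27] - two classes both score high
    print(softmax)   # roughly [0.54, 0.44, 0.03] - outputs sum to 1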

",,user9947,,user9947,10/22/2018 16:46,10/22/2018 16:46,,,,0,,,,CC BY-SA 4.0 8583,2,,8578,10/22/2018 23:36,,2,,"

If this is a simple syntax comparison, neural networks are not the best way to achieve this.

If it's a semantic comparison, then you can take a look at models used on the SNLI dataset, for example.

From your question, it looks like just a syntax comparison.

Consider the 2 sentences :

She likes playing guitar

She likes listening guitar

The 2 sentences have almost the same words, but the meaning is different.

Now consider these 2 sentences :

The bird is taking a bath in the fountain

Birdie wash himself with water in public place

These 2 sentences have almost no words in common, but the meanings are very similar.


So if your use case needs to return a high score for the first example, give up on neural networks (it is possible, but pointless).

If your use case needs to return a high score in the second example, take a look at the SNLI leaderboards; there are plenty of models that can work.
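
For the purely syntactic case, a standard-library baseline is usually enough; the sketch below (using the example sentences above) just shows that surface similarity and meaning diverge:

    from difflib import SequenceMatcher

    def surface_similarity(a, b):
        # Character-level similarity ratio in [0, 1]: purely syntactic, no meaning.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Scores high even though the meanings differ.
    print(surface_similarity('She likes playing guitar', 'She likes listening guitar'))

    # Scores much lower even though the meanings are close.
    print(surface_similarity('The bird is taking a bath in the fountain',
                             'Birdie wash himself with water in public place'))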

",18852,,-1,,6/17/2020 9:57,10/22/2018 23:36,,,,0,,,,CC BY-SA 4.0 8585,2,,5885,10/23/2018 5:50,,0,,"

With the increase of both the unit capacity of turbines and the size of wind farms, the safe operation of wind farms has received growing attention. Among the many factors that affect the safe operation of wind farms, being struck by lightning is an important aspect. An intelligent lightning monitoring system is used for surveillance of wind turbine generators, which can provide real-time and accurate monitoring of the lightning current waveforms, amplitude, time of occurrence, number of lightning strokes and all the other important parameters of lightning, thus providing an effective monitoring and analysis tool to quickly locate the fault location on a wind turbine generator and the cause of the malfunction, and a theoretical basis for the design of the lightning protection system of wind turbines. This work examines the principle and main methods of wind-turbine-generator-matching lightning monitoring and, combined with the specific research project, designs and implements a high-precision and multifunctional intelligent lightning monitoring system based on the theory of Rogowski coils.

",17941,,,,,10/23/2018 5:50,,,,1,,,,CC BY-SA 4.0 8586,1,,,10/23/2018 7:48,,4,2728,"

I am training a deep neural network. There is a constraint on the output value of the neural network (e.g. the output has to be between 0 and 180). I think some possible solutions are using a sigmoid or tanh activation at the output layer.

Are there better ways to put constraints on the output value of a neural network?

",19245,,2444,,12/30/2021 18:00,12/30/2021 18:00,How to constraint the output value of a neural network?,,1,1,,,,CC BY-SA 4.0 8587,2,,8309,10/23/2018 10:54,,1,,"

When we add a single layer with a non-linear activation function, right after the application of the activation function, a new basis function is found (for that neuron); it is some combination of the weights and biases, and it acts as a new way to view or analyze the feature sets.

With an increasingly deeper network, we keep finding representations, which are new basis vectors of the combination of the previous layer's weights and biases, that is, higher-level representations.

If they're error-free, they'll give better performances, but if small errors creep in earlier basis vectors, the error increases through depth.

A nice analogy is the Taylor series, where $1$, $x$, $x^2$, and so on, are the basis vectors for estimating the function in 1D.

",19027,,2444,,3/11/2020 0:36,3/11/2020 0:36,,,,0,,,,CC BY-SA 4.0 8588,1,8657,,10/23/2018 13:28,,3,294,"

I am creating a dataset made of many images, which are created by preprocessing a long time series. Each image is an array of (128, 128), and there are four classes. I would like to build a dataset similar to the MNIST dataset in scikit-learn, but I have no idea how to do it.

My aim is to have something that I can call like this:

(x_train, y_train), (x_test, y_test) = my_data()

Should I save them as image files or as CSV? Which is the best way to implement this?

",19251,,,,,10/28/2018 22:58,Best way to create an image dataset for CNN,,1,0,,,,CC BY-SA 4.0 8591,1,,,10/23/2018 22:22,,3,147,"

I am developing an image search engine. The engine is meant to retrieve wrist watches based on the input of the user. I am using SIFT descriptors to index the elements in the database and applying Euclidean distance to get the most similar watches. I feel like this type of descriptor is not the best since watches have a similar structure and shape. Right now, the average difference between the best and worst matches is not big enough (15%)

I've been thinking of adding colour to the descriptor, but I'd like to hear other suggestions.

",19266,,2444,,4/23/2019 23:08,5/24/2019 0:02,What is a good descriptor for similar objects?,,1,1,,,,CC BY-SA 4.0 8594,1,,,10/24/2018 6:48,,0,1444,"

The intelligence of the human brain is said to be a strong factor leading to human survival. The human brain functions as an overseer for many functions the organism requires. Robots can employ artificial intelligence software, just as humans employ brains.

When it comes to the human brain, we are prone to make mistakes. However, artificial intelligence is sometimes presented to the public as perfect. Is artificial intelligence really perfect? Can AI also make mistakes?

",19275,,2444,,2/8/2020 12:49,2/8/2020 17:39,Can artificial intelligence also make mistakes?,,3,0,,,,CC BY-SA 4.0 8595,1,,,10/24/2018 7:21,,5,470,"

Can artificial intelligence (or machine learning) applications or agents be hacked, given that they are software applications, or are all AI applications secure?

",19220,,2444,,10/11/2019 22:20,10/11/2019 22:20,Can artificial intelligence applications be hacked?,,2,3,,3/14/2020 13:59,,CC BY-SA 4.0 8596,1,,,10/24/2018 8:45,,1,38,"

Assume that we have a labeled dataset with inputs and outputs, where the output range is $\left[0, 2\right]$, but the majority of outputs is in $\left[0, 1\right]$. Should one adopt some kind of over- or undersampling approach after compartmentalising the output space to make the dataset more balanced? That would usually be done in classification, but does it apply to regression problems, too? Thanks in advance!

",16901,,16901,,10/24/2018 9:41,10/24/2018 9:41,Unbalanced dataset in regression rather than classification,,0,0,,,,CC BY-SA 4.0 8597,1,,,10/24/2018 10:32,,2,365,"

I am writing an app where, when a ball is shot from a cannon, it is supposed to land in a hole that is at a given distance. The ball is supposed to land between the beginning of the hole and the end of the hole. The size of the hole is 4m and the size of the ball is 0.4m. My problem is that I am not sure how to write the fitness function for this. The place where the ball falls should be close to the interval [D, D+3.6], where D is the distance of the hole. If anyone could give me a hint on how to approach this problem, I would be grateful.

",19283,,30725,,5/29/2020 13:48,5/29/2020 13:48,Fitness function in genetic algorithm based on an interval,,1,2,,,,CC BY-SA 4.0 8598,2,,8597,10/24/2018 11:49,,3,,"

Genetic algorithms work best when given a scalar fitness value that increases smoothly, so that you can compare two population members regardless of whether they failed or succeeded at the task.

That usually requires you to analyse the problem, and come up with a measure that would improve as an individual gets closer to solving a task. It generally helps if you score better for an individual if they solve a problem ""better"". You want to avoid simple boolean success/fail metrics.

A simple measure of how well an individual has done is to use the absolute distance it was away from a ""perfect"" shot (where the middle of the ball hits the middle of the hole). The only issue with this is that a perfect shot scores 0, whilst a miss scores 2+, and you want the best result to have the highest fitness. This can be fixed simply, take the negative of the absolute distance:

$$F = -|D_{hole} - D_{ball}|$$

where $D_{hole}$ and $D_{ball}$ are the horizontal distances from the origin to the centre of each object.

There is no requirement for fitness score to be positive. This will score $F \lt -2$ for a miss, $-2 \lt F \lt 0$ for hit, and $0$ for a perfect shot.

For GAs you don't have to care as much about differentiability or gradient of the fitness function, provided it gives reasonably good ranks between quality of individuals. So there is no point using e.g. a squared error metric here, although you could if you wished.
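
As a one-line version of the fitness above (assuming both distances are measured to the centres of the hole and the ball):

    def fitness(d_hole, d_ball):
        # Negative absolute miss distance: 0 is a perfect shot, more negative is worse.
        return -abs(d_hole - d_ball)

    print(fitness(10.0, 10.0))  # 0.0, a perfect shot
    print(fitness(10.0, 12.5))  # -2.5, a miss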

",1847,,1847,,10/24/2018 15:07,10/24/2018 15:07,,,,0,,,,CC BY-SA 4.0 8599,2,,8595,10/24/2018 14:41,,1,,"

Everything can be hacked. The solutions found by artificial intelligence can be much more efficient than human solutions, but they can also be confused because of the diversity and immensity of details that our mind possesses.

Artificial Intelligence models bring us more secure solutions, but nothing is 100% safe when we talk about information security. There are ways to improve security, hinder invasions and attacks, but every system has flaws.

Perhaps, in the future (this is just my imagination), we will have an artificial superintelligence beyond human intelligence, which may present one of the greatest intrusion challenges in history, but until then... it is just my imagination.

",7800,,,,,10/24/2018 14:41,,,,0,,,,CC BY-SA 4.0 8600,2,,8595,10/24/2018 14:42,,5,,"

To answer your question, it really depends on the purpose of the Artificial Intelligence program.

For example, 4Chan has hacked a number of ""Artificial Intelligent"" bots, most notably was Microsoft's Twitter bot Tay. The general purpose of the bot was to parse what was tweeted at it and respond in kind, learning and evolving with each and every interaction.

Within 24 hours, 4Chan had corrupted Tay beyond repair, by teaching it racist and sexist terminology, ironic memes, sending it to shitpost tweets, and otherwise attempting to alter its output so much so that Microsoft had to remove it.

Now, the flaw with Tay was that it accepted any input, and learned off of that exclusively, without any interaction from the developers. Other bots have similar features, but they have checks in place that require human intervention to determine what is ""quality"" information to learn, and what is ""bad"" information to learn as to not pollute the global knowledge base of the bot.

These are just two examples of how Artificial Intelligence can be ""hacked"", but it ultimately comes down to how the programs are implemented.

You mention in one of your comments about Cellphone Artificial Intelligence such as Siri, and whether this technology can be hacked. The answer is - not really.

Siri learns based off of her global interactions - with limited user input allowed. You can ask Siri how to pronounce a name. When she pronounces it incorrectly, you can say to her ""Siri that's now how you pronounce that"". And she will provide you with a limited set of options of how you pronounce that name, and you have to choose which option sounds the best.

There was no option to allow a user to give Siri ""bad"" information, as she already populates the results for you, and you have to teach her based off of her list of options. To give Siri bad input, you would have to have access to Siri's global learning base, which we do not have access to, and alter how she accepts human interactions within the program - which would never happen due to too many moving parts within the iPhone update process, and would be caught before you were able to deploy your update.

",19290,,,,,10/24/2018 14:42,,,,1,,,,CC BY-SA 4.0 8604,1,,,10/24/2018 17:37,,1,100,"

I understand the concept of convolution. Let's say that my input dimension is 3 x 10 x 10

And if I say that I will have 20 activation maps and a filter size of 5, I will end up with 20 different filters for my layer, each with the dimension of (3 x 5 x 5)

My output will therefore be (20 x ? x ?). I put a ""?"" there, because it obviously depends on the filter stride etc.


Now I wanted to implement deconvolution but I am stuck at the following point:

For the following questions, let's assume that the input size for the deconvolution is (5 x 8 x 8),

  1. If we think about a filter in 3 dimensions, can I choose any depth for the filter?
  2. How would the effect of the amount of filters (amount of activation maps) work with deconvolution? Do I only have one filter?
  3. How does the input depth (5) come into play? Would the output depth be equal to (filter depth) * (input depth)?

I am trying to find the symmetry to forward convolution but I do not understand how to use the amount of activation maps in deconvolution.

I am very thankful for any help.

",16353,,,,,10/24/2018 17:37,neural network deconvolution filters,,0,1,,,,CC BY-SA 4.0 8605,1,8636,,10/24/2018 18:29,,4,1692,"

I am applying a Double DQN algorithm to a highly stochastic environment where some of the actions in the agent's action space have very similar "true" Q-values (i.e. the expected future reward from either of these actions in the current state is very close). The "true" Q-values I know from an analytical solution to the problem.

I have full control over the MDP, including the reward function, which in my case is sparse (0 until the terminal episode). The rewards are the same for identical transitions. However, the rewards vary for any given state and action taken therein. Moreover, the environment is only stochastic for a part of the actions in the action space, i.e. the action chosen by the agent influences the stochasticity of the rewards.

How can I still ensure that the algorithm gets these values (and their relative ranking) right?

Currently, what happens is that the loss function on the Q-estimator decreases rapidly in the beginning, but then starts evening out. The Q-values also first converge quickly, but then start fluctuating around.

I've tried increasing the batch size, which I feel has helped a bit. What did not really help, however, was decreasing the learning rate parameter in the loss function optimizer.

Which other steps might be helpful in this situation?

So, the algorithm usually does find only a slightly suboptimal solution to the MDP.

",18109,,2444,,1/2/2022 9:46,1/2/2022 9:46,"How can I ensure convergence of DDQN, if the true Q-values for different actions in the same state are very close?",,1,0,0,,,CC BY-SA 4.0 8606,2,,6878,10/24/2018 18:43,,2,,"

It would be hard to tell if you don't provide what kind of data/problem you are working on, but LDA works well when the data are grouped in Gaussian blobs surrounding centroids, vanilla SVM works well when the data is almost linearly separable, and Naive Bayes works well when your features are relatively independent of each other.

",19270,,,,,10/24/2018 18:43,,,,0,,,,CC BY-SA 4.0 8607,1,,,10/24/2018 20:05,,4,7205,"

I am training a multilayer neural network with 146 samples (97 for the training set, 20 for the validation set, and 29 for the testing set). I am using:

  • automatic differentiation,
  • SGD method,
  • fixed learning rate + momentum term,
  • logistic function,
  • quadratic cost function,
  • L1 and L2 regularization technique,
  • adding some artificial noise 3%.

When I used the L1 or L2 regularization technique, my problem (the overfitting problem) got worse.

I tried different values for lambda (the penalty parameter: 0.0001, 0.001, 0.01, 0.1, 1.0 and 5.0). Above 0.1, it just killed my ANN. The best result that I got was using 0.001 (but it is worse than the case where I didn't use the regularization technique).

The graph represents the error functions for different penalty parameters and also a case without using L1.

and the accuracy

What could be the cause?

",19268,,2444,,1/3/2022 9:10,1/3/2022 9:10,Why did the L1/L2 regularization technique not improve my accuracy?,,2,0,,,,CC BY-SA 4.0 8609,1,,,10/24/2018 22:46,,0,224,"

I am very curious about how we see (with our eyes, of course) and detect objects and their locations so quickly. Is the reason that we have a gigantic network in our brains and have been training it since birth, and are still training it? Basically, I am asking: are we trained on more data and with a huge network? Is that the reason? Or is there some pattern to how we see and detect objects? Please help me out; maybe my thinking is in the wrong direction. What I want to achieve is an AI that detects objects in a picture the way humans do. Thanks.

",18459,,,,,11/7/2018 16:21,Can we make Object Detection as human eyes+brain do?,,2,1,,,,CC BY-SA 4.0 8610,1,,,10/24/2018 23:39,,1,28,"

I am preparing a bus movement dataset for deep learning (ANN/CNN/RNN) analysis for congestion event detection. This is an extension to my original question, which can be located at 'Deep learning model training and processing requirement for Traffic data' for the general approach on this topic; this question is about preparing the dataset, on which I need your kind advice. In simple words, I would like to know the state of congestion for a bus route at a specific point in time (year).

Here are my entities:

  • Routes
  • Bus_Scheduled_Routes
  • Bus_Route_Stops
  • Bus_Trips (operational_date, Vehicle_id, Trip_id, Vehicle_Position_Update, Trip_stop_id, passenger_loaded, velocity, direction, scheduled_arrival_time, actual_arrival_time)
  • Events (human and non-human induced)
  • Points of Interests (POIs)

If I have data based on these entities and I create a time-reference-based view comprising week (52), day (7), Vehicle_id, Trip_id, Stop/Position_update_interval, speed, acceleration, velocity, scheduled_arrival_time, actual_arrival_time, would this view be recommended to start training the model?

Secondly, how can I integrate the human / non-human induced events and Points of Interest (POIs) data into this view so my model can predict better results? To generalize, the model data will be 'time segment / trip time (seasons), location component (Bus Routes and Stops), arrival time / trip completion time'. I am thinking of adding an attribute for human/non-human induced events as a type tied to the 'time segment', and adding the POIs as a type and vicinity to the stop points. What are your recommendations about it? Thanks in advance for your help.

",18959,,,,,10/24/2018 23:39,Data / model preparation for spatio-temporal deep-learning analysis for traffic congestion events detection,