Question: <p>What are some incapabilities that the earlier spectroscopy (Visible, Ultraviolet and Infra-red) have when compared to the more modern method?</p> Answer: <p>Some of the things that come to mind are:</p> <ul> <li><strong>Resolution</strong> Modern equipment has much higher resolution than older equipment. Optical component surface smoothness and uniformity, optical component alignment and electrical component stability all permit the user to see more detail and fine structure in the spectrum, which assists in analysis</li> <li><strong>More stable light sources</strong> Again resulting in more detailed and more reproducible spectra</li> <li><strong>Improved monochromaticity of the light sources</strong> Further improving spectral resolution</li> <li><strong>Improved sample cells</strong> Polished glass surfaces provide for better signal to noise making it easier to discern the signal</li> <li><strong>Signal processing software</strong> Such as Fourier Transform methods which increase the signal to noise ratio making it easier to discern the true peaks in a spectrum</li> <li><strong>Other computer and software enhancements</strong> That can be applied to sample handling and spectrum processing. They dramatically reduce the time it takes to record and process a spectrum</li> </ul>
https://chemistry.stackexchange.com/questions/11039/what-are-some-incapabilities-that-the-early-spectroscopy-have-when-compared-to-t
Question: <p>I'm trying to find a reason for an experimental observation: I noticed that when the absorbance of $\ce{Ca}$ is measured with FAS (flame atomic spectroscopy), it decreases when metals such as $\ce{Al}$ and $\ce{K}$ are present (as ions). I used a $\ce{Ca}$ hollow cathode lamp.</p> <p>Any idea as to why?</p> Answer: <p>Whatever metal you add will interfere with the absorbance of Ca, because the added metal absorbs some of the light too; that is why the Ca absorbance decreases, and why the analyte is almost always made up in water only, which does not absorb the light. (Of course, you can also add a reagent to form a complex, which helps inorganic species absorb the light.)</p>
https://chemistry.stackexchange.com/questions/16327/why-does-the-absorbance-of-ca-decrease-in-the-presence-of-certain-metals
Question: <p>Typically the antisymmetric stretch in IR spectroscopy is higher than the symmetric stretch for a given functional group. For example, for $\ce{NO2}$ the antisymmetric stretch falls at $\sim 1530\ \mathrm{cm^{-1}}$ compared to $1350\ \mathrm{cm^{-1}}$ for the symmetric stretch. Likewise $\ce{NH2}$ has a similar pattern ($3400\ \mathrm{cm^{-1}}$ antisymmetric and $3300\ \mathrm{cm^{-1}}$ symmetric).</p> <p>However, in some cases this pattern is inverted, e.g. $\ce{[HCO]+}$ has a symmetric stretch at $3896\ \mathrm{cm^{-1}}$ and an antisymmetric stretch at $2814\ \mathrm{cm^{-1}}$. How can this be explained?</p> Answer:
https://chemistry.stackexchange.com/questions/19562/assign-symmetric-and-antisymmetric-stetches-in-ir-spectroscopy
Question: <p>I ordered a DIY spectroscopy kit from public-lab, however don't have access to a halogen lamp to calibrate the wavelengths.</p> <p>I'm wondering if it would somehow be possible to calibrate the wavelength using the iPhone camera light?</p> Answer: <p>According to the <a href="http://publiclab.org/wiki/spectral-workbench-calibration" rel="nofollow">site</a>, wavelength calibration is meant to be done with a compact fluorescent lamp (for the mercury lines), but at the bottom there's a tool for calibrating using any two known wavelengths. If the LED on the phone has two distinct lines that you can calibrate against, it may work, but something like a pair of monochromatic LEDs or laser pointers (say one blue and one red) or something will probably work better. </p>
https://chemistry.stackexchange.com/questions/24969/wavelength-of-iphone-4s-camera-light-for-visible-spectroscopy-calibration
Question: <p>Can anyone tell me the difference between absorption spectroscopy and extinction spectroscopy in terms of the experiment? And how does one measure an extinction spectrum versus an absorption spectrum? Thank you so much.</p> Answer: <p>There should not be any difference in the experiment, because the extinction is something you calculate from the measured transmittance/absorbance at a given frequency.</p> <p>The transmittance $T$ of a material is the ratio of the transmitted intensity at a certain frequency to the incident intensity at the same frequency. $$T=I/I_0$$ The absorbance $A$ is defined as the decadic logarithm of the inverse transmittance. $$A=\log_{10}\left(I_0/I\right)=\log_{10}\left(1/T\right)$$</p> <p>By using the Beer–Lambert law in its decadic form $$I=I_0\cdot 10^{-\varepsilon~c~d}$$ one can calculate the (molar decadic) extinction coefficient $\varepsilon$ from the measured transmittance/absorbance via $$\varepsilon=\frac{A}{c~d}$$ where $c$ is the concentration of the absorbing material and $d$ is the distance that the light travels through that material.</p>
https://chemistry.stackexchange.com/questions/33187/difference-between-absorption-spectroscopy-and-extinction-spectroscopy
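<p>As a quick illustration of the relations in the answer above, here is a minimal Python sketch; the intensities, concentration and path length are made-up values, not data from the question:</p>
<pre><code>import numpy as np

# Hypothetical measured intensities at one wavelength (arbitrary units)
I0 = 1000.0   # incident intensity
I = 250.0     # transmitted intensity

T = I / I0                 # transmittance
A = np.log10(1.0 / T)      # decadic absorbance, A = log10(I0/I)

# Molar (decadic) extinction coefficient from the Beer-Lambert law, A = eps * c * d
c = 1e-4      # concentration in mol/L (assumed)
d = 1.0       # path length in cm (assumed)
eps = A / (c * d)

print(f"T = {T:.3f}, A = {A:.3f}, eps = {eps:.1f} L mol^-1 cm^-1")
</code></pre>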
Question: <p>I have done a geometry optimization of <em>cis</em> and <em>trans</em>-difluoroethene using RHF/STO-3G in Gaussian03. Although this is a simple method and a small basis set, this should ideally put place the molecule in a minimum on its potential energy curve. Within this minimum are vibrational states. I am wondering if it is possible to see, from the calculated IR and Raman spectra whether the optimisation was successful, i.e. whether the molecule was indeed placed in a minimum.</p> <p>I am thinking like this: If the molecule is in a minimum, then all vibrational IR and Raman active vibrational states should be visible (given a high enough activity, of course) in the theoretical spectra. However, if the molecule is not in the minimum, then only those vibrational states of <em>higher energy</em> should be visible. Hence, fewer IR/Raman active vibrational modes would be observed if the geometry optimization was not complete.</p> <p>According to <a href="https://chemistry.stackexchange.com/questions/14413/how-are-the-frequencies-at-a-local-maximum-of-pes-like">this</a> question, all frequencies at a local minimum would be positive, and all negative on a maximum. I assume then that somewhere in-between, you would have both negative and positive frequencies. However, given <em>not</em> the table of IR/Raman active modes and their frequencies, but the actual spectra instead, would I then be able to decide if the molecule is in a minimum or not?</p> Answer:
https://chemistry.stackexchange.com/questions/40481/observe-from-ir-raman-spectra-whether-molecule-is-in-a-local-minimum-on-pes
Question: <p>A typical data processing step for acquired absorption/transmission data is normalizing, i.e. stretching the curve such that it is bounded between 0 and 100%.</p> <p>However, the absorption/transmission curve changes with the density and thickness of the sample. Moreover, absorption is a nonlinear phenomenon (for example the <a href="https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law" rel="nofollow">Beer–Lambert law</a>), so I would expect that the <strong>relative</strong> height of the various peaks would not remain constant across several samples of the same material.</p> <p>Why, therefore, is it justified to normalize acquired spectroscopic data? Why not present data with original abs./trans. values, while providing details about the sample itself (for example thickness)?</p> <p>In addition, how can spectroscopy be quantitative if the relative height of the peaks changes with illumination/thickness/density/etc.?</p> Answer: <p>There is confusion here about absorption at one wavelength and the absorption spectrum. At <em>each</em> wavelength the Beer–Lambert law applies, $$I_{trans_{\lambda}}=I_{0_{\lambda}}\,10^{-\epsilon_{\lambda} c l}$$ for a compound with extinction coefficient $\epsilon$ at wavelength $\lambda$, concentration <em>c</em> and path length <em>l</em>. </p> <p>When an absorption spectrum is produced it is normal to show it as optical density vs wavelength, i.e. $OD_{\lambda}=\log(I_0/I)_{\lambda}=\epsilon_{\lambda} c l $, and in this case the <em>ratio</em> of peaks for the same compound is constant no matter what the concentration. (Naturally assuming that it does not absorb so much that the instrument fails.)</p>
https://chemistry.stackexchange.com/questions/42441/why-absorption-transmission-spectroscopic-data-is-normalized
Question: <p>I need to calculate the spectral overlap integral for the emission spectrum of coumarin 334 and the absorption spectrum for rhodamine, using spreadsheets (MS Excel).</p> <p>Following the theory of Fluorescence Resonance Energy Transfer (Förster), I have the definition of the spectral overlap integral:</p> <p><span class="math-container">$$ J(\lambda) = \frac{\int_{0}^{\infty}f_\ce{D}(\lambda) \epsilon_\ce{A}(\lambda) \lambda^4\,\mathrm{d}\lambda}{\int_{0}^{\infty}f_\ce{D}(\lambda) \,\mathrm{d}\lambda} $$</span></p> <p>I also have the data for emission spectrum of the donor (coumarin 334) and the absorption spectrum of the acceptor (rhodamine). However, I am not sure how to proceed and actually extract the spectral overlap. Since the spectral overlap is a function of wavelength, will I end up with a "spectral overlap spectrum"? And how do I get the absorption coefficient of rhodamine? I have been given the maximum absorption coefficient (at wavelength of maximum absorption), but how do I get the molar absorption spectrum, <span class="math-container">$\epsilon_\ce{A}(\lambda)$</span>?</p> <p>Should I calculate values at a "per-wavelength" basis, or should I fit functions and integrate those?</p> Answer:
https://chemistry.stackexchange.com/questions/44376/how-to-calculate-spectral-overlap-integral-using-spreadsheets
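<p>No answer text is recorded above; as one possible (unofficial) route, the overlap integral can be evaluated numerically on a per-wavelength grid rather than by fitting functions, which also shows that $J$ is a single number rather than a "spectrum". The spectra below are synthetic placeholders, and scaling the acceptor absorption shape to the quoted peak absorption coefficient is an assumption:</p>
<pre><code>import numpy as np

# Assumed inputs: wavelength grid (nm), donor emission f_D (arbitrary units),
# acceptor absorption shape (arbitrary units) and its known peak epsilon.
lam = np.linspace(400, 700, 301)                 # nm
f_D = np.exp(-((lam - 480) / 25) ** 2)           # placeholder donor emission
abs_A = np.exp(-((lam - 520) / 30) ** 2)         # placeholder acceptor absorption shape
eps_max = 1.0e5                                  # given peak molar absorption coefficient (M^-1 cm^-1)

# Scale the acceptor shape so its maximum equals the quoted eps_max
eps_A = eps_max * abs_A / abs_A.max()

# J = int f_D(l) * eps_A(l) * l^4 dl / int f_D(l) dl  (a single number)
dlam = lam[1] - lam[0]
numerator = np.sum(f_D * eps_A * lam ** 4) * dlam
denominator = np.sum(f_D) * dlam
J = numerator / denominator                      # units here: M^-1 cm^-1 nm^4

print(f"J = {J:.3e} M^-1 cm^-1 nm^4")
</code></pre>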
Question: <p>I am to show that the number of absorbed photons only is proportional to the optical density at <em>low optical densities.</em></p> <p>I do not know how to do this, but this reminds me of the linear range of absorbance vs concentration; at high concentrations, the relationship is no longer linear. I also feel that the "number of absorbed photons" is another way of saying "absorbance", and that "optical density" is another way of saying "concentration times optical path length", but I am not sure. I cannot find references that clarify this for me, so I would appreciate some guidance.</p> Answer: <p>Yes, your statement that "the number of absorbed photons only is proportional to the optical density at low optical densities" is correct. (I'll point out that optical density is a deprecated term for what is now referred to as absorbance.) </p> <p>First let's define some terms.</p> <blockquote> <p>T: Transmittance, i.e. the fraction of photons not absorbed. So a transmittance of 0.90 would mean that 90% of the photons pass through the sample.</p> <p>A: Absorbance of the sample, which is given by the <a href="https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law" rel="nofollow">Beer–Lambert law</a>:<br /> $A = -\log_{10}(T)$ or: $T = 10^{-A}$</p> </blockquote> <p>Now a bit of math swizzling...</p> <p>For exponentials of $e$ there is a nice series expansion. $$ e^x = 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \dfrac{x^4}{4!} + ... $$ and replacing $x$ with $-x$: $$ e^{-x} = 1 - x + \dfrac{x^2}{2!} - \dfrac{x^3}{3!} + \dfrac{x^4}{4!} - ... $$ Now if $|x| \ll 1$ then the higher order terms can be ignored and the function becomes linear: $$ e^{-x} \approx 1 - x $$</p> <p>Now for a bit more math voodoo... Let's let $$10^x = e^y$$ so: $$y = x\ln(10) \approx 2.303x$$ </p> <p>So now substituting $e^{-2.303A}$ for $10^{-A}$ we get $$T = e^{-2.303A}$$ and the fraction of photons absorbed is $1 - T = 1 - e^{-2.303A} \approx 2.303A$, which is proportional to $A$ only when $2.303A \ll 1$. </p> <p>Now for the rest of the story let's define some more terms:</p> <blockquote> <p>$\epsilon$ is the molar attenuation coefficient of the attenuating species in the sample;</p> <p>$c$ is the molar concentration of the attenuating species in the sample;</p> <p>$l$ is the path length of the beam of light through the material sample.</p> </blockquote> <p>Now also according to the Beer–Lambert law:<br /> $A = \epsilon c l$, so $A$ is proportional to $c$.</p>
https://chemistry.stackexchange.com/questions/44666/proportionality-between-number-of-absorbed-photons-and-optical-density
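<p>A quick numerical check of the linearisation argument above; the absorbance values are arbitrary:</p>
<pre><code>import numpy as np

# Fraction of photons absorbed is 1 - T = 1 - 10**(-A).
# For small A this should approach the linear form 2.303 * A.
for A in [0.005, 0.05, 0.5, 1.0]:
    absorbed = 1 - 10 ** (-A)
    linear = np.log(10) * A
    print(f"A = {A:5.3f}   absorbed = {absorbed:.4f}   2.303*A = {linear:.4f}")
</code></pre>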
Question: <p>Why is it that lighter atoms are more likely to produce Auger emission and the heavier atoms fluorescence in x-ray spectroscopy? It doesn't make any sense. Lighter atoms should try to resist double ionization, plus there are fewer electrons around which means there is a lesser chance of a photon hitting an electron.</p> Answer:
https://chemistry.stackexchange.com/questions/46529/auger-emission-probability
Question: <p>Through EPR measurements it is possible to gain information about the spin states of molecules. So a singlet state will give no signal/only noise and a molecule with unpaired electrons will result in an EPR spectrum.</p> <p>What happens if a measured sample would contain a mixture of singlet and triplet state molecules (of the same species)? Will there be a signal for the triplet state or is there any interaction between both that hinders the experimentalist from measuring the triplet states?</p> Answer:
https://chemistry.stackexchange.com/questions/46572/epr-spectrum-of-a-singlet-triplet-mixture
Question: <p>I know you can use GC-MS for the analysis of gaseous molecules, but would you be able to instead use an infrared spectroscopy method to analyse gaseous molecules, or does it need to be mass spectroscopy?</p> Answer: <p>GC-MS is just one common method of what is known as '<a href="https://en.wikipedia.org/wiki/Instrumental_chemistry#Hybrid_techniques" rel="nofollow">hyphenated</a>' analysis. GC-IR instruments do exist (<a href="https://www.bruker.com/products/infrared-near-infrared-and-raman-spectroscopy/ft-ir/ft-ir-accessories/gc-ftir/overview.html" rel="nofollow">see</a> <a href="http://www.thermoscientific.com/content/tfs/en/product/gc-ir-interface-nicolet-ft-ir-spectrometers.html" rel="nofollow">examples</a>), although usually involve some pretty specialised cells to allow for gas flow through the sensor. Hyphenated instruments are limited pretty well by your imagination (and budget) only. LC-NMR-MS is a great example of extended hyphenation, and is a powerful emerging tool for automated analysis of complex sample mixtures.</p>
https://chemistry.stackexchange.com/questions/48022/changing-the-spectroscopy-type-to-infrared-in-gc-ms
Question: <p>My lecturer said 462 nm is in the infrared region. I think the wavelength is visible light. Which region is the correct one? I need confirmation about this one.</p> Answer: <p><a href="https://i.sstatic.net/ibV3r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ibV3r.png" alt="enter image description here"></a></p> <p>Light with a wavelength of 462 nm is in the visible region (blue light, visible to the human eye). </p> <p>Your lecturer was certainly wrong that the light is in the infrared region; the infrared begins at much longer wavelengths, beyond roughly 700 nm. More likely he meant to say ultraviolet, which lies below about 400 nm (shorter wavelengths), though even that is not correct for 462 nm. </p> <p>The <a href="https://en.wikipedia.org/wiki/Electromagnetic_spectrum" rel="nofollow noreferrer">Wikipedia article</a> gives a fairly good overview of the various regions, along with a schematic showing where they fall:</p> <p><a href="https://i.sstatic.net/QSrYK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSrYK.png" alt="enter image description here"></a></p>
https://chemistry.stackexchange.com/questions/58403/what-is-the-region-of-this-wavelength-462-nm
Question: <p>I understand the basic concepts of the photoelectron spectrum. However, say I am instructed to draw the PES graph for Silicon. I know how to figure out the relative peaks based on its configuration. But when arranging them on the graph, how can I accurately place them where they should be in ionization energy? That is, the x-axis is labeled 300.0, 200.0, 100.0, etc. Without any other information (and the usage of a periodic table only) how do I know where to put the peaks among the increments? Is there a way to figure out the ionization energies for each peak? </p> <p>If this is NOT possible, please explain why. </p> Answer:
https://chemistry.stackexchange.com/questions/60423/how-to-draw-a-photoelectron-spectrum-with-ionization-energies
Question: <p>I am trying to get some bearing on spectroscopy basic concepts using self-study.</p> <p>In the absorption spectrum, if I shine light on a particular liquid of a single type of molecule it absorbs a specific frequency in the visible range. Now where I am struggling is once an electron absorbs a photon of a particular frequency and goes into a higher energy state, it won't stay there (as it is in an unstable state) and will come back to its ground state. Now if this is the case, the photon that it absorbed will get re-emitted again and kill the absorption premise.</p> <p>What I can think of as a possible explanation is that the re-emitted photon can go into random directions thus reducing its contribution to the reflected beam from the liquid surface.</p> Answer: <p>There is no remission of a photon after absorption in the majority of cases, especially in solutions and solids. The idea that the absorption of light is observed because the re-emitted photon travels in a random direction and does not reach the detector is incorrect</p> <p>First, recall the fundamental property of light waves. A light beam travels in straight line, so there is nothing unusual about measuring the absorption of light by molecules in gas or liquid phases at 180 degrees—that is, the light beam enters one end of a container, and its attenuated intensity is measured at the other end in a straight path. There is a small trick in the instruments. They collimate the light beam, i.e., it does not diverge like your mobile phone flash light. Your question is worth pondering: if energy must be conserved, where does the light energy go after absorption? It appears as heat!</p> <p>Think of it in terms of kinetics. Molecules prefer to return to the ground state as quickly as possible, and nature dictates that it will choose the energy loss path that takes the least time (this principle is quite universal in nature). Molecules typically lose excess energy through vibrational relaxation. Examples include the absorption of green light by purple permanganate solutions and the absorption of red light by copper sulfate solutions. If you irradiate these solutions for a long time, you will notice that their solution temperature will increase.</p> <p>In some rigid molecules, the loss of energy via vibration is not feasible. Instead, they may lose the absorbed energy in the form of light. This is where light emission occurs in all directions, a phenomenon called isotropic emission (fluorescence and phosphorescence). However, this is not universal, and exceptions exist. You see this phenomenon daily, for example, with yellow highlighter inks on paper. The key point is that the emitted light energy is lower than the incident energy.</p> <p>A third case occurs when there is <em>no</em> absorption of light, but the light ray changes its path. This is called <em>scattering</em>. Add a drop of milk to water and pass a flashlight beam through it. Observe at right angles. Since light travels in straight lines, why do you see light at right angles to the incident beam? Think about it. In short, there are many interesting optical phenomena to explore, many of which can be easily studied at home.</p> <p>Coming to the original question. Is there a possibility that the absorbed photon is re-emitted with the same energy? Yes, in extremely dilute gas phase atoms. The phenomenon you are looking for is called <em>atomic fluorescence</em>. It has never been observed in solution or molecules, as far as I know. 
Keep in mind in each absorption, emission, scattering event, energy is conserved.</p>
https://chemistry.stackexchange.com/questions/187283/absorption-spectrum-and-re-emission-of-photon
Question: <p>For example, why does potassium chloride release a purple colored flame? Does it have to do with the energy level of the valence electrons in the metal cation that is being burned?</p> Answer: <p>You are right, it depends on the energy level of the valence electrons, but more precisely it depends on the difference between the highest occupied and lowest unoccupied levels' energies. If the energy difference is $\Delta E$ then the light emitted by the excited atom has a frequency $\nu = \frac{\Delta E}{h}$ and so a wavelength $\lambda = \frac{c}{\nu}=\frac{h c}{\Delta E}$</p>
https://chemistry.stackexchange.com/questions/62450/why-do-different-metal-ions-release-different-colors-of-light
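<p>A small sketch of the conversion given in the answer above, using an assumed energy gap rather than real potassium data:</p>
<pre><code># Wavelength emitted for a given energy gap, lambda = h*c / delta_E
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # J per electronvolt

delta_E = 3.06 * eV              # assumed gap of about 3.06 eV (violet emission)
lam_nm = h * c / delta_E * 1e9   # convert metres to nanometres
print(f"Emitted wavelength: {lam_nm:.0f} nm")
</code></pre>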
Question: <p>I want to ask if the rotational-vibrational spectrum of a molecule is related to any one vibrational mode or if it is related to the whole molecule?</p> <p>Additionally, the vibrational levels that we talk about are said to be composed of lower-lying rotational energy levels. Are these the vibrational energy levels of certain vibrational normal modes or of the whole molecule?</p> Answer: <p>I'm not quite sure what you mean by 'vibrational levels composed of lower rotational energy levels' so I will try to explain the whole thing, briefly.</p> <p>In analysing the electronic, vibrational &amp; rotational spectra of molecules the Born-Oppenheimer approximation has to be used. This is applicable because the vibrational period (a few femtoseconds) is far shorter than the rotational period (a few picoseconds), thus, to a good approximation, these motions do not interact appreciably with one another. This allows the energy levels of the rotational and vibrational motions to be added together.</p> <p>A normal mode vibration is the motion of all atoms in the molecule in a fixed phase relationship with one another. For <em>N</em> atoms there are <span class="math-container">$3N-6$</span> normal modes (<span class="math-container">$3N-5$</span> for a linear molecule). How far and in what direction each atom moves has to be determined by calculation, but the overall symmetry species (<span class="math-container">$A_1, B_g, E_u$</span> etc) of the total motion of all atoms is given by the Point Group of the molecule, <span class="math-container">$C_{3v},D_{2h}, O_h$</span> etc. The symmetry species determines the relative phase of each atom's motion with respect to all the others. By 'phase' is meant that as some bonds contract others extend, but they always do this together.</p> <p>As energy levels add, each vibrational normal mode has its own stack of rotational levels. (Because the B.O. approximation is not exact, a more detailed analysis is needed to add the interaction between vibration and rotation.)</p> <p>The figure shows a typical (calculated) spectrum of a perpendicular band of a symmetric top molecule such as <span class="math-container">$\ce{PCl3 , CHCl3} \mathrm { ~or~ } \ce{CH3Cl}$</span>. The total spectrum is shown on the bottom line and its separation into individual PQR rotational bands above this. There will be similar complicated features for each vibrational normal mode. Clearly the analysis of spectra such as this is non-trivial :)</p> <p>(In symmetric top molecules two of the three moments of inertia are the same but different from the third. Two quantum numbers are needed, <em>J</em> for the total angular momentum and <em>K</em> to fix the angular momentum about the symmetry axis, and this is why there are so many rotational bands for a single vibration.)</p> <p><a href="https://i.sstatic.net/TTTtc.png" rel="noreferrer"><img src="https://i.sstatic.net/TTTtc.png" alt="vib-rot-sym-top" /></a></p> <p>(Spectrum is taken from G. Herzberg, 'Infra-red and Raman Spectra of Polyatomic Molecules')</p>
https://chemistry.stackexchange.com/questions/65016/rotational-vibrational-spectroscopy-of-a-molecule
Question: <p>I (partially) understand how a breakdown voltage is applied and the gas is then ionized to cations and electrons. The electrons will accelerate towards the cathode and knock off more electrons from other H2 molecules, creating a current (which is the flow of electrons). The electrons might excite the gas, and the gas emits light as it returns to its normal state. If that is the case, shouldn't the spectral lines observed in a hydrogen discharge tube be those of hydrogen molecules (the gas)? And how does the gas atomize to give the atomic hydrogen spectral lines?</p> Answer:
https://chemistry.stackexchange.com/questions/69326/how-are-hydrogen-atoms-formed-in-hydrogen-discharge-tube
Question: <p>I did an experiment where I analyzed the amount of caffeine and benzoate in soda via spectroscopy. I've got the absorbances and did all the math for everything, but I'm having a hard time figuring out why we had to add an acid to protonate the benzoate to form benzoic acid before running the spec.</p> <p>I've been able to find many similar procedures and many with various pre-lab questions, but none seem to address the reason for the conversion from benzoate to benzoic acid. Here is one such procedure: <a href="http://www.chm.uri.edu/sgeldart/chm_414/414%20Ultraviolet%20Spectroscopy.pdf" rel="noreferrer">UV Spectroscopic Analysis of Caffeine &amp; Benzoic Acid in Soft Drinks</a></p> <p>The only thing I can think of is that it was in an aqueous solution (soda), so perhaps some benzoate will be in the form of benzoic acid and some will still be in the form of benzoate. By adding $\ce{HCl}$, we can force all the benzoate to be in the form of benzoic acid for better quantification.</p> <p>Is this the correct line of thinking or am I missing something?</p> Answer: <p>Your assumption is correct, the addition of HCl, as already in the earlier publication by McDevitt (J. Chem. Educ., 1998, 75, 625-629 <a href="http://pubs.acs.org/doi/abs/10.1021/ed075p625?journalCode=jceda8&amp;quickLinkVolume=75&amp;quickLinkPage=625&amp;selectedTab=citation&amp;volume=75" rel="noreferrer">DOI: 10.1021/ed075p625</a>) ensures benzoates are converted into benzoic acids. </p> <p>The publication by Guo <em>et al.</em>, is instructive as it lists several benzoic acids side by side, for example: </p> <p><a href="https://i.sstatic.net/TZpy4.png" rel="noreferrer"><img src="https://i.sstatic.net/TZpy4.png" alt="enter image description here"></a></p> <p>source: J. Phys. Chem. A, 2012, 116, 11870–11879 <a href="http://pubs.acs.org/doi/abs/10.1021/jp3084293" rel="noreferrer">DOI: 10.1021/jp3084293</a></p> <p>The shift of the centre of absorption may be more important than a potential change in $\varepsilon$ if you want to use the UV-Vis spectrum recorded in a principal component analysis <a href="https://en.wikipedia.org/wiki/Principal_component_analysis" rel="noreferrer">(PCA)</a>, where overlap of absorption bands of your consitiuents in the analyte should be avoided.</p>
https://chemistry.stackexchange.com/questions/71722/why-convert-benzoate-in-soda-to-benzoic-acid-for-spectroscopy-experiment
Question: <p>Disclaimer: My background is computer science.</p> <p>Is it possible to detect any gas using a visible light and near infrared (400 nm - 1000 nm) <a href="https://en.wikipedia.org/wiki/Hyperspectral_imaging" rel="nofollow noreferrer">hyperspectral imaging</a> camera? This can be related to gas leaks, pollution, gases released in fires... From what I found when googling, this is not really possible, but I thought to check here to be sure.</p> Answer: <p>The answer is that you can detect all types of gases if you have the correct light source and use Raman spectroscopy. Raman is more adaptable than absorption or emission spectroscopy as it is a scattering process and the laser does not have to operate at the wavelengths that the molecules absorb, thus one laser can be used for all samples. A suitable source is going to have to be a narrow-band laser (although when Raman discovered this effect he used sunlight). You will also need notch filters to remove the laser wavelength and prevent it from saturating your detector. Not cheap.</p>
https://chemistry.stackexchange.com/questions/71904/gas-spectra-in-the-400-nm-1000-nm-range
Question: <p>I am trying to measure the absorbance of a solution at 882 nm, but this seems to be beyond the range of many UV-Vis spectrometers. Is this a job for IR spectroscopy, or something else? I do not have much experience with IR spectroscopy.</p> Answer: <p>No, this is not a job for an IR machine. It all depends on the machine you are using, but provided the monochromators will extend that far, all that may be necessary is to change the detector for one that is more red-sensitive. If your machine will not do the job, and if it's a one-off type of measurement, try to get time on another machine somewhere else. If this is not possible, or you need to measure many samples over a long period, you may need to experiment by using a lamp, interference filters and a detector, etc., and make your own machine to measure transmittance.</p>
https://chemistry.stackexchange.com/questions/72541/how-to-measure-absorbance-at-882-nm
Question: <p>What is the fastest or the most definite method (preferably a single inexpensive method) for verifying the presence of a certain small organic molecule which is expected to be of >90% purity in a "powdery" sample (a known solvent is also available)?</p> <p>Thank you in advance</p> Answer:
https://chemistry.stackexchange.com/questions/73340/the-best-methods-to-identify-verify-the-presence-of-a-small-organic-molecule
Question: <p>I have difficulty understanding the first sentence of the Introduction in the paper linked below. Can someone explain to me what is spectroscopic transition frequency and thermal bath? </p> <blockquote> <p>The question addressed here is how the modulation of the spectroscopic transition frequency Ω(t) by fluctuations of the thermal bath is related to the observed spectral line shape.</p> </blockquote> <p><a href="http://pubs.acs.org/doi/abs/10.1021/jp5081059" rel="nofollow noreferrer">http://pubs.acs.org/doi/abs/10.1021/jp5081059</a></p> Answer: <p>In an isolated molecule /atom the intrinsic width of the transition is limited by the lifetime of the upper level and this can be huge ( seconds) and so the line-width can be minuscule by time-energy 'uncertainty'. If the decay is exponential the spectral line shape is Lorentzian by Fourier Transform theory. </p> <p>In a medium, the surrounding solvent or gas molecules repeatedly interact with our victim molecule by collisions and by electric and magnetic dipoles etc, and do so in a random fashion and thus cause fluctuation in its energy levels. These changes are then observed in the transition energy and so broaden the observed transition in the ensemble of molecules. So far this is quite general. This paper examines one model of this, linear + quadratic coupling and the quadratic part is presumably why the peak is split - but I only skimmed the paper so you will have to check this. The detailed maths is just the technical part, ignore this and try to get to the basic physics first.</p> <p>(The term 'thermal bath' is just jargon for all the random interactions from solvent/buffer gas surrounding the victim molecule, 'modulation' just means changes, so the first sentence could read 'We describe how random interactions with a solvent or buffer gas changes the observed spectroscopic line shape ....')</p>
https://chemistry.stackexchange.com/questions/73481/spectroscopic-transition-frequency
Question: <p>I am comparing two IR spectrums of the same molecule, one in the solid state and one in $\ce{CH2Cl2}$ solution. There is only one CO group in the molecule.</p> <p>Why does the solid state spectrum show two carbonyl stretches, while the solution spectrum has only one carbonyl stretch?</p> Answer:
https://chemistry.stackexchange.com/questions/74734/why-is-the-number-of-carbonyl-stretches-in-the-ir-spectrum-different-for-solid-s
Question: <p>What is the difference between a ground state electron and an excited state electron? Upon excitation, what exactly changes about the electron?</p> Answer: <p>You can think of electron states as bookshelves and the electron as a book. Is there anything different about that book if it stands on a lower or an upper shelf? Not really. The same is true for an electron in different states. </p> <p>I guess what made you think otherwise is the fact that an electron in a lower electronic state has a lower potential energy. When an electron occupies a higher (excited) electronic state it has more energy, and by releasing it in the form of a photon it may relax to a lower (ground) electronic state. So the difference is in the amount of energy electrons in different states have. </p> <p>Also, since electrons in different electronic states are described by different wavefunctions, the probability density will be different. And other observables, besides energy, will have different values. </p>
https://chemistry.stackexchange.com/questions/80481/what-is-the-difference-between-an-electron-in-ground-state-and-an-electron-in-ex
Question: <p>In calculating the ratio of first-excited-state to unexcited chemical species using the Boltzmann distribution, are we assuming that we only have species in the unexcited state and the first excited state? It seems to me that this must be the case, because there's no part of the equation that accounts for the population of e.g. the second excited state.</p> Answer: <p>The expression for the Boltzmann equation is:</p> <p><span class="math-container">$\eta_{i} = \dfrac{g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)}{\sum_{i} g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)}$</span>,</p> <p>wherein <span class="math-container">$\eta_{i}$</span> is the probability of finding the ensemble in state <span class="math-container">$i$</span>, <span class="math-container">$g_{i}$</span> is the degeneracy of state <span class="math-container">$i$</span>, and <span class="math-container">$\epsilon_{i}$</span> its energy (I reckon the other variables are familiar to you).</p> <p>To calculate the ratio between two states, you can simply divide the above expression for two different states, by which you obtain:</p> <p><span class="math-container">$\dfrac{\eta_{i}}{\eta_{j}} = \dfrac{g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)}{\sum_{i} g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)} \cdot \dfrac{\sum_{i} g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)}{g_{j} \exp \left( \dfrac{-\epsilon_{j}}{k_{b}T} \right)} = \dfrac{g_{i} \exp \left( \dfrac{-\epsilon_{i}}{k_{b}T} \right)}{g_{j} \exp \left( \dfrac{-\epsilon_{j}}{k_{b}T} \right)}$</span></p> <p>The expression above shows that because states <span class="math-container">$i$</span> and <span class="math-container">$j$</span> belong to the same ensemble, you do not have to use (or even know) the energy levels of the other states in that ensemble if you are only interested in the probability ratio between the two states.</p>
https://chemistry.stackexchange.com/questions/104142/are-we-using-an-assumption-in-calculating-the-ratio-of-first-state-excited-to-un
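<p>A short numerical illustration of the two-state ratio derived above; the energy gap and degeneracies are assumed example values, not data from the question:</p>
<pre><code>import numpy as np

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 298.0                    # temperature, K

# Assumed example: two states separated by the energy of a 1000 cm^-1 vibration
h = 6.626e-34                # Planck constant, J s
c = 2.998e10                 # speed of light in cm/s, so wavenumber * h * c gives J
dE = 1000 * h * c            # energy gap, J
g_i, g_j = 1, 1              # degeneracies (assumed equal)

ratio = (g_i / g_j) * np.exp(-dE / (k_B * T))
print(f"n_excited / n_ground = {ratio:.3e}")
</code></pre>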
Question: <p>I know for a fact that a frequency domain spectrum can be obtained from a time domain spectrum using a Fourier transform - but can you do the reverse?</p> <p>Also what are the advantages and disadvantages of the frequency and time domain spectra?</p> Answer: <blockquote> <p>Can you do the reverse?</p> </blockquote> <p>Yes, mathematically the Fourier transform can be reversed. If we define the spectrum <span class="math-container">$S(\omega)$</span> as the Fourier transform of a time-domain signal <span class="math-container">$f(t)$</span>, </p> <p><span class="math-container">$$S(\omega) = \mathcal{F}[f(t)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)\mathrm{e}^{-\mathrm{i}\omega t}\,\mathrm{d}t$$</span></p> <p>then the inverse Fourier transform is simply given by</p> <p><span class="math-container">$$f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}S(\omega)\mathrm{e}^{\mathrm{i}\omega t}\,\mathrm{d}\omega$$</span></p> <p><a href="https://en.wikipedia.org/wiki/Fourier_inversion_theorem" rel="nofollow noreferrer">Wikipedia has a page on this</a> which is rather technical, but any decent book on Fourier transforms should cover this, so a mathematics book targeted at physicists/chemists may be easier to digest.</p> <blockquote> <p>What are the advantages and disadvantages of the frequency and time domain spectra?</p> </blockquote> <p>The time domain signal (in NMR, "free induction decay") is for the most part, not particularly useful. It is the Fourier transform which makes it valuable for chemists, because the frequencies that appear in the spectrum are related to transitions between different energy states:</p> <p><span class="math-container">$$\Delta E = \hbar\omega$$</span></p> <p>In general, probing these energy levels is the main point of spectroscopy, regardless of what range of energies you are using.</p>
https://chemistry.stackexchange.com/questions/104144/is-a-time-domain-spectrum-obtainable-from-a-frequency-domain-spectrum
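<p>A minimal NumPy demonstration that the transform is invertible, using a synthetic decaying signal rather than a real free induction decay:</p>
<pre><code>import numpy as np

# Synthetic "free induction decay": two damped cosines
t = np.linspace(0, 1, 4096)
fid = np.cos(2 * np.pi * 50 * t) * np.exp(-5 * t) + 0.5 * np.cos(2 * np.pi * 120 * t) * np.exp(-5 * t)

spectrum = np.fft.fft(fid)          # time domain to frequency domain
fid_back = np.fft.ifft(spectrum)    # frequency domain back to time domain

# The round trip recovers the original signal to numerical precision
print(np.allclose(fid, fid_back.real))
</code></pre>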
Question: <p>I have struggled to find a formal definition of cross-section of absorption; from what I've gathered, its best defined as 'the intensity of absorption'.</p> <p>Wikipedia's formal definition of the molar extinction coefficient is 'how strongly a species absorbs EMR at a given wavelength, per molar concentration'.</p> <p>So how are the molar extinction coefficient and cross-section different? I presume that whilst the molar extinction coefficient remains constant for a given wavelength across all concentrations, the cross-section differs for a given wavelength with changing concentration. </p> <p>Is either the molar extinction coefficient or cross-section of an absorption equal to the probability that a single species absorbs a photon at any one point in time?</p> Answer:
https://chemistry.stackexchange.com/questions/104464/are-the-molar-extinction-coefficient-and-cross-section-of-an-absorption-related
Question: <p>Obviously the molar extinction coefficient is not equal to the probability of a photon of the associated wavelength being absorbed, but is it a measure of the probability i.e. the higher the value of the molar extinction coefficient, the higher the probability of absorption of the photon per molecule?</p> Answer:
https://chemistry.stackexchange.com/questions/104500/molar-extinction-coefficient-a-measure-of-the-probability-of-a-photon-of-the-ass
Question: <p>The electrons in the 1s orbital of chlorine have a binding energy of 273 MJ/mol, but the 1s electrons in sulfur have a binding energy of 239 MJ/mol. Why is this?</p> Answer: <p>Chlorine has 17 protons and sulfur has 16. For the 1s orbital there are no inner electrons to provide a shielding (repulsion) effect. So the greater the nuclear charge, the greater the attraction and the stronger the binding.</p>
https://chemistry.stackexchange.com/questions/114386/binding-energy-question
Question: <p>Usually what is the size (in nanometer) of the sample or target substance in IR and Raman Spectroscopy?</p> <p>Which one has largest size of sample it can scan, how many molecules or atoms are involved? If you can make the size bigger or scan coverage bigger, then the more intense would be the IR or Raman stroke signal? Or no difference in intensity in the case of Raman even if sample size is increased? Please answer separately for IR and Raman since they are different. One uses IR light, the other laser. I guess IR light needs to be focus at a spot just like a laser too.</p> Answer: <p>This all depends on the exact geometry of your spectrometer light path, if you measure in transmission or reflection, using ATR, etc. AND of course on your sample concentration.</p> <p>If you want to ask about the <em>minimal size</em>: As small as you can get your focal point or laser spot. A few dozen microns, maybe less. Local sample heating can be a problem. You can buy scanning IR and FT Raman microscopes.</p> <p>In transmission, you can measure IR on a cm sized sample, given that is is translucent. With Raman, that would be tricky, because you should focus the <em>scattered</em> light again.</p> <p>Minimum number of molecules for IR is very low, but that also depends on how strong the absorption line in question is. Raman used to be very insensitive, that has become a lot better with FT Raman. </p>
https://chemistry.stackexchange.com/questions/122469/size-of-target-substance-in-ir-and-raman-spectroscopy
Question: <p>Rayleigh scattering occurs when the dimensions of the scatterer are much smaller than the wavelength of the incident electromagnetic radiation. </p> <p>Mie scattering occurs when the dimensions of the scatterer are much larger than the wavelength of the incident electromagnetic radiation. An example is when light is scattered by small water droplets in clouds.</p> <p>So does Rayleigh or Mie scattering occur in liquid or bulk water? </p> <p>It seems the answer is Mie scattering. But in Raman spectrometry, the incident light (say 532 nm) is Rayleigh backscattered in the liquid. This is why the equipment needs to have a notch filter, etc. So why does Rayleigh scattering (?) occur in liquid/bulk water when the particles are much larger than the wavelength of light? Or is it Mie scattering that the Raman device is filtering out of the backscattered incident light?</p> Answer: <blockquote> <p>So does Rayleigh or Mie scattering occur in liquid or bulk water? So why does Rayleigh scattering (?) occur in liquid/bulk water when the particles are much larger than the wavelength of light?</p> </blockquote> <p>What is the approximate size of a water molecule? It is on the order of 2.7 Angstroms or 0.27 nanometers. When we are talking about visible light, the wavelength (about 500 nm) is roughly 500/0.27, i.e. nearly 2000 times larger than the scatterer! In the case of pure water and with visible light, Rayleigh scattering is observed along with the Raman effect. If you disperse larger "particles" in water, such as milk, one sees Mie scattering.</p>
https://chemistry.stackexchange.com/questions/123077/does-mie-scattering-occur-in-liquid-or-is-rayleigh-scattering
Question: <p>The test for anions can be done with a platinum wire and a Bunsen flame.</p> <p>My textbook says that it can also be done by preparing a solution of the given salt in water and ethanol and spraying it onto the Bunsen flame.</p> <p>How do I prepare this solution in terms of how much water, ethanol and salt ?</p> Answer:
https://chemistry.stackexchange.com/questions/150124/prepare-solution-of-salt-in-ethanol
Question: <p>The chemical shift of enantiotopic protons is defined as follows in <a href="https://www.masterorganicchemistry.com/homotopic-enantiotopic-diastereotopic/" rel="nofollow noreferrer">Spectroscopy/Homotopic, Enantiotopic, Diastereotopic</a></p> <blockquote> <p>Enantiotopic protons have the same chemical shift in the vast majority of situations. However, if they are placed in a chiral environment (e.g. a chiral solvent) they will have different chemical shifts.</p> </blockquote> <p>The second sentence in this question piques my interest. Even after some searching, I only saw this same sentence over and over again.</p> <p>What makes the chiral solvent cause a change in magnetic environment causing the change in chemical shift for enantiotopic protons?</p> Answer:
https://chemistry.stackexchange.com/questions/151378/reason-for-different-chemical-shift-in-chiral-solvents-for-enantiotopic-protons
Question: <p>IR is only absorbed by molecules with polar bonds, regardless of the overall polarity of the molecule; what about microwaves?</p> Answer: <p>It depends on the strength of the incident EMF. Pure <span class="math-container">$\ce{H2}$</span> can be <a href="http://www.scholarpedia.org/article/Microwave_ionization_of_hydrogen_atoms" rel="nofollow noreferrer"><em>ionized</em> by microwave radiation in multi-photon collisions</a>, as well as being heated. Even nonpolar molecules have quantum fluctuations in the electron cloud, so they interact to some extent with an electromagnetic field.</p>
https://chemistry.stackexchange.com/questions/105268/do-microwaves-heat-polar-molecules-or-molecules-with-polar-bonds
Question: <p>I have two sets of spectrum data; the first record (in wavenumbers) looks like this:</p> <pre><code>... 967.0511723 968.9374611 970.8224407 972.7061127 974.5884787 976.4695403 ... </code></pre> <p>And the second record (again, in wavenumbers) looks like this:</p> <pre><code>... 955.9630179 957.8733753 959.7823965 961.6900828 963.596436 965.5014577 967.4051494 969.3075128 971.2085494 973.1082609 975.0066489 976.9037149 978.7994605 980.6938874 982.586997 ... </code></pre> <p>How do I merge the two datasets onto the same wavenumber grid?</p> Answer:
https://chemistry.stackexchange.com/questions/162220/how-to-combine-raman-spectrum-data-with-different-wavenumber
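<p>No answer text is recorded above; one common approach (an assumption here, not an accepted solution) is to interpolate both records onto a shared wavenumber grid and then combine them, for example with NumPy:</p>
<pre><code>import numpy as np

# Wavenumber axes of the two records (values taken from the question); the
# intensity columns are placeholders since only the wavenumbers were given.
wn_1 = np.array([967.051, 968.937, 970.822, 972.706, 974.588, 976.470])
int_1 = np.linspace(1.0, 2.0, wn_1.size)          # placeholder intensities
wn_2 = np.array([963.596, 965.501, 967.405, 969.308, 971.209, 973.108, 975.007, 976.904])
int_2 = np.linspace(0.5, 1.5, wn_2.size)          # placeholder intensities

# Build a common grid covering the overlap region and interpolate both spectra onto it
lo = max(wn_1.min(), wn_2.min())
hi = min(wn_1.max(), wn_2.max())
common = np.arange(lo, hi, 0.5)                   # 0.5 cm^-1 spacing, chosen arbitrarily

int_1_c = np.interp(common, wn_1, int_1)
int_2_c = np.interp(common, wn_2, int_2)

merged = np.column_stack([common, int_1_c, int_2_c])
print(merged[:3])
</code></pre>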
Question: <p>Since my last post yesterday, I found some leads and started to analyse my data. But then I started encountering more doubts, questions and confusion. Any help will be highly appreciated and will help me in speeding my research.</p> <p>For analysis, I am matching the peak binding energy for each orbital for each element with the NIST Reference Database. During the analysis process I was confused how to guess in what form is the element/element's present. I am not sure what I am doing is correct or not.</p> <p>I am analysing XPS results of Pt impregnated on gamma alumina. I have 4 samples each subjected to different conditions. I am trying to figure out in what oxidation state/in what form the Pt will be present.</p> <p>So I'll give an example -</p> <p>For Pt, I got a peak binding energy of 74.76 eV for the 4f7/2 orbital. When I searched this peak in the reference database, I got the following possible candidates with the following reference binding energies</p> <p>a) PtO - 74.6 eV</p> <p>b) PtO2 - 74.6 eV</p> <p>c) Pt - 74.5 eV</p> <p>This particular sample was fired in an engine and the temperatures were around 300 deg cel, and we don't know exactly in what form the Pt will be present. How do I decipher in what state the Pt will present from the XPS data ?</p> Answer: <p>Recently, there a lot of XPS questions here which are being closed or not receiving an answer. The problem is that you are not sharing enough experimental details so nobody has an answer and nobody knows if your 74.76 eV peak is at the true position or not (see the reasons below). You also need get a good XPS textbook written by the very inventor of XPS. Check Wikipedia about the inventor of XPS.</p> <p>When people wish to assign XPS to a certain oxidation state or assign a certain compound, here are some of the steps that help people in interpretation.</p> <ol> <li><p>One should be aware of the fact that there are two types of X-ray photoelectron spectra. Survey scans which are acquired rapidly, or high resolution scans which take several hours or even a full day. <em>Peak assignments should be done on high resolution scans</em>. A survey scan is a quick and dirty analysis and this is most likely done in your case. High resolution scans are expensive.</p> </li> <li><p>Peak positions are very sensitive to charged state of the surface. By chance, if you have seen scanning electron microscopy of particles, one can see particles flying from the surface due to charging. Similar charging effects exist in XPS. If the XPS substrate to be analyzed is being electrically charged by impinging X-rays, the peak positions will be shifted.</p> </li> <li><p>Usually adventitious carbon (always present on all samples) is used to correct the peak positions. Peak assignments should not be done on uncorrected spectra.</p> </li> </ol> <p><em>Now the <strong>74.76 eV</strong> value can be a useful number OR it can be completely useless if steps or conditions (1) to (3) were not fulfilled.</em></p> <ol start="4"> <li><p>The fourth step is to use chemical intuition about the nature of the surface. Do you expect such compounds on the surface? Was Pt present in an oxidative environment? If you think there is negatively charged platinum species, the charge must be balanced by some cations. Is there an indication of cations present on the surface. If you assign PtO formation, is there a oxygen peak in similar proportion to Pt atom %?</p> </li> <li><p>Look at peak widths of the element in question. 
A wide peak indicates multiple environments or oxidation states. This is done by peak fitting (wrongly called deconvolution in common parlance) in the software. CASA XPS manuals are online.</p> </li> <li><p>In short do not look at just at a single number from the NIST table, look at the conditions in which that NIST data was acquired and also see how good quality XPS data is summarized. They write all the details. Ask your XPS data provider to show all the conditions too (peak position correction, baseline correction method etc). <strong>Your 74.76 eV can only be assigned to a certain oxidation or compound if and only if your AND their experimental conditions match closely.</strong> Otherwise, everything will be handwaving. A lot of XPS data in analytical literature is based on handwaving because XPS peak assignment is not a trivial task. This is from the NIST website. Note they do not mention uncertainity, which is also important.</p> </li> </ol> <p><a href="https://srdata.nist.gov/xps/XPSDetailPage.aspx?AllDataNo=41551#Data_process.htm" rel="nofollow noreferrer">XPS for Pt</a></p> <p>.</p> <p><a href="https://i.sstatic.net/Iio3o.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iio3o.jpg" alt="enter image description here" /></a></p>
https://chemistry.stackexchange.com/questions/163959/how-to-guess-the-chemical-state-in-which-particular-element-will-be-present-from
Question: <p>Does anyone know what is scattering coefficient, absorption coefficient and extinction coefficient, and how to separate them experimentally?</p> Answer: <p>In most applications, these terms are used interchangeably.</p> <p>Most absorbance spectrophotometers work by measuring the intensity of the light beam that is <em>transmitted</em> through the sample. A strong light beam goes in, and a weaker one comes out the other side. The transmission is the fraction of the light that made it through. </p> <p>There are two main ways in which the light that goes in gets diminished in intensity before it goes out. The first is <em>absorption</em>. Molecules that absorb light of a certain frequency make photons of that frequency disappear. The energy of those photons excite electrons into higher energy states. These states may eventually decay, releasing heat or lower-energy photons, <em>but not photons of the same energy that were absorbed</em>. The propensity of molecules to absorb can be called the <em>absorption coefficient</em>.</p> <p>Scattering just deflects the incident light to a new angle. Now, when it exits the sample, it won't come straight out on the side opposite where it went in, and it won't be detected by the detector. Molecules can cause scattering if they are very large. The "scattering coefficient" describes a molecule's propensity to scatter light of a certain frequency.</p> <p>The "extinction coefficient" is -- in most applications -- just equal to the scattering coefficient plus the absorption coefficient, i.e., it's proportional to a molecule's ability to disrupt incident light in some way or other, regardless of the mechanism. </p> <p>In the usual, transmission-based spectrophotometer, only extinction coefficients can truly be measured. To resolve these measurements into scattering contributions and absorption contributions would require measuring the intensity of the scattered light, i.e. detectors would need to be present at multiple angles.</p>
https://chemistry.stackexchange.com/questions/33223/several-coefficient-differences-in-uv-vis-spectroscopy
Question: <p>When doing regression or classification when faced with a categorical attribute with <span class="math-container">$n$</span> possible values there are two options:</p> <ol> <li>Feed this attribute directly into your model.</li> <li>Partition your data into <span class="math-container">$n$</span> pieces based on the categorical attribute and train a model for each separately. During inference choose the model appropriately based on the same attribute.</li> </ol> <p>One of the advantages of approach #2 is that it allows you to do more specific feature engineering. E.g. if you are modeling property prices and you decided to make separate models for residential/industrial properties you can choose separate features that are relevant for each.</p> <p>Another advantage of approach #2 I can think of is that it can linearize otherwise non-linear relations. E.g. for a residential property having a railroad track nearby almost always heavily reduces property value while for an industrial property it could be a massive value booster.</p> <p>In general, what factors go into deciding between approach #1 and #2?</p> Answer: <p>I've tried 2 several times but it has never proved better than 1.</p> <p>I think the reason is, the more data you feed to a model, the better. The disadvantage of 2 is that the models that are trained use less data than the model in 1.</p> <p>In addition, some features might be independent of the group. For instance, when modelling property prices, being in the city centre always increases the price, both for residential and industrial.</p> <p>Let me discuss two of the main models used for tabular data:</p> <ul> <li>Tree based models will already do the feature engineering that you described in your first point. The model will already do a split residential/industrial if it contributes to the gain and then it will keep doing particular splits for each group.</li> <li>Linear models: a generalization of general linear models are mixed models, that kind of does what you mentioned on the second point, but keeping some structure that allows it to acknowledge that the city center is more expensive.</li> </ul> <p>That being said, if you have very different categories, it might be worth splitting the dataset, it's just a matter of trying.</p>
https://datascience.stackexchange.com/questions/76222/when-is-it-appropriate-to-split-a-dataset-on-a-categorical-value-and-generate-n
Question: <p>Assume I want to predict if I'm fit in the morning. One feature is the last time I was online. Now this feature is tricky: If I take the hour, then a classifier might have a difficult time with it because 23 is numerically closer to 20 than to 0, but actually the time 23 o'clock is closer to 0 o'clock.</p> <p>Is there a transformation to make this more linear? Probably into multiple features? (Well, hopefully not 60 features if I do the same for minutes)</p> Answer: <p>The question was already posted, you can find the answer there :</p> <p><a href="https://datascience.stackexchange.com/questions/5990/what-is-a-good-way-to-transform-cyclic-ordinal-attributes">What is a good way to transform Cyclic Ordinal attributes?</a></p> <p>The idea is to transform your time feature into two feature : it's like if you represent the hour as the angle of the hand on the clock, and use the sin/cos of the angle as your features</p>
https://datascience.stackexchange.com/questions/23933/how-can-i-deal-with-circular-features-like-hours
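<p>A minimal sketch of the sin/cos trick referred to in the linked answer; the column name and values are made up:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"hour": [23, 0, 1, 12, 20]})

# Map the hour onto a circle so that 23 and 0 end up close together
df["hour_sin"] = np.sin(2 * np.pi * df["hour"] / 24)
df["hour_cos"] = np.cos(2 * np.pi * df["hour"] / 24)
print(df)
</code></pre>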
Question: <p>The problem is stated as follows: we have a giant CSV file with one target column, and the rest are inputs. We don't know how these features impact the target, but we would like to use an algorithm that, besides using linear and non-linear transformations, will also take into account that the right solution might be some_feature_A/some_feature_B. Is there an algorithm that can take this case into account? One way is to craft these feature columns yourself, but is there a better way?</p> Answer: <p>In theory, I think a deep neural net might be able to find features that are the product of two other columns. There exist some nice mathematical results which guarantee the ability of a neural net (with certain activation functions) to approximate <em>any</em> function, so there's no theoretical reason why a neural net couldn't compute the division function <span class="math-container">$f(x, y) = \frac{x}{y}$</span>. That being said, it may be difficult to achieve in practice without a little preprocessing.</p> <p>If you want to give it a try, I would suggest adding additional features obtained by taking the logarithm of any numerical features in the dataset. This might make it easier for the network to learn features that are products of other features (since <span class="math-container">$\log{(x * y)} = \log{x} + \log{y}$</span> and <span class="math-container">$\log{x / y} = \log{x} - \log{y}$</span>).</p>
https://datascience.stackexchange.com/questions/55507/problem-of-finding-best-combination-of-features-when-desired-feature-is-feature
Question: <p>I have been struggling to find proof of this, but I couldn't.</p> <p>Every time I prepare a dataset I face the same issue:</p> <p>when a column is a categorical code such as <code>CountryCode</code> or <code>TaskType</code> in this dataset</p> <pre><code>TaskType  CountryCode  Target
1         61           Red
1         962          Yellow
2         1            Yellow
6         61           Yellow
4         81           Red
2         1            Blue
1         61           Red
2         962          Green
4         61           Blue
</code></pre> <p>if I feed the dataset to different models such as linear regression, SVM, KNN, etc.,</p> <p>will these models consider <code>CountryCode</code> and <code>TaskType</code> to be numeric fields and treat them as continuous data?</p> <p>Shall I one-hot encode these features before using them?</p> <p>What is the best way to handle this scenario?</p> Answer: <p>An intuitive reason why we should encode categorical features is that otherwise the model imposes a false sense of "closeness" between values of the same feature. The model will treat these features as continuous, so if you have two points in your feature space (p1 with CountryCode 61 and p2 with CountryCode 962) and then add a third point p3 with CountryCode 81, the model has no way of knowing that the numeric distances between 61, 962 and 81 are meaningless.</p>
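<p>For illustration, a minimal one-hot encoding sketch using the sample values from the question (pandas is assumed):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'TaskType':    [1, 1, 2, 6, 4],
    'CountryCode': [61, 962, 1, 61, 81],
    'Target':      ['Red', 'Yellow', 'Yellow', 'Yellow', 'Red'],
})

# one-hot encode the code columns so models do not treat them as continuous numbers
X = pd.get_dummies(df[['TaskType', 'CountryCode']].astype(str),
                   columns=['TaskType', 'CountryCode'])
</code></pre>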
https://datascience.stackexchange.com/questions/57465/to-one-hot-encode-or-not-to-one-hot-encode
Question: <p>Let's say I have a data set like the following:</p> <pre><code>file    group_a_co_1  group_a_co_2  group_b_co_1  group_b_co_2
file_1  0.8           0.2           0.3           0.7
file_2  0.1           0.9           0.2           0.8
file_3  0.5           0.5           0.7           0.3
...
</code></pre> <p>I wonder whether there are ways/tricks to tell the model about the group information here, since group_a_co_1 + group_a_co_2 = 1 and the same goes for group_b. Somehow I figure that if I expose the group information, the performance of my model will improve.</p> Answer: <p>The information in columns 'group_a_co_2' and 'group_b_co_2' is already redundant; they do not add more information to the model. Therefore they can be removed. Adding even more redundant information will not improve your model further.</p>
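<p>A quick sketch of checking and dropping the redundant columns with pandas (data taken from the question):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'group_a_co_1': [0.8, 0.1, 0.5],
    'group_a_co_2': [0.2, 0.9, 0.5],
    'group_b_co_1': [0.3, 0.2, 0.7],
    'group_b_co_2': [0.7, 0.8, 0.3],
})

# within each group the two columns sum to 1, so the *_2 columns carry no extra information
assert (df['group_a_co_1'] + df['group_a_co_2']).round(9).eq(1).all()
X = df.drop(columns=['group_a_co_2', 'group_b_co_2'])
</code></pre>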
https://datascience.stackexchange.com/questions/64693/how-to-use-feature-group
Question: <p>Does anyone know any good search algorithms for feature optimization that search through every possible combination to find the optimal combination of features for maximum predictive power? (Permutations are not important).</p> <p>So far I have been using Recursive Feature Elimination (RFE), which trains a model many times over and each time removes a feature with the least ranking. It is good but not perfect. Say for example we have a,b,c, it then goes to a,b and then a, but does not consider a,c.</p> <p>There are hundreds of algorithms, but if you know one such, I would really appreciate it! Computational power is not that important, as I only need to run it once!</p> Answer: <p>This is implemented in <code>mlxtend</code> as <code>ExhaustiveFeatureSelector</code>: <a href="http://rasbt.github.io/mlxtend/user_guide/feature_selection/ExhaustiveFeatureSelector/" rel="nofollow noreferrer">docs</a>.</p>
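<p>A minimal usage sketch of that class, assuming the current <code>mlxtend</code> API and a small illustrative dataset:</p> <pre><code>from mlxtend.feature_selection import ExhaustiveFeatureSelector
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

efs = ExhaustiveFeatureSelector(
    LogisticRegression(max_iter=1000),
    min_features=1,
    max_features=4,   # evaluates every feature subset of size 1..4
    scoring='accuracy',
    cv=3,
)
efs = efs.fit(X, y)
print(efs.best_idx_, efs.best_score_)
</code></pre>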
https://datascience.stackexchange.com/questions/104412/are-there-any-search-algorithms-for-feature-optimization-similar-to-rfe-but-whi
Question: <p>This is rather a practical question. I'm looking for an efficient way of calculating the frequency of an event for a large number of samples. Here's a more concrete example.</p> <p>Let's say that I have a system with millions of users. Each user has so many different features that I can use to categorize them into different classes. Among them, there's an event (let's say clicking) that each user generates once in a while. I'm interested in considering the frequency of clicking as an input feature; how would you calculate that frequency efficiently?</p> <p>The brute force answer is that each time the user clicks, I store that as a pair (timestamp, 1). Then, for each new incoming event, I can construct a list of such pairs into a window. Each element of this list represents a bucket (time range) and the value of the bucket shows the number of pairs that fall into it. Finally, I'll calculate an FFT to transform the window in time into a frequency spectrum which is my classification's input feature.</p> <p>It seems to me doing so for millions of users who are constantly generating events is very heavy processing. I was wondering if there's a lighter way of calculating (or even estimating) such a frequency spectrum for the events that occur over time?</p> Answer: <p>Sounds like more of a resource issue, but it is still related to data science, because of its final objective.</p> <p>Dealing with millions of users could require a lot of memory and computing power.</p> <p>That's why client-side processing should be a priority, using client-side code such as JavaScript.</p> <p>On the other hand, it is interesting to start with a data analysis of the clicks (mean number of clicks per person, mean time spent in a session, etc.).</p> <p>This is important for setting rules about when to call the database and save the information.</p> <p>For instance, you could count clicks on the client side and save the count to the database every (mean time spent)/2, for example.</p> <p>The aim is to reduce requests to the server side as much as possible, without having to use a long timeout.</p> <p>In addition to that, if you collect enough click data, it is possible to compute some interesting stats (rush hours, function performance, most used functions, ...) and adapt the server-side or client-side processing to them.</p>
https://datascience.stackexchange.com/questions/111996/an-efficient-way-of-calculating-estimating-frequency-spectrum-for-an-event
Question: <p>I am trying to understand how I can encode categorical variables using likelihood estimation, but have had little success so far.</p> <p>Any suggestions would be greatly appreciated.</p> Answer: <p>I was learning this topic too, and these are what I found:</p> <ul> <li><p>This type of encoding is called <em>likelihood encoding</em>, <em>impact coding</em> or <em>target coding</em></p></li> <li><p>The idea is encoding your categorical variable with the use of target variable (continuous or categorical depending on the task). For example, if you have regression task, you can encode your categorical variable with the mean of the target. For every category, you calculate the corresponding mean of the target (among this category) and replace the value of a category with this mean. </p></li> <li><p>If you have classification task, you calculate the relative frequency of your target with respect to every category value. </p></li> <li><p>From a mathematical point of view, this encoding means a probability of your target, conditional on each category value.</p></li> <li><p>If you do it in a simple way, how I described above, you will probably get a biased estimation. That's why in Kaggle community they usually use 2 levels of cross-validation. Read <a href="https://www.kaggle.com/c/mercedes-benz-greener-manufacturing/discussion/36136#201638" rel="nofollow noreferrer">this comment by raddar here</a>. The corresponding notebook is <a href="https://www.kaggle.com/raddar/raddar-extratrees" rel="nofollow noreferrer">here</a>.</p></li> </ul> <p><strong>The quote:</strong> </p> <blockquote> <p>It's taking mean value of y. But not plain mean, but in cross-validation within cross-validation way;</p> <p>Let's say we have 20-fold cross validation. we need somehow to calculate mean value of the feature for #1 fold using information from #2-#20 folds only.</p> <p>So, you take #2-#20 folds, create another cross validation set within it (i did 10-fold). calculate means for every leave-one-out fold (in the end you get 10 means). You average these 10 means and apply that vector for your primary #1 validation set. Repeat that for remaining 19 folds.</p> <p>It is tough to explain, hard to understand and to master :) But if done correctly it can bring many benefits:)</p> </blockquote> <ul> <li><p>Another implementation of this encoding <a href="https://www.kaggle.com/tnarik/likelihood-encoding-of-categorical-features" rel="nofollow noreferrer">is here</a>.</p></li> <li><p>In R library <strong>vtreat</strong> they have implementation of impact encoding. See <a href="http://www.win-vector.com/blog/2014/08/vtreat-designing-a-package-for-variable-treatment/" rel="nofollow noreferrer">this post</a>.</p></li> <li><p>In <a href="https://tech.yandex.com/catboost/doc/dg/concepts/algorithm-main-stages_cat-to-numberic-docpage/" rel="nofollow noreferrer">CatBoost library</a> they have a lot of options for categorical variable encoding including target encoding. </p></li> <li><p>There is no such encoding in sklearn yet.</p></li> </ul> <hr> <p>UPDATE: There is a nice package for sklearn models and pipelines! <a href="https://github.com/scikit-learn-contrib/category_encoders" rel="nofollow noreferrer">https://github.com/scikit-learn-contrib/category_encoders</a></p>
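<p>As a rough illustration of the single-level version of that idea, here is a sketch of out-of-fold mean target encoding with pandas and scikit-learn (the toy data and column names are made up; the <code>category_encoders</code> package linked above offers ready-made implementations):</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def oof_target_encode(df, cat_col, target_col, n_splits=5, seed=0):
    # each row is encoded with target means computed on the other folds only,
    # which reduces the bias/leakage discussed above
    encoded = pd.Series(np.nan, index=df.index, dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for fit_idx, enc_idx in kf.split(df):
        fold_means = df.iloc[fit_idx].groupby(cat_col)[target_col].mean()
        encoded.iloc[enc_idx] = df[cat_col].iloc[enc_idx].map(fold_means).to_numpy()
    # categories unseen in a fold fall back to the global mean
    return encoded.fillna(df[target_col].mean())

toy = pd.DataFrame({'cat': list('aabbbccaab'), 'y': [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]})
toy['cat_te'] = oof_target_encode(toy, 'cat', 'y')
</code></pre>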
https://datascience.stackexchange.com/questions/11024/encoding-categorical-variables-using-likelihood-estimation
Question: <p><strong>BACKGROUND:</strong> I have a dataset that includes <code>Race</code> (e.g., White, Black) and <code>Ethnicity</code> (e.g., Hispanic, Non-Hispanic) as <strong>observed variables</strong>. The dataset also includes <code>Race_Ethnicity</code> (e.g., Hispanic White, Non-Hispanic Black) as an <strong>engineered variable</strong>, if you will. I am wondering if I should retain the observed variables in my supervised ML model?</p> <p>The observed variables are obviously correlated with the engineered variable. This is an issue for ML (i.e., the multicollinearity problem), if I am thinking about this correctly (but please correct me if I'm wrong). However, it may be possible that <code>Race</code> interacts with yet a 4th variable, whereas <code>Ethnicity</code> does not. Thus, leaving out <code>Race</code> may be costing me an important boost in performance. (<code>Race_Ethnicity</code> may have a more &quot;muddied&quot; relationship with the 4th variable than <code>Race</code> alone.)</p> <p><strong>QUESTION:</strong> What to do, y'all? Should they (the observed variables) stay or should they go?</p> Answer:
https://datascience.stackexchange.com/questions/116904/should-original-features-be-retained-in-the-model-after-using-them-to-engineer-n
Question: <p>There is one behavior of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html" rel="nofollow noreferrer">labelbinarizer</a> </p> <pre><code>import numpy as np from sklearn import preprocessing lb = preprocessing.LabelBinarizer() lb.fit(np.array([[0, 1, 1], [1, 0, 0]])) lb.classes_ </code></pre> <p>The output is <code>array([0, 1, 2])</code>. Why there is a 2 there?</p> Answer: <p>I think the documentation is kind of self-explanatory here. Fit takes in array of size <code>n_samples</code> in which each element is the class of the datum or if the data point belongs to multiple classes, the input would be obviously of size <code>n_samples x n_classes</code>. That is what you gave in as input in your example. Each point can belong to any of the three classes. That is why you have <code>[0, 1, 2]</code> as number of classes. So as mentioned in the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html#sklearn.preprocessing.LabelBinarizer.fit" rel="nofollow noreferrer">documentation</a> if you try</p> <pre><code>&gt;&gt; lb.transform([0, 1, 2, 0]) [[1 0 0] [0 1 0] [0 0 1] [1 0 0]] </code></pre> <p>and if you try a class that is non-existent after fit like</p> <pre><code>&gt;&gt; lb.transform([0, 1, 2, 1000]) [[1 0 0] [0 1 0] [0 0 1] [0 0 0]] </code></pre> <p>No class named <code>1000</code> exists, so multi-targeted conversion for <code>1000</code> class case is plainly <code>[0, 0, 0]</code>. Hope this helps.</p>
https://datascience.stackexchange.com/questions/27130/2d-matrix-for-labelbinarizer
Question: <p>I have a feature for machine learning (using methods like SVM, naive Bayes, neural networks and random forests) called member duration, as follows: should I treat it as numerical or categorical data?</p> <p><a href="https://i.sstatic.net/fyMsI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fyMsI.png" alt="enter image description here"></a></p> Answer: <p>You definitely have <strong>discrete</strong> data, that is, data which takes on distinct, separate values, as opposed to <strong>continuous</strong> data, which takes on values along a continuum.</p> <p>It may be of value to additionally determine if the data is <strong>ordinal</strong>, meaning that the order of the values is important, for example if <strong>[0, 1, 2]</strong> signifies <strong>[small, medium, large]</strong> or some analogous system.</p> <p>In the case of ordinal data, it may be best to keep the data exposed to the SVM training process in integer form, as the integer representation encodes some information about the relationship between the categories.</p> <p>This approach would also be more reasonable if the values that the variable could take on in a production setting could expand beyond the values you've already observed in the training set - a categorical approach would be less able to handle new values in that context.</p> <p>If there are no ordinal relationships and you suspect all of the possible values are enumerated in the training set, treating the variable as categorical would be appropriate.</p>
https://datascience.stackexchange.com/questions/17124/numerical-or-categorical-data
Question: <p>I have a feature for machine learning, shown below, that is skewed to the left and only has numbers in a certain range (here 0-2000). Will the skewness and the range of the numbers affect learning? If yes, what should I do?</p> <p><a href="https://i.sstatic.net/M00zz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M00zz.png" alt="enter image description here"></a></p> Answer: <p>Typically, folks would transform the variable. When it is strictly greater than zero, a log transform is usually sufficient. If zero is included, as in your case, one popular alternative is a <a href="https://en.wikipedia.org/wiki/Power_transform" rel="nofollow noreferrer">Box-Cox-style power transform</a> applied after a small positive shift (the basic Box-Cox transform itself requires strictly positive values).</p>
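<p>A small sketch of both options with numpy and scikit-learn; the Yeo-Johnson variant used by <code>PowerTransformer</code> is a Box-Cox-style transform that is also defined at zero (the sample values are made up):</p> <pre><code>import numpy as np
from sklearn.preprocessing import PowerTransformer

x = np.array([0, 3, 5, 8, 20, 150, 900, 1800], dtype=float).reshape(-1, 1)

# log1p handles zeros and compresses the long tail
x_log = np.log1p(x)

# Yeo-Johnson power transform (PowerTransformer's default method)
pt = PowerTransformer(method='yeo-johnson')
x_pt = pt.fit_transform(x)
</code></pre>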
https://datascience.stackexchange.com/questions/17125/effect-of-skewness-and-data-range-in-machine-learning
Question: <p>Let's say I want to predict the lifespan of an ad in a listing.</p> <p>I know a bunch of things about the ad, like:</p> <ul> <li>the title</li> <li>the price</li> <li>the location</li> <li>etc</li> </ul> <p>The target value is the duration of the ad in the listing before it is removed (item has been sold).</p> <p>What would be the best approach for engineering the target?</p> <p>I've tried categorizing the log of the duration, but it's not leveraging the cyclic pattern you can see in the histogram of the lifespan (in hours):</p> <p><a href="https://i.sstatic.net/u5BO7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u5BO7.png" alt="Lifespan"></a></p> <p>x-axis: lifespan in hours</p> Answer: <p>I think you need to come up with a way to treat the data such that you're thinking in days, not hours, right? The peaks look like they're at 24, 48, 72, 96 (1 day, 2 days, 3 days, 4 days), etc., and are pretty much normally distributed around those peaks.</p> <p>I think a good test might be to try a categorical approach to start, to see how well you can predict which 'peak' the ad belongs to (is this an ad in the 24-hour normal distribution? the 48-hour?). If you can figure out which peak that ad belongs to then you might be able to identify features that indicate whether it's more likely to be on the short- or long-side of the hump. If you have bad results putting the ads into categories, that might tell you something too.</p> <p>If you do try this, be careful of how you measure performance as your dataset will be unbalanced by the higher occurrence rate of quickly-pulled ads.</p>
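<p>One way to sketch that bucketing, assuming a pandas column of lifespans in hours (the values below are made up):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'lifespan_hours': [22, 25, 47, 50, 71, 96]})

# assign each ad to its nearest 24-hour peak (1 day, 2 days, ...) as a class label
df['peak_days'] = np.rint(df['lifespan_hours'] / 24).astype(int).clip(lower=1)
</code></pre>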
https://datascience.stackexchange.com/questions/35730/what-is-a-good-approach-for-a-lifespan
Question: <p>If I have generated features for a dataset using state-of-the-art feature engineering methods, can I use them with any kind of algorithm to build the model, apart from a few modifications to the features so as to plug in a different algorithm?</p> <p>Is there any dependency on the algorithm when building features from a dataset?</p> Answer: <p>No.</p> <p>An example: feature engineering for Gradient Boosting algorithms.</p> <p>XGBoost can't handle categorical variables - you need to use one-hot encoding, target encoding, or something like that if you want to use it.</p> <p>On the other hand, more recent GBM libraries like CatBoost or LightGBM can handle categorical data and have reasonable defaults for doing that.</p> <p>I wouldn't say encoding categorical variables counts as 'a few modifications', because depending on what you do (one-hot encoding vs target encoding) your model's behavior can change significantly.</p>
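<p>A small sketch of how the same categorical column might be prepared differently for the two libraries (toy data; column names are made up, and exact library behaviour may differ between versions):</p> <pre><code>import pandas as pd
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor

df = pd.DataFrame({
    'kind': ['residential', 'industrial'] * 50,
    'area': range(100),
})
y = [i * 2.0 for i in range(100)]

# LightGBM: a pandas 'category' column is picked up as a categorical feature automatically
X_lgbm = df.copy()
X_lgbm['kind'] = X_lgbm['kind'].astype('category')
LGBMRegressor(n_estimators=10).fit(X_lgbm, y)

# XGBoost (classic setup): one-hot encode the same column first
X_xgb = pd.get_dummies(df, columns=['kind']).astype(float)
XGBRegressor(n_estimators=10).fit(X_xgb, y)
</code></pre>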
https://datascience.stackexchange.com/questions/32491/is-it-safe-to-say-if-features-are-generated-once-for-a-dataset-it-may-be-used-f
Question: <p>Given hourly updates of precipitation amount (for the preceding hour) and temperature, how would you calculate if it's slippery or not?</p> Answer: <p>For a purely physical model, i.e. no training data, I would say something along the lines of: IF the temperature was (and still is) below 0 centigrade AND the precipitation between now and up to an hour before the onset of freezing was larger than 0, THEN: SLIPPERY.</p> <p>If you have historical data connecting slippery observations with precipitation and temperature, i.e. you have a table like so:</p> <pre><code>hour:  precipitation:  temp:  slippery:
0      12.5            2.3    NO
1      11.0            2.1    NO
2      8.0             1.6    NO
3      9.5             1.1    NO
4      8.0             0.8    NO
5      7.9             0.3    NO
6      8.3             0.0    YES
7      7.0             -0.8   YES
</code></pre> <p>Then you could train a time-series network, such as a recurrent network or a transformer.</p>
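<p>A deliberately simplified sketch of that rule as a function (it ignores the history needed for the "up to an hour before the onset of freezing" part):</p> <pre><code>def is_slippery(temp_c, precip_mm_last_hour):
    # freezing temperature plus recent precipitation implies a slippery surface
    return temp_c &lt;= 0.0 and precip_mm_last_hour &gt; 0.0

print(is_slippery(temp_c=-0.8, precip_mm_last_hour=7.0))   # True
print(is_slippery(temp_c=2.3, precip_mm_last_hour=12.5))   # False
</code></pre>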
https://datascience.stackexchange.com/questions/131176/calculating-risk-or-amount-of-slipperiness-based-on-historical-weather-data
Question: <p>When engineering and deriving features, why should I not use Id as a field for tasks like regression?</p> Answer: <p>An Id, like a person’s name, is typically a unique identifier with no meaningful relationship to the target variable. Since it doesn’t carry any inherent pattern or information relevant to the outcome, it usually lacks statistical significance and can introduce noise into the model.</p>
https://datascience.stackexchange.com/questions/130496/why-should-i-not-use-id-as-a-field-in-feature-engineering-for-ml
Question: <p>I have a feature with specific categorical values, e.g. (Technology, Hardware, Software, Marketing, Events, etc.). Based on this and some other features, I am trying to classify the dataset into 2 categories: IsSoftwareSystem or NotSoftwareSystem. In this case, does this cause a reduction in accuracy because I am feeding the category itself into the data and trying to predict the same thing?<br /> Using Random Forest/XGB.</p> Answer: <p>You have two problems:</p> <ol> <li>Technical problem: As 10xAI said in a comment, if the target also belongs to the features then the model should very easily predict every instance correctly. So you should obtain perfect performance on the test set. The ML model doesn't care about &quot;cyclic dependency&quot;; it uses any good indicator it receives as a feature. Since you mention a reduction in accuracy, it means that there is an error somewhere in the process.</li> <li>Semantic problem: which problem are you trying to solve with this ML setup? If the goal is to distinguish software vs. non-software and you have as input a category &quot;software&quot;, then it's pointless to train an ML model. You can obtain the same result much more efficiently with a simple condition:</li> </ol> <pre><code>for instance in instances:
    if instance.category == &quot;software&quot;:
        instance.answer = &quot;software&quot;
    else:
        instance.answer = &quot;non-software&quot;
</code></pre>
https://datascience.stackexchange.com/questions/90469/cyclic-dependency-between-feature-and-predictor-class
Question: <p>In the classical linear regression implementation, if I suspect the square of the values of the column is correlated to the target, then I actually need to create a new column with the squares for the algorithm to make use of that.</p> <p>Is this also necessary when using neural networks? I know it's a broad question - are there cases where this is necessary and cases where it isn't?</p> Answer: <p>You don’t necessarily <em>need</em> to, according to the <a href="https://medium.com/predict/artificial-neural-networks-universal-function-approximators-cf5198224b58" rel="nofollow noreferrer">universal function approximation theorem</a>.</p> <ul> <li>It is easier for a neural network to learn an identity function than some other function, so if one of the inputs <strong>definitely</strong> needs to be squared your network will learn faster if you pass the input already squared</li> <li>If your network is sufficiently large it should work out that squaring that input is helpful and approximate the squaring function as part of the overall learning process</li> </ul>
https://datascience.stackexchange.com/questions/80938/do-i-need-to-square-a-column-if-i-want-a-neural-network-to-try-using-that
Question: <p>I'm working on one use case where I have to explore source code repo files. Different files will be categorical values for me. But with such a large number of files, the one-hot encoding comes out to be very large.</p> <p>Also, all files are divided among unique modules, such that each file belongs to a specific module. So the number of unique files associated with any specific module is small, and the one-hot encoding matrix for those is also not large.</p> <ol> <li>So if I define the different modules as categorical features, I'm not sure how to map the associated unique files.</li> <li>How can I derive a new feature from such a combination of two categorical features?</li> <li>Is there any other, better way to handle such a use case?</li> </ol> Answer: <p>Recently, practitioners have been representing categorical variables as embeddings for ML models. I can see a solution to your problem there.</p> <p>As your problem has a two-level hierarchy, you can consider two embeddings: one set of embeddings for modules and another set for files. For every file, you can take their combination and pass it as input to the model. In this way, both the module-level and file-level information will be captured.</p> <p>References: <a href="https://towardsdatascience.com/deep-embeddings-for-categorical-variables-cat2vec-b05c8ab63ac0" rel="nofollow noreferrer">https://towardsdatascience.com/deep-embeddings-for-categorical-variables-cat2vec-b05c8ab63ac0</a></p>
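<p>A rough PyTorch sketch of the two-embedding idea (the vocabulary sizes, embedding widths and task head are all made up for illustration):</p> <pre><code>import torch
import torch.nn as nn

n_modules, n_files = 50, 5000   # assumed numbers of distinct modules and files

class RepoItemEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.module_emb = nn.Embedding(n_modules, 8)
        self.file_emb = nn.Embedding(n_files, 16)
        self.head = nn.Linear(8 + 16, 1)

    def forward(self, module_idx, file_idx):
        # concatenate the module-level and file-level embeddings
        x = torch.cat([self.module_emb(module_idx), self.file_emb(file_idx)], dim=-1)
        return self.head(x)

model = RepoItemEncoder()
out = model(torch.tensor([3, 7]), torch.tensor([120, 4999]))
</code></pre>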
https://datascience.stackexchange.com/questions/82757/reduce-categorical-values
Question: <p>I have a database with three columns, y, x1 and x2:</p> <pre><code>&gt;&gt;&gt;   y     x1     x2
0     0.25  -19.3  -25.1
1     0.24  -18.2  -26.7
2     0.81  -45.2  -31.4
...
</code></pre> <p>I want to create more features based on the x columns. Until now I have just created random functions and tried to check their correlation with y, but my question is whether there is any proper way / common set of functions to create these new features. I have used PolynomialFeatures from scikit-learn, but as I understand it, it is not common to go above degree 3.</p> <pre><code>import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

# split x, y ...
poly = PolynomialFeatures(3)
X_poly = pd.DataFrame(poly.fit_transform(X))
</code></pre> <p>My end goal is to use those new columns in a random forest algorithm (I have more columns than x1 and x2, but those two are the ones that interest me and I would like to investigate them and their relationship more).</p> Answer: <p>You are going to have to do something - you can try combining them in different ways: multiply them together, divide them by each other, subtract one from another. Without the context around what these features actually relate to, it's difficult to say what would make sense. Ultimately, to make a new derived feature you are going to have to combine or transform them in some way.</p>
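<p>For instance, a few hand-crafted combinations built from the sample rows in the question (pandas/numpy assumed; which of these makes sense depends on the domain):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'y':  [0.25, 0.24, 0.81],
    'x1': [-19.3, -18.2, -45.2],
    'x2': [-25.1, -26.7, -31.4],
})

df['x1_times_x2'] = df['x1'] * df['x2']
df['x1_minus_x2'] = df['x1'] - df['x2']
df['x1_over_x2']  = df['x1'] / df['x2'].replace(0, np.nan)   # guard against division by zero
df['abs_diff']    = (df['x1'] - df['x2']).abs()
</code></pre>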
https://datascience.stackexchange.com/questions/85551/generate-new-features-from-two-columns
Question: <p>Say for example I am building a model to predict a customer churn event from Spotify, with my target being whether a customer churns in the next 90 days.</p> <p>One feature I might expect could be predictive of this event is customers checking their billing statements online - so I might engineer features for each customer on each training date to encode the information of how many times they have checked their billing statements.</p> <p>For example, I might create a feature CHECKBILL_CNT_0_10 which is a count of how many times this customer has checked their online bill in the last 10 days, with many of these such features across different time ranges.</p> <p>I have seen two different styles of how data scientists do this:</p> <ol> <li>CHECKBILL_0_10, CHECKBILL_0_30, CHECKBILL_0_90 ...</li> <li>CHECKBILL_0_10, CHECKBILL_10_30, CHECKBILL_30_90 ...</li> </ol> <p>Both technically encode the same information; however, I'm wondering if one of these options offers advantages over the other? I'm inclined to think that option 2 would be preferable since the features would be less correlated, &amp; therefore the model might learn more easily, but this is speculative.</p> Answer: <p>You may want to try both options out and see which is better. Feature engineering I think is more like a trial and error (iterative) process.</p>
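<p>For what it's worth, the two styles are easy to convert between, since the non-overlapping counts are just differences of the cumulative ones (made-up values below):</p> <pre><code>import pandas as pd

# style 1: cumulative windows
df = pd.DataFrame({
    'CHECKBILL_0_10': [2, 0, 5],
    'CHECKBILL_0_30': [3, 1, 6],
    'CHECKBILL_0_90': [4, 4, 6],
})

# style 2: non-overlapping windows derived by differencing
df['CHECKBILL_10_30'] = df['CHECKBILL_0_30'] - df['CHECKBILL_0_10']
df['CHECKBILL_30_90'] = df['CHECKBILL_0_90'] - df['CHECKBILL_0_30']
</code></pre>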
https://datascience.stackexchange.com/questions/84633/what-is-best-practice-to-feature-engineer-from-prior-event-counts
Question: <p>I need some advice for my feature engineering. Suppose I have 90 days follow-up data. on 12 patients and I have the vital status of the patients at the end of these 90 days (deceased=1, alive=0)</p> <pre><code>ID&lt;-as.factor(c(1,1,1,2,2,2,2,3,3,4,4,4)) time&lt;-c(0,12,36,0,7,23,68,0,23,0,32,45) Age&lt;-rnorm(12,45,9) Sexe&lt;-c(&quot;F&quot;,&quot;F&quot;,&quot;F&quot;,&quot;M&quot;,&quot;M&quot;,&quot;M&quot;,&quot;M&quot;,&quot;M&quot;,&quot;M&quot;,&quot;F&quot;,&quot;F&quot;,&quot;F&quot;) biology1&lt;-rnorm(12,12,3) biology2&lt;-rnorm (12,100,20) biology3&lt;-rnorm(12,45,9) biology4&lt;-rnorm(12,20,2) sign1&lt;-c(1,1,1,1,0,1,0,1,0,0,0,1) sign2&lt;-c(1,0,0,1,1,0,1,0,1,1,0,1) stage&lt;-c(3,3,4,2,2,1,1,3,2,3,2,3) Death&lt;-c(1,1,1,0,0,0,0,0,0,1,1,1) data&lt;-data.frame(ID,time,Age,Sexe,sign1,sign2,biology1,biology2,biology3,biology4,stage,Death) </code></pre> <p>Patients were seen at irregular rhythms where they benefited from clinical and biological examinations. I would like to make a model of mortality prediction. What kind of transformations will be adapted to the numerical variables for my case, and which take into account the evolutivity of the data, knowing that the data are not periodic. For the moment I am thinking of calculating the slopes for the numerical variables. What other transformation could I do that takes into account the evolutivity of the variables? The variable &quot;stage&quot;, is an ordinal variable and in principle the higher the stage, the more critical the state of the patient is, and the more likely he is to die, Would it be a good idea to do the One-Hot-Encoding? Or should I do Ordinal Encoding? And in the latter case is it a good idea to leave the variable with its 1,2, 3,4? (knowing that there is no interval property)</p> <p>What type of encoding could I do that takes into account the evolutivity of the binary variables (sign1 and sign2)? For example <code>patient 1</code> has 1,then 0, then 0 for <code>sign 2</code> Would it be a good idea to make a slope here as well?</p> Answer:
https://datascience.stackexchange.com/questions/88029/feature-engineering-and-longitudinal-data
Question: <p>Say you have a binomial distribution with $p$ very small ($\approx 0.001$).</p> <p>You are asked to predict the conditional success rate $SR=S/T$ with $S$ successes out of $T$ trials given a set of conditions $X$.</p> <p>One would expect (correct me if I'm wrong, though I ran simulations and am quite confident) $SR$ to have a downward trend with increasing $T$ when $T$ is still small, and to approach $p$ when $T$ is large.</p> <p>The distribution of trials and successes is not uniform across $X$, so the train set tends to give higher $SR$ to conditions with smaller $T$ values.</p> <p>How would you treat this data skew when constructing a model?</p> Answer: <p>When fitting a GLM (at least in R), I know there is an optional weight vector that you can include. This weight is not to give more importance to an observation, but rather to weight observations based on $T$, for example.</p> <p>The R documentation says:</p> <pre><code> For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes </code></pre> <p>So I believe this would help adjust for your non-uniformity. How to choose those weights can be tricky, I suppose, and can vary a lot with your data, but it's worth looking into if it looks like it will help your problem.</p> <p><a href="https://i.sstatic.net/D7TPT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D7TPT.jpg" alt="Simulation of problem"></a></p> <p>Do you have enough data that you can play with dropping data? Or maybe, if you simulate how much the $SR$ is inflated at low $T$, you can try to adjust for it somehow. It looks like after about 1000 trials the high success rate issue begins to mellow out, and after 3000 or so you start getting more reliable measures.</p>
https://datascience.stackexchange.com/questions/10463/skewed-binomial-data-for-small-p
Question: <p>I have a few independent variables that are normal and a dependent variable that is skewed, so I pick <strong>log(feature+SHIFT)</strong> to correct the skewness. The procedure I follow to get predictions is to just take <strong>exp(predictions)-SHIFT</strong>.</p> <p>Now how do I get my predictions for the following cases?</p> <p>1) I have a <strong>normal dependent variable</strong> and the <strong>independent variables</strong> are <strong>skewed</strong>; I transform only a <strong>few of them</strong> using the same <strong>log(feature+SHIFT)</strong>, where the <strong>shifts</strong> are <strong>unique</strong> for every feature.</p> <p>2) Transformations on <strong>both</strong> dependent and independent variables <strong>with different SHIFT values.</strong></p> <p>Suitable code examples will be highly appreciated. Thanks</p> Answer:
https://datascience.stackexchange.com/questions/17542/transformation-of-dependent-and-independent-variables
Question: <p>I've been using GIST, HOG and SURF descriptors for extracting features from different collections of Chest-X-rays and measuring performance using accuracy and area under the curve. These collections are obtained using different machinery, from different medical institutions, and, with different pixel resolutions. I could repeatedly see that one descriptor performs better than the other and the performance is not the same across these collections though all are frontal chest X-rays. What attributes to these differences in performance across the collections though they are from the same imaging modality?</p> Answer: <p>Feature-extraction mechanisms like GIST, HOG, etc are built and optimized to improve performance on given datasets. Because of this, they don't perform as well across datasets. It's kind of like putting specialized fuel in a vehicle that isn't built to utilize it - it might even do harm.</p> <p>Hand-engineered features are, as a rule, brittle. I once heard it said that the dirty secret of machine learning is just knowing how to transform your domain-specific information into meaningful features - after that, you can use an extremely simple classifier and it may do surprisingly well. The drawback is that the rules you built are very specific to your domain.</p> <p>Deep neural networks, and convolutional neural networks in particular, are an advancement in that they <em>learn</em> what features are useful about raw data - for CNNs, these are the raw pixel or time-series values. Instead of hand-building feature extraction mechanisms, these architectures <em>automatically</em> build them. </p> <p>One benefit of this is that if you use a CNN to identify images in general, you can re-use the top few feature extraction layers of the CNN on a different image recognition dataset, and re-train the bottom few layers to make the network specific to recognizing e.g. dog breeds. You can transfer what you learned about the statistical structure of natural images in general to other, more specific questions (general -> specific). In your case, the 'top few layers' are analogous to your GIST/HOG methods - and they wouldn't be expected to perform well when the task changes, because they were constructed for a specific task (specific -> other specific).</p>
https://datascience.stackexchange.com/questions/24764/why-is-there-a-difference-in-performance-across-the-feature-descriptors-for-the
Question: <p>I am working on the KDD dataset given in this <a href="https://www.kaggle.com/c/kddcup2012-track1#Description" rel="nofollow noreferrer">link</a>. </p> <p>The dataset is related to a typical recommendation systems dataset. So you find an item and information about the item. One of the information given about the Item is its category. Item-Category is a string <code>a.b.c.d</code>, where the character delimits the categories in the hierarchy <code>.</code>, ordered in a top-down fashion (i.e., category <code>a</code> is a parent category of <code>b</code>, and category <code>b</code> is a parent category of <code>c</code>, and so on.</p> <p>I'm not sure how to use this information correctly in my feature engineering. For example, the simplest information I can derive is for each Item I can estimate the topmost category to which it belongs. However, to go beyond this and use the subcategory information, how can I model this feature for a linear regression? </p> Answer:
https://datascience.stackexchange.com/questions/25910/feature-engineering-for-hierarchical-data
Question: <p>Are there any differences between <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow noreferrer">get_dummies</a> and <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html" rel="nofollow noreferrer">labelbinarizer</a> in terms of what they want to achieve? It seems to be both will somehow do a one-hot encoding.</p> Answer: <p>A couple of things comes to mind.</p> <p>get_dummies can transform a dataframe with many columns, whereas LabelBinarizer will only do one column. </p> <p>get_dummies outputs a dataframe (if the input is a dataframe) with a nicely formatted columns, whereas LabelBinarizer outputs a numpy array, so if you want to attach labels to them, you'd need to get them from the fitted instance of the LabelBinarizer. </p> <p>Inverse transform is more intuitive with LabelBinarizer, it has a method named inverse_transform, whereas with get_dummies you would need to do something like dummies.idxmax(axis=1)</p> <p>So overall, get_dummies seems to be a better choice</p>
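<p>A short side-by-side sketch of the two on the same column (toy labels):</p> <pre><code>import pandas as pd
from sklearn.preprocessing import LabelBinarizer

s = pd.Series(['red', 'green', 'blue', 'green'])

dummies = pd.get_dummies(s)        # DataFrame with named columns blue/green/red

lb = LabelBinarizer()
arr = lb.fit_transform(s)          # plain numpy array; column order given by lb.classes_
back = lb.inverse_transform(arr)   # recovers ['red', 'green', 'blue', 'green']
</code></pre>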
https://datascience.stackexchange.com/questions/27126/differences-of-get-dummies-and-labelbinarizer
Question: <p>Days ago,One AI financial service provider offered us a lesson and mentioned that you are supposed to perform specific feature engineering according to the specific algorithm you are using.For example,when using logistics regression,fitting more features(uncorrelated)like binning the continuous variable into discrete ones are often suggested.Because logistics regression is a simple algorithm and we try to raise dimension in the way that samples will be separated better.</p> <p>I searched a lot(maybe not yet),most of materials are "why/what feature engineering important","scaling/standardization/binning continuous variable","dealing with null value" or some theoretical comments with no discrete manipulation.</p> <p>why and how the specific feature engineering should work on specific algorithm.Or any advice on this saying,is it right or wrong?what do you think.any comments are appreciated.</p> <h2>(I am not good at English,sorry about that if I am not clear enough)</h2> <p>I am not looking forward a detailed answer,some deep thinking about this part is good.</p> Answer: <p>In general, features are engineered so as to retain optimum relevant information present in the dataset with succinct representation and then features are adapted so that an algorithm can accept it as input.</p> <p>Feature engineering generally involves methods like binning, PCA, etc. Adapting those features to pass to algorithm is another step which is a very small part of the feature engineering step. Adapting example: for an image, we may have to reshape the image like below so that x[0] points to the image</p> <pre><code>x = image.img_to_array(img) normalise(x) // feature engineering x = x.reshape((1,) + x.shape) // Adapting for algorithm </code></pre> <p>With this understanding, if features are generated once for a dataset, it may be used for any relevant algorithm with corresponding adaption.</p>
https://datascience.stackexchange.com/questions/28398/specific-feature-engineering-for-specific-algorithm
Question: <p>I'm pretty new to machine learning.</p> <p>I know I can represent a set of discrete values as a vector of 0/1 values. For instance, in the set of features {a, b, c, d, e}, the subset containing <code>{a, c}</code> can be represented as <code>[1, 0, 1, 0, 0]</code> and the subset containing <code>{c, d, e}</code> can be represented as <code>[0, 0, 1, 1, 1]</code>, meaning I have as many dimensions as elements, which is workable when you have a finite (and small) number of elements.</p> <p>But now, for a clustering task, I want to represent sets of sets, like, for instance, representing the set <code>{{a, c}, {c, d, e}}</code>. How can I do that? Here, the basic 0/1 approach won't work, as I'll have <code>2^n</code> possible combinations. What is the workaround, if any?</p> <p><strong>Edit</strong>: here is the transcription as a less abstract, more business problem. I want to find clusters of people according to the trips they made. A trip consists in a set of cities visited, and a set of transportation used. For instance, people might have visited cities in the set <code>{Rabat, Alger, Marrakech, Tunis, Hammamet}</code> with transportation such as <code>{car, plane, train}</code>. A trip could be <code>{Rabat, Marrakech, plane}</code> or <code>{Alger, Marrakech, Tunis, car, train}</code>. Note the order in which cities were visited, or the order in which vehicles were used, is not considered.</p> <p>An example of the items I want to find clusters of could be a person having made those two trips, represented as <code>p1 = {{Rabat, Marrakech, plane}, {Alger, Marrakech, Tunis, car, train}}</code>.</p> Answer: <p>You are describing <a href="https://en.wikipedia.org/wiki/One-hot" rel="nofollow noreferrer">one-hot</a> encoding. There is a slot for each element. If the element is present, the slot has a one, and if the element if not present, the slot has a zero.</p> <p>Typically, people will encode orthogonal features in different dimensions. In your case, cities would be one dimension and transportation type would be another dimension. A given data point would be one-hot encoded in a matrix (a 2D collection of vectors). If you want to add people, you would another dimension. That would create a 3D tensor with each person in a row.</p> <p>Another way to compress your data is to encode not the cities (nodes) but the path between the cities (edges). From that encoding, you can create a <a href="https://en.wikipedia.org/wiki/Laplacian_matrix" rel="nofollow noreferrer">Laplacian matrix</a> that sets up <a href="https://en.wikipedia.org/wiki/Spectral_clustering" rel="nofollow noreferrer">spectral clustering</a>. Since you have multiple transportation methods for the routes, you can create clusters with multi-dimensional spectral clustering.</p>
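<p>As a sketch of the first suggestion, scikit-learn's <code>MultiLabelBinarizer</code> can build the per-dimension 0/1 vectors, which can then be concatenated per trip and aggregated per person (the aggregation choice here is just one option):</p> <pre><code>import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer

cities = ['Rabat', 'Alger', 'Marrakech', 'Tunis', 'Hammamet']
transports = ['car', 'plane', 'train']

city_mlb = MultiLabelBinarizer(classes=cities)
trans_mlb = MultiLabelBinarizer(classes=transports)

trips_p1 = [
    ({'Rabat', 'Marrakech'}, {'plane'}),
    ({'Alger', 'Marrakech', 'Tunis'}, {'car', 'train'}),
]

trip_vectors = np.hstack([
    city_mlb.fit_transform([c for c, _ in trips_p1]),
    trans_mlb.fit_transform([t for _, t in trips_p1]),
])
# one possible person-level representation: average (or sum) of that person's trip vectors
p1_vector = trip_vectors.mean(axis=0)
</code></pre>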
https://datascience.stackexchange.com/questions/29693/how-to-represent-a-set-of-sets-as-a-vector
Question: <p>I have some data in raw csv files which I would like to store in a MySQL database. The problem is there are constant feature engineering done on this dataset so coming up with one schema to fit all the needs is not possible. The approach I thought of was to have one main table where the original data is held, and for each new feature that is created, a new table is created. Then, the user can join the original table with other tables which includes the features they want and use it for their own purposes. </p> <p>With the above approach, I'm worried about having too many joins when a user needs numerous features. Please advise on alternative approaches to this problem.</p> <p>Thanks in advance!</p> Answer: <p>Depending on the technical level of your users, the frequency of the update, the complexity of the transformation, the need to share these features among the users, etc. would a custom VIEW for each user be a feasible solution?</p> <p>Alternately, would you consider some ETL tools where you can create/ modify calculated columns, customizing the data pipeline as needed?</p>
https://datascience.stackexchange.com/questions/32414/storing-engineered-features-in-a-database
Question: <p>I am working on a problem in which I have several instances that have predictors with activity over various different time periods (i.e. &lt;3 months to well over 20 months). Originally I attempted to use knowledge I have about this problem (it is an opportunity-to-sale conversion model) and learned that the average time for a deal to close is about 9 months, so I broke my predictors up into three-month intervals. However, I took another look at the lengths of these deals and saw that there are a variety of instances with durations that are not even close to 9 months, so this idea does not make sense.</p> <p>The only idea I have come up with is just creating a duration column, where I subtract the start and the stop date, and then just doing the summation for each predictor. However, I feel that the instances might get incorrectly labeled because some might have an overwhelmingly higher amount of activity than others due to the duration of the deal. Has anyone else encountered such a problem? Not sure if this is a common problem, but a quick glance at google/reddit did not come up with anything (I could be asking the question wrong).</p> Answer: <p>I had the same problem. You can use aggregation functions, for example Max, Min, Avg, count, std, or some calculation like the slope of a fitted line. In cases where you have different period lengths, you can divide your value by the number of days in each period.</p>
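<p>A small pandas sketch of those aggregations with duration normalisation (the table layout and column names are made up):</p> <pre><code>import pandas as pd

events = pd.DataFrame({
    'deal_id':  [1, 1, 1, 2, 2],
    'activity': [3, 5, 2, 10, 12],
})
deals = pd.DataFrame({
    'deal_id':       [1, 2],
    'duration_days': [270, 45],
})

agg = events.groupby('deal_id')['activity'].agg(['sum', 'mean', 'max', 'std', 'count'])
agg = agg.join(deals.set_index('deal_id'))
# normalise by deal duration so long and short deals become comparable
agg['activity_per_day'] = agg['sum'] / agg['duration_days']
</code></pre>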
https://datascience.stackexchange.com/questions/34423/how-to-aggregate-data-where-instances-occur-over-different-time-intervals
Question: <p>I have created some new features for my model. I found that people use a KDE plot to find the correlation between a created feature and the target variable, but I am not really sure how to read the correlation from the KDE.</p> <p>Any help on how to interpret the correlation from a KDE plot would be very helpful.</p> Answer: <p><a href="https://www.kaggle.com/c/home-credit-default-risk/discussion/62669" rel="nofollow noreferrer">CoreyLevinson on kaggle</a> answered my question. I am simply adding the explanation.</p> <p>The KDE shows the density of the feature for each value of the target. There are usually 2 colored humps representing the 2 values of TARGET. If the humps are well separated and non-overlapping, then there is a correlation with the TARGET. If the humps overlap a lot, then the feature is not well correlated with the TARGET, because the TARGET is roughly equally common at those values of the feature.</p>
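<p>For example, a seaborn one-liner along those lines (assuming seaborn 0.11+ and a DataFrame with the engineered feature and the TARGET column; the data below is simulated):</p> <pre><code>import numpy as np
import pandas as pd
import seaborn as sns

df = pd.DataFrame({
    'new_feature': np.r_[np.random.normal(0, 1, 500), np.random.normal(3, 1, 500)],
    'TARGET':      [0] * 500 + [1] * 500,
})

# one density curve per TARGET value; well-separated humps suggest a useful feature
sns.kdeplot(data=df, x='new_feature', hue='TARGET', common_norm=False)
</code></pre>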
https://datascience.stackexchange.com/questions/36575/kde-plot-for-interpreting-the-correlation
Question: <p>I'm using decision tree learning to try and classify a device based its components. Different devices have a different number of components and the location of these components within the device is important.</p> <p>Device 1 might have components 9, 3, 8, 4, and 1 in that order. Then device 2 might have components 4, 3, 2, 6, and 7 in that order. </p> <p>The issue with this is I'm finding it difficult to convert these into features.</p> <p>I would have the features of each row be the locations and the value be the name of the component in that location but with some devices having few (&lt;10) components and some with many more (>60) this could lead to many blank values.</p> <p>I'm worried that when training if most of the training data is on devices with for example &lt; 40 components, when trying to classify a device with > 40 components I'm assuming it would go terribly wrong, but you also can't just use devices with more than 40 components because then it will never learn to work for simple devices with smaller numbers of components.</p> <p>If there is a technique or way to get around this please let me know! Or if I've come at this problem the wrong way and need to rethink let me know that as well!</p> Answer: <p>I think you need to create a dataset with features covering all component / position options - i.e. create a dataset that is 60+ columns wide. This dataset would contain observations for both larger and smaller devices (you might want to take care in the balance between small and large devices). This is because many ML techniques require datasets of the same structure to be used in training and prediction.</p> <p>In this dataset you may have many observations where the features are 0 or blank (e.g. smaller devices). Depending on the decision tree implementation you are using, you might have to perform some data cleaning to deal with the blank features.</p> <p>From your question, I think you're approaching the problem the right way. I would say that if you're interested finding out the feature importance you might need to switch out features. As an example, if you're interested in finding out which component is most predictive of a device, you might want to have the columns of your dataset representing component number (and vice versa if you're interested in finding out which position is most predictive of a device).</p>
https://datascience.stackexchange.com/questions/42494/how-important-is-it-for-each-row-of-data-to-have-the-same-number-of-features
Question: <p>I have a multi class classification problem where I should predict the passengers for flights (0-7 classes). The training set consists of the following features:</p> <ul> <li>Date of the flight</li> <li>Mean of the weeks that the passengers bought their tickets</li> <li>Standard Deviation of the above</li> </ul> <p>I extracted from the date, the day of the week, the month and if the flight is in a high season. What other features could I extract from the date? How could I use the mean and standard deviation to create new features?</p> Answer: <p>Date fields are quite interesting data since the limit of what you can "feature engineer" with them is your imagination. However, it is difficult to know a priori if one of them will improve your model before you try it.</p> <p>Here some ideas:</p> <ol> <li>Year</li> <li>Month (1-12)</li> <li>Day of Month (1-31)</li> <li>Day of week (1-7)</li> <li>Week of year (1-52)</li> <li>The quarter of the year (1-4)</li> <li>Is it weekend or weekday (0,1)</li> <li>Season (Winter, Spring, Summer, Autumn)</li> </ol> <p>If you have also the time on the date field, then you can try these:</p> <ol> <li>Hour of day (1-24)</li> <li>Part of the day based on eating habits (breakfast, lunch, dinner etc)</li> </ol> <p>If you have the destination of the flight, then you can use an external source and get the holidays or specific "big events"</p> <ol> <li>Is the flight during a holiday (0,1)</li> <li>What is the distance in days from the closest holiday (before and after the flight)</li> <li>Any big event on the destination like Olympic Games, Superbowl, Football finals etc</li> </ol>
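<p>Most of the calendar features listed above drop out of the pandas datetime accessor directly (a minimal sketch):</p> <pre><code>import pandas as pd

flights = pd.DataFrame({'date': pd.to_datetime(['2019-01-05', '2019-07-14', '2019-12-24'])})

flights['year']         = flights['date'].dt.year
flights['month']        = flights['date'].dt.month
flights['day']          = flights['date'].dt.day
flights['day_of_week']  = flights['date'].dt.dayofweek + 1            # 1-7
flights['week_of_year'] = flights['date'].dt.isocalendar().week
flights['quarter']      = flights['date'].dt.quarter
flights['is_weekend']   = (flights['date'].dt.dayofweek &gt;= 5).astype(int)
</code></pre>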
https://datascience.stackexchange.com/questions/43514/feature-engineering-from-date-mean-and-standard-deviation
Question: <p>I am facing a dilemma with a project of mine. One of the variables (numerical) doesn't have enough data, i.e. almost 99% of the data are missing. However, upon talking to the domain experts, it appears that this particular variable is important to the problem we are trying to solve (model). Initially, I thought of converting it to a binary variable such that 1 will represent that the variable has a value at that position and 0 will represent the missing value. However, it seems that we are missing information by doing that.</p> <p>Can anybody suggest a way to go forward?</p> <p>One thought that came to me is to discretize the variable using quantiles, but then what to do with the missing values?</p> <p>Another one is to include both the binary variable along with the original variable in the model, with missing values replaced by some imputed values. But I cannot come up with any logical reasoning as to why this will or will not work.</p> <p>Any light on this matter would be greatly helpful (other than, of course, dropping it altogether).</p> <p>Thanks.</p> Answer: <p>The best way to deal with this kind of missing-value problem can only be determined empirically. It will vary depending on your dataset and algorithm of choice. But here are a few things you can try.</p> <p><strong>Impute the missing value</strong></p> <ul> <li>Impute the missing value with the mean</li> <li>Impute the missing value with special values. For example, if the variable takes only positive values, then you can encode the missing value as 0.</li> </ul> <p><strong>Try to predict the missing value</strong></p> <ul> <li>Use the other variables to predict the missing value. (However, if you can actually make a good prediction of the missing value out of the rest, then this might suggest you can drop this variable altogether.)</li> </ul> <blockquote> <p>Another one is to include both the binary variable along with the original variable in the model, with missing values replaced by some imputed values.</p> </blockquote> <p>There is nothing inherently wrong with this method. You should certainly try it. One possible downside when you have 99% of the values missing is that the original variable is going to be highly correlated with the derived <code>is_missing</code> variable. This can be problematic depending on the particular classification algorithm you are using. For example:</p> <ul> <li>It is widely known that multicollinearity is a huge problem for any variant of linear regression</li> <li>Support Vector Machines suffer a similar problem [<a href="https://stats.stackexchange.com/questions/149662/is-support-vector-machine-sensitive-to-the-correlation-between-the-attributes">Ref</a>]</li> <li>Naive Bayes assumes independence among features. This is an even stronger assumption.</li> <li>The <code>is_missing</code> variable is a categorical variable; this makes it tricky to define a distance metric in the K-Nearest-Neighbour algorithm</li> </ul>
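<p>Scikit-learn can combine the imputation with the binary "was missing" indicator in one step, which is a compact way to try the last idea (sketch with made-up values):</p> <pre><code>import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

X = pd.DataFrame({'rare_var': [np.nan, np.nan, 3.2, np.nan, np.nan, 7.5]})

# column 0 of the output: mean-imputed values; column 1: the missingness indicator
imp = SimpleImputer(strategy='mean', add_indicator=True)
X_imputed = imp.fit_transform(X)
</code></pre>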
https://datascience.stackexchange.com/questions/45284/problem-with-important-feature-having-a-lot-of-missing-value
Question: <p>I have a dataset, where a particular feature is a collection of many JSON objects for a single feature.</p> <pre><code>Timestamp Observations 1 {"name":"bob", "place":"TX"},{"name":"ann", "place":"NY"},{"name":"jack", "place":"MA"},{"name":"jill", "place":"CA"} 2 {"name":"bob", "place":"TX"},{"name":"ann", "place":"NY"},{"name":"jack", "place":"MA"},{"name":"jill", "place":"CA"}, {"name":"darth", "place":"XX"} </code></pre> <p>How can I encode the Observations feature for each tuple, so that I can use it to do some sort of anomaly detection? (Note that for the second tuple, there is an extra JSON object)</p> Answer:
https://datascience.stackexchange.com/questions/45996/feature-encoding-for-multiple-json-objects
Question: <p>I want to create a new metric based on some features but dont know how to start. I basically want to create a "job satisfaction level" metric based on some features. The features could be work hours, shift, If working on weekend and so on. I dont know how to start. In ideal world, I want to comp up with weights for each of these features and compute a final value and then put the final value in a job satisfaction level bucket. I then want to use this metric in my training model. Is there any methodology to do so? Lets assume I have different warehouses with different values for those features and I want to compute a "job content" or "job satisfaction" metric based on the features I mentioned above for all of these locations. Then I want to use this new computed metric with my other features for an employee resignation prediction. Any help is appreciated.</p> <p>Thanks</p> Answer: <p>Indeed, there are methodologies that have been tested elsewhere, some with greater and less success.</p> <p>I will propose one of them to build a prediction of job satisfaction, which you can then enter as an explanatory variable in a supervised model of employee resignation, whose methodology you can review in this tutorial with Python code that I did some time ago: <a href="http://https:%20//%20github.com/iair/hrAnalytics" rel="nofollow noreferrer">HR analytics MVP</a></p> <p>Methodology to generate a satisfaction level prediction: Deduce the importance of the variables from a score that represents the satisfaction declared by a subset of the members of your company</p> <p>I think the best way to start doing a good MVP (minimum viable product) with which you can deliver relatively fast results having a result that incorporates elements of your company is one in which you derive the importance of the features from a dataset in which have your explanatory variables and a target with a declarative satisfaction survey made to the workers from which the score that is the variable explained was calculated. For this you must follow the following steps:</p> <p>1.-You design a satisfaction survey that will be answered by the workers and that will allow you to calculate a Score from it. Here the important thing is that the design of the survey is as complete as possible, that the number of respondents allows you to draw conclusions at a statistical level and, most importantly, that of those who answer the survey have how to extract the raw data that allows you later deduce which are the most relevant variables. <a href="https://iedunote.com/measuring-job-satisfaction" rel="nofollow noreferrer">Here are some resources</a> that can give you some ideas of how to generate the satisfaction level index</p> <p>2.-Then, using that dataset generated in step 1, you can make a feature engineer and establish which variables have the greatest impact on the satisfaction declared by the workers.</p> <p>3.-Solved the point 2 you can generate predictions on the score and apply your model to the future and with other workers of the same company.</p> <p>Important: Whenever you run the prediction for the next period you should do a few satisfaction surveys in each iteration to confirm that the model is still valid and to use that data as a permanent retraining. 
In general, the model should be useful as long as the context of the company does not undergo major changes (mergers, significant deterioration of the work environment due to massive dismissals, etc.), since in such cases you should try to capture the short- and long-term effects of these shocks.</p> <p>Although this methodology is a good starting point, it omits many things that are difficult for a company to detect because they correspond to variables exogenous to it, such as:</p> <p>a.- The person changes their interests and/or goals in terms of career. Example: a software developer who wants to shift the focus of their career towards a more commercial role or another specialty such as Data Science or Data Engineering.</p> <p>b.- The person changes their objectives and/or how they prioritize them in their life. Example: a person who wants to start dedicating more time to their personal life because they went through a crisis with their partner.</p> <p>Here is an example of where this methodology was used: <a href="https://www.researchgate.net/profile/Maurizio_Carpita/publication/233420755_Mining_the_drivers_of_job_satisfaction_using_algorithmic_variable_importance_measures/links/548196ed0cf22525dcb6256d.pdf" rel="nofollow noreferrer">Mining the drivers of job satisfaction using algorithmic variable importance measures</a></p> <p>P.S.: There are other lines of research that avoid extracting the satisfaction index from a direct query to the employee and instead use other variables, such as equivalent income or time spent at the company, as an equivalent metric. It is not my favorite line, but here is an example of it: <a href="https://www.researchgate.net/publication/46443377_Measuring_job_quality_and_job_satisfaction" rel="nofollow noreferrer">Using equivalent income as metric</a></p>
https://datascience.stackexchange.com/questions/48361/creating-a-metric-based-on-some-features
Question: <blockquote> <p>Goal: Predict a performance score of a place of interest in a given city based on (amongst others) the number of restaurants within 200m.</p> <p>Dataset: <span class="math-container">$D$</span> with a feature <span class="math-container">$x$</span> indicating the <span class="math-container">$\textit{number of restaurants within 200m}$</span> of some place of interest (target variable, <span class="math-container">$y$</span>)</p> <p><span class="math-container">$\textbf{Question}$</span>: How can <span class="math-container">$x$</span> be transformed, such that it fits e.g. a Gaussian distribution for better modelling?</p> </blockquote> <p>I am not sure how this is handled. If it were a categorical variable, it could just be a one-out-of-k encoding.</p> <p>Are there any standard approaches to this? A logarithmic transformation does not yield anything useful.</p> Answer:
https://datascience.stackexchange.com/questions/53692/transformation-of-non-categorical-discrete-feature
Question: <p>I have a data set of 700+ mil records with a feature that should yield good predictive power. The problem is that it has far more unique values than it should. The 10k+ unique values should map to about 150. I have that list of 150 values I want them to map to. Thinking about using a distance algorithm (levenshtein?) to map unique values from data to the desired set of values. What are some other ways to think about this problem? </p> <p>Ex. 'Table', 'tab', 'tbl' should all map to 'table'. I'm not about to manually build a lookup table for this process given the volume of unique values. The unique values in the data are all derived from the desired values - they are acronyms or abbreviations. </p> Answer: <p>I agree with the idea of using a similarity or distance measure (approximate string matching). I would try a bunch of them and test them on a sample: Levenshtein, Jaro, overlap coefficient or cosine (optionally with TF-IDF) over bi/tri-grams of characters. </p> <p>I would also try to capture the most common abbreviations and have a lookup table for these common cases because:</p> <ol> <li>Computing similarity/distance measures takes time, so it's inefficient to compute the same result many times for the same string (and it's likely that some of these abbreviations are used many times).</li> <li>That gives you an opportunity to check that the mapping is correct (or to fix if it's not) at least for the most common cases, thus minimizing the overall amount of noise in the data.</li> </ol>
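<p>As an illustration of the answer above, here is a minimal Python sketch combining a lookup table for known abbreviations with an approximate match against the canonical list. It uses the standard-library <code>difflib</code> matcher purely for convenience; the canonical values, seed abbreviations and cutoff are made-up placeholders, and you could swap in Levenshtein, Jaro or n-gram cosine similarity as discussed.</p>
<pre class="lang-py prettyprint-override"><code>import difflib

# Hypothetical canonical vocabulary (in practice, your list of ~150 desired values)
canonical = ['table', 'chair', 'lamp']

# Cache of already-resolved raw values, seeded with known common abbreviations
lookup = {'tbl': 'table', 'tab': 'table'}

def map_to_canonical(raw, cutoff=0.4):
    key = raw.strip().lower()
    if key not in lookup:
        # Nearest canonical value by difflib similarity; None if nothing is close enough
        match = difflib.get_close_matches(key, canonical, n=1, cutoff=cutoff)
        lookup[key] = match[0] if match else None   # memoize for the next occurrence
    return lookup[key]

print(map_to_canonical('Table'))   # 'table' via approximate matching
print(map_to_canonical('tbl'))     # 'table' via the lookup table
</code></pre>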
https://datascience.stackexchange.com/questions/56112/reduction-of-feature-values
Question: <p>Could changing the hyperparameters of a model change <strong><em>relative</em></strong> feature importance?</p> Answer: <p>Yes. The most obvious example is Lasso regression: for an increasing <span class="math-container">$\alpha$</span> parameter you will have more and more coefficients set to zero. This results in a smaller set of features and thus a bigger share of the feature importance for the remaining features.</p>
https://datascience.stackexchange.com/questions/71818/relative-feature-importance-w-r-t-hyperparameters
Question: <p>Hi all, I would love to hear your answers on this. Let's say I have two variables, voltage and current, in my data set. I could add another feature by squaring the current (so as to calculate power).</p> <p>Is this an example of feature engineering?</p> <p>Recently I worked on a diameter prediction for asteroids and took the natural log of some features, which worked well.</p> <p>Can someone provide some insight as to why this may have improved the model's performance?</p> Answer: <p>Sure, that's feature engineering.</p> <p>If you're fitting a linear model, then you are looking for features that have a linear relationship with the predicted value. If you're predicting, say, the cost per hour of a device consuming current I, then clearly that's directly related to power, not current, so <span class="math-container">$I^2$</span> is more likely to be useful.</p> <p>What you want to be careful about is blindly trying a bunch of transformations of the input; it's possible for a small data set that one odd function of an input happens to be predictive in that sample, but doesn't generalize.</p>
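<p>For concreteness, a small pandas sketch of both transformations mentioned above (squaring and taking the log), using made-up column names and values:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'voltage': [220.0, 230.0, 240.0], 'current': [1.2, 0.8, 1.5]})

# Squared current as a power proxy (exact only for a fixed resistive load)
df['current_squared'] = df['current'] ** 2

# log1p compresses heavily right-skewed features such as asteroid sizes
df['log_current'] = np.log1p(df['current'])
print(df)
</code></pre>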
https://datascience.stackexchange.com/questions/72584/is-it-suitable-to-change-a-feature-by-itself-to-generate-an-another-feature
Question: <p>In the tutorial, they normalize the data and say "The mean and standard deviation should only be computed using the training data"</p> <p>What does this refer to? Why should you only use the training data?</p> Answer: <p>When building <strong>any</strong> Machine Learning model, the only observable data you have is training data. Test data is supposed to be unobserved data, meaning that even though you might have it now, you need to act as if you didn't. When you apply normalisation, you first observe the data to get the parameters you need. As you are only supposed to be able to observe the training data, you can't use the test data to calculate those values. Doing so would be like cheating, as you are accommodating your parameters to new <em>unobserved</em> data (how can you observe unobserved data?).</p> <p>Imagine you build a model today and you want to make predictions tomorrow. You can't use tomorrow's data to build your model since you don't have it yet. You are not supposed to know tomorrow's mean and std, though your hope is that they will be similar enough. That is why when you normalise/standardise you get the parameters with the training data and then use them to transform both train and test data, so you can use them as inputs for your model.</p>
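<p>A minimal sketch of what this looks like with pandas, using a toy DataFrame (column names and split point are arbitrary):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'t': range(10), 'value': [3, 5, 4, 6, 8, 7, 9, 11, 10, 12]})
train_df, test_df = df.iloc[:8], df.iloc[8:]    # time-ordered split, no shuffling

# Statistics are computed on the training portion only
train_mean = train_df.mean()
train_std = train_df.std()

# ... and those same training statistics are applied to both splits
train_norm = (train_df - train_mean) / train_std
test_norm = (test_df - train_mean) / train_std
</code></pre>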
https://datascience.stackexchange.com/questions/74018/tensor-flow-time-series-tutorial-question
Question: <p>I've been exploring the use of XGBoost in many different applications. Up to now, I have always found the best results with shallow trees (from 1 to 3 levels), with the rest of the parameters very dependent on the problem.</p> <p>On my current assignment, I found that I get much better performance if I use &gt;300 trees with a depth &gt;20! I understand that this says a lot about the complexity of the issue, but I can't stop wondering if there is some feature transformation I could do to change this. Ex: by doing PCA and adding the resulting components to a dataset, one can sometimes not only replace several features with a smaller number but also capture higher-level relationships.</p> Answer:
https://datascience.stackexchange.com/questions/89661/xgboost-with-deep-trees
Question: <p>I have a scenario in which I'm required to run my analysis at the Account level. One of the features that I'd like to look at is the no. of subscriptions against an account. There can be multiple subscriptions against one account. I wonder how I can "aggregate" these multiple subscriptions and roll them up at the Account level, such that I have a single row for each account.</p> <p>I could think of binary encoding, but I have 5000 products and that would require creating that many features.</p> Answer: <p>You can use the pandas groupby function to group your rows by account and then perform your desired aggregation on them.</p>
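<p>A small, hypothetical example of that roll-up (column names invented for illustration):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

# One row per subscription
subs = pd.DataFrame({
    'account_id':  [42, 42, 7, 7, 7],
    'product':     ['a', 'b', 'a', 'c', 'd'],
    'monthly_fee': [10.0, 25.0, 10.0, 5.0, 12.5],
})

# One row per account
account_features = subs.groupby('account_id').agg(
    n_subscriptions=('product', 'count'),
    n_distinct_products=('product', 'nunique'),
    total_monthly_fee=('monthly_fee', 'sum'),
).reset_index()
print(account_features)
</code></pre>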
https://datascience.stackexchange.com/questions/60589/aggregate-categorical-data
Question: <p>I have time-series data that track event occurrence in 3 locations. Here's a sample:</p> <pre><code> Count Total Location A B C Date 2018-06-22 0 1 1 2 2018-06-23 2 1 0 3 2018-06-24 0 0 1 1 2018-06-25 2 2 1 5 2018-06-26 0 3 1 4 </code></pre> <p>I would like to use the data to predict the total number of event occurrences at a given date in the future. How do I test if an event happening in one location has an impact on events happening in another location (dependency)? I believe that if an event happening in locations B and C are dependant, I should sum the 2 columns together as 1 feature in my model.</p> Answer: <blockquote> <p>How do I test if an event happening in one location has an impact on events happening in another location (dependency)?</p> </blockquote> <ul> <li><a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient" rel="nofollow noreferrer">Pearson correlation</a> between the two columns would already give you a simple indication of whether there is a dependency relation.</li> <li>A <a href="https://en.wikipedia.org/wiki/Chi-squared_test" rel="nofollow noreferrer"><span class="math-container">$\chi$</span>-square test</a> would tell you whether there is a significant difference between an observed variable (e.g. count in one location) and an expected variable (count in the other location). In other words, it can tell you whether the variables are independent or not.</li> <li>The <a href="https://en.wikipedia.org/wiki/Conditional_probability" rel="nofollow noreferrer">conditional probability</a> <span class="math-container">$p(A|B)$</span> of a variable A given the other variable B tells you how likely the event A is assuming the event B happens. <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are independent if <span class="math-container">$p(A|B)=p(A)$</span> (note that it's unlikely to be exactly equal in the case of a real sample).</li> </ul> <blockquote> <p>I believe that if an event happening in locations B and C are dependant, I should sum the 2 columns together as 1 feature in my model.</p> </blockquote> <p>Unless you have a specific reason to do that (e.g. you want to consider a large area which includes locations B and C), this doesn't make a lot of sense:</p> <ul> <li>first dependency is not &quot;all or nothing&quot;, two variables can have a certain degree of dependency but it doesn't mean that they follow each other exactly. Therefore you would lose some information by merging them into one feature.</li> <li>this would make it impossible to predict future events for a specific location, for instance B, if the two values for B and C are combined together.</li> </ul>
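<p>To make the first two bullet points concrete, here is a small SciPy sketch on made-up daily counts for locations B and C; with real data you would use the full series:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from scipy.stats import chi2_contingency, pearsonr

counts = pd.DataFrame({
    'B': [1, 1, 0, 2, 3, 0, 1, 2],
    'C': [1, 0, 1, 1, 1, 0, 0, 2],
})

# Linear dependency between the two daily counts
r, p_corr = pearsonr(counts['B'], counts['C'])

# Chi-square test of independence on the contingency table of observed counts
table = pd.crosstab(counts['B'], counts['C'])
chi2, p_chi2, dof, expected = chi2_contingency(table)

print(f'Pearson r={r:.2f} (p={p_corr:.3f}), chi-square p={p_chi2:.3f}')
</code></pre>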
https://datascience.stackexchange.com/questions/77921/test-for-feature-dependencies-in-time-series-modelling
Question: <p>Imagine there is a service providing credit history for customers in the form of a list of their loans. Let's call it <strong>my-loan-service</strong>. For the sake of simplicity - I can <code>GET http://my-loan-service/42</code> (where 42 is my customer id) and get back <code>json</code></p> <pre class="lang-json prettyprint-override"><code>{ &quot;loans&quot;: [ { &quot;loanId&quot;: 1, &quot;creditLimit&quot;: 1000 }, { &quot;loanId&quot;: 2, &quot;creditLimit&quot;: 2000 } ] } </code></pre> <p>Also, this service provides its data for data analysis to some OLAP data storage. In short - we have a table</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>customerId</th> <th>loanId</th> <th>creditLimit</th> </tr> </thead> <tbody> <tr> <td>42</td> <td>1</td> <td>1000</td> </tr> <tr> <td>42</td> <td>2</td> <td>2000</td> </tr> </tbody> </table> </div> <p>There is also a data scientist - Bob - working with this data storage. His task is to implement and serve a model deciding whether to accept or reject new loan applications from customers based on data from <code>my-loan-service</code>. So far so good, Bob came up with 2 features - number of loans and total credit limit. Bob used plain numpy or pandas to transform raw data into his features and some popular framework (sklearn, tensorflow or any other) to train and test the model. Now, he can serialize his model and serve it using his favorite model serving solution.</p> <p><strong>And here is the culmination followed by the question:</strong> <br> Any model serving instruments/frameworks or SaaS/PaaS solutions I have looked into expect to get features as input. But in our case I can't get features from <strong>my-loan-service</strong> - only <code>raw-like</code> data. Obviously, I can create another service - <strong>feature-calculation-service</strong> - just to get loans, count them, sum up credit limits and then pass these features to the served model. This way, every time someone wants a new feature, I'll be forced to duplicate its feature-engineering logic in <strong>feature-calculation-service</strong>. And Python is currently not used for production web services - only for data analysis. So I am asking for any tips that can help me avoid duplicating feature engineering logic for model serving.</p> Answer:
https://datascience.stackexchange.com/questions/90453/serving-feature-pipeline
Question: <p>When Machine Learning libraries don't support categorical features those features can be one-hot encoded into a series of binary feature columns. I have a feature that represents a sequence or permutation of values and I want to transform it into something scikit-learn or similar ML libraries can use. What are the well known ways of doing this?</p> <p>In my problem a physical system is damaged and I'd like to use ML to recommend the sequence in which the damages should be repaired. Due to limited resources and limited repair crews and equipment only a certain number of components can be repaired at any one time. I've already determined a rough importance of the various components. Typically repairing the most important components first works well. But in specific situations non-obvious repair orders work even better. I have a dataset with a couple of million data points. For each data point I have the set of components that were damaged and the order that the repairs were undertaken as well as a metric of how well the strategy worked. The number of components that can be damaged is fixed and approximately 1600. In a realistic scenario there would be less than 50 damaged sub components.</p> <p>Say there were four components A,B,C,D</p> <p>Assume B, A and C were damaged but D was not. In an example dataset there might be two entries: [A,C,B] = 11 [B,A,C] = 7</p> <p>I want to transform the [A,C,B,D] part into something I can give to a regressor or categorizer.</p> <p>Approaches I have thought of so far:</p> <ol> <li>One column per component with the order the component was repaired. If a component wasn't damaged then the column might have 0 or N/A</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A_repaired_at</th> <th>B_repaired_at</th> <th>C_repaired_at</th> <th>D_repaired_at</th> <th>result</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>3</td> <td>2</td> <td>0</td> <td>11</td> </tr> <tr> <td>2</td> <td>1</td> <td>3</td> <td>0</td> <td>7</td> </tr> </tbody> </table> </div> <ol start="2"> <li>Instead of using the rank, use a normalized rank. Seems like approach #1 wouldn't work well when the number of damaged components changes. Being repaired third out of three damaged components means something was repaired last but being repaired third out of 50 damaged components means a component was repaired towards the front.</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A_repaired_at</th> <th>B_repaired_at</th> <th>C_repaired_at</th> <th>D_repaired_at</th> <th>result</th> </tr> </thead> <tbody> <tr> <td>1/3</td> <td>3/3</td> <td>2/3</td> <td>0</td> <td>11</td> </tr> <tr> <td>2/3</td> <td>1/1</td> <td>3/3</td> <td>0</td> <td>7</td> </tr> </tbody> </table> </div> <ol start="3"> <li>Use comes-before attributes directly. 
#2 seems like it makes it possible for an ML library to compare the repair positions between rows - if that is how we'd like the ML to function maybe I should add those features directly.</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A_before_B</th> <th>A_before_C</th> <th>A_before_D</th> <th>B_before_C</th> <th>B_before_D</th> <th>C_before_D</th> <th>result</th> </tr> </thead> <tbody> <tr> <td>True</td> <td>True</td> <td>na</td> <td>False</td> <td>na</td> <td>na</td> <td>11</td> </tr> <tr> <td>False</td> <td>True</td> <td>na</td> <td>True</td> <td>na</td> <td>na</td> <td>7</td> </tr> </tbody> </table> </div> <p>Option #3 is nice because the columns are only {True,False,NA} but it has this nightmare inducing problem where my 1600 components become a million and some change feature columns.</p> <p>Is there some other way to transform the sequence into useful ML features?</p> Answer:
https://datascience.stackexchange.com/questions/90700/how-can-i-transform-a-sequence-into-features
Question: <p>I am aiming to predict the number of days it takes to sell a given property; let's call this variable &quot;DaysForSale&quot; - DfS for short.</p> <p>Using the DfS, I created a variable called &quot;median_dfs_grouped_street_name&quot; which returns the median days it takes to sell a property for the different streets available in the dataset. (The street names are all categorized.)</p> <p>After this, I do my train/test split and run my Random Forest method.</p> <p>Using the feature_importances function I see that the new feature is the second most important, which makes me wonder if this is the correct approach.</p> <p>I have two questions:</p> <ol> <li>Is it wrong to develop features using the target variable?</li> <li>Is it wrong to do feature engineering on the full dataset?</li> </ol> Answer: <blockquote> <p>Is it wrong to develop features using the target variable?</p> </blockquote> <p><strong>Not necessarily</strong>. It is called &quot;target encoding&quot; or &quot;mean encoding&quot; and can be very useful. In your case you could, for example, use the <code>DfS</code> of your train data to calculate a median value per street. But you need to carefully design the target encoding to avoid overfitting (there are different strategies to do that - see the link below). And for the test data you can only use the target encoding based on your train data.</p> <p>The Coursera course &quot;How to Win a Data Science Competition: Learn from Top Kagglers&quot; has great content on target/mean encoding, to be found <a href="https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv" rel="nofollow noreferrer">here</a>.</p> <blockquote> <p>Is it wrong to do feature engineering on the full dataset?</p> </blockquote> <p><strong>Not necessarily.</strong> As pointed out in <a href="https://datascience.stackexchange.com/a/80771/84891">Nicolas' answer</a>, you need to be careful not to leak data, though.</p> <p>Here's an example where it would be OK: let's assume one of your features is <code>date of enlisting</code>, which is the date when the property was published for sale. You could, for example, add a feature to the whole dataset called <code>days since enlisting</code>, which simply calculates the days between now and when the property was published for sale. However, your median is an example which results in data leakage, since it is not &quot;per row&quot; feature engineering but &quot;across rows&quot; feature engineering applied to train and test data.</p> <p>That's why the safer approach is to first split the data, remove the target variable from the val/test data and then do feature engineering. Thereby, you avoid any unintended data leakage.</p>
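<p>To illustrate the train-only encoding described above, here is a minimal pandas sketch with invented street names and DfS values; real code would also need an overfitting-mitigation strategy such as smoothing or out-of-fold encoding:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

train = pd.DataFrame({'street': ['elm', 'elm', 'oak', 'oak', 'oak'],
                      'DfS':    [30, 50, 80, 90, 70]})
test = pd.DataFrame({'street': ['elm', 'oak', 'pine']})   # 'pine' unseen in training

# Median days-for-sale per street, computed on the training split only
street_median = train.groupby('street')['DfS'].median()
global_median = train['DfS'].median()

train['median_dfs_street'] = train['street'].map(street_median)
# Unseen streets fall back to the global training median
test['median_dfs_street'] = test['street'].map(street_median).fillna(global_median)
</code></pre>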
https://datascience.stackexchange.com/questions/80770/i-do-feature-engineering-on-the-full-dataset-is-this-wrong
Question: <p><a href="https://www.reddit.com/r/MachineLearning/comments/ma59mt/d_is_my_idea_of_a_feature_store_wrong/" rel="nofollow noreferrer">Cross-posted on Reddit ML</a>.</p> <p>Should a Feature Store be part of an enterprise data catalog?</p> <p>To me, a feature store seems to be a highly niche data catalog but missing a lot of the benefits of having an enterprise data catalog / data discovery tool. My need is to have generated features discoverable when searching for data.</p> <p>For example, if I have dataset A and B used to generate a feature set AB', I would want to know about that information if I search and ever come across dataset A or B in my data catalog.</p> <p>Along with that, it would be beneficial to have the code / git commit that generated the features.</p> <p>Am I missing something?</p> Answer:
https://datascience.stackexchange.com/questions/90980/is-my-idea-of-a-feature-store-wrong
Question: <p>I have 6 input features <span class="math-container">$[m1,m2,m3,m4,m5,m6]$</span>.</p> <p>I am trying to build a model that can predict the value of all 6 of these values using <span class="math-container">$[m1,m2,m3]$</span>. However, I have the option of asking for another feature from <span class="math-container">$[m4,m5,m6]$</span> on a per-prediction basis. Obviously, I want to choose the feature that improves my predictive performance, based on the values of <span class="math-container">$[m1,m2,m3]$</span>.</p> <p>How would I go about doing this, and is there a name for this form of problem?</p> <p>The naive option would obviously be to work out which of <span class="math-container">$[m4,m5,m6]$</span> improves performance most in my training set and always use it, but I would prefer a method that makes use of my known <span class="math-container">$[m1,m2,m3]$</span> to identify which of the <span class="math-container">$[m4,m5,m6]$</span> would be best in this case.</p> <p>The closest method I could find was 'active learning', which takes new labels. I get the impression I could maybe use bayesian optimization but I'm unsure how to proceed.</p> Answer:
https://datascience.stackexchange.com/questions/96774/using-on-demand-features-in-machine-learning
Question: <p>When fitting a model with the AutoTS package, it will fit a number of models per generation (it uses a genetic algorithm). However, there does not seem to be an option to edit the number of different models that are evaluated per generation.</p> <p>As such, I am wondering how to change the number of different models evaluated per generation.</p> Answer:
https://datascience.stackexchange.com/questions/131259/changing-the-number-of-model-evaluations-per-generation
Question: <p>I have built an XGBoost classification model in Python on an imbalanced dataset (~1 million positive values and ~12 million negative values), where the features are binary user interaction with web page elements (e.g. did the user scroll to reviews or not) and the target is a binary retail action. My ultimate goal was not so much to achieve a model with an optimal decision rule performance as to understand which user actions/features are important in determining the positive retail action. </p> <p>Now, I have read quite a bit in forums and literature about evaluating/optimizing an XGBoost model and subsequent decision rule, which I assume is required before achieving my ultimate goal. It seems that there are a lot of different ways to evaluate the decision rule part (e.g. Area Under the Precision Recall Curve, AUROC, etc) and the model (e.g. log-loss). I believe that both AUC and log-loss evaluation methods are insensitive to class balance, so I don't believe that is a concern. However, I am not quite sure which evaluation method is most appropriate in achieving my ultimate goal, and I would appreciate some guidance from someone with more experience in these matters.</p> <p>Edit: I did also try permutation importance on my XGBoost model as suggested in an answer. I saw pretty similar results to XGBoost's native feature importance. Should I now trust the permutation importance, or should I try to optimize the model by some evaluation criteria and then use XGBoost's native feature importance or permutation importance? In other words, do I need to have a reasonable model by some evaluation criteria before trusting feature importance or permutation importance?</p> Answer: <p><strong>So your goal is only feature importance from xgboost?</strong></p> <p>Then <strong>don't focus on evaluation metrics</strong>, but rather splitting.</p> <p>I would suggest to read <strong><a href="https://explained.ai/rf-importance/" rel="nofollow noreferrer">this</a>.</strong> Using the default from tree based methods can be slippery.</p>
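<p>For reference, a small permutation-importance sketch with scikit-learn. It uses a <code>GradientBoostingClassifier</code> on synthetic imbalanced data purely as a stand-in; the same <code>permutation_importance</code> call works with an XGBoost model that exposes the scikit-learn API, and the scoring metric is an assumption you should adapt to your problem:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Importance measured on held-out data with an imbalance-friendly metric
result = permutation_importance(model, X_val, y_val, scoring='average_precision',
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f'feature {i}: {result.importances_mean[i]:.4f}')
</code></pre>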
https://datascience.stackexchange.com/questions/65608/xgboost-feature-importance-permutation-importance-and-model-evaluation-criteri
Question: <p>I don't understand why using the <em>test set</em> for model <strong>evaluation</strong> is a bad idea.</p> <p>I completely understand why you should not use your test set to <strong>train</strong> your model (because in that case, you would be memorizing and you just cannot tell whether your model will generalize well or not if you don't have a separate test set). But why is it that simply using your test set to test (not train) your model is bad? You won't be changing any parameters of the model (because you are not training).</p> <p>For instance, <a href="https://www.youtube.com/watch?v=aDW44NPhNw0" rel="nofollow noreferrer">at the end of this video</a>, Luis says we are breaking what he calls the "Golden rule" (i.e. never use your testing data for training). However, all I can see he is doing is using the test set to <em>verify</em> which model performs better to then be able to make a selection on which model he will use in the end.</p> Answer: <p><em>Choosing</em> a variation of your model is a form of training. Just because you are not using gradient descent or whatever training process is core to a model class does not mean your parameters are not influenced by this selection process. If you generated many thousands of models with random parameters and picked the best performing one on a data set, then this is also a form of training. In fact, this is a valid way of optimising, called <a href="https://en.wikipedia.org/wiki/Random_search" rel="nofollow noreferrer">Random Search</a> - it is somewhat inefficient for large models, but it still works.</p> <p>You may generate hundreds of models using the training data and using gradient descent or boosting (depending on what the training algorithm uses in your model), then select the one that performs best on cross-validation. In that case, in addition to the selection process you intend to use it for, you are also effectively using the cv data set to fine-tune the training from the first step, using something quite similar to random search.</p> <p>The main benefit of having two stages of testing (cv and test sets) is that you will get an <em>unbiased</em> estimate of model performance from the test set. This is considered important enough that it has become standard practice.</p>
https://datascience.stackexchange.com/questions/23309/why-exactly-using-a-test-set-for-model-evaluation-is-a-bad-idea
Question: <p>I'm working on topic modeling and I have generated clusters with two different methods.</p> <p>How can I evaluate which method performs better than the other?</p> <p><a href="https://i.sstatic.net/2JGKo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2JGKo.png" alt="" /></a><a href="https://i.sstatic.net/h7tDw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h7tDw.png" alt="enter image description here" /></a></p> Answer: <p>Evaluating unsupervised learning methods is always an interesting question. There are typically two main ways to evaluate clusters.</p> <h1>Explicit evaluation</h1> <h2>Qualitative analysis</h2> <p>First of all, you should always manually inspect the results to make sure they make sense to you. In practice, this is hard to beat and probably what you would use as a tie-breaker between methods who perform similarly on hard metrics.</p> <h2>How many clusters?</h2> <p>Different methods might yield varying numbers of clusters. Methods like K-Means force you to set the number of clusters in advance, but others like Mean Shift do not.</p> <p>If you already have some intuition about how many clusters is desirable, it can already be used as a way to discriminate against the results of certain methods (i.e. yielded too many or too few clusters)</p> <h2>Is it really unsupervised?</h2> <p>Even though the model might be learned in an unsupervised way*, it doesn't mean that the evaluation must also be! A good way to validate clustering results is to have pairs of data points that you know should or should not end up in the same cluster. You can then use regular metrics like accuracy to measure how well your clustering results satisfy your preferences/constraints.</p> <p>* some clustering methods are semi-supervised and can use preferences as input</p> <h1>Implicit evaluation</h1> <p>Obviously, there should always be some hard metrics you can compute to compare different clustering results.</p> <h2>Stability</h2> <p>Running the clustering method on different subsets of the data or with a different seed (i.e. for something like K-Means) can teach you about how stable your clustering technique is. An &quot;ideal&quot; clustering technique would always cluster the same data points together.</p> <h2>Cohesion &amp; Separation</h2> <p>The data points within the same cluster should be close to each other (cohesion) and far from data points in other clusters (separation). How you measure distance and the shape of your clusters obviously impact this metric greatly. The silhouette score for instance works best with convex clusters.</p> <p><a href="https://towardsdatascience.com/7-evaluation-metrics-for-clustering-algorithms-bdc537ff54d2" rel="nofollow noreferrer">This link</a> provides a few evaluation metrics and describes them quite well:</p> <ul> <li>Rand index</li> <li>Mutual information</li> <li>V-measure</li> <li>Fowlkes-Mallows</li> <li>Silhouette coefficient/score</li> <li>Calinski-Harabasz</li> <li>Davies-Bouldin</li> </ul>
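<p>As a concrete starting point, here is a small scikit-learn sketch computing the silhouette score of two clusterings plus their mutual agreement (adjusted Rand index) as a crude stability check. The synthetic blob data is a placeholder; for topic models you would typically apply the same metrics to document-topic vectors or cluster assignments:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

labels_a = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
labels_b = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Cohesion/separation of each solution (higher is better, bounded by 1)
print('silhouette A:', silhouette_score(X, labels_a))
print('silhouette B:', silhouette_score(X, labels_b))

# Agreement between the two solutions as a simple stability indicator
print('agreement (ARI):', adjusted_rand_score(labels_a, labels_b))
</code></pre>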
https://datascience.stackexchange.com/questions/126302/topic-modeling-evaluation
Question: <p><strong>My method of evaluating a model is the following:</strong></p> <ol> <li>Split the training data set and do cross validation to obtain an accuracy of my model on my cross-validation data set.</li> <li>Use the parameters that gave me the best accuracy and call predict() on my test data set (hold-out data set).</li> <li>Run a little 'for' loop to check how many labels I misclassified (let's assume I'm doing classification) on my test dataset, for which I hid the real labels.</li> <li>Look at the percentages of 'accuracy' given by each algorithm and pick the best accuracy.</li> </ol> <p><strong>My question</strong>:</p> <p>What can be done to improve my method of error analysis and model evaluation using <strong>Python</strong>? <strong>Code snippets and their purposes would be helpful.</strong></p> <p>Thanks!</p> Answer:
https://datascience.stackexchange.com/questions/52004/error-analysis-and-evaluation-of-a-model-using-python
Question: <p>I was reading <a href="https://changhsinlee.com/better-validation-test/" rel="nofollow noreferrer">a blog post</a> about improving machine-learning model train/validate/test splits. Towards the end was this remark:</p> <blockquote> <p>I say we should be more creative in the way we test machine learning models than a 60-20-20 split. But there is a catch – if I see the results of tests and decide this model isn’t good enough, then what do I do? If I let the information of a test set to change my decision of which model to use, then it’s not really an unbiased test anymore – this is what people called “peeking” or “data leakage.” In general, I would say one should avoid this as much as possible. But the more tests you put on the model, the more likely you will see some failed tests and as a result, peek. Now we have a dilemma.</p> </blockquote> <p>I found this a bit strange, because my impression is this is precisely how the test is used, to determine whether or not to use a model. However, it did occur to me that maybe what the author is getting at is something like this:</p> <p>There are many possible models one can use for a given dataset, some of which may by chance perform better on evaluation metrics than others, even when you withhold the test set from them during training. By using a large ensemble of models, and selecting the one with the best performance, you are increasing the likelihood that this performance was due to chance rather than the model truly representing/predicting the data well. Basically, similar to the idea behind p-hacking in scientific studies.</p> <p>It seems to me that in practice this is probably an issue of degree (if you are testing 2-5 models, it may not be as much of a problem as if you test hundreds or more), but I can't find much discussion about this outside of this blog post.</p> <p>I could easily be missing something here, but I want to ask, can/does using test evaluation metrics to perform model selection introduce a kind of data leakage (leak information about the test data to the model)? Are there any more authoritative sources on this issue?</p> Answer: <p>This is exactly the same reason why hyper-parameter tuning, feature selection and other decisions which impact the final model should be done on a validation set, not on the final test set.</p> <p>In theory, one should really evaluate the model only once on a fresh test set, so if there is a chance that a model won't be the final one depending on its performance, then it should be evaluated on another validation set. The training data can be split as many times as needed (assuming it's large enough) in order to have enough validation sets.</p> <p>Clearly things are not always done perfectly in practice, and this can be considered acceptable to some extent. For example, the repeated usage of benchmark datasets, which is really useful in order to compare models, is potentially a source of data leakage: indirectly, people use the knowledge based on the evaluation of the last state-of-the-art model in order to design a new competitive model.</p>
https://datascience.stackexchange.com/questions/116626/can-using-model-evaluation-metrics-to-choose-a-model-cause-data-leakage
Question: <p>Suppose that we have trained a model (as defined by its hyperparameters) and we evaluated it on a test set using some performance metric (say <span class="math-container">$R^2$</span>). If we now train the same model (as defined by its hyperparameters) on different training data, we will (probably) get a different value for <span class="math-container">$R^2$</span>.</p> <p>If <span class="math-container">$R^2$</span> depends on the training set, then we will obtain a distribution around a mean value for <span class="math-container">$R^2$</span>. Shouldn't we therefore average the <span class="math-container">$R^2$</span> from the various evaluations in order to get a better picture of the model's performance? Also, why isn't the variance included when reporting the performance of a model? Isn't it also an important factor for assessing the model's performance?</p> <p>I am not speaking about hyperparameter tuning. I suppose that we know the best values for the hyperparameters and we need to estimate the generalization error. My question arose from the fact that we evaluate just once on the test set.</p> Answer: <p>Estimating the variance in generalization error is useful and is best assessed through <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)" rel="nofollow noreferrer">cross-validation</a> (not on a single train/test split). The data should be split into folds, and each fold should be trained with the same algorithm and hyperparameters. Then each trained model should be evaluated on its respective validation fold. Given the repeated nature, it is possible to estimate the &quot;spread&quot; of the generalization error.</p> <p>Additionally, <span class="math-container">$R^2$</span> is often considered not an appropriate metric to evaluate generalization error because <span class="math-container">$R^2$</span> relies on the mean of the training data.</p>
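<p>A minimal sketch of estimating that spread with repeated cross-validation (synthetic data and a Ridge model chosen purely for illustration):</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

cv = RepeatedKFold(n_splits=5, n_repeats=4, random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv, scoring='r2')

# Report the spread of the score, not just a single point estimate
print(f'R^2 = {scores.mean():.3f} +/- {scores.std():.3f} over {len(scores)} folds')
</code></pre>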
https://datascience.stackexchange.com/questions/110026/bias-variance-trade-off-and-model-evaluation
Question: <p>A colleague and I are working on a churn model and reached an impasse:</p> <p>Our data set is for a global product. We've been asked to look at the US market only.</p> <p>When we subset the data to the US only, the classifier evaluation metrics are lower than when we use the total global data set.</p> <p>My colleague wants to use the global data set because the output metrics are higher. I consider this the wrong thing to do; we should limit the data to the US market only.</p> <p>My thinking: Only use the data set that best represents the situation you are looking to explore. That is, we should only be using the data set filtered to the US market here.</p> <p>As we're dealing with human shopper behavior here, there could be lots of localised factors that change from market to market - culture, salary, shopper behavior, localised competitors.</p> <p>Is the approach to use the filtered data set correct? Are there papers or similar that speak to this point? Is there a useful term to search on Google for?</p> Answer: <p>You could consider it a hyperparameter and tune it to the best value.</p> <p>As you point out, there are multiple possibilities. Your stance of using only the most representative data has merit; the stance of using all available data has merit, since more data results in tighter estimates, and nothing says that Americans have to be so unique.</p> <p>Therefore, go figure out which approach gives the best results.</p> <p>Early evidence suggests that using more data results in better performance.</p>
https://datascience.stackexchange.com/questions/113772/more-representative-data-set-or-higher-model-evaluation-metrics
Question: <p>In supervised machine learning, are there any evaluation approaches <em>besides</em> using a fixed holdout test dataset that allow me as a scientist to <strong>manually</strong> compare preprocessing approaches, without leaking information from the test dataset?</p> <p>For example, if I want to compare feature selection approaches (e.g., recursive feature elimination vs variance threshold), if I have a fixed training/test split, I can simply manually experiment only on the training dataset to find which works best (e.g. with CV), then use that best approach in the pipeline I evaluate on the test dataset.</p> <p>Cross validation or bootstrapping are much more powerful evaluation approaches, but because the whole dataset may be used in testing at some point, I cannot manually test out approaches on <em>any</em> of the dataset without leaking information and biasing my final evaluation. Are there any smart approaches in practice that give the best of both worlds?</p> <p>My context: I'm advising students on best practices for their final projects, and I'd like them to understand data leakage and its serious implications. However, I also want to encourage them to do manual comparisons between preprocessing approaches, so they have a personal sense of what strategies do and don't work for their datasets, and ideally <em>why</em>. However, I want them to experience the advantages of CV/bootstrapping.</p> Answer: <p>Instead of splitting the data into two parts, train and test, you could split the data into more parts. Basically, every time you want to evaluate something you need data that is completely unseen.</p>
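<p>One way to combine both worlds is to lock a test split away once and run all manual comparisons with cross-validation on the remaining development data. A hedged sketch of that pattern, with arbitrary preprocessing options chosen only for illustration:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Lock away a test set once; all manual experimentation stays on the remainder
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Compare preprocessing options with cross-validation on the development data only
for selector in (VarianceThreshold(0.0), VarianceThreshold(0.1)):
    pipe = make_pipeline(selector, StandardScaler(), LogisticRegression(max_iter=5000))
    scores = cross_val_score(pipe, X_dev, y_dev, cv=5)
    print(selector, round(scores.mean(), 4))

# Only the single chosen pipeline is ever refit on the development data and scored on X_test
</code></pre>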
https://datascience.stackexchange.com/questions/128804/model-evaluation-approach-allowing-manual-experimentation-without-data-leakage
Question: <p>I am a newbie here and trying to make sense out of the scores from <code>model.evaluate</code> from what I am actually seeing in <code>model.predict</code></p> <p>I have a created a CNN model for the Google Audio Set data and achieved a 99%+ accuracy on training.</p> <p>Here is how I do the prediction</p> <pre class="lang-py prettyprint-override"><code>model = load_model('model_audioset.h5') for x, y in unbal_generator: score = model.evaluate(x, y, verbose=0) pred_y = model.predict(normalized_x) </code></pre> <p>Here is what I am seeing for one specific iteration of <code>x</code> and <code>y</code> from <code>model.evaluate</code></p> <pre><code>model.metrics_names = {list: 2} ['loss', 'acc'] 0 = {str} 'loss' 1 = {str} 'acc' score = {list: 2} [0.03851451724767685, 0.9905123114585876] 0 = {float64} 0.03851451724767685 1 = {float64} 0.9905123114585876 </code></pre> <p>Here is a readable output from <code>model.predict</code> and comparing it to <code>y</code></p> <p><a href="https://i.sstatic.net/OqS4J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OqS4J.png" alt="Actual vs. Expected"></a></p> <p><strong>Wondering how Keras came up with an accuracy score of <code>99.05</code> for this output? Clearly, the predicted classes are not the same as expected</strong></p> <hr> <p>I am using a <code>binary_crossentropy</code> loss function and <code>sigmoid</code> activation in the predictions layer as classes are NOT mutually exclusive</p> Answer: <p>Actually, I think accuracy is not a good metric for this case as <code>y_actual</code> and <code>y_expected</code> have most values <code>0</code> and the length of <code>y_actual</code> is quite big too. </p> <p>So the accuracy calculation using an equation like <code>K.mean(K.equal(K.argmax(y_true, axis=-1), K.argmax(y_pred, axis=-1)))</code> (<a href="https://datascience.stackexchange.com/questions/14415/how-does-keras-calculate-accuracy/14742#14742">Reference</a>) is not a good metric on model performance</p>
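<p>A small numeric illustration of why the score is so high, assuming an AudioSet-style multi-hot target with 527 classes and that Keras resolved <code>'acc'</code> to element-wise binary accuracy (its usual behaviour with <code>binary_crossentropy</code>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

num_classes = 527            # AudioSet-sized label space (assumption)
y_true = np.zeros(num_classes)
y_true[[0, 137]] = 1         # only two labels are active

y_pred = np.zeros(num_classes)
y_pred[[12, 300]] = 1        # two labels predicted, both wrong

# Element-wise binary accuracy: rewarded for the hundreds of correct zeros
binary_acc = np.mean(np.round(y_pred) == y_true)
print(binary_acc)            # roughly 0.992 even though no active label was found
</code></pre>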
https://datascience.stackexchange.com/questions/64844/keras-model-evaluation-accuracy-vs-observation
Question: <p>After studying time series analysis, I learned that RMSE and MAPE are the best evaluation metrics for a model used in a real time-series application. My queries are below, as this is my first practical project in time series analysis:</p> <p>If I use the past 1 year of data for training and forecast the next 30 days, we will only get the actual data after 30 days in real time. How could we integrate RMSE and MAPE into such a time-series application?</p> <p>I heard from some data scientists that I could skip the first training cycle and calculate RMSE and MAPE in the next retraining cycle for the past training cycle, where I have both actual and forecast data. Is this approach useful?</p> <p>I am wondering whether the calculated RMSE and MAPE could help to improve model performance in real time.</p> <p>Is calculating the evaluation metrics at every retraining useful from a production point of view?</p> <p>My thanks in advance</p> Answer: <p>I am assuming you have a model running in production that retrains periodically (example: every month) and forecasts the next X days (example: 30 days), and you are trying to evaluate the model using RMSE and MAPE. (If this assumption is wrong, please clarify it in your question.)</p> <blockquote> <p>If I use the past 1 year of data for training and forecast the next 30 days, we will only get the actual data after 30 days in real time. How could we integrate RMSE and MAPE into such a time-series application?</p> </blockquote> <p>You will have to backtest your model, i.e. predict with your model from a day in the past for which you already have 30 forecasted values and 30 actual values. The aggregation then happens in the mean part of the <code>Root Mean Square Error</code> or <code>Mean Absolute Percentage Error</code>. Ideally, your model should not have seen the evaluation data during the training phase.</p> <blockquote> <p>I heard from some data scientists that I could skip the first training cycle and calculate RMSE and MAPE in the next retraining cycle for the past training cycle, where I have both actual and forecast data. Is this approach useful?</p> </blockquote> <p>Yes, this is one of the ways to evaluate the model.</p> <blockquote> <p>Is calculating the evaluation metrics at every retraining useful from a production point of view?</p> </blockquote> <p>It is good practice (unless it costs too much) to monitor the model in production regularly; for this you will have to compute the evaluation metric at the frequency that is most suitable for you.</p>
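<p>For concreteness, computing RMSE and MAPE for one back-tested forecast window might look like this (made-up numbers, shortened to 5 days instead of 30):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Actuals that have since arrived, and the forecasts made for those same days
actual = np.array([102.0, 98.0, 110.0, 95.0, 105.0])
forecast = np.array([100.0, 101.0, 107.0, 97.0, 99.0])

rmse = np.sqrt(np.mean((actual - forecast) ** 2))
mape = np.mean(np.abs((actual - forecast) / actual)) * 100

print(f'RMSE={rmse:.2f}  MAPE={mape:.1f}%')
</code></pre>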
https://datascience.stackexchange.com/questions/82758/time-series-analysis-model-evaluation-performance-metrics-integration-in-time-se
Question: <p>I would like to get help with the evaluation of my classification model. It is a typical model that for each input produces a vector of floats representing label probabilities, and I classify the input with the label with the highest probability.</p> <p>But I have a problem with the evaluation of this model. The reason is that my validation set is not perfect. My validation set could have multiple &quot;correct&quot; labels for each input.</p> <p>An example:</p> <p>Let's say that there are 4 possible classes: dog, cat, frog, parrot. Then my validation set could look like this: <img src="https://i.sstatic.net/QXpL9jnZ.png" alt="validation set" /></p> <p>In this example, only for input1 do we know the exact answer. But for inputs 2 - 4, my validation set is not sure which label is correct. It could be any one from the set. For input2, if the model says &quot;cat&quot;, I know that it is wrong. But if it says &quot;frog&quot;, I consider it correct, even if in reality it is &quot;parrot&quot;. I'm fine with that, because there is no way to get this information.</p> <p>And now for evaluation. Computing accuracy is easy: it is just checking whether the label produced by the model is in the possible label set in the validation set. But what about some other evaluation methods? Are there any recommendations? I really like the confusion matrix, but I can't think of a way to modify it for this case.</p> <p>Thanks for all your suggestions.</p> Answer: <p>You could easily compute the TP, TN, FP and FN for your use case and then you can compute accuracy, recall, precision, f1, confusion matrix, ...</p> <p>For example, for TP, instead of computing the number of samples where prediction == 1 and target == 1, you compute the number of samples for which the prediction &quot;is in&quot; (Python style) the list of possible targets.</p>
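<p>A minimal sketch of that idea with invented predictions and acceptable-label sets. The convention used here (a false negative for every acceptable class when the prediction is wrong, and true negatives omitted) is an assumption you may want to adapt:</p>
<pre class="lang-py prettyprint-override"><code># Hypothetical predictions and sets of acceptable labels per validation example
predictions = ['dog', 'frog', 'cat', 'parrot']
acceptable = [{'dog'}, {'frog', 'parrot'}, {'dog', 'frog'}, {'cat', 'dog', 'frog', 'parrot'}]

classes = ['dog', 'cat', 'frog', 'parrot']
stats = {c: {'tp': 0, 'fp': 0, 'fn': 0} for c in classes}

for pred, allowed in zip(predictions, acceptable):
    if pred in allowed:
        stats[pred]['tp'] += 1       # credited to the predicted class
    else:
        stats[pred]['fp'] += 1       # the predicted class was not acceptable
        for c in allowed:
            stats[c]['fn'] += 1      # every acceptable class counts as missed

for c in classes:
    tp, fp, fn = stats[c]['tp'], stats[c]['fp'], stats[c]['fn']
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(c, 'precision:', precision, 'recall:', recall)
</code></pre>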
https://datascience.stackexchange.com/questions/131495/evaluation-of-model-on-imperfect-validation-set