Reaction paths of phosphine dissociation on silicon (001). Using density functional theory and guided by extensive scanning tunneling microscopy (STM) image data, we formulate a detailed mechanism for the dissociation of phosphine (PH3) molecules on the Si(001) surface at room temperature. We distinguish between a main sequence of dissociation that involves PH2+H, PH+2H, and P+3H as observable intermediates, and a secondary sequence that gives rise to PH+H, P+2H, and isolated phosphorus adatoms. The latter sequence arises because PH2 fragments are surprisingly mobile on Si(001) and can diffuse away from the third hydrogen atom that makes up the PH3 stoichiometry. Our calculated activation energies describe the competition between diffusion and dissociation pathways and hence provide a comprehensive model for the numerous adsorbate species observed in STM experiments.
{ "pile_set_name": "PubMed Abstracts" }
Induction of protective immunity in mice using a 62-kDa recombinant fragment of a Schistosoma mansoni surface antigen. Mice exposed to radiation-attenuated cercariae of Schistosoma mansoni are highly resistant to challenge infection, and sera from these mice can confer partial resistance when transferred to naive recipients. These sera recognize Ag present in schistosomular and adult worms, among them an Ag of 200 kDa. A cDNA encoding a 62-kDa portion of this Ag was cloned; the deduced amino acid sequence of this cDNA clone shares homology with myosins of other species. To assess the immunoprophylactic potential, we carried out vaccination trials in mice using the recombinant polypeptide expressed as a fusion protein with beta-galactosidase presented in the form of proteosome complexes with the outer membrane protein of meningococcus. The level of protection achieved was 32%, and this level could be increased to 75% by removal of those amino acids included in the fusion protein that were derived from the vector to yield a polypeptide, designated rIrV-5. A similar level of protection was achieved when mice were immunized with the same dose of rIrV-5 in the form of protein complexes but without outer membrane protein, suggesting that protection did not require the use of adjuvant. However, at least three immunizations were necessary to achieve protection. Using mAb and sera from mice vaccinated with rIrV-5, we demonstrated that the native protein recognized by antibodies against rIrV-5 is a 200-kDa protein that is expressed on the surface of newly transformed schistosomula. The protection achieved with rIrV-5 in mice encourages additional studies of its potential as a vaccine candidate for the prevention of schistosomiasis.
{ "pile_set_name": "PubMed Abstracts" }
Hotel Features. Complimentary wireless Internet access is available in public areas and a computer station is located on site. There is a 24-hour business center on site. A complimentary breakfast is served each morning. Additional amenities include a fitness center, barbecue grills, and a picnic area. Self parking is complimentary. This is a smoke-free property.
Special features
Awards and affiliations
Green / Sustainable Property: This property participates in the Green Key Eco-Rating Program, a program that measures the property's impact on one or more of the following: environment, community, cultural heritage, and the local economy.
Comfort Inn Cambridge, Cambridge’s small print
Also known as: Cambridge Comfort Inn; Comfort Inn Cambridge; Comfort Inn Cambridge Kitchener
Optional extras
Rollaway beds are available for CAD 10.00 per night.
Pets are allowed for an extra charge of CAD 20.00 per accommodation, per night.
We have included all charges provided to us by this hotel. However, charges can vary, for example, based on length of stay or the room you book.
{ "pile_set_name": "Pile-CC" }
*2 - 3 + 6 - 2 + l**3 to t + c*l + g*l**2 + k*l**3 and give k. 1 Express -r + 51*r**2 + 16*r - 48*r**2 in the form m + u*r + d*r**2 and give u. 15 Express (-2*z - 3*z + 4*z)*(0*z + 2*z - 4*z) - 7*z**2 - 51*z**2 - 4*z**2 + 4 - 2 + 3*z**2 + 0 in the form c + w*z + j*z**2 and give c. 2 Rearrange 22*x**3 - 829*x + x**2 + 2 + 829*x to t*x**3 + q*x**2 + k*x + b and give q. 1 Rearrange 7 - d - 5 + 9*d**3 + 0 to the form r*d**3 + z + b*d**2 + g*d and give r. 9 Rearrange (2*b + 19 - b - 6)*(2*b + 0*b - 3*b) to the form j*b + z + q*b**2 and give j. -13 Express (3 - 3 + t)*(3 - 3 - 2) + 0*t + t + 0*t + 0 + 0 + 2*t - 23 + 5*t + 23 as o*t + z and give o. 6 Rearrange (0*a + 0*a + a)*(-94*a + 46*a + 44*a) to p + w*a + s*a**2 and give s. -4 Express -4 + 6*u**3 + 2 + 3*u**4 - u - 5*u**3 in the form k*u**4 + g*u**2 + b + m*u + v*u**3 and give m. -1 Rearrange (-9*a + a + 3*a)*(2*a - 4*a + a)*(5 - 4 + 0) to the form y*a**2 + k + o*a and give y. 5 Rearrange 6 - 11*a**3 + a**3 + 8*a**3 - 5 + 22*a to the form b*a**2 + s + o*a + y*a**3 and give b. 0 Rearrange (-3 + 0 + 5)*(230 - 13*p**3 - 230) to the form q*p + w + x*p**2 + l*p**3 and give l. -26 Express 7*s + 5 - 3 - 2 in the form c + t*s and give t. 7 Express 21*t**2 + 2*t - 13 - 40*t**2 - 14 + 21*t**2 as w*t**2 + a + u*t and give a. -27 Express (w - 2 + 2)*(4*w**3 + w**3 - 4*w**3)*(132 - 62 + 9) as i*w**3 + s*w + n*w**2 + r + a*w**4 and give a. 79 Express -8*d**3 + 2 + 31*d**3 - 28*d - 13*d**3 - 12*d**3 as c*d**2 + z*d + b*d**3 + p and give z. -28 Express (-1 + 6 + 3)*((-3*l + 3*l - 2*l)*(-3 + 4 + 0) + l + 0*l + 3*l) as x + s*l and give s. 16 Rearrange (-z + 3*z**3 + z)*(-1 + 2 - 3)*(58 - 7*z - 58) to l*z**2 + c*z**4 + o*z + s*z**3 + f and give c. 42 Express (2*w**3 + 0 + 0)*(1 + w - 1) - 3*w + 3*w + 3*w**4 - 2*w**4 + 8 - 1 - w**3 + w**4 as f*w**4 + d*w + l + j*w**2 + b*w**3 and give f. 4 Express (-3*o - 2*o + 3*o + (3*o + o - 3*o)*(-20 - 8 - 1))*(o - o**3 - o) in the form x*o**4 + d*o**3 + z*o**2 + g*o + r and give x. 31 Rearrange 3*f + 0*f - 15 - 14*f**2 + 10*f**2 - 4*f to the form g + h*f + q*f**2 and give h. -1 Express (-2*k**3 - 3*k**3 + 2 + 17*k**3)*(k - 5*k + 0*k) as c*k + h*k**4 + j*k**2 + u + n*k**3 and give h. -48 Rearrange -15*y - 16*y**2 - 19*y**2 + 59*y**2 - 23*y**2 to x + w*y + g*y**2 and give g. 1 Express -7*x + 0*x + 7*x + 38*x**2 as c*x + r*x**2 + d and give r. 38 Express -8 + 5 + 3 + 33*q in the form s + a*q and give a. 33 Express (2*s + 0*s + 0*s)*(5 + 2 - 4) + 0 + 0 - 5*s as t + b*s and give b. 1 Express 9*x + 2*x - 11*x + x**2 - x - 2 as s*x + g*x**2 + j and give j. -2 Rearrange -10*x**4 + 2 + 0*x**4 + x + 3*x**4 to b + w*x + d*x**2 + k*x**3 + o*x**4 and give o. -7 Rearrange 2*z**2 - 111*z**4 - 3 - 109*z**4 + 1 + z + 213*z**4 to the form h + i*z**2 + p*z**4 + v*z + n*z**3 and give i. 2 Express (-4*u**2 + 11*u**2 - 4*u**2 + (-3*u + 2*u + 2*u)*(u - 2*u - u))*(u - u - u**2 - 1) in the form j*u**4 + q*u**3 + k*u**2 + z*u + b and give k. -1 Express 2*x**3 - x**3 + x**3 + (0 + 2*x + 0)*(3*x**2 + x**2 - x**2) + (-x - 3*x + x)*(0*x**2 + 3*x**2 - 4*x**2) as u + i*x + z*x**2 + l*x**3 and give l. 11 Express (6 - 4 + 2)*(-4*i + 4*i + 2*i + (-2 + 5 - 5)*(-i - 4 + 4)) + 2*i - 3*i - i as b + t*i and give t. 14 Express (-f + 2*f + 16*f)*(-f + 5*f - 3*f) in the form a*f + g + y*f**2 and give y. 17 Express (2*s - 4*s - 2 + 3)*(3*s - s - 3*s)*(s - 8*s + 0*s) in the form w + u*s**3 + q*s**2 + x*s and give q. 7 Rearrange 0*a**4 - 8*a**3 + 2*a**2 - a**2 - 2*a + 2 + 9*a**3 + 2*a**4 to m + n*a**3 + p*a**4 + q*a + t*a**2 and give p. 
2 Express 3*n**3 + 2*n**2 + 7*n**3 - 3*n**3 - 4*n**3 - 1 + n as j*n + s*n**3 + r + i*n**2 and give j. 1 Express 1 + 6*l**3 - 3*l**3 + l + 14*l**3 + l**3 in the form g + o*l + f*l**3 + n*l**2 and give f. 18 Express 2*m - 2*m - 3*m**2 + (-8*m - 8*m + 12*m)*(0*m + 2*m + 0*m) in the form a*m**2 + y*m + n and give a. -11 Rearrange -39 - j + j**3 + 37 + 2*j**3 to g*j**2 + p*j + k + o*j**3 and give o. 3 Rearrange 2 + 3*z**4 + 2*z - 3*z**4 + z**4 + 0*z**4 to u*z**4 + v*z**2 + b*z**3 + w*z + s and give w. 2 Rearrange -4*i**4 + i**4 + 2*i**4 + (i**2 - 3*i**2 + 0*i**2)*(-12*i**2 - 11*i + 11*i) to the form r*i**2 + m + h*i + z*i**3 + d*i**4 and give m. 0 Rearrange -62*i**2 - 13*i**2 - 16*i**2 - 1 + 2*i to the form d*i**2 + s + j*i and give s. -1 Rearrange (-4 + 2 - 6)*(2 - 20*m + 2*m + 0*m) to the form p*m + y and give p. 144 Express 3*s + 4 - 4 + (2 + 4 - 4)*(-2*s + 3*s - 2*s) + 3*s - 2 + 2 as i*s + k and give i. 4 Express -2 + 26*f + 3 - 26*f - 2*f**2 in the form o*f + k + j*f**2 and give j. -2 Rearrange -m - 5*m**2 + 2 + 4*m**4 + 4*m**2 - m**4 to h*m**4 + n + w*m + f*m**3 + y*m**2 and give h. 3 Express -44*z**2 + 44*z**2 - 3*z**3 - 7*z in the form l*z + y + d*z**2 + f*z**3 and give l. -7 Express 106 - 3*d**2 - d + 2*d**2 - 106 as y*d**2 + r + j*d and give y. -1 Rearrange 0*a**3 + 0*a**3 + a**3 + (-5*a + 7*a - 11*a)*(-5*a**2 + 4*a**2 - a**2) to the form t + b*a**3 + x*a**2 + d*a and give b. 19 Rearrange (0 + 2 - 1)*(2*k + k - 2*k + (4 - 2 - 1)*(0*k + 3*k - k) - 3*k - 3*k + 0*k) to the form x + w*k and give x. 0 Rearrange -4*a + 9*a**2 - 8*a**2 + 8*a**3 - a to the form t + g*a + m*a**2 + u*a**3 and give m. 1 Rearrange 2 - 20*l + l**2 + 18*l**4 + 46*l - 27*l to the form o*l**4 + j*l**2 + g*l + f + i*l**3 and give o. 18 Express (2*c - 2*c - c**2)*(-3*c - 2*c + 0*c)*(0 + 1 + 9) in the form t + x*c**3 + q*c + y*c**2 and give x. 50 Rearrange 29*h + 4 - 27*h - 1 - 14*h**2 to the form z*h + u*h**2 + q and give u. -14 Express 3*i + 74*i - 8*i + 18*i as a*i + p and give a. 87 Express (1 + 2*p - 1)*(3 + 2 - 6) + 4*p - 4*p - 2 + p in the form k*p + x and give k. -1 Express 2*x**2 - x**2 + 45*x**3 - 5*x**2 + 2*x**2 as m*x**2 + a + u*x + h*x**3 and give h. 45 Rearrange 45*f + 31*f - 18*f to a*f + i and give a. 58 Rearrange 3*d + 4*d - 7*d - 1 + 4*d**2 to y*d + x*d**2 + p and give p. -1 Rearrange 171 - 171 - 29*u**2 to the form y*u**2 + l + h*u and give h. 0 Express (3*g + 15 - 15)*(-g**2 - g**2 + 4*g**2) in the form h*g**2 + k + u*g + x*g**3 and give k. 0 Express -77*n - 22*n - 21*n - 3*n as k*n + l and give l. 0 Rearrange (2*j + 4*j - 2*j + (4 - 3 + 0)*(-2 + 2 - 2*j))*(9 - 2 + 3) to v*j + o and give v. 20 Express (-2*j**2 - 3*j**2 + 2*j**2)*(102*j - 3*j + 59*j) as i*j**3 + h + k*j + o*j**2 and give i. -474 Rearrange -4*k**3 + 3*k**4 + 4*k**3 + (-2*k**3 + 0*k**3 + 0*k**3)*(1 + k - 1) - 2*k**4 + 2*k**4 + 2*k**4 to v*k**3 + p*k**2 + j*k + l*k**4 + q and give l. 3 Express (-2 + t + 1 - 2*t)*(-66*t + 80*t + 61*t) as d + v*t**2 + h*t and give h. -75 Express (-2 + 34*k + 7*k + k)*(-2*k**2 + 2*k**2 + 2*k**2) in the form s + l*k**2 + v*k**3 + j*k and give v. 84 Rearrange (2 - 2 - 1)*(-27*b**2 - 61*b**2 + 23*b**2 - 17*b**2) to the form l*b + p*b**2 + h and give p. 82 Express -q + 29 + 0*q - q + q as h*q + k and give k. 29 Rearrange (-5*i + 5*i**2 + 5*i)*(31*i - 31*i - 4*i**2) to the form p*i + z*i**3 + x*i**4 + f*i**2 + o and give x. -20 Express 48*k**2 + 67*k**2 + 6 - 4 + 3*k**2 as s*k**2 + p + u*k and give p. 2 Express 10*k - k**2 - 10*k + 53*k**2 in the form n*k + v + w*k**2 and give w. 
52 Express 239 + b**3 + 0*b**2 - 247 + 2*b**2 as d*b**2 + l*b + t + o*b**3 and give d. 2 Express -15*w**2 + 15*w**2 - 2*w**3 + 7 + 4*w**3 in the form i + r*w + o*w**3 + g*w**2 and give i. 7 Rearrange (-2*q**2 + 0*q + 0*q)*(17 - 18 + 16) + (1 - 1 - q)*(-q - 3*q + 3*q) to y + g*q + s*q**2 and give s. -29 Rearrange ((-y - 4 + 4)*(-2 - 2*y + 2) + 127 - 62*y**2 - 127)*(2*y**2 - 4*y**2 + 0*y**2) to the form n + h*y**2 + c*y**4 + k*y**3 + t*y and give c. 120 Express -52*o - 6*o**2 + 50*o + 3*o**2 as y + z*o + g*o**2 and give z. -2 Express (0 + 0 - 2*t)*((-t + 2*t - 2*t)*(6 - 4 - 3) + (-4*t + 2*t + 5*t)*(-3 + 3 + 2)) in the form d*t**2 + i*t + u and give i. 0 Express (-5*r**4 + r**4 + 5*r**4)*(-4 + 3 + 7 + (-2 + 0 + 1)*(-3 - 1 + 3)) as o*r**2 + w*r**3 + g*r**4 + z + q*r and give g. 7 Express -4 + 10*w + 1 + 2 - 8*w**3 - 8*w in the form m + v*w**3 + u*w + s*w**2 and give u. 2 Rearrange (-7*k + 4*k + 2*k)*(2 - 3 - 3 + (-3 + 3 - 1)*(2 - 2 - 2) - 2 - 1 + 2) to h + m*k and give m. 3 Rearrange 4 - 4 + 3*n + 8*n to the form c*n + o and give c. 11 Express 29 + 3*r**2 - 10 - 10 - 5 as c*r + i + m*r**2 and give m. 3 Rearrange -133*q**3 + 3*q**4 + 12
{ "pile_set_name": "DM Mathematics" }
freeStyleJob('mirror_udict') {
    displayName('mirror-udict')
    description('Mirror github.com/genuinetools/udict to g.j3ss.co/genuinetools/udict.')
    checkoutRetryCount(3)
    properties {
        githubProjectUrl('https://github.com/genuinetools/udict')
        sidebarLinks {
            link('https://git.j3ss.co/genuinetools/udict', 'git.j3ss.co/genuinetools/udict', 'notepad.png')
        }
    }
    logRotator {
        numToKeep(100)
        daysToKeep(15)
    }
    triggers {
        cron('H H * * *')
    }
    wrappers {
        colorizeOutput()
    }
    steps {
        shell('git clone --mirror https://github.com/genuinetools/udict.git repo')
        shell('cd repo && git push --mirror ssh://git@g.j3ss.co:2200/~/genuinetools/udict.git')
    }
    publishers {
        extendedEmail {
            recipientList('$DEFAULT_RECIPIENTS')
            contentType('text/plain')
            triggers {
                stillFailing {
                    attachBuildLog(true)
                }
            }
        }
        wsCleanup()
    }
}
{ "pile_set_name": "Github" }
Q: Mesh intersection in three.js after modifying vertices/position I have a mesh inited with plane geometry with placeholder parameters (like xyz 0, wh 100). Later I set the actual position for the mesh and 4 vertices for the geometry, and it renders fine. However, when I try to intersect the mesh with a raycaster, I get intersections only where it was inited, not where it's rendered after the parameters were set. What should I call after a vertices/position update for the mesh to be intersectable? A: You need to recompute the geometry's boundingBox and/or boundingSphere using the computeBoundingBox() and/or computeBoundingSphere() methods. This should do the trick.
{ "pile_set_name": "StackExchange" }
Traceability of 'Mozzarella di Bufala Campana' production chain by means of carbon, nitrogen and oxygen stable isotope ratios. New techniques are required to guarantee the authenticity of food, especially for PDO (Protected Designation of Origin) trademarks. The genuineness of a product is directly related to the raw material and to the production process used. In this article, the traceability of the Mozzarella di Bufala Campana PDO was investigated using carbon, nitrogen and oxygen stable isotope ratios, measured on buffalo feed, milk and mozzarella from Caserta and Salerno farms. Furthermore, 37 mozzarella brands from the different production areas were analyzed (carbon, nitrogen and oxygen isotopes) to characterize their origin. The results of this work showed no changes in the carbon and nitrogen isotopic ratios between milk and mozzarella, indicating no fractionation in the production process. The δ13C of milk was influenced by the feeding signal, while milk δ15N was regulated by fractionation occurring during ruminant metabolism. The mozzarella oxygen isotopic signal was depleted with respect to that of the milk. Regarding brand samples, it was found that the geographical differentiation is based more on carbon isotopes than on the nitrogen and oxygen ones. This work gives an important contribution to the knowledge regarding the traceability of such a particular cheese as mozzarella. © 2019 Society of Chemical Industry.
{ "pile_set_name": "PubMed Abstracts" }
Mushroom Lasagne "Contained Potato" Vegetarians were shocked today to discover that a supermarket 'mushroom' lasagne contained traces of potato DNA. "I'd been really enjoying the horsemeat scandal," said smug vegan Arthur Bottleneck. "I earn a really low wage and drive an old Fiesta, so it was nice to feel superior for a change. I'd made a collage of all the best horsemeat headlines and stuck them over my computer. Now I just feel silly." Scottish Food Minister Jim McTavish responded to the scandal by asking the food industry to "...end these dangerous experiments with vegetation". He continued: "If vegetarian meals can be contaminated with potato, what's to stop real meals being contaminated with, I dunno, fresh fruit? It disnae bear thinking aboot."
{ "pile_set_name": "Pile-CC" }
Well Known Agent 000 (stinger-1) Music 29240 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 00:09 / MP3: 256 kbps, 44.1 kHz $9.95
Well Known Agent 000 Package 29244 Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 8 Files $79.75
Well Known Agent 000 (30sec-2) Music 29237 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 00:35 / MP3: 256 kbps, 44.1 kHz $19.95
Well Known Agent 000 Music 29235 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 02:02 / MP3: 256 kbps, 44.1 kHz $29.75
Well Known Agent 000 (60sec-2) Music 29239 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 01:02 / MP3: 256 kbps, 44.1 kHz $24.75
Well Known Agent 000 (stinger-2) Music 29241 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 00:08 / MP3: 256 kbps, 44.1 kHz $9.95
Well Known Agent 000 (30sec-1) Music 29236 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 00:31 / MP3: 256 kbps, 44.1 kHz $19.95
Well Known Agent 000 in Action (action-mix) Music 29242 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 02:11 / MP3: 256 kbps, 44.1 kHz $29.75
Well Known Agent 000 (60sec-1) Music 29238 / Dimitris Plagiannis Instrumental electronic/orchestral theme, reminiscent of a famous spy-series theme but different enough. Works great as spy soundalike, but ... 01:04 / MP3: 256 kbps, 44.1 kHz $24.75
Pease Pudding Hot Music 23310 / Colin Willsher Pease Pudding Hot - A light, tinkly version of the well-known nursery rhyme. Two verses to singalong to in children's music groups, schools ... 00:42 / MP3: 256 kbps, 44.1 kHz $49.00
Sing A Song Of Sixpence Music 23664 / Colin Willsher Sing A Song Of Sixpence - A chirpy, classical rendition of the well-known nursery rhyme for children. Featuring oboe and flute accompanied b... 00:41 / MP3: 256 kbps, 44.1 kHz $49.00
Head, Shoulders, Knees And Toes Music 22894 / Colin Willsher Head, Shoulders, Knees And Toes - Kiddy-pop sing-along version of this well-known tune. Speeds up in traditional style and features vinyl sc... 00:57 / MP3: 256 kbps, 44.1 kHz $49.00
Bach - Cello Suite #1 Music 341123 / Music For Media One of the most famous pieces of cello music ever written. You may not recognize the name but you have heard the melody. Perfect for adverti... 01:13 / MP3: 320 kbps, 44.1 kHz / WAV: 16-bit, 44.1 kHz $75.00
Bach Cello Suite #1 (30 Sec Edit) Music 341124 / Music For Media One of the most famous pieces of cello music ever written. You may not recognize the name but you have heard the melody. Perfect for adverti... 00:31 / MP3: 320 kbps, 44.1 kHz / WAV: 16-bit, 44.1 kHz
{ "pile_set_name": "Pile-CC" }
Since the invention of integrated circuits, the semiconductor industry has experienced continuous rapid growth due to constant improvements in the integration density of various electronic components (i.e., transistors, diodes, resistors, capacitors, etc.). For the most part, this improvement in integration density has come from repeated reductions in minimum feature size, which allows more components to be integrated into a given chip area. These integration improvements are essentially two-dimensional (2D) in nature, in that the volume occupied by the integrated components is essentially on the surface of the semiconductor wafer. Although dramatic improvement in lithography has resulted in considerable improvements in 2D integrated circuit formation, there are physical limitations to the density that can be achieved in two dimensions. One of these limitations is the minimum size needed to make these components. Also, when more devices are put into one chip, more complex designs are required. Three-dimensional integrated circuits (3D ICs) have therefore been created to resolve the above-discussed limitations. Higher device density has been achieved using 3D IC technology, which allows the bonding of up to six layers of wafers. As a result, the total wire length is significantly reduced. The number of vias is also reduced. Accordingly, 3D IC technology has the potential to be the mainstream technology of the next-generation integrated circuit. Conventional methods for forming 3D ICs also include die-to-wafer bonding, wherein a plurality of individual dies is bonded to a same wafer. An advantageous feature of die-to-wafer bonding is that the size of the dies may be smaller than the size of the chips on the wafer. Through-silicon vias, also referred to as through-wafer vias, are used to connect the integrated circuits in the dies and the integrated circuits in the wafer. FIG. 1 illustrates a conventional 3D IC including through-silicon vias. Dies 4 and 6 are stacked on bottom wafer 2, wherein each of the dies 4, 6 and bottom wafer 2 includes integrated circuits. Through-silicon vias 8 are formed in dies 4 to connect the underlying wafer 2 to the overlying dies 6. After dies 4 and 6 are bonded onto wafer 2, wafer probing is performed on the stacked dies. Only those stacked dies that pass the probe tests are packaged. By identifying problematic stacked dies at an early stage, packaging costs are saved. Typically, the test programs for probing the stacked dies are generated by merging the individual test programs for testing dies 4 and 6 and dies on wafer 2. However, since dies 4 and 6 and wafer 2 are separately manufactured, their test programs may be generated for different platforms. For example, some of the test programs are UNIX-based, and some are Windows-based. Merging these test programs thus becomes a very challenging task. Accordingly, what is needed in the art are test methods and/or test structures for probing stacked dies without incurring difficulties in merging the test programs.
{ "pile_set_name": "USPTO Backgrounds" }
One day we may be able to ingest tiny robots that deliver drugs directly to diseased tissue, thanks to research being carried out at EPFL and ETH Zurich. The group of scientists - led by Selman Sakar at EPFL and Bradley Nelson at ETH Zurich - drew inspiration from bacteria to design smart, biocompatible microrobots that are highly flexible. Because these devices are able to swim through fluids and modify their shape when needed, they can pass through narrow blood vessels and intricate systems without compromising on speed or maneuverability. They are made of hydrogel nanocomposites that contain magnetic nanoparticles allowing them to be controlled via an electromagnetic field. In an article appearing in Science Advances, the scientists describe the method they have developed for "programming" the robot's shape so that it can easily travel through fluids that are dense, viscous or moving at rapid speeds. Embodied intelligence When we think of robots, we generally think of bulky machines equipped with complex systems of electronics, sensors, batteries and actuators. But on a microscopic scale, robots are entirely different. Fabricating miniaturized robots presents a host of challenges, which the scientists addressed using an origami-based folding method. Their novel locomotion strategy employs embodied intelligence, which is an alternative to the classical computation paradigm that is performed by embedded electronic systems. "Our robots have a special composition and structure that allow them to adapt to the characteristics of the fluid they are moving through. For instance, if they encounter a change in viscosity or osmotic concentration, they modify their shape to maintain their speed and maneuverability without losing control of the direction of motion," says Sakar. These deformations can be "programmed" in advance so as to maximize performance without the use of cumbersome sensors or actuators. The robots can be either controlled using an electromagnetic field or left to navigate on their own through cavities by utilizing fluid flow. Either way, they will automatically morph into the most efficient shape. Inspired by nature "Nature has evolved a multitude of microorganisms that change shape as their environmental conditions change. This basic principle inspired our microrobot design. The key challenge for us was to develop the physics that describe the types of changes we were interested in, and then to integrate this with new fabrication technologies," says Nelson. In addition to offering enhanced effectiveness, these miniaturized soft robots can also be manufactured easily at a reasonable cost. For now, the research team is working on improving the performance for swimming through complex fluids like those found in the human body. ### Source: H.-W. Huang, B.J. Nelson, F.E. Uslu, M.S. Sakar, P. Katsamba, E. Lauga, Adaptive locomotion of artificial microswimmers, Science Advances
{ "pile_set_name": "OpenWebText2" }
X-Point-Position-Dependent Intrinsic Toroidal Rotation in the Edge of the TCV Tokamak. Edge intrinsic rotation was investigated in Ohmic L-mode discharges on the Tokamak à Configuration Variable, scanning the major radial position of the X point, R(X). Edge rotation decreased linearly with increasing R(X), vanishing or becoming countercurrent for an outboard X point, in agreement with theoretical expectations. The core rotation profile shifted fairly rigidly with the edge rotation, changing the central rotation speed by more than a factor of two. Core rotation reversals had little effect on the edge rotation velocity. Edge rotation was modestly more countercurrent in unfavorable than favorable ∇B shots.
{ "pile_set_name": "PubMed Abstracts" }
ANTIOCH — A motorist died and authorities shut down the eastbound direction of Highway 4 after gunfire erupted Saturday night, the California Highway Patrol reported. Authorities pronounced the male driver of a blue Toyota Solara dead at the scene and did not immediately release his age or identity. The suspect in the shooting fled the scene, and police were searching for the vehicle. They did not provide a description of it. The shooting happened about 9:03 p.m. on eastbound Highway 4 just west of Hillcrest Avenue, according to a statement from the CHP. It was not immediately known what may have led up to the shooting. The CHP shut down all lanes of Highway 4 for more than an hour to investigate. The CHP is seeking witnesses, as well as anyone who may have seen the Toyota Solara before the shooting. They can be reached at 1-800-835-5247. There have been three shootings reported by the CHP on Bay Area freeways since the start of 2017. A person suffered nonfatal injuries in a shooting on southbound Highway 242 near the Grant Street exit in Concord on Jan. 15, hours after a shooting on westbound Interstate 80 near Carlson Boulevard. Check back for updates. Contact Rick Hurd at 925-945-4789.
{ "pile_set_name": "OpenWebText2" }
September 11 Vs. December 7 DID AMERICANS BEHAVE BETTER BACK THEN? The other great social transformation the war brought was the mass entrance of women into the workplace, especially into workplaces formerly reserved for men. The enduring image of this phenomenon, and rightly so, is Rosie the Riveter, determined and capable, the sleeves of her work shirt rolled up as she grasps a wrench. Largely forgotten today, though, is the enormous problem that was posed by two working—or fighting—parents in a country virtually unequipped with day care. Although the federal government funded an unprecedented daycare system for the children of tens of thousands of war-production workers, many desperate mothers were forced to lock up their children in cars or homes, while other youths roamed the streets in new “zip-gun” gangs, and every train and bus depot had its own coterie of underage victory girls. Venereal disease and illegitimacy rates soared. It was during the war that the term juvenile delinquency came into common parlance. The business of the war was sometimes just as sordid. Harry Truman’s Senate committee turned up one case after another of war profiteering, and at least 20 percent of Americans surveyed admitted that they viewed the black market as a legitimate means of procuring consumer goods. Even the unprecedented wealth now spurring the economy brought on a deep social uneasiness. Boomtowns sprang up all across the nation. They were full of transient men and women, unable to spend their newfound wealth anywhere else, piling into raucous new bars. OVERALL, AMERICANS HAVE KEPT ON A REMARKABLY EVEN KEEL, COMPARED WITH WHAT HAPPENED IN PAST WARS. “No country could have survived America’s convulsive transformations of 1941-45 without altering its essence and its view of itself,” William Manchester wrote in The Glory and the Dream . “The home front was in reality a battleground of ideas, customs, economic theory, foreign policy, and relationships between the sexes and social classes.” We should not be too shocked. Democracy is a messy business, and freedom is built on struggle. Government interventions and private initiatives were able to mitigate the worst of the wartime excesses and produce the means for a renewal of liberty at home. Interracial boards and commissions were set up in many cities for the first time, and A. Philip Randolph’s threatened protest would eventually become Martin Luther King, Jr.’s March on Washington. In Chicago, A. J. Muste and James Farmer began a series of peaceful sit-ins and protests to desegregate restaurants and other businesses. Despite a concerted propaganda effort to get women to return to the home, they never would—not anything like before the war. A year after September 11 we can take credit for having largely spared ourselves such excesses. True, we have yet to endure, so far, anything like the sustained pressure that World War II brought to bear on our society. And the days after the attack on the World Trade Center saw some deplorable assaults on Arab- and Asian-Americans, and on civil liberties. Overall, though, we have kept a remarkably even keel, compared with past wars. There have been no race riots, no mass imprisonments, no vast social disruptions. Leaders at every level have made strong pleas for tolerance, most civil liberties have been respected, and demonstrations of all kinds have continued to be held without incident. The American reaction has been decidedly free of panic or paranoia. 
With nations as with individuals, it is hard to believe that a traumatic event actually changes their character; more likely, it illuminates just what that character already is. Dr. Putnam’s fears notwithstanding, could it be that our ironic, mistrustful, solo bowling society has also reached a new level of democratic maturity, one that serves it very well indeed against the hazards of the twenty-first century?
{ "pile_set_name": "Pile-CC" }
Intel i5-4460 overview
Description
The Core i5-4460 3.2 GHz Processor from Intel is equipped with 4 cores and 4 threads to run multiple programs and can be installed in the FCLGA1150 socket. This processor comes equipped with 6MB of cache. It provides performance for your computer system with a base clock speed of 3.2 GHz. When you need a boost to run a program, it utilizes Intel Turbo Boost Technology 2.0, which increases the clock rate to 3.4 GHz. Connect a variety of peripherals to a PC equipped with this processor, as it supports PCIe revision 3.0 in x16, x8, and x4 configurations. In addition, this processor includes Intel Virtualization Technology, Intel My Wi-Fi Technology, and more.
Intel Turbo Boost Technology
Intel Turbo Boost Technology dynamically increases the processor's frequency as needed, by taking advantage of thermal and power headroom to give you a burst of speed when you need it, and increased energy efficiency when you don't.
Intel Virtualization Technology for Directed I/O
Intel Virtualization Technology for Directed I/O continues from the existing support for IA-32 (VT-x) and Itanium processor (VT-i) virtualization, adding new support for I/O-device virtualization. Intel VT-d can help end users improve the security and reliability of systems, and also improve the performance of I/O devices in virtualized environments.
Intel VT-x with Extended Page Tables (EPT)
Intel VT-x with Extended Page Tables, also known as Second Level Address Translation (SLAT), provides acceleration for memory-intensive virtualized applications. Extended Page Tables in Intel Virtualization Technology platforms reduce the memory and power overhead costs, and increase battery life through hardware optimization of page table management.
Intel My Wi-Fi Technology
Intel My Wi-Fi Technology enables wireless connection of an Ultrabook or laptop to Wi-Fi-enabled devices such as printers, stereos, etc.
Idle States
Idle States (C-states) are used to save power when the processor is idle. C0 is the operational state, meaning that the CPU is doing useful work. C1 is the first idle state, C2 the second, and so on, where more power-saving actions are taken for numerically higher C-states.
Enhanced Intel SpeedStep Technology
Enhanced Intel SpeedStep Technology is an advanced means of enabling high performance while meeting the power-conservation needs of mobile systems. Conventional Intel SpeedStep Technology switches both voltage and frequency in tandem between high and low levels in response to processor load. Enhanced Intel SpeedStep Technology builds upon that architecture using design strategies such as separation between voltage and frequency changes, and clock partitioning and recovery.
Thermal Monitoring Technologies
Thermal Monitoring Technologies protect the processor package and the system from thermal failure through several thermal management features. An on-die Digital Thermal Sensor (DTS) detects the core's temperature, and the thermal management features reduce package power consumption (and thereby temperature, when required) in order to remain within normal operating limits.
Intel Identity Protection Technology
Intel Identity Protection Technology is a built-in security token technology that helps provide a simple, tamper-resistant method for protecting access to your online customer and business data from threats and fraud. Intel IPT provides hardware-based proof of a unique user's PC to websites, financial institutions, and network services, thus providing verification that it is not malware attempting to log in. Intel IPT can be a key component in two-factor authentication solutions to protect your information.
AES New Instructions
Advanced Encryption Standard New Instructions (AES-NI) are a set of instructions that enable fast and secure data encryption and decryption. AES-NI are valuable for a wide range of cryptographic applications, such as applications that perform bulk encryption/decryption, authentication, random number generation, and authenticated encryption.
Execute Disable Bit
Execute Disable Bit is a hardware-based security feature that can reduce exposure to viruses and malicious-code attacks, and prevent harmful software from executing and propagating on the server or network.
Anti-Theft Technology
Intel Anti-Theft Technology helps keep your laptop safe and secure in the event that it's ever lost or stolen. Intel AT requires a service subscription from an Intel AT-enabled service provider.
{ "pile_set_name": "Pile-CC" }
---
name: Feature request
about: Suggest an idea for this project

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
{ "pile_set_name": "Github" }
List of breweries in British Columbia Breweries See also Beer in Canada List of breweries in Canada References Category:Breweries Category:British Columbia
{ "pile_set_name": "Wikipedia (en)" }
Sofia Leather Diaper Bag - Taupe / Grey Storksak Sofia Leather Diaper Bag is made from genuine soft leather and a durable washed cow hide. Includes 5 outer pockets & 9 inner compartments for a cell phone and all of baby's necessities. The Sofia is one of Storksak's most exclusive styles, made from beautifully soft twisted leather which is hard-wearing and has a sumptuous, chalky finish. It has been treated to give it a vintage feel, and with brushed gold hardware, this style is as eye-catching as it is functional. Amongst its many features, the Sofia boasts four large external magnetic snap closure pockets, ten inner compartments, a thermo-insulated bottle holder, a detachable vanity case and wipe-clean linings throughout. For leather, use a clear leather cleaner/protector; test a small section first on the base or a discreet area of the bag to check the result. Change mat - machine wash at 30 degrees Celsius and hang to dry. - Made from beautifully soft leather - Luxury machine-washable padded changing mat - Drawstring insulated bottle holder keeps fluids warm or cool for up to four hours - Long detachable webbing shoulder strap - Wide opening with zipped top closure - Four external magnetic pockets - Nine internal drop-in compartments for easy access and a large zipped pocket for valuables - Signature Storksak wipe-clean linings throughout - Compatible with Storksak stroller clips, sold separately - Detachable zipped accessories pouch H11" W16.1" D7.1" Weight: 1.5lb
{ "pile_set_name": "Pile-CC" }
The previous incarnation of Flattr, the equivalent of an Internet tip jar, worked only when users hit a "Flattr" button. Now, it's all automated. The browser extension works with Flattr's algorithm (which is privacy focused) to figure out how often you visit websites. Users previously allocated a set budget that was divided among the websites they "Flattred." Now, users can use their credit cards for automatic monthly payments. Additionally, Flattr is switching from Euro denominations to US dollars. For content creators, things have also changed. Creators will be levied a processing fee of 7.5 percent for all payments, along with an initial payment processing fee of 9 percent. To start, websites and creators must sign up with Flattr and link up their sites and social media. Flattr also (unsurprisingly) encourages creators to spread the word about Flattr, so more of their readers will use it. If a creator has not signed up for Flattr, the service will hold their payments in reserve until they do. The entire idea of a service to pay creators coming from the same company that is responsible for ad-blocking software is certainly interesting. The idea is sound; after all, internet content should be able to be monetized, and if people choose to block ads (a traditional source of revenue), then there should be some sort of alternative option. But it's hard to say whether the company behind AdBlock Plus will actually entice creators to sign up for the platform after the mistrust it has fostered over the years. As it currently has Wordpress, Twitter, YouTube, Flickr and more on board, it's worth keeping an eye on Flattr to see what happens in the future.
{ "pile_set_name": "OpenWebText2" }
Various types of structures that are generally referred to as towers are in use on recreational and pleasure boats. The towers are typically fabricated from metal tubing or pipe. The towers form a structure over part of the deck surface of the boat. The tower is typically fastened to some part of the deck of the boat and extends upward from the deck surface. The towers are also known to those of ordinary skill in the art variously as arches, half towers, tuna towers, towers, hardtops, and hardtop support systems. The towers can be used to provide sunshade, shelter from the elements, mounting points for a variety of equipment for various purposes, and additional control stations. The present invention is directed to a device for permitting multi-directional movement of the tubing framework and for easily mounting, removing and replacing tubing on boats. Among the prior methods and devices for attaching these structures to boat decks, the most common approach is to utilize mating male and female fittings. Generally, in the prior methods, the female fitting is attached in some manner to the upper surface of the boat deck. The towers all have several legs that form the mounting points on the deck. In order to be able to place and withdraw the male component from the female component of the fitting, it is necessary for the female components of the fittings to all have the same directional orientation. One problem with creating the proper orientation is that the deck mounting surface on many boats is generally not flat but varies at some angle to the horizontal. Due to this variation in the deck surface, it is difficult to install the plurality of fittings with a uniform vertical orientation for the female fitting. Consequently, mounting and removing the towers can be difficult. The prior art presents a variety of approaches that have been employed to mount, remove and replace tubing on boats. Notwithstanding these efforts to provide suitable mounts and fittings, the existing prior art devices are limited in numerous respects. Accordingly, what is lacking, and what the prior art has not provided, is a simple fitting that provides for multi-directional movement of the tubing framework.
{ "pile_set_name": "USPTO Backgrounds" }
How to get stuff done, automatically - acoleman616 http://www.alexpcoleman.com/productivity/get-stuff-done/?hn ====== digitalsushi I get a db error with the query string referrer [http://www.alexpcoleman.com/productivity/get-stuff- done/](http://www.alexpcoleman.com/productivity/get-stuff-done/) works for me ~~~ theg2 Looks like it went under and the sites down for me. ~~~ acoleman616 Always something fun and new with the server when on HN. Working on getting it back up now... ------ startupclarity I'm sure the irony of reading these sorts of blog posts isn't lost on you all. However, I do believe that some of these strategies and techniques can actually work. I even wrote about it on the post 'how to make time for your side-project'. The key is not _just_ to break things down and to create tiny, regular actions. It's also to _start_. Most of us like to talk and talk and not actually do anything at all. This procrastination and hyperbolic discounting means that we often go for the quick fix rather than the ongoing journey to success. Starting and overcoming our own psychology is often the hardest part. [http://en.wikipedia.org/wiki/Hyperbolic_discounting](http://en.wikipedia.org/wiki/Hyperbolic_discounting) [http://www.startupclarity.com/blog/make-time-side- project/](http://www.startupclarity.com/blog/make-time-side-project/) ~~~ bentcorner Personally, I've found that the process of breaking down a task and writing it down is incredibly useful. As I multitask throughout the day I forget where I am in a particular task item, and having a list of things I'm supposed to do all I need to do is go to the next list item and do that. I also try to have high-level items for the day so that I know what I'm focusing on. Anything that I need to do but can't do today I put on a list for the next day. It seems to be working out alright for me. I currently just throw it all into OneNote, although it's not the greatest for dealing with lists the way I use it, but the freeform writing surface and search it provides makes up for it. ~~~ read _I forget where I am in a particular task item_ Forgetting is my number one problem right now. I also noticed there's some kind of unconscious filtering going on in your mind. Even if you write down the tasks your mind prioritizes them on its own. What I wish list software had was a way to push those less important tasks in the background. ~~~ bentcorner Be religious about writing down what you're doing, make a habit of referring to this list when you find that you're bouncing around from task to task. Sometimes I'll hit HN if I'm waiting for something to finish, and if I was in the middle of something complex I've needed to write down what I was doing, even if it was only for _literally_ a minute. Also, when incrementally learning something it can help. Writing down in your own words how something works can help you if you only have small chunks of time to learn something. ~~~ read Thanks for this, I'll try it. I found writing down things I learned (or typing them in) makes them more likely to stick in my mind. Particularly small phrases that pop out. ------ socrates1998 I have always struggled with automating my life habits. It's not that I don't have goals, I have them in plenty. I have problems with the dehumanizing, machine-like feeling it puts on life. Doing the same thing everyday at the same time sounds horrible. I don't want to program myself. I want to live my life according to how I feel at the moment. But, as you can imagine, this has created problems. 
You don't keep jobs by living for the moment or doing what you feel like doing. I am not sure if I have a point, but I think there is more to life than becoming a programmable robot. Maybe balance is the key. Have good habits, but try to build some flexibility into them. ~~~ monkmartinez So you say that you want to live as you feel, but this also creates problems. Emotions are the problem. Better stated, lacking control of your emotions is the problem. You are in control of your thoughts and how you react to them. Knowing this and practicing control has been life changing for me. /r/stoicism, my friend. ------ owenversteeg Site's down with a 404 for all pages right now. Cache: [http://webcache.googleusercontent.com/search?q=cache:LwUoGcd...](http://webcache.googleusercontent.com/search?q=cache:LwUoGcds- okJ:www.alexpcoleman.com/productivity/get-stuff- done/+&cd=3&hl=en&ct=clnk&gl=us) ------ jqm Good article. Ben Franklin's schedule shot was interesting. 8-6 workday with a two hour lunch (so 8 hours of work). For some reason I always assumed people toiled very long hours back in those day. ~~~ bhousel Most people _did_ toil very long hours back then, but Franklin was one of the first to make the jump from common class into pseudo-nobility. Class mobility was a new thing back then. The idea of "work" was considered very uncool at the time, especially to upper class, or social climbers like Franklin who mixed with them. I heard somewhere that this is why old school scientific publications have names like "Observations concerning the Increase of Mankind, Peopling of Countries, &c." Because we're not _working_ , we're just _observing_. ~~~ josephjrobison A few counterpoints: "These images are backward projections of modern work patterns. And they are false. Before capitalism, most people did not work very long hours at all. The tempo of life was slow, even leisurely; the pace of work relaxed. Our ancestors may not have been rich, but they had an abundance of leisure. When capitalism raised their incomes, it also took away their time. Indeed, there is good reason to believe that working hours in the mid-nineteenth century constitute the most prodigious work effort in the entire history of humankind."[1] But also: "Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day."[2] [1] [http://groups.csail.mit.edu/mac/users/rauch/worktime/hours_w...](http://groups.csail.mit.edu/mac/users/rauch/worktime/hours_workweek.html) [2][http://eh.net/encyclopedia/hours-of-work-in-u-s- history/](http://eh.net/encyclopedia/hours-of-work-in-u-s-history/) ------ elwell > If you only come away with one thing after reading this, let it be this: > focus on the process. Don’t focus on your output. That kind of flies in the face of the hacker mentality and lean startup methodology. The growing trend seems to be: _focus on creating value_ ; not how many hours you spent working today. And I must agree with the trend in this case. ------ philip1209 The author added a "?hn" to the URL to track referrals from here - I believe that this violates the HN ToS and should be removed by the mods. 
~~~ rockdiesel why would a referral parameter even be needed in this case? can't a person just go into their analytics and look at all the referrals from news.ycombinator.com without the need for a parameter in the URL? ------ agueroooo Any ideas about the habits of the brilliant programmers of past and present? e.g. Torvalds, Sysoev, etc.? ~~~ derekp7 There's been posts about daily habits of other accomplished persons (not necessarily programmers), and the conclusion from that discussion was that although these techniques work for them, they would not really apply in general. For example, some would have a glass of wine before starting work on a project, whereas that would put me to sleep. Some athletes eat a big steak dinner before a game, while with others it would hamper their performance. My take on it, is that people that are good at what they do are good because they are good, not because of any rituals.
{ "pile_set_name": "HackerNews" }
Universal Expert Platter Set Beautifully useful. This porcelain set displays appetizers and meals at their best. The carving platter fits a nice chicken, we like the medium one for sides and the smallest dish is great for greens. Don’t forget canapés and desserts!
{ "pile_set_name": "Pile-CC" }
Main menu Post navigation The One Thing Women Always Get Wrong About Men I’m going to share a story because I find narratives to be the best style to show a point. Years ago, I met a guy who I thought was absolutely perfect for me. Actually, I had heard about him before I even met him through reciprocal friends, and merely a quick description of him was enough to let me know that he was “the mens” for me! Eventually, the stars aligned and we met in person, and he was exactly how I imagined he would be. Now how often does a fantasy match reality so perfectly? Not only that, the chemistry was explosive. The dialogue flowed, the body language signals all checked, and our faces were almost ached from smiling so wide. I’m a relationship expert, I write about humen for a living, I know what a guy “in like” looks like and this was it! After we parted from that magical first session, I floated home on a cloud. At last, the wait is over, dating is over, I found the guy for me! Never mind the fact that he didn’t ask for my number or mention getting in touch with me, I was sure that he liked me, and that something magical was brewing. But it didn’t … Hmm, that’s strange. Maybe he guessed I didn’t like him? Or perhaps it’s because I was getting shy and flustered and perhaps I merely wasn’t giving enough green light signals. Or perhaps he’s afraid of being rejected; I know that’s a real fear for most men! And I entail, that would be so embarrassing for him because we have so many mutual friends … so perhaps he’s just waiting until he knows for sure that I’m interested. Yes! That has to be it. The next time I insure him, I will make sure my signals are clear and obvious. About a month or two afterward, we crossed tracks at a friend’s birthday party. Never mind the fact that every time I appeared over at him he was heavily flirting with a different girl; I knew that I was different. I mean, I’m so good at reading situations, that’s what I do for a living! I can’t be wrong about this. Eventually, he swam my way through the sea of populations and again, explosive chemistry, flirty banter, outstretched smiles, this time he even gave me a high five that persisted route too long, so does that count as him holding my hand ?? Yes, this was happening, it was, I feel it. But it didn’t. OK , now I’m so confounded. This doesn’t make any sense. Why isn’t he asking me out? It must be because he doesn’t want to build things weird, that he wants to be sure this will work out before he dives in. Oh! I know! Maybe he senses that I’m “the one” and that’s really scary for a guy … so he’s just processing what’s going on. I spent many more months in the grips of intense disarrays. How do I crack his code? How do I get a relationship to happen? This is all so confusing! Now, whenever I would get stuck in an area concerning my love life it was doubly hard because I am a relationship expert by trade, it’s what I do and I’m good at it. But when faced with a problem I couldn’t solve, well then I genuinely felt like an idiot and a fraud! I knew what I would tell a reader this situation, but my own sound advice just didn’t seem to apply here. It simply feelings different. OK, long narrative short, a little while afterward I brought a friend of mine to a party that he also attended. He took one look at her and that was it. They engaged in heavy flirting, he immediately got her information, and he called and asked her on a date soon after. I literally felt like I had been knocked sideways. 
The next day I couldn’t even walk straight-out, my whole world felt like it was tilted. And this is what girls always get wrong about men. We think they’re so confusing. We think it doesn’t make any sense. We think they play games and send mixed messages. We don’t understand why. But really, we do. The problem is we don’t want to admit the truth that’s staring us in the face. We don’t want to admit that a guy we like may not like us back, so we invent this whole narrative about how he actually does like us, and about all these menacing obstacles are getting in the way of building true love bloom. Men Aren’t Complicated I know it voices shocking, I know it doesn’t feel like this can possibly is correct to say, but it is! And once you really internalize this fact, your dating life will dramatically change. You can relax and stop stressing and building yourself crazy. When it comes to humen, the most obvious justification is usually the correct one. Feelings aren’t facts. Just because something feels a certain way doesn’t mean that’s the reality. You can’t interpret situations when you have a bias toward a certain outcome. This is main reason why I was so good at breaking down other people’s relationship issues but always got tripped up when it came to my own love life. It’s the reason most of us can solve our friend’s relationship issues such as no problem, but remain utterly befuddled when it comes to our own lives. Here is the straight up truth: Man aren’t complicated, female complicate them . I’m not blaming the woman, but I will say we set ourselves through so much unnecessary heartbreak just because we don’t want to accept that men are actually fairly simple and straightforward creatures. But How Does He Feel About Me? By far the most common questions I get are fluctuations of figuring out how a guy feels … “He texts me from time to time, but we never actually hang out…” “He’s truly flirty when we find one another, and said I’m the coolest girl he knows, so why isn’t he asking me out? ” “We’ve been to be together for months, but he won’t make it official….” “He always watches my Instagram and Snapchat stories, and he likes my scenes! What does that mean ???? ” It entails he is kind of interested in you, but doesn’t want a relationship with you. Guys don’t play games, they don’t try to hide interest, they don’t speak in code, and they aren’t trying to confuse you. Why is it so confusing? Again, because we don’t want to accept the truth and we all want to think we’re the exception. What women always get wrong about humen is believing they’re evil or complicated or commitment-phobes or players. But the majority of cases this just isn’t the case. Men usually demonstrate you exactly who they are. They may not inevitably come right out and say the words, but you’ll always watch the truth when you can honestly look for it. And if a man does ever come right out and tell you he doesn’t want to be in a relationship, believe him . Again, this isn’t him performing some voodoo trap on you and trying to get you to like him more or prove your worth by showing what an amazing girlfriend you would be. If a guy isn’t stimulating it obvious to you that he likes you, it’s because he doesn’t like you that much. If he isn’t committing to you, it’s because he doesn’t want to. If he’s treating you like a loot call, it’s because he doesn’t want a relationship that runs any deeper than that. Don’t delude yourself or adopt a man-hating position. 
See the situation for what it is, and accept that men just aren't that complicated. And in case you're wondering, it didn't work out between that guy and my friend and she was utterly devastated (and OK, fine, I was slightly smugly satisfied). He and I ended up becoming really close friends and through our friendship I realized that we would have made a terrible couple! These days he's married, I'm married, and my friend is married, so all's well that ends well!
{ "pile_set_name": "Pile-CC" }
Kimura HK-1 The Kimura HK-1 was a glider built in Japan in 1939 to investigate the possibilities of tailless aircraft. It was a single-seat design with an open cockpit, swept wings, and a single tail fin. The HK-1 made a total of 169 test flights between 15 December 1939 and 7 March 1940, towed aloft behind a car. By this time, the glider's success had attracted the attention of the Army, which arranged to purchase the aircraft. It was taken to the Tachikawa factory for testing, but was destroyed in a crash after only 13 flights, on 16 April 1940. The design proved sufficiently interesting for the Army to commission further research into the tailless concept, which would lead to the Kayaba Ku-2. Category:Glider aircraft Category:Tailless aircraft Category:Kayaba aircraft Category:1930s Japanese experimental aircraft
{ "pile_set_name": "Wikipedia (en)" }
11 Times When Being Honest Is Almost Never Worth It We all know the old saying, that "honesty is the best policy." But is it really? Is it actually necessary and good to be truthful 100 percent of the time? If you really think about it, there seem to be a few situations in life where being honest simply isn't worth it. You know the times I'm talking about... like when you're faced with a personal question at work, and an honest answer could get you in trouble. Or when someone's asking for your opinion, and you just don't have anything nice to say. While you should obviously try to be honest whenever possible, it is OK to occasionally tell a little white lie, or withhold the truth, if it means keeping the peace or preserving a relationship. "If telling the whole truth will make you unsafe, ruin your reputation, or cause general headaches, keeping quiet is always an option," certified counselor Jonathan Bennett tells Bustle. "[Also], while authenticity and honesty are important, not everyone is entitled to know your life story or have access to your authentic self." And if that means telling a little white lie? Then go ahead and do it, without feeling guilty. Below, a few situations like these where being honest isn't always worth it. 1. When You're Around Toxic People While I wish it weren't so, almost everyone has a toxic person or two in their life. And, unfortunately, you kinda have to watch what you say around them. "If you make yourself vulnerable to them by sharing your true self, it's possible they will use it against you in some way," Bennett says. "Being totally honest around them could have negative consequences and it's best to keep them at arm's length." 2. When Asked Personal Questions At Work As long as it doesn't pertain to your job, you have every right to keep certain things private at work. And sometimes that means hiding the truth or telling a little white lie. "In these settings, sharing your true opinions and feelings can cause huge headaches and isn't always worth it," Bennett says. "For example, if your workplace would be highly judgmental of your lifestyle and you could even be fired for it, it's best to keep that information under wraps." 3. When Someone You Love Has Failed Was your SO rejected from grad school? Or did your friend get fired from her beloved job? During times like these, psychoanalyst Dr. Claudia Luiz says, "they're probably beating themselves up and your honesty will just beat them down. At those times ... it's important to preserve a fragile ego. You need to remind the person and reassure them that everything is OK and that these things happen." Save the advice-giving and honesty for when they feel better. 4. When Someone's Looking For Your Approval Another time it's not worth it to be honest? When someone in your life is just looking for a compliment or reassurance, and your true opinion might drag them down. This situation can arise in relationships, like when your SO asks if you had a good time at their [insert thing they invited you to here] and you totally didn't. "You don't think there are any major problems but it's just not your preference," says clinical psychologist Dr. Josh Klapow, host of The Web Radio Show. "Why be honest? What will the honesty do?" 5. When Sharing Embarrassing Stories From Your Past Let's say your new partner asks a question about your past. Nine times out of ten, you should answer truthfully.
But if you have a few embarrassing moments you're worried might change their opinion of you — and they don't affect your relationship — it may be a good idea to keep 'em secret. As Klapow says, "Maybe you tried drugs and had a bad experience, or maybe you got into an argument with your parents and cursed them out. Do you need to tell your current significant other all of this? Probably not." 6. When You Don't Like Someone's Food OK, so you shouldn't lie to your partner about food preferences, since doing so can set you up for a lifetime of having to choke down that one dish you secretly despise. But if you've simply been invited to the neighbors' for dinner, politeness should prevail when it comes to what you really think about their food. "White lies — or what we call altruistic lies — are lies that are presented to preserve the relationship and protect the person," Klapow says. "And that's more than OK." 7. When Someone Asks If They Look OK In the same vein, you might consider telling someone they look fine, even if you're not a huge fan of what they're wearing — but only if they're not a close friend, and only when you're already out and they can't go change. Same goes for telling a sick friend they seem better. If it'll cheer them up and give them hope that their horrible cold is, in fact, sounding better, why not say so? 8. When Someone's Sharing An Exciting Idea Let's say your friend has sat you down for coffee, and is excitedly chatting away about her new business idea. But you hate it. Do you a) say so, and ruin her momentum? Or b) smile and say it sounds like fun? I'd go with b. "We are too quick to give our opinions rather than waiting to see if our opinion is even requested," says counselor Tiffany Ashenfelter, LPC-S. "If our opinion is sought then we can give honest feedback." But if not, it's better to simply offer your support. 9. When Coming Clean After An Affair Unless your partner expressly asks for the intimate details, it's almost never a good idea to be brutally honest about your affair. "By doing so it will lead to pain for [your] partner while alleviating [you] of the guilt of the secret," says Ashenfelter. In other words, coming clean is almost always more for you than it is for them. So it's not worth it to say it. 10. When Breaking Up With Someone It's up to you to decide what to say when breaking up with someone. But do recognize that the truth is not always what they need to hear. "If you are breaking up with someone because he or she never listens to you, then it might be clarifying and useful to tell the person in those words," says licensed psychotherapist Kenneth Jedding, LCSW. "But if you are breaking up because of, say, something about their physical appearance or some other thing that is beyond their control, then why not find some nice euphemism? Someone invented 'it's not you, it's me' for a reason." 11. When You Just Don't Want To Share Again, your personal life is yours. So if someone's hounding you for info, it's OK to lie to get them to go away. "One example of that might be when you are in the middle of an incredibly painful time of life, such as infertility, and a distant acquaintance or even a stranger begins questioning you on when you plan to have children," says Ashenfelter. Answering them, or even fielding those Qs, might not be worth it if it's going to make you upset. All of that said, it's important to remember that honesty is usually the best policy, so don't make a habit out of fibbing your way through life.
But if you come upon a situation where being honest might cause more problems than it's worth — and withholding the truth won't hurt anybody — then consider it OK.
{ "pile_set_name": "Pile-CC" }
The 7 Biggest SEO Lessons I Learned from a Google Employee In the past, I broke down the most vital SEO strategy I learned, which came from a Google employee. This time, I thought I would do something similar and share the 7 biggest SEO lessons that I learned from a Google employee. Some of these things you may already know, but most you probably aren't too familiar with. And of course, I am not telling you anything that would jeopardize my relationship with or the career of the Google employee. So here goes, these are the 7 biggest SEO lessons I learned from a Google employee. Lesson #1: Penalizations and bans don't work the way most people think Google's goal isn't to penalize sites. Their goal is to serve the most relevant listing to each searcher. For example, if BMW had a handful of bad links pointing to it or they were caught building links, it would be foolish to ban or penalize BMW. The reason is that BMW is a popular brand… there are millions of people each year who search for the term "BMW." See, the average person doesn't know what SEO is. They also don't care about link building or even Google's algorithm. They just expect to see BMW.com when they search for "BMW" and if they don't, they are disappointed in Google. Plus, BMW is a brand. Google loves brands and trusts them more because you as a consumer trust brands. As ex-Google CEO Eric Schmidt once said: Brands are the solution, not the problem. Brands are how you sort out the cesspool. Now, hypothetically speaking, if Google decided to remove BMW.com from their index and showed you a random site when you searched for "BMW", Google knows you will be disappointed based on click-through data. And when users are disappointed with Google, there is a higher chance they won't come back and use Google again, which means less ad revenue in the long run. For this reason, Google doesn't just ban or penalize sites; they keep fine-tuning their algorithm to ignore bad signals such as paid links or negative SEO. For example, if your competitor all of a sudden sends 1,000 spammy backlinks to your site, there is a high likelihood that Google sees this as negative SEO and ignores it. I experienced this when I started a nutrition site years ago (I no longer own it). Someone built thousands of adult links to the site and they made up the majority of the backlinks. The site was generating well over 100,000 visitors a month from Google before the adult links kicked in… and can you guess what happened when they indexed all of those bad links? Nothing! Google was smart enough to see that it was unnatural, so they just ignored it. My traffic stayed the same. As long as you aren't doing anything bad, you shouldn't worry about penalizations. Lesson #2: Google prefers automation Yes, there is a webspam team, but Google prefers automation. They leverage technologies like machine learning to figure out what's wrong and how to fix it in the future. And, of course, in an automated fashion. They don't want to hire thousands of people to manually fine-tune their algorithm. This is one of the big reasons that Digg didn't get acquired by Google years ago… it was because Digg's algorithm required a lot of human intervention and their engineers weren't up to par with Google's. You are going to see constant updates to Google's algorithm on sites like Search Engine Land and Search Engine Roundtable. But if you focus on what's best for your users, you should do well in the long run. As for your traffic swinging because of algorithm updates, it's natural.
It happens to all of us. If their algorithm were perfect, you wouldn't see constant updates. But like every good company, they learn from their successes and failures and adapt. And, of course, they try to do this in an automated way. Again, as long as you do what's best for your users, you should see nice growth in search traffic over time. Don't worry if you see a slight drop due to an algorithm update if you are doing what's best for users. And don't worry if a spammy competitor outranks you, because it won't last forever. Their ranking algorithm isn't perfect, but it is really good and keeps getting better over time. Lesson #3: Don't waste your money on expired domains (or other shortcuts) When I was in my early 20s, I thought I was a hotshot marketer. I thought I was smarter than a multi-hundred-billion-dollar search engine and that I had figured out a shortcut to climb to the top. One of those tricks was to purchase expired domains and optimize them. I purchased domains that had EDU and GOV backlinks and skyrocketed to the top of Google for terms like "online casino." Can you guess what eventually happened? My rankings tanked! Just like any shortcut that can drastically boost your rankings, it will get closed. The question is when. I know for a fact that expired domains don't work that well. Not just due to my experience, but because Google knows marketers buy them and either 301 redirect them to their site or create a network of blogs to leverage for backlinks. Google is also a registrar like GoDaddy; don't you think they have all of the information you have, plus more, on domains? 😉 Lesson #4: Google ignores most guest post links Do you get those emails from people offering you paid links on Entrepreneur, Forbes, Buzzfeed, and many other sites that have a high domain authority? Well, we all do. And they just don't stop… Nowadays, most of the big sites like Entrepreneur nofollow their links. But even if they didn't, it isn't hard to figure out which URLs and profiles on these sites are guest posts. Just search for "guest writer" on Entrepreneur and you'll find tons of articles like this. By no means am I saying that the author above is selling links; I am saying that it isn't hard for Google to spot these types of posts and devalue the links even if the publication decides to use do-follow links. Heck, Google even commented on how links from Forbes were useless. As Google commented… Google devalues or ignores bad links, which reflects the changes we saw in Penguin, where Google devalues those links rather than penalizing for them. If you want to build links through guest posts… especially obvious ones that clearly state the article was a guest post, don't expect those links to have much of an impact on your search rankings. Lesson #5: Google isn't trying to take clicks away from your website, they are trying to build a better product Over the last few years, I continually saw SEOs complaining about how Google is just trying to keep people on Google and not drive any traffic to their websites anymore. Some of these marketers even claim it is unfair because Google is just scraping content from your site and using it for their own benefit. Let's be honest here… none of you are going to block Google from crawling your site. You should be happy that you are getting traffic for free! Who cares if Google scrapes your content… some free traffic is better than none. It's a big misconception that Google just wants to keep people on their own site.
The real truth is that Google wants to do what is best for searchers, not marketers. For example, one could say that they only care about ad revenue and should blanket the page with ads… funny enough, though, over time they have reduced the number of ads per page by removing all sidebar ads. Yes, they are placing a few more ads at the top to make up for it, but overall it is still fewer ads per page. I know many of you don't like this, but they are a publicly traded company… they have to make money. And, ideally, more money each quarter. Whether it is the knowledge graph or a mobile-first index, their goal is to do what is best for searchers. They know that if they do that, their traffic will go up over time and a small portion of you will click on ads. It really is that simple. They don't make these decisions based on what they want to do… they are logical engineers that use data. For example, if 99% of their users said they hate knowledge graphs, there would be no knowledge graphs. Or if 99% of their users wanted more links per page going to external sites, then that is what they would add. They do whatever you want, assuming you have the same opinion as the masses. The lesson to be learned here: don't worry about Google taking your content or not driving as many clicks to organic results versus paid. You will constantly see changes coming, especially with the popularity of voice search. Know that these changes are driven by data on what the masses want. Lesson #6: The biggest search opportunity currently lies in YouTube Google loves text-based content. That's part of the reason why so many companies have a blog. But it isn't as easy to rank on Google as it used to be… unless you expand internationally. But even that is getting more competitive as we speak. The biggest opportunity in search is YouTube. According to Alexa, it is the second most popular site on the web, and people tend to find their content on YouTube using the search feature. If you aren't convinced that you should start going after YouTube SEO, here are some interesting stats for you: YouTube has 1.9 billion monthly active users Only 50 million users are creating and sharing video content Average viewing session is 40 minutes Roughly 5 billion videos are watched per day Mobile devices account for 500 million daily video views If that doesn't convince you to go after YouTube, just look at my stats. I should have done it much sooner, as the employee at Google pushed me to create videos years ago, but I was a bit slow to move on their advice. And now I am generating 724,464 views a month, of which 185,290 come from YouTube search. How many of you can say your website is generating over 100,000 visitors a month from Google? That's the power of YouTube… it has volume and it is easier to rank on than Google. Just look at me, I generated over 100,000 views a month in less than a year from YouTube SEO. Lesson #7: You are not going to like the future I saved the biggest lesson for last because it affects most marketers who are used to Google in its current form and how it has been for years. From all of my conversations with people who work at Google, they all know the world is changing and they want to make sure Google adapts with it and, more importantly, stays ahead. For example, they know a lot of searches are going through voice devices like Alexa and Google Home. If you just look at mobile devices, voice already accounts for 20% of searches.
And it is happening at a rapid pace. And it isn't stopping there. Both the Internet world and the real world are starting to be connected. From self-driving cars, which Google has spent billions of dollars on, to simpler things you use every day that Google is starting to connect with (like your stove and fridge). They want to control it all. And not in a creepy way, more so in a way that makes your life easier. For example, if you are cooking and are unsure about a recipe, they want to be there to make sure that you are doing everything correctly. As for what it will look like in the future, no one knows yet, not even Google. But they are paying smart product people and engineers to solve these problems. For example, they know that kids aren't using Google search the same way adults are. A good example of this: when my 8-year-old nephew isn't sure about something, he asks Alexa. I, on the other hand, will perform a search on my laptop. All we can do here is make sure that we adapt with technology to ensure that we keep getting traffic from Google. This doesn't mean just adapting your SEO strategy, but more so adapting your business to ensure that you are staying on top of things and providing users with what they want. Conclusion As SEOs, we continually play a game of cat and mouse. But why? Instead of wasting our time on short-term thinking, why not start putting ourselves in our customers' shoes? That's what Google is doing. And the changes they are making to their search engine, their future product roadmap, and even to their algorithm are based on what people want. If you want to continually do well, yes, you still need to do traditional SEO. But you need to start thinking about your end user and do what's best for them. So, what's your SEO game plan now?
{ "pile_set_name": "OpenWebText2" }
265 P.3d 493 (2011) 126 Haw. 24 BATTEY v. STATE. No. CAAP-11-0000156. Intermediate Court of Appeals of Hawaiʻi. November 30, 2011. Summary Dispositional Order Affirmed.
{ "pile_set_name": "FreeLaw" }
64-Million Board Feet of Mission Timber to be Sold This Fall

Ronan: The Bureau of Indian Affairs Forestry in Ronan has scheduled two timber sales in the Mission Mountains this fall. The sales would total some 64 million board feet.

Ronan: A lot of chips are expected to fly before a pair of large contracts are let in two Mission Mountain logging units this fall. The Bureau of Indian Affairs Forestry Department in Ronan has scheduled two logging sales on the west face of the Missions between McDonald Lake and Twin Lakes for fiscal year 1975 (which begins this July 1). The two sales will total some 64,000,000 board feet of timber. Scheduled for sale within the year are: the Ashley Unit, which lies along the valley-facing slope between McDonald Lake and Mission Lake... and the St. Mary's Unit, stretching from about St. Mary's Lake southeast to the Twin Lakes on top of the Mission Creek drainage.

But opposition to the sales has been mounting steadily for the past several months. Logging, and especially Mission Mountain logging, was among the hottest issues in the December Tribal Council campaign. Most candidates who discussed logging indicated they would reform some logging practices such as clear-cutting and excessive roading and withdraw areas such as the upper Jocko and the Mission Mountains from the timber schedule. There seemed to be a general consensus at district candidates' meetings in December that Mission Mountain logging should be carefully reviewed. The Tribal Council is also turning a critical eye on many logging practices and there has been some resistance to the idea of logging the Missions and upper Jocko.

Two studies have been called for by the Council. One involves the environmental impact of logging on the reservation. This study... known as the Cummins report after Leo Cummins, a University of Montana forestry professor who heads up the study team... will be completed in rough draft by February. A final report on the effect of current logging practices on reservation watersheds, wildlife and soils will be presented to the Tribal Council by May. Another study, requested by the Council last July, will probe into alleged mismanagement of the reservation forests, including stumpage overruns (the amount of timber graded at the mill in excess of the board feet stipulated in the contract), thinning, reforestation, timber stand improvement practices and the marketing of tribal timber. This study, to have been conducted by the Department of the Interior, has not yet begun. The Council has already asked (cont. on page 18)

The sales would total some 64,000,000 board feet and bring the area between McDonald Lake and Twin Lakes under logging.

CHAR-KOOSTA 15¢ THE BI-WEEKLY NEWSPAPER OF THE SALISH, PEND D'OREILLES AND KOOTENAI TRIBES OF THE FLATHEAD RESERVATION Volume 3 - Number 19 New Moon of Bands Spread All Over - Feb. 1, 1974

Committee Primed on Legal Groundwork

Con-Con District Meetings Scheduled

The Constitutional Committee will take its review of the Tribe's 1935 Constitution to the people. A series of district meetings starts this weekend (see page three for schedule). The Committee will be talking about such things as tribal government and enrollment and they want your ideas.

The Tribal Constitutional Committee will be on the road for the next month trying to find out what the people of the tribe feel about their old constitution and would like to see in a new one. The five men and five women of the committee, who were elected Dec. 15 to review the tribe's 1935 constitution, will go into the district meetings with much of the groundwork for their task cleared away. In four meetings since the committee was sworn in on Jan. 4, they have met with a Bureau of Indian Affairs constitutional expert... talked over legal problems with the Tribal Council's attorney... and devised a plan for conducting the district meetings. Six meetings in February and March have been scheduled for the eight reservation districts. (See schedule on page three.)

Fare for the meetings will be: a review of the tribe's present constitution (copies of the 1935 constitution will be given to persons attending the meetings), a discussion of ways to update or revise the constitution, and a traditional lunch. All tribal members are invited to attend every meeting, but residents of the districts are especially urged to attend the meeting in their area. Among the things to be discussed at the district meetings will be the legal restrictions and framework for a new tribal governing document. Tom Whitford, of the Billings Area BIA Tribal (cont. on page 3)

Tribe Asks State To Drop Auto Assessments

Dixon: For years Tribal Members have been paying state property taxes on their cars when they bought their license plates in Lake, Sanders and Missoula Counties. This year there may be a change, and the tribal administration is urging members not to buy their licenses until just before the Feb. 15 deadline. The reason, according to Tribal Secretary Fred Houle, Jr., is a petition submitted to State Attorney General Robert Woodahl last month asking that Tribal members living on the Flathead Reservation be exempted from the property tax. It is hoped Woodahl will make a decision soon, but if an agreement is not reached by the deadline, Tribal Members are asked to purchase the license plates and pay the tax under protest.
{ "pile_set_name": "Pile-CC" }
A small but helpful new feature arrived in this week's Chrome OS Dev Channel update (v43.0.2357.3). The sidebar of the Files.app (the Chrome OS File Manager) is now home to a new entry titled 'Add new services'. Clicking this button opens a modal window that lists File System Provider extensions that are currently available to install from the Chrome Web Store. Among them are a few we've written about previously, like add-ons that integrate the popular cloud storage services Dropbox and Microsoft's OneDrive, as well as SFTP servers, directly into the Files.app. But why is Google suddenly promoting these extensions/apps so prominently? 'Increase Awareness' Chromium developers say the feature is designed to help 'significantly increase awareness of new providers'. Chrome OS's lack of native integration with cloud services other than Google Drive had been a sticking point for many looking to switch. Business users, for example, may be heavily reliant on Dropbox or a custom in-house server. With the File System Provider API now stable, developers are free to help fill in those gaps and bridge the divide. Google plans to make further improvements to the integration of cloud providers in future updates. These tweaks may include allowing users to 'pin' files offline; a more thoughtful way to unmount and remount cloud providers (i.e. no more 'eject' button); and improvements to the way loading files are displayed.
{ "pile_set_name": "OpenWebText2" }
Room - Change If your store has several rooms, you can create and use them with your iPad. In the main view, where the room plan with the respective tables is displayed, you can switch between the rooms. In general you can follow these steps:
{ "pile_set_name": "Pile-CC" }
Q: Android Intent Clear Top not working

I want to fire an intent to my HomeScreenActivity and clear all the activities that are alive in the stack. Below is the intent code:

    Intent intent = new Intent(activity, HomeScreenActivity.class);
    if (Build.VERSION.SDK_INT >= 11) {
        intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
    }
    intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    intent.addFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
    startActivity(intent);
    finish();

The stack is not getting cleared, and when I press the back key, all the previous activities are shown, which is not the expected result. Please help! Thanks in advance.

A: Use both flags at the same time:

    intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_NEW_TASK);

The first flag creates a new activity when it isn't available in the current task (stack of activities) or reuses an existing one. The second flag clears the task associated with the requested activity. Note also that in the question's code the second setFlags() call overwrites the first one, so FLAG_ACTIVITY_CLEAR_TASK is lost; flags must be combined with addFlags() or a bitwise OR.
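To make the fix concrete, here is a minimal, self-contained sketch of a helper that launches HomeScreenActivity and wipes the back stack. It illustrates the accepted answer rather than being code from the question; the Navigation class and goHome() method names are made up for the example:

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Build;

    public final class Navigation {

        private Navigation() {}

        public static void goHome(Activity activity) {
            Intent intent = new Intent(activity, HomeScreenActivity.class);

            // Combine all flags with a bitwise OR via addFlags(); a second
            // setFlags() call would silently discard the earlier value.
            intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP
                    | Intent.FLAG_ACTIVITY_NEW_TASK
                    | Intent.FLAG_ACTIVITY_NO_ANIMATION);

            // FLAG_ACTIVITY_CLEAR_TASK (API 11+) clears the entire task before
            // HomeScreenActivity starts, so pressing Back afterwards exits the
            // app instead of revisiting old activities.
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
                intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
            }

            activity.startActivity(intent);
            activity.finish();
        }
    }

Calling Navigation.goHome(this) from any activity should then land the user on HomeScreenActivity with an empty back stack.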
{ "pile_set_name": "StackExchange" }
Introduction {#s1} ============ With the increasing affordability of personal genome testing (PGT), the incorporation of patient genotype data into the practice of medicine is becoming more pervasive. Multiple medical centers across the country have begun introducing genetic and genome-wide analysis to make pharmacogenetic testing available to patients [@pone.0068853-Johnson1], [@pone.0068853-Pulley1]. PGT companies offering direct-to-consumer (DTC) tests have empowered individuals to independently obtain their personal genomic profiles, which provide them with a view of their genetic risks for hundreds of diseases and atypical drug responses. Further, applications of genetic testing are expanding to pre-conception genetic screening [@pone.0068853-Srinivasan1], selection of embryos for *in vitro* fertilization [@pone.0068853-Johnson2], non-invasive screening for fetal chromosomal abnormalities [@pone.0068853-Fan1], and the diagnosis of complex medical conditions [@pone.0068853-Worthey1]. Despite this expansion, most medical schools have not kept pace in providing state-of-the-art education in genetics and genomics to medical trainees [@pone.0068853-Salari1]. Healthcare authorities and medical educators now agree that there is a strong need to train medical students and physicians to understand basic principles of genomics and to be able to interpret PGT results [@pone.0068853-Guttmacher1], [@pone.0068853-Wiener1]; however, there has been significant debate over the best educational models to deploy [@pone.0068853-Salari2], [@pone.0068853-Walt1]. Several institutions, including ours, have considered offering students the opportunity to undergo PGT themselves as part of an updated medical school genetics curriculum, with some institutions ultimately deciding against it [@pone.0068853-Walt1]. At Stanford School of Medicine, after a school-wide task force rigorously evaluated potential risks and benefits, PGT was offered to students as part of a first-of-its-kind medical school elective course on genomics and personalized medicine, where students learn principles of genetics and genomics through a combination of interactive lectures and hands-on analysis of genomic data, using either their personal genotype data or publicly available datasets [@pone.0068853-Salari2]. Given the novelty of this educational initiative, there was no data on how PGT impacts student learning and whether its use in the classroom enhances education. Therefore, we used a survey instrument administered before and after the course to examine associations between the use of PGT and student knowledge and attitudes about genomics. Based on previous evidence of the benefit of participatory learning in medical education [@pone.0068853-Genzen1], [@pone.0068853-Knoell1], [@pone.0068853-Mazmanian1], we hypothesized that the use of personal genome data in the classroom would improve knowledge and the learning experience for students. Materials and Methods {#s2} ===================== Subjects {#s2a} -------- Subjects were medical and graduate students enrolled in an elective 8-week course on genomics and personalized medicine (Genetics 210; <http://gene210.stanford.edu/>) offered in the Summer 2010 quarter at Stanford School of Medicine. Forty-six students were enrolled in the course, and participation in this study was voluntary and anonymous. 
Genotyping {#s2b}
----------

The course started with two weeks of instruction and class discussion led by a clinical geneticist (L.H.), a genetic counselor (K.E.O.), and a bioethicist/lawyer about the risks, benefits, uses, and limitations of PGT; these sessions provided the students with the necessary background to provide informed consent should they proceed with PGT. At the end of the second week of instruction, students decided whether to personally undergo genotyping using the PGT services of one of two companies (23andMe or Navigenics). Of note, at the time of the course offering, 23andMe provided customers with their genotypes for all ∼600 K SNPs on their microarray while Navigenics provided genotypes for only the ∼300 SNPs used in their clinical reports. The subsequent six weeks of instruction included lectures and hands-on data analysis exercises on various topics related to human genetics, genomics, and personalized medicine (see the course website for more details of the curriculum; <http://gene210.stanford.edu/>). Each week students were led through classroom exercises to analyze various aspects of whole-genome single nucleotide polymorphism (SNP) data. A dataset comprising 12 diverse individuals from the HapMap project genotyped on Illumina HumanHap 650 K SNP microarrays was provided to all students. Students who underwent PGT were able to complete data analysis exercises using their personal genotype data, and students who did not undergo testing used publicly available genotype data from the 12 HapMap subjects. A number of safeguards were implemented to ensure student privacy, confidentiality, and safety, including the provisioning of free genetic and medical counseling and mechanisms by which students using their own data who had difficulty resolving the class exercises could ask questions without disclosing their genotype results [@pone.0068853-Salari2].

Survey Instrument {#s2c}
-----------------

At the start and conclusion of the course, we electronically administered a survey that assessed student attitudes and knowledge about genomics and personalized medicine. The survey (extending the questionnaire developed by Ormond *et al*. [@pone.0068853-Ormond1]) included basic demographic information; assessed attitudes and knowledge about PGT; and solicited students\' feedback on the experience of undergoing testing as it related to the class and their learning experience (**[Methods S1](#pone.0068853.s003){ref-type="supplementary-material"}**). Student attitudes were assessed either via yes/no questions or by asking for extent of agreement with statements on a 5-point Likert scale. Knowledge was assessed by both subjective and objective questionnaire items. For objective knowledge assessment, 6 multiple-choice questions and one free response question were asked; responses were scored blinded to subjects\' genotyping status. Separate from the surveys presented in this study, students were also invited to participate in individual interviews discussing their experience (presented separately [@pone.0068853-Vernez1]). The Stanford University Institutional Review Board approved all study methodology.

Data Analysis {#s2d}
-------------

We analyzed responses from students who completed both pre- and post-course surveys and attended at least 50% of the eight class sessions. Student responses to the pre- and post-course surveys were linked using a randomly assigned numeric code to maintain anonymity.
We considered separating the students who underwent PGT before the course from those who underwent it during the course, but since preliminary statistical comparisons were underpowered to show differences, we elected to combine these groups in our study analysis. Student attitudes assessed on a 5-point Likert scale were collapsed and reported as the percentage of students who agreed or strongly agreed with the stem statement. Paired pre-course and post-course responses were tested for change using paired non-parametric statistics (McNemar\'s test for binary response questions and the Wilcoxon signed-rank test for Likert items). Comparisons between responses of genotyped and non-genotyped students were made using Fisher\'s exact test for binary response questions and the Mann-Whitney *U*-test for Likert items. The change in student knowledge assessed by pre-course and post-course knowledge scores was evaluated by paired *t*-test. The difference in knowledge improvement between genotyped and non-genotyped students was assessed by Student\'s *t*-test.

Results {#s3}
=======

Forty-three class participants completed the pre-course survey (93% response rate) and 34 class participants completed the post-course survey (74% response rate). We present data from 31 students who completed both pre- and post-course surveys and attended at least 50% of the class sessions (67% of the course enrollees). Demographics from this study population are presented in [**Table 1**](#pone-0068853-t001){ref-type="table"}; subjects were evenly split between genders, with slightly fewer medical versus non-medical trainees, and most frequently in their first year of training. Thirteen of the 17 students (76%) who indicated on the pre-course survey that they planned to undergo PGT did ultimately undergo testing. Seven students were initially unsure, of which 3 proceeded with testing. Another seven students had already undergone PGT prior to the course (all by 23andMe); these students indicated that they did not plan to undergo testing again and used their previously obtained data in the course. Due in part to a more limited genotype dataset provided by Navigenics, all 16 students who underwent PGT in the course did so via 23andMe. Thus, 23 students formed the genotyped group, and 8 students formed the non-genotyped group. There were no significant differences in demographics between the genotyped and non-genotyped groups ([**Table 1**](#pone-0068853-t001){ref-type="table"}) or between class participants who completed the study and those who were lost to follow-up (**[Table S1](#pone.0068853.s001){ref-type="supplementary-material"}**).

###### Table 1. Subject characteristics.

| | Genotyped^a^ | Non-genotyped^a^ | *P*^b^ |
|---|---|---|---|
| Gender (female) | 13 (56.5) | 3 (37.5) | 0.43 |
| Program | | | 0.21 |
| Medical (MD, Clinical Resident/Fellow) | 7 (30.4) | 5 (62.5) | |
| Biomedical (PhD, Post-doctoral Fellow) | 16 (69.6) | 3 (37.5) | |
| Year in Program | | | 0.27 |
| 1 | 11 (47.8) | 2 (25.0) | |
| 2 | 1 (4.3) | 1 (12.5) | |
| 3 | 3 (13.0) | 3 (37.5) | |
| 4+ | 8 (34.8) | 2 (25.0) | |
| Previous personal genome testing | 7 (30.4) | N/A | |

^a^ The number (and percentage) of subjects is reported. ^b^ Fisher\'s exact test comparing genotyped and non-genotyped subjects.
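As a brief methodological aside (a standard textbook formulation, not taken from this paper): McNemar\'s test, used above for paired binary responses, considers only the discordant pairs, i.e., the $b$ subjects who switched from "no" to "yes" between the pre- and post-course surveys and the $c$ subjects who switched from "yes" to "no", and compares

$$\chi^2 = \frac{(b - c)^2}{b + c}$$

against a chi-squared distribution with one degree of freedom. With discordant-pair counts as small as those possible in a 31-subject sample, the exact binomial form, $b \sim \mathrm{Binomial}(b + c, \tfrac{1}{2})$ under the null hypothesis of no pre-to-post change, is typically preferred.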
Pre and post-course attitudes toward personal genome testing {#s3a}
------------------------------------------------------------

We surveyed student attitudes towards personal genome testing on pre- and post-course surveys ([**Table 2**](#pone-0068853-t002){ref-type="table"}). Among genotyped students, 35--39% indicated that they would recommend PGT for a patient, with no significant difference between pre- and post-course responses. In contrast, among students who elected to not undergo testing, 50% indicated they would recommend PGT for patients before the course, but only 12% maintained this position after the course. Among proponents of PGT for patients, the most common reasons for support were to satisfy general curiosity about their genetic make-up (67%) and to see if a specific disease runs in their family or their DNA (56%). Those opposed to PGT for patients felt that such testing has limited clinical utility (82%), limited clinical validity (77%), individuals have a limited ability to understand and interpret their test results (68%), and not enough trained health care providers are available to help them interpret results (55%).

###### Table 2. Student attitudes toward personal genome testing.

| Question | Genotyped pre-course (N = 23) | Genotyped post-course | *P*^a^ | Non-genotyped pre-course (N = 8) | Non-genotyped post-course | *P*^a^ | *P*^b^ |
|---|---|---|---|---|---|---|---|
| If you were to undergo PGT, would you share your results with a physician? | 23 (100.0) | -- | | 6 (75.0) | 8 (100.0) | | |
| If you were to undergo PGT, would you ask a health care provider for help in interpreting the results? | 12 (52.2) | -- | | 4 (50.0) | 4 (50.0) | | |
| Would you at this time recommend PGT for a patient? | 9 (39.1) | 8 (34.8) | 1 | 4 (50.0) | 1 (12.5) | 0.25 | 0.38 |
| Most people can accurately interpret their PGT results | 0 (0.0) | 1 (4.3) | **0.025** | 0 (0.0) | 0 (0.0) | 1 | 0.38 |
| PGT companies provide an accurate analysis and interpretation of genotype data | 2 (8.7) | 10 (43.5) | **0.02** | 0 (0.0) | 0 (0.0) | 1 | 0.14 |
| PGT companies should be regulated by the federal government | 15 (65.2) | 18 (78.3) | 0.36 | 4 (50.0) | 7 (87.5) | **0.037** | 0.47 |

For yes/no questions, the number (and percentage) of subjects responding yes is reported. For Likert items, the number (and percentage) of subjects who agreed or strongly agreed with the statement is reported. ^a^ McNemar\'s test for binary response questions and Wilcoxon signed-rank test for Likert-scale items comparing pre- to post-course responses. ^b^ Fisher\'s exact test for binary response questions and Mann-Whitney *U*-test for Likert-scale items comparing post-course responses between genotyped and non-genotyped groups.

At the start of the course, 100% of students felt that most people cannot accurately interpret their PGT results. Also, very few students felt that PGT companies provide an accurate analysis and interpretation of genotype data. However, after the course, significantly more students who underwent genotyping themselves believed that people could accurately interpret their results (*P* = 0.025, Wilcoxon signed-rank test) and that PGT companies provide an accurate analysis and interpretation (*P* = 0.02, Wilcoxon signed-rank test) ([**Table 2**](#pone-0068853-t002){ref-type="table"}).
In contrast, the non-genotyped group continued to feel that both patients and companies cannot accurately analyze or interpret PGT results. More than half of students felt that PGT companies should be regulated by the federal government, with significantly more non-genotyped students holding this opinion by the end of the course than at the beginning (*P* = 0.037, Wilcoxon signed-rank test; [**Table 2**](#pone-0068853-t002){ref-type="table"}). Notably, the majority of students (62%) indicated that they would undergo whole-genome sequencing in the future once it became affordable to them, including 50% of the students who chose not to undergo SNP-based genotyping at this time. Students overall felt PGT is an important educational topic, as 71% of students agreed or strongly agreed that it will likely play an important role in their future career.

Knowledge of genetics and personal genome testing {#s3b}
-------------------------------------------------

We next examined students\' reflections on their own knowledge of genetics and personal genome testing as well as that of practicing physicians ([**Table 3**](#pone-0068853-t003){ref-type="table"}). Nearly all students felt that most physicians do not have enough knowledge to help individuals interpret PGT results, on both pre- and post-course surveys. Regarding their own knowledge, by the end of the course students in the genotyped group more strongly indicated that they understood the risks and benefits of using PGT services (*P* = 0.008, Mann-Whitney *U*-test) and that they knew enough about genetics to understand PGT results (*P* = 0.012, Mann-Whitney *U*-test) than students in the non-genotyped group ([**Table 3**](#pone-0068853-t003){ref-type="table"}).

###### Table 3. Student perceptions of knowledge about genetics and personal genome testing.

| Statement | Genotyped pre-course (N = 23) | Genotyped post-course | *P*^a^ | Non-genotyped pre-course (N = 8) | Non-genotyped post-course | *P*^a^ | *P*^b^ |
|---|---|---|---|---|---|---|---|
| Most physicians have enough knowledge to help individuals interpret results of personal genome tests | 0 (0.0) | 1 (4.3) | 0.41 | 0 (0.0) | 0 (0.0) | 0.77 | 0.96 |
| I understand the risks and benefits of using PGT services | 20 (87.0) | 23 (100) | **0.003** | 4 (50.0) | 7 (87.5) | 0.2 | **0.008** |
| I know enough about genetics to understand PGT results | 18 (78.3) | 23 (100) | **0.005** | 4 (50.0) | 7 (87.5) | **0.02** | **0.012** |
| I have a better understanding of principles of human genetics on the basis of undergoing personal genotyping | -- | 16 (69.6) | -- | -- | -- | -- | -- |
| Undergoing personal genotyping was an important part of my learning in GENE210 | -- | 15 (65.2) | -- | -- | -- | -- | -- |
| I would have learned just as much from GENE210 had I not undergone personal genotyping and only used publicly available genotype data | -- | 7 (30.4) | -- | -- | -- | -- | -- |
| I would have learned more from GENE210 had I undergone personal genotyping instead of using publicly available genotype data | -- | -- | -- | -- | 3 (37.5) | -- | -- |

The number (and percentage) of subjects who agreed or strongly agreed with each statement is reported. ^a^ Wilcoxon signed-rank test comparing pre- to post-course responses. ^b^ Mann-Whitney *U*-test comparing post-course responses between genotyped and non-genotyped groups.
Among genotyped subjects, 70% felt that they acquired a better understanding of principles of human genetics on the basis of undergoing PGT, and 65% felt that undergoing PGT was an important part of their learning in the course. Since all students were provided publicly available genotyping data from HapMap subjects to complete the in-class computer exercises, we specifically asked students to reflect on the use of personal versus publicly available genotype data. Only 30% of students who used personal genotype data felt that they would have learned just as much in the course had they not undergone testing and only used publicly available genotyping data. Conversely, a similar proportion of students in the non-genotyped group (37%) felt that they would have learned more in the course had they used personal genotype data instead of publicly available data.

To assess student knowledge of genetics and personal genome testing more objectively, we incorporated a short knowledge assessment in the pre- and post-course surveys, covering basic principles of genetics and clinical scenarios requiring the interpretation of PGT results (the same seven knowledge questions were asked on both surveys). At the start of the course, there was no significant difference between knowledge scores of students who did and students who did not undergo genotyping. However, by the end of the course we noted a significant improvement in knowledge scores only among students who underwent PGT (*P* = 3.5×10^−6^, paired *t*-test; [**Figure 1**](#pone-0068853-g001){ref-type="fig"}). Students in the non-genotyped group did not demonstrate significant improvement in their knowledge scores. The extent of improvement among genotyped students was significantly greater than that of non-genotyped students (31% *vs*. 1%, *P* = 0.002, Student\'s *t*-test).

![Student scores assessing knowledge of genomics. Knowledge scores of non-genotyped students on the post-course survey compared to the pre-course survey improved by an average of 1% (46% to 47%), while genotyped students demonstrated an average 31% improvement (38% to 69%). Bar graphs show mean (±S.D.) percentage score on knowledge questions.](pone.0068853.g001){#pone-0068853-g001}

Genotyping process and experience {#s3c}
---------------------------------

Students in the genotyped group most frequently reported having undergone testing due to general curiosity about their genetic make-up (100%), to help them understand principles of human genetics (57%), to help them understand what patients learn/experience (52%), and to see if a specific disease runs in their family or is in their DNA (52%). Non-genotyped students decided against testing due to concern that a for-profit company would have their DNA or genotype data (50%), concern that their data would not remain private (50%), and feeling that the information from SNP-based genotyping tests would not be useful (50%). Genotyped students were more likely than non-genotyped students to feel the course helped them understand a patient\'s experience in undergoing PGT (*P* = 0.00057, Mann-Whitney *U*-test; [**Table 4**](#pone-0068853-t004){ref-type="table"}) and to be pleased with their decision to undergo genotyping (*P* = 0.00058, Mann-Whitney *U*-test; [**Table 4**](#pone-0068853-t004){ref-type="table"}). Most of those who were not genotyped were neutral about their decision (75%).

###### Table 4. Student reflection on genotyping offer and experience.

| Question^a^ | Genotyped group (N = 23) | Non-genotyped group (N = 8) | *P*^b^ |
|---|---|---|---|
| This course helped me understand what a patient\'s experience might be like if they chose to undergo personal genotyping | 23 (100.0) | 4 (50.0) | **0.00057** |
| Pleased with decision regarding personal genotyping | 19 (82.6) | 1 (12.5) | **0.00058** |
| Experienced anxiety when deciding whether to undergo personal genotyping | 3 (13.0) | 4 (50.0) | **0.0087** |
| Experienced anxiety when awaiting PGT results | 3 (13.0) | -- | |
| Experienced anxiety after receiving PGT results | 2 (8.7) | -- | |
| The opportunity to ask a healthcare professional (e.g. genetic counselor, medical geneticist, or other physicians) for help in interpreting the results is an important component to a personal genotyping offer | 21 (91.3) | 7 (87.5) | 0.23 |

^a^ The number (and percentage) of subjects who agreed or strongly agreed with each statement is reported. ^b^ Mann-Whitney *U*-test comparing post-course responses between genotyped and non-genotyped groups.

When deciding whether to undergo PGT, only 13% of students who ultimately underwent testing reported experiencing anxiety, compared to 50% of students who did not undergo testing (*P* = 0.0087, Mann-Whitney *U*-test; [**Table 4**](#pone-0068853-t004){ref-type="table"}). Few students who elected to undergo testing reported anxiety while awaiting their test results (13%) or after receiving the results (8.7%). Nearly all students in both the genotyped and non-genotyped groups agreed that the opportunity to ask healthcare professionals for help in interpreting test results was an important component to the PGT offer ([**Table 4**](#pone-0068853-t004){ref-type="table"}).

Effect of genotyping on behavior {#s3d}
--------------------------------

When asked prior to the course, 100% of genotyped students and 75% of non-genotyped students indicated a willingness to share their PGT results with their physician ([**Table 2**](#pone-0068853-t002){ref-type="table"}); most indicated they would do so only if they discovered they were at an elevated risk for a condition. Although students indicated a strong willingness to share, significantly fewer (approximately half in each group) indicated they would ask a healthcare professional for help interpreting the results. After actually undergoing PGT, even fewer students reported that they had already asked (13%) or were planning to ask (13%) a healthcare professional for help interpreting their test results; the remainder indicated that they do not plan to ask a healthcare professional for help. All 23 genotyped students reported taking at least one action specifically on the basis of their PGT results (**[Table S2](#pone.0068853.s002){ref-type="supplementary-material"}**). Most frequently, students discussed their test results with their family (78%) and talked with family members to learn more about their family history (52%), and 70% of students performed internet searches to educate themselves on conditions for which they were found to be at risk. Three students reported changing their diet in a positive manner and 4 students reported contemplating positive changes to diet, exercise, or smoking habits, but had not yet made them.
The actions were most commonly things that subjects reported already planning to do or were actively doing (30%), but some actions had been previously attempted and students indicated that their PGT results moved them to try again (22%). In 30% of instances, they reported that PGT results moved them to contemplate and/or attempt various positive behavior changes. Course curriculum reflection {#s3e} ---------------------------- Among numerous safeguards built into the course, students who elected to undergo PGT did so privately with a PGT company, and course instructors never asked students whether they had undergone genotyping or for their raw genotype data. Course instructors also asked that students not disclose whether they had undergone genotyping. We asked students to reflect on the experience of using personal genotype data in the classroom and specifically, how well any concerns about privacy and confidentiality were addressed. Few students (2 genotyped, 1 non-genotyped) felt that the professors of the course knew whether they had undergone PGT; no student reported feeling at a disadvantage in the class as a result of this. Overall, most genotyped students (83%) felt they were easily able to go back and forth between their personal genotype data and the publicly available genotype data provided to them when working on the computer exercises, none felt required to divulge their genotype information in order to ask questions of the course professors, and 43% indicated they would have felt comfortable sharing their genotype data in order to ask questions of the course professors. Overall, all students felt that PGT should be made available to medical and graduate students as part of their genetics curriculum in some manner, but varied in their feelings towards whether it should be incorporated as an option in an elective course (61%) or core course (32%). Discussion {#s4} ========== We report here the first study of educational outcomes in a course where students have the option to undergo personal genome testing. Overall, our results suggest that utilizing personal genotype data can augment the educational value of courses teaching concepts of genomics and personalized medicine. At the end of the course, genotyped and non-genotyped students alike viewed the option to undergo genotyping favorably, and PGT was incorporated into the curriculum in a manner that effectively maintained student safety, privacy, and confidentiality. Most students who participated in this study took the course with the intention of undergoing PGT and adhered to their initial plan. However, the decision of a substantial fraction of students was influenced by the first two weeks of the course, which was spent discussing the risks, benefits, uses, and limitations of PGT services. We found that students were more likely to elect to undergo testing if they felt that they understood the risks and benefits of the test and enough about genetics to understand the results. Our data also suggest that students who experience anxiety during the decision-making process are more likely to decide against testing than students who do not. This is not surprising and reflects the self-selection process that is often seen in predictive genetic testing [@pone.0068853-Sanderson1]. Together, these observations highlight the importance of a rigorous informed consent process prior to offering PGT, whether in a classroom setting or elsewhere. 
Few genotyped students reported anxiety at any point in the process (deciding to undergo PGT, waiting for results, and after results were received), and none of the genotyped students reported regret with their decision on the post-course survey (4 weeks after receiving test results). These results are consistent with a recent report of subjects who underwent PGT with the Navigenics Health Compass [@pone.0068853-Bloss1], where such testing did not result in any measurable short-term changes in psychological health and over 90% of subjects experienced no test-related distress. Students frequently cited gaining a better understanding of the patient experience as a reason that compelled them to undergo PGT. This parallels findings of a recent study of 137 Cleveland Clinic physicians who were offered PGT as a way to increase their familiarity with clinical genetics and PGT [@pone.0068853-Sharp1]. A majority of respondents in that study (77%) felt their personal experience pursuing PGT would benefit their patients directly by improving their ability to advise patients on the testing process and to relate to patients\' experiences interpreting PGT results. Indeed, 100% of the genotyped students in our study reported that the course helped them understand the patient experience of undergoing PGT, compared to only half of non-genotyped students who indicated such an understanding. Despite this, most students still reported that, at this time, they would not recommend PGT for patients. However, students who underwent genotyping more often recommended it for patients than did students who did not undergo genotyping. These results are congruent with those of a recent study, in which primary care physicians currently offering PGT services as part of their practice were more likely to order the test for their patients if they felt well-informed about PGT and if they had undergone testing themselves [@pone.0068853-Haga1]. The primary hypothesis of this study was that undergoing PGT would enhance the learning of students in the course. Since all students were provided publicly available genotype data from HapMap subjects to complete in-class computer exercises, we were able to specifically evaluate the educational utility of analyzing personal versus publicly available genotype data. Regardless of whether they used personal or public genotype data, by the end of the course nearly all students felt confident that they understood the risks and benefits of PGT and the underlying genetics required to understand PGT results. This stands in contrast to the previous study of students in our core medical school genetics course without PGT, in which only 20% of students felt they knew enough about genetics to understand PGT results by the end of the course [@pone.0068853-Ormond1]. While there are significant differences between the two courses (e.g., PGT is a smaller focus of the core course for medical students, and students in this study likely started with a greater level of understanding and interest based on prior coursework), our finding suggests that using genotype data of any sort (personal or public) to perform exercises on data analysis and interpretation enhances the learning experience of students. We also found evidence specifically suggesting that PGT positively impacts learning for those students who self-select to undergo it. 
The majority of genotyped students felt they acquired a better understanding of the principles of human genetics on the basis of undergoing PGT and that the genotyping was an important part of their learning in the course. Substantiating these beliefs, genotyped students significantly improved their knowledge scores by an average of 31%, while non-genotyped students showed no significant difference in knowledge scores. The performance of non-genotyped students is similar to that described in the study of students in our core medical school genetics course without PGT, where only a modest improvement was noted between pre-course and post-course knowledge scores [@pone.0068853-Ormond1]. Together, these data suggest that some students derive greater educational benefit by undergoing PGT and using personal genotype data in the classroom than students who strictly use publicly available data or no data at all. As has been suggested in other educational contexts [@pone.0068853-Genzen1], [@pone.0068853-Knoell1], [@pone.0068853-Mazmanian1], analyzing and interpreting data with personal relevance may encourage students to be more engaged with the material, leading to greater understanding and retention of knowledge. For example, a recent report describes a genotyping exercise in a pharmacy class where 10 student volunteers provided DNA samples that were subjected to genotype analysis and presented to the class in the context of a genetic counseling session [@pone.0068853-Knoell1]. Students indicated in a survey that the exercise engaged them with the course content and would positively influence their ability to apply pharmacogenetic principles to patient care.

Undergoing PGT and interpreting test results also led some students to make or consider behavioral changes. Almost one-third of genotyped students indicated that, due to elevated risks, they had already changed or were contemplating changes to their diet, exercise, or smoking habits. However, a longer-term qualitative study conducted on a small number of our students indicates that 6 months after receiving PGT results, none had taken significant behavioral actions [@pone.0068853-Vernez1], suggesting that early behavioral changes may not be sustained. These results mirror the recent study by Bloss *et al*.; while they found no significant change between baseline and follow-up in dietary fat intake or exercise behavior of subjects who underwent PGT with the Navigenics Health Compass, they also found that a substantial fraction of subjects contemplated behavioral changes or intended to undergo more medical tests [@pone.0068853-Bloss1]. The studies' differences may reflect differences in study population (mean age was 46.7 compared to our younger population of students in their mid-twenties), or more likely, length of follow-up (mean 5.6 months compared to 4 weeks in our study).

As an exploratory study of the first iteration of the course, this study has several limitations. The small sample size, self-selection of enrollees in an elective course, and single-institution setting of our study make broad generalizability of our results difficult. We also do not know the extent to which the educational benefits noted here extend to other types of learners, such as undergraduate students or practicing physicians, since our study was conducted primarily on medical and graduate students who expressed a specific interest in the topic of genotyping and personal genome testing.
Finally, the survey instrument used in the study is not validated, and thus we cannot exclude the possibility that unclear wording in the questions may have affected some of our findings.

These limitations notwithstanding, our study represents the first line of evidence that the use of personal genome testing can enhance genetics education for at least a subset of learners. As personal genome testing becomes more widely used in the classroom, future work should focus on conducting a randomized study where students who would like to undergo PGT are randomized to either undergo testing and work with their own genotype data or not undergo testing and work with publicly available genotype data. Such a study design would help control for any bias in educational outcomes resulting from self-selection, and the results would be of great interest. We believe it is imperative that medical school educators think creatively about how to incorporate education on this rapidly emerging area of medicine and science into their curricula. Our study finds that the interactive and participatory approach of using PGT in the classroom has the potential to increase students' knowledge and awareness of genetic testing. Although further study of its pedagogical utility is warranted, we believe that, when thoughtfully implemented, PGT can be used as a powerful and effective tool in genetics education.

Supporting Information {#s5}
======================

###### **Characteristics of subjects who completed study vs. lost to follow-up.**

(DOC)

###### Click here for additional data file.

###### **Actions taken specifically as a result of receiving PGT results.**

(DOCX)

###### Click here for additional data file.

######

(PDF)

###### Click here for additional data file.

The authors would like to thank the Stanford Genotyping Task Force and the Genetics 210 course instructors (in particular Stuart Kim, PhD) for their important contributions to the course and this research study; Mildred Cho, PhD and Jennifer Deitz, MA for assistance in constructing the survey instrument; and Deans Philip Pizzo, MD and Charles Prober, MD for their leadership in medical education that allowed this innovative course to be implemented.

[^1]: **Competing Interests:** The authors have declared that no competing interests exist.

[^2]: Conceived and designed the experiments: KS LH KEO. Performed the experiments: KS KJK. Analyzed the data: KS. Wrote the paper: KS KJK LH KEO.
{ "pile_set_name": "PubMed Central" }
16-2711
Qiu v. Sessions
BIA
Loprest, IJ
A205 890 367

UNITED STATES COURT OF APPEALS
FOR THE SECOND CIRCUIT

SUMMARY ORDER

RULINGS BY SUMMARY ORDER DO NOT HAVE PRECEDENTIAL EFFECT. CITATION TO A SUMMARY ORDER FILED ON OR AFTER JANUARY 1, 2007, IS PERMITTED AND IS GOVERNED BY FEDERAL RULE OF APPELLATE PROCEDURE 32.1 AND THIS COURT'S LOCAL RULE 32.1.1. WHEN CITING A SUMMARY ORDER IN A DOCUMENT FILED WITH THIS COURT, A PARTY MUST CITE EITHER THE FEDERAL APPENDIX OR AN ELECTRONIC DATABASE (WITH THE NOTATION "SUMMARY ORDER"). A PARTY CITING TO A SUMMARY ORDER MUST SERVE A COPY OF IT ON ANY PARTY NOT REPRESENTED BY COUNSEL.

At a stated term of the United States Court of Appeals for the Second Circuit, held at the Thurgood Marshall United States Courthouse, 40 Foley Square, in the City of New York, on the 9th day of April, two thousand eighteen.

PRESENT:
PIERRE N. LEVAL,
RICHARD C. WESLEY,
CHRISTOPHER F. DRONEY,
Circuit Judges.
_____________________________________

LI QING QIU,
Petitioner,

v.                                          16-2711
                                            NAC
JEFFERSON B. SESSIONS III, UNITED
STATES ATTORNEY GENERAL,
Respondent.
_____________________________________

FOR PETITIONER: Jay Ho Lee, New York, NY.

FOR RESPONDENT: Chad A. Readler, Acting Assistant Attorney General; Cindy S. Ferrier, Assistant Director; Brendan P. Hogan, Trial Attorney, Office of Immigration Litigation, United States Department of Justice, Washington, DC.

UPON DUE CONSIDERATION of this petition for review of a Board of Immigration Appeals ("BIA") decision, it is hereby ORDERED, ADJUDGED, AND DECREED that the petition for review is DENIED.

Petitioner Li Qing Qiu, a native and citizen of the People's Republic of China, seeks review of a July 21, 2016, decision of the BIA affirming a May 21, 2015, decision of an Immigration Judge ("IJ") denying her application for asylum, withholding of removal, and relief under the Convention Against Torture ("CAT"). In re Li Qing Qiu, No. A205 890 367 (B.I.A. July 21, 2016), aff'g No. A205 890 367 (Immig. Ct. N.Y. City May 21, 2015). We assume the parties' familiarity with the underlying facts and procedural history in this case.

We have reviewed both the IJ's and the BIA's opinions "for the sake of completeness." Wangchuck v. Dep't of Homeland Sec., 448 F.3d 524, 528 (2d Cir. 2006). The applicable standards of review are well established. 8 U.S.C. § 1252(b)(4)(B); Yanqin Weng v. Holder, 562 F.3d 510, 513 (2d Cir. 2009).

Absent past persecution, an applicant may establish eligibility for asylum by demonstrating a well-founded fear of future persecution, 8 C.F.R. § 1208.13(b)(2), which must be both subjectively credible and objectively reasonable, Ramsameachire v. Ashcroft, 357 F.3d 169, 178 (2d Cir. 2004). To establish a well-founded fear, an applicant must show either a reasonable possibility that she would be singled out for persecution or that the country of removal has a pattern or practice of persecuting individuals similarly situated to her. 8 C.F.R. § 1208.13(b)(2)(i), (iii). "Put simply, to establish a well-founded fear of persecution in the absence of any evidence of past persecution, an alien must make some showing that authorities in [her] country of nationality are either aware of [her] activities or likely to become aware of [her] activities." Hongsheng Leng v. Mukasey, 528 F.3d 135, 143 (2d Cir. 2008).

Qiu failed to establish a well-founded fear of persecution in China on account of her intentions to practice her Catholic faith in an unregistered church and proselytize. The country conditions evidence provides that tens of millions of individuals practice in unregistered churches in China, and that in some areas unsanctioned religious practices are tolerated without interference. The evidence does not discuss any incidents of persecution against Catholics for proselytizing. Therefore, despite evidence of sporadic arrests of religious practitioners and public proselytizers, Qiu did not establish that Chinese officials are likely to become aware of her religious practice (whether worshiping or proselytizing) or a reasonable possibility that she would be persecuted as a result. See 8 C.F.R. § 1208.13(b)(2)(i), (iii); see also Hongsheng Leng, 528 F.3d at 142-43; In re A-M-, 23 I. & N. Dec. 737, 741 (B.I.A. 2005) (defining pattern or practice as "systemic or pervasive" persecution of a group). Accordingly, the agency did not err in concluding that Qiu failed to establish a well-founded fear of persecution on account of her religion. See 8 C.F.R. § 1208.13(b)(2)(i), (iii); Hongsheng Leng, 528 F.3d at 142-43. That finding was dispositive of asylum, withholding of removal, and CAT relief given that all three claims were based on the same factual predicate. See Paul v. Gonzales, 444 F.3d 148, 156-57 (2d Cir. 2006).

For the foregoing reasons, the petition for review is DENIED. As we have completed our review, any stay of removal that the Court previously granted in this petition is VACATED, and any pending motion for a stay of removal in this petition is DISMISSED as moot. Any pending request for oral argument in this petition is DENIED in accordance with Federal Rule of Appellate Procedure 34(a)(2), and Second Circuit Local Rule 34.1(b).

FOR THE COURT:
Catherine O'Hagan Wolfe, Clerk
{ "pile_set_name": "FreeLaw" }
Quote of the day

Nowadays I am realising that not many people know the beauty of patience. We try to know everything, and we want everything now. But I believe we need to embrace the unknown, enjoy the mystery and go with the flow. Let life guide us and let whatever is meant to be happen, without pressure and without forcing things. Sometimes it's OK if we don't know it right away; if we let it happen, it will unfold the way it's meant to be. Having said all that, I wanted to share a daily quote with you. And remember: whatever is meant to be will happen. Just embrace it.
{ "pile_set_name": "Pile-CC" }
On June 5, 2004, at the Tower Hill Church in Red Bank, the Shrewsbury Chorale premiered a choral work entitled As Angels in some brighter dreams, written by English composer Howard Goodall. Mr. Goodall is one of Britain's busiest and most sought-after composers. He is famous for composing British TV show theme songs, most notably Blackadder and Mr. Bean, as well as film music and choral and theatre works. As Angels in some brighter dreams was written for the chorale in memory of Brian Aschinger, who was the chorale’s Music Director from 1991 to 1997 and died in August of 2001. Mr. Aschinger was a good friend of Mr. Goodall. The chorale recently asked Mr. Goodall to tell us about his relationship with Mr. Aschinger and also how he came to write the piece. The following is his reply. “Brian Aschinger was a dear friend to me and a champion of my musical theatre and choral works in the USA. He got in touch with me out of the blue one day in the mid-1980s, having heard a cast recording of my West End musical The Hired Man. From that moment onwards we corresponded regularly and soon he was producing the Off-Broadway US premiere of that piece. This led to further productions, including a wonderful full-scale version at the Walnut Street Theatre, Philadelphia, which Brian directed with his unquenchable spirit, optimism and energy. His passion for the musical theatre was matched by his love of choral music and in this respect he and I were soulmates. It was with huge pleasure that I read his glorious 20- or even 30-page letters about his concerts as director of The Shrewsbury Chorale, and I was deeply touched when he first programmed a work of mine. I suspect it is hard for Americans fully to appreciate how much it means - for those of us from smaller communities many thousands of miles away - to have one's work performed, listened to and even enjoyed in the vast and impressive arena of your country. Brian's enthusiasm and hard work on behalf of my compositions was for me a constantly humbling experience.” “In August 2001 I received an email from his friends and colleagues Jim and Ann Crawford with the devastating news of his death. Since I had been a little out of touch with him for the months previously and he had in any case attempted to conceal the gravity of his illness from all but his very closest, this news came as a terrible and an abrupt shock. As fate would have it I was to be in New York a month or so later for a film shoot and was hoping to be able to combine this with any memorial or celebration of Brian's life that was proposed. My first day's filming was in Lower Manhattan, the morning of September 11th. En route to the rendezvous I stood in the street with hundreds of thousands of New Yorkers and watched with my own eyes the horrific unfolding events of that appalling day. When I had returned safely to England, I realised that in all the trauma and noise of the moment, I had not been able to pay my last respects to this remarkable, kind, insightful human being. It occurred to me that one way of doing so was to contact the Shrewsbury Chorale and offer a new choral piece in his memory. They generously agreed to my proposal and As Angels in some brighter dreams is the result.
In choosing its texts I first sought out the greatest theatre writer of all time, Shakespeare, to reflect Brian's love of the theatre, and found the beautiful valedictory song from Cymbeline, Fear no more the heat of the sun. For a man who so cherished freedom, tolerance and his American heritage, and who never lived to witness 9/11, its chilling phrase fear no more the lightning flash nor the all-dreaded thunderstone caught my eye. Its second text was from the 17th century metaphysical poet Henry Vaughan, whose They are all gone into the world of light, written after the death of the poet's brother, seemed to me to say so much about the passing on of loved ones. So this work is a farewell to Brian but also my way of saying thank you for his generosity, and a small way of demonstrating my solidarity and kinship with the people of his beloved country. It is therefore profoundly appropriate that it is an American chorale - his chorale - that will sing it this June.”
{ "pile_set_name": "Pile-CC" }
Q: Macro to Copy/Paste Data

Here is the code I have. I am trying to find the last populated row in a range, then copy that data as formulas to the next blank row in the range, and finally copy the last populated row again and paste it over the second-to-last populated row as values. In a nutshell: move the formulas in the last row/cells down one row and retain the data where they were as numbers. So when new data is brought into the file, the formulas recalculate and I do not lose the previously calculated data. Hopefully that makes sense. The below code runs but does not appear to do anything other than paste the data as values in the existing rows. What am I doing wrong?

    Sub DataCopy()
        Dim myLastCell As Range
        Set myLastCell = LastCell(Worksheets("YTD Metric Calculator").Range("E7:J76"))

        Sheets("YTD Metric Calculator").Range("E7:J" & myLastCell.Row).Copy
        Sheets("YTD Metric Calculator").Range("E7:J" & myLastCell.Row + 1).PasteSpecial Paste:=xlPasteFormulas

        Sheets("YTD Metric Calculator").Range("E7:J" & myLastCell.Row).Copy
        Sheets("YTD Metric Calculator").Range("E7:J" & myLastCell.Row + 1).PasteSpecial Paste:=xlPasteValues 'Retain Yesterday's YTD Metric Data
    End Sub

    Function LastCell(r As Range) As Range
        Dim LastRow&, lastCol%
        On Error Resume Next
        With r
            LastRow& = .Cells.Find(What:="*", _
                                   SearchDirection:=xlPrevious, _
                                   SearchOrder:=xlByRows).Row
            lastCol% = .Cells.Find(What:="*", _
                                   SearchDirection:=xlPrevious, _
                                   SearchOrder:=xlByColumns).Column
        End With
        Set LastCell = r.Cells(LastRow&, lastCol%)
    End Function

A: Your LastCell function isn't doing what you think it is doing, because it is determining the row and column within the worksheet of the last used cell in the range, and then using that as an offset from the start of the range. So if the last used row in E7:J76 was row 10, and the last used column was column I, the cell being returned from your function is cell M16 (i.e. 9 rows below E7, and 8 columns to the right).

Change

    Set LastCell = r.Cells(LastRow&, lastCol%)

to be

    Set LastCell = r.Cells(LastRow& - r.Row + 1, lastCol% - r.Column + 1)

or to

    Set LastCell = r.Worksheet.Cells(LastRow&, lastCol%)

The parts where you are copying the formulas, and then replacing with values, should be something like:

    Sheets("YTD Metric Calculator").Range("E" & myLastCell.Row).Resize(1, 6).Copy
    Sheets("YTD Metric Calculator").Range("E" & myLastCell.Row + 1).Resize(1, 6).PasteSpecial Paste:=xlPasteFormulas
    Sheets("YTD Metric Calculator").Range("E" & myLastCell.Row).Resize(1, 6).Value = _
        Sheets("YTD Metric Calculator").Range("E" & myLastCell.Row).Resize(1, 6).Value

That can be rewritten using a With block to save a few keystrokes:

    With Sheets("YTD Metric Calculator").Range("E" & myLastCell.Row).Resize(1, 6)
        .Copy
        .Offset(1, 0).PasteSpecial Paste:=xlPasteFormulas
        .Value = .Value
    End With
{ "pile_set_name": "StackExchange" }
Surface-hopping dynamics simulations of malachite green: a triphenylmethane dye. Malachite green is a typical triphenylmethane dye widely used in fundamental and industrial research; however, its excited-state relaxation dynamics remains elusive. In this work we simulate its photodynamics from the S2 and S1 states using the fewest-switches surface-hopping scheme. In the S2 photodynamics, the system first relaxes to the S2 minimum, which immediately hops to the S1 state via an S2/S1 conical intersection. In the S1 state, 90% of trajectories evolve into a structurally symmetric S1 minimum; the remaining ones proceed toward two propeller-like S1 minima. Two kinds of S1 minima then decay to the S0 state via the S1/S0 conical intersections. The S1 photodynamics is overall similar to the S1 excited-state dynamics as a result of the ultrafast S2 → S1 internal conversion in the S2 photodynamics, but the weights of the trajectories that decay to the S0 state via three different S1/S0 conical intersections vary. Moreover, the S2 relaxation dynamics mainly happens in a concerted synchronous rotation of three phenyl rings. In comparison, in the S1 relaxation dynamics, the rotations of two aminophenyl rings can proceed in the same and opposite directions. In certain trajectories, only the rotation of an aminophenyl ring is active. On the basis of the results, the S2 and S1 excited-state lifetimes of malachite green in vacuo are calculated to be 424 fs and 1.2 ps, respectively. The present work provides important mechanistic insights for similar triphenylmethane dyes.
{ "pile_set_name": "PubMed Abstracts" }
Impact craters of Sweden

In early 2018 there were eight known impact craters in Sweden. They range in age from 90 mya to 445 mya, and in diameter from 1 km to 52 km. Six of them are exposed, that is, they are visible at the surface in the natural landscape, although their nature and origin might need to be pointed out to the untrained layman.

List

Dellen
Granby crater
Hummeln structure
Lockne crater
Malingen Crater
Mien (lake)
Siljan Ring

See also

Ordovician meteor event

References

Category:Geology of Sweden
Category:Landforms of Sweden
{ "pile_set_name": "Wikipedia (en)" }
Under Armour’s CEO said a nice thing about Trump, and now he’s walking it way back

Kevin Plank (second from right) sits in the Roosevelt Room of the White House on Jan. 23. Photo by Chip Somodevilla/Getty Images

Under Armour founder and CEO Kevin Plank has had a challenging few weeks. At the end of January, the company’s stock dipped after it reported disappointing earnings for the fourth quarter of 2016. Under Armour’s earnings per share were lower than anticipated, and the company booked about $100 million less than projected.

About a week after that, Plank went on CNBC and said Trump would be a “real asset” for the United States, hailing him as a pro-business president. It’s the kind of confident economic talk that, whether it’s true or not, makes good sense for the chief executive of a massive company that’s just gotten bleak financial news.

“I’m a big fan of people that operate in the world of ‘publish and iterate’ versus ‘think, think, think, think, think,’” Plank said then. “So there’s a lot that I respect there.”
{ "pile_set_name": "Pile-CC" }
Effects of nonylphenol on hepatic testosterone metabolism and the expression of acute phase proteins in winter flounder (Pleuronectes americanus): comparison to the effects of Saint John's Wort. 4-Nonylphenol (4-NP), a major by-product of alkylphenol ethoxylates, is used in several industries and as a consequence is quite common in rivers, estuaries and other aquatic environments that receive sewage discharges or are near offshore oil platforms. 4-NP is an environmental estrogen that also binds human and rodent Pregnane X-receptor (PXR), the orphan nuclear receptor that controls the expression of several detoxication genes in mammals, including several CYP3A and CYP2B family members. These P450s preferentially hydroxylate testosterone in the 6beta- and 16beta-positions, respectively. In this study, the effects of 4-NP on testosterone metabolism and hepatic CYP3A induction were compared to the effects of St. John's Wort (SJW), a well established mammalian PXR agonist, in winter flounder. Male winter flounder (Pleuronectes americanus) were injected with 100 mg/kg/day 4-NP or 500 mg/kg/day SJW or both (S and N) every 24 h. Forty-eight hours after the initial injections, flounder were euthanized. Western blots and testosterone 6beta-hydroxylation indicated that CYP3A was increased 50% by 4-NP, but was not affected by SJW. Testosterone 16beta-hydroxylase activity was also significantly increased in flounder treated with 4-NP (2.8 x), but not with SJW. This is not consistent with our hypothesis that both SJW and 4-NP would induce CYP3A. Subtractive hybridization was performed between control and 4-NP treated hepatic mRNA samples to isolate differentially expressed genes. Subtractive hybridization indicated that several acute phase proteins were altered by 4-NP. Quantitative real-time PCR (Q-PCR) confirmed 4-NP altered the expression of complement components C8b, cathepsin L, C-type lectin domain, FK506 binding protein 2 precursor (FKBP2) and an EST (expressed sequence tag). SJW and 4-NP treated flounder demonstrated similar induction profiles for the EST, cathepsin L and FKBP2, suggesting that SJW was at a sufficient dose to alter gene expression but not induce P450s. In conclusion, testosterone hydroxylase activity and Western blots indicate that SJW did not activate detoxication pathways in a similar manner to 4-NP.
{ "pile_set_name": "PubMed Abstracts" }
Taxes are never popular, especially in April of an election year. But the Republicans’ latest effort to tilt the tax code in favor of the wealthy, and starve the government of needed revenue, is particularly cynical. This week, the House Republican leadership is expected to bring up the “Small Business Tax Cut Act,” a bill to let most business owners deduct up to 20 percent of their business income in 2012 — a $46 billion tax cut. Despite the Mom-and-Pop label, it is designed so that nearly half of the tax cut would go to people with annual income over $1 million, and more than four-fifths would go to those making over $200,000, according to the Tax Policy Center. The bill’s proponents, led by Majority Leader Eric Cantor, say that lower taxes would lead to more hiring. But the economic reality is that employers, big and small, are hesitant to hire because of slow or uncertain demand for their products and services, not because of their tax burden. And companies would receive the tax cut even if they did not hire new workers — making it a windfall, not an incentive. The bill is predicated on an overly broad definition of “small business” — one with fewer than 500 employees, which can include multimillion-dollar partnerships and corporations. It is also based on a willful denial of the reality that small businesses are not the big job creators politicians often say they are.
{ "pile_set_name": "OpenWebText2" }
---
abstract: 'Recognition of grocery products on store shelves poses peculiar challenges. Firstly, the task mandates the recognition of an extremely high number of different items, in the order of several thousands for medium-small shops, with many of them featuring small inter and intra class variability. Then, available product databases usually include just one or a few studio-quality images per product (referred to herein as **reference** images), whilst at test time recognition is performed on pictures displaying a portion of a shelf containing several products and taken in the store by cheap cameras (referred to as **query** images). Moreover, as the items on sale in a store as well as their appearance change frequently over time, a practical recognition system should handle seamlessly new products/packages. Inspired by recent advances in object detection and image retrieval, we propose to leverage state-of-the-art object detectors based on deep learning to obtain an initial product-agnostic item detection. Then, we pursue product recognition through a similarity search between global descriptors computed on *reference* and cropped *query* images. To maximize performance, we learn an ad-hoc global descriptor via a CNN trained on *reference* images with an image embedding loss. Our system is computationally expensive at training time but can perform recognition rapidly and accurately at test time.'
author:
- |
    Alessio Tonioni\
    DISI, University of Bologna\
    [alessio.tonioni@unibo.it]{}
- |
    Eugenio Serra\
    DISI, University of Bologna\
    [eugenio.serra@studio.unibo.it]{}
- |
    Luigi Di Stefano\
    DISI, University of Bologna\
    [luigi.distefano@unibo.it]{}
bibliography:
- 'biblio.bib'
title: A deep learning pipeline for product recognition on store shelves
---

![Given a *query* image featuring multiple products (a), our system first detects the regions associated with the individual items and then recognizes the product enclosed in each region based on a database featuring only one *reference* image per product (two examples are shown in (b)). All the products are correctly recognized in (a), with bounding boxes colored according to the recognized classes.[]{data-label="fig:teaser"}](tabella.pdf){width="45.00000%"}

Introduction {#sec:introduction}
============

Recognizing products displayed on store shelves based on computer vision is gathering ever-increasing attention thanks to the potential for improving the customer's shopping experience (e.g., via augmented reality apps, checkout-free stores, support to the visually impaired) and realizing automatic store management (e.g., automated inventory, on-line shelf monitoring). The seminal work on product recognition dates back to [@merler2007recognizing], where Merler et al. highlight the peculiar issues to be addressed in order to achieve a viable approach. First of all, the number of different items to be recognized is huge, in the order of several thousands for small to medium shops, well beyond the usual target for current state-of-the-art image classifiers. Moreover, product recognition can be better described as a hard instance recognition problem, rather than a classification one, as it deals with lots of objects looking remarkably similar but for small details (e.g., different flavors of the same brand of cereals).
Then, any practical methodology should rely only on the information available within existing commercial product databases, at most just one high-quality image for each side of the package, either acquired in studio settings or rendered (see the teaser figure (b)). *Query* images for product recognition are, instead, taken in the store with cheap equipment (e.g., a smart-phone) and feature many different items displayed on a shelf (see the teaser figure (a)). Unfortunately, this scenario is far from optimal for state-of-the-art multi-class object detectors based on deep learning [@redmon2016yolo9000; @huang2016speed; @ren2015faster], which require a large corpus of annotated images as similar as possible to the deployment scenario in order to provide good performance. Even acquiring and manually annotating with product labels a huge dataset of in-store images is not a viable solution, because the products on sale in stores, as well as their appearance, change frequently over time, which would mandate continuous gathering of annotated in-store images and retraining of the system. Conversely, a practical approach should be trained once and then be able to handle seamlessly new stores, new products and/or new packages of existing products (e.g., seasonal packages).

To tackle the above issues, we address product recognition by a pipeline consisting of three stages. Given a shelf image, we first perform a class-agnostic object detection to extract region proposals enclosing the individual product items. This stage relies on a deep learning object detector trained to localize product items within images taken in the store; we will refer to this network as the *Detector*. In the second stage, we perform product recognition separately on each of the region proposals provided by the *Detector*. To this end, we carry out a K-NN (K-Nearest Neighbours) similarity search between a global descriptor computed on the extracted region proposal and a database of similar descriptors computed on the *reference* images available in the product database. Rather than deploying a general-purpose global descriptor (e.g., Fisher Vectors [@perronnin2010large]), we train a CNN using the *reference* images to learn an image embedding function that maps RGB inputs to n-dimensional global descriptors amenable to product recognition; this second network will be referred to as the *Embedder*. Eventually, to help prune out false detections and improve disambiguation between similarly looking products, in the third stage of our pipeline we refine the recognition output by re-ranking the first K proposals delivered by the similarity search. An exemplary output provided by the system is depicted in the teaser figure (a).

It is worth pointing out that our approach needs samples of annotated in-store images only to train the product-agnostic *Detector*, which, however, does not require product-specific labels but just bounding boxes drawn around items. In the experimental results we will show how the product-agnostic *Detector* can be trained once and for all so as to achieve remarkable performance across different stores despite changes in shelf disposition and product appearance. Therefore, new items/packages are handled seamlessly by our system simply by adding their global descriptors (computed through the *Embedder*) to the *reference* database. Besides, our system scales easily to the recognition of thousands of different items, as we use just one (or a few) *reference* images per product, each encoded into a global descriptor in the order of one thousand float numbers.
Finally, while computationally expensive at training time, our system turns out light (memory efficient) and fast at deployment time, thereby enabling near real-time operation. Speed and memory efficiency do not come at a price in performance, as our system compares favorably with respect to previous work on the standard benchmark dataset for product recognition.

Related Work {#sec:related}
============

Grocery product recognition was first investigated in the already mentioned paper by Merler et al. [@merler2007recognizing]. Together with a thoughtful analysis of the problem, the authors propose a system based on local invariant features. However, their experiments report performance far from conducive to real-world deployment in terms of accuracy and speed. A number of more recent works tried to improve product recognition by leveraging on: a) stronger features followed by classification [@cotter2014hardware], b) the statistical correlation between nearby products on shelves [@advani2015visual; @baz2016context], c) additional information on the expected product disposition [@tonioni2017product], or d) a hierarchical multi-stage recognition pipeline [@franco2017grocery]. Yet, all these recent papers focus on a relatively small-scale problem, recognition of a few hundred different items at most, whilst usually several thousand products are on sale in a real shop. George et al. [@george2014recognizing] address more realistic settings and propose a multi-stage system capable of recognizing $\sim3400$ different products based on a single model image per product. The authors' contribution includes releasing the dataset employed in their experiments, which we will use in our evaluation. More recently, [@yoruk2016efficient] has tackled the problem using a standard local feature based recognition pipeline and an optimized Hough Transform to detect multiple object instances and filter out inconsistent matches, which brings in a slight performance improvement.

Nowadays, CNN-based systems are dominating object detection benchmarks and can be subdivided into two main families of algorithms based on the number of stages required to perform detection. On one hand, we have the slower but more accurate two-stage detectors [@ren2015faster], which decompose object detection into a region proposal stage followed by an independent classification for each region. On the other hand, fast one-stage approaches [@redmon2016yolo9000; @huang2016speed] can perform detection and classification jointly. A very recent work has also addressed the specific domain of grocery products, proposing an ad hoc detector [@qiao2017scalenet] that analyzes the image at multiple scales to produce more meaningful region proposals. Besides, deploying CNNs to obtain rich image representations is an established approach to pursue image search, both as a strong off-the-shelf baseline [@sharif2014cnn] and as a key component within more complex pipelines [@gordo2016deep]. Inspiration for our product recognition approach came from [@schroff2015facenet; @bell2015learning]. In [@schroff2015facenet], Schroff et al. train a CNN using triplets of samples to create an embedding for face recognition and clustering, while in [@bell2015learning] Bell et al. rely on a CNN to learn an image embedding to recognize the similarity between design products.
Similarly, in the related field of fashion item recognition, relying on learned global descriptors rather than classifiers is an established solution shared among many recent works [@hadi2015buy; @wang2014learning; @shankar2017deep].

Proposed Approach {#sec:product_detection_recognition}
=================

![image](pipeline_completa.pdf){width="75.00000%"}

The figure above shows an overview of our proposed pipeline. In the first step, described in the Detection subsection, a CNN (*Detector*) extracts region proposals from the *query* image. Then, as detailed in the Recognition subsection, each region proposal is cropped from the *query* image and sent to another CNN (*Embedder*) which computes an ad-hoc image representation. These representations are then deployed to pursue product recognition through a K-NN similarity search in a database of representations pre-computed off-line by the *Embedder* on the *reference* images. Finally, as illustrated in the Refinement subsection, we combine different strategies to perform a final refinement step which helps to prune out false detections and disambiguate among similar products.

Detection {#ssec:detection}
---------

Given a *query* image featuring several items displayed on a store shelf, the first stage of our pipeline aims at obtaining a set of bounding boxes to be used as region proposals in the following recognition stage. Ideally, each bounding box should contain exactly one product, tightly fit the visible package and provide a confidence score measuring how much the detection should be trusted. State-of-the-art CNN-based object detectors may fulfill the above requirements for the product recognition scenario, as demonstrated in [@qiao2017scalenet]. Given an input image, these networks can output several accurate bounding boxes, each endowed with a confidence and a class prediction.

To train CNN-based object detectors, such as [@redmon2016yolo9000; @huang2016speed; @ren2015faster], a large set of images annotated with the position of the objects along with their class labels is needed. However, due to the ever-changing nature of the items sold in stores, we do not train the *Detector* to perform predictions at the fine-grained class level (i.e., at the level of the individual products), but to carry out a product-agnostic item detection. Accordingly, the in-store training images for our *Detector* can be annotated just by drawing bounding boxes around items, without specifying the actual product label associated with each bounding box. This formulation makes the creation of a suitable training set and the training itself easier and faster. Moreover, since the *Detector* is trained only to tell *generic products* from everything else, it is general enough to be deployable across different stores and products. Conversely, training a CNN to directly predict bounding boxes as well as product labels would require a much more expensive and slow image annotation process, which should be carried out again and again to keep up with changes of the products/packages to be recognized. This continuous re-training of the *Detector* is just not feasible in any practical setting.
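As a concrete illustration of this stage, the snippet below sketches how class-agnostic detections could be turned into cropped region proposals for the following stage. It is only a sketch under our own assumptions: the paper fine-tunes yolo\_v2, whereas here `detections` stands in for the output of any single-class detector, with a hypothetical `(x1, y1, x2, y2, score)` tuple format.

```python
def extract_proposals(detections, min_confidence=0.1):
    """Keep class-agnostic boxes whose confidence exceeds a threshold.

    `detections` is assumed to be a list of (x1, y1, x2, y2, score)
    tuples produced by a detector trained on a single 'product' class.
    """
    return [d for d in detections if d[4] >= min_confidence]

def crop_regions(image, proposals):
    """Crop each region proposal from the query image (an H x W x C array)."""
    crops = []
    for x1, y1, x2, y2, _ in proposals:
        crops.append(image[int(y1):int(y2), int(x1):int(x2)])
    return crops
```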
Recognition {#ssec:recognition}
-----------

Starting from the candidate regions delivered by the *Detector*, we perform recognition by means of a K-NN similarity search between a global descriptor computed on each candidate region and a database of similar descriptors (one for each product) pre-computed off-line on the *reference* images. Recent works (e.g., [@sharif2014cnn]) have shown that the activations sampled from layers of pre-trained CNNs can be used as high-quality global image descriptors. [@wang2014learning] extended this idea by proposing to train a CNN (i.e., the *Embedder*) to learn a function $E:\mathcal{I}\rightarrow\mathcal{D}$ that maps an input image $i \in \mathcal{I}$ into a k-dimensional descriptor $d^k \in \mathcal{D}$ amenable to recognition through K-NN search. Given a set of images with associated class labels, the training is performed by sampling triplets of different images, referred to as *anchor* ($i_a$), *positive* ($i_p$) and *negative* ($i_n$), such that $i_a$ and $i_p$ depict the same class while $i_n$ belongs to a different one. Given a distance function in the descriptor space, $d(\mathbf{X},\mathbf{Y})$, with $\mathbf{X},\mathbf{Y}\in\mathcal{D}$, and denoting as $E(i)$ the descriptor computed by the *Embedder* on image $i$, the network is trained to minimize the so-called *triplet ranking loss*:

$$\begin{gathered}
\label{eq:tripletLoss}
\mathcal{L} = max( 0,d(E(i_a),E(i_p))-d(E(i_a),E(i_n))+\alpha)\end{gathered}$$

with $\alpha$ a fixed margin to be enforced between the pair of distances. Through minimization of this loss, the network learns to encode into nearby positions within $\mathcal{D}$ the images depicting items belonging to the same class, whilst keeping items of different classes sufficiently well separated.

We use the same formulation and cast it for the context of grocery product recognition, where different products correspond to different classes (the two *reference* images in the teaser figure (b) correspond to different classes and could be used as $i_p$ and $i_n$). Unfortunately, we cannot sample different images for $i_a$ and $i_p$, because available commercial datasets feature just a single exemplar image per product (i.e., per class). Thus, to create the required triplet, at each training iteration we randomly pick two products and use their *reference* images as $i_p$ and $i_n$. Then, we synthesize a new $i_a$ from $i_p$ by a suitable data augmentation function $A:\mathcal{I}\rightarrow\mathcal{I}$, designed to make it similar to *query* images (i.e., $i_a=A(i_p)$)[^1].

To perform recognition, the *Embedder* network is first used to describe each available *reference* image $i_r$ by a global descriptor $E(i_r)$, thus creating the *reference* database of descriptors associated with the products to be recognized. Then, when a *query* image is processed, the same embedding is computed on each of the candidate regions, $i_{pq}$, cropped from the *query* image, $i_q$, so as to get $E(i_{pq})$. Finally, for each $i_{pq}$ we compute the distance in the embedding space with respect to each *reference* descriptor, denoted as $d(E(i_{pq}),E(i_{r}))$, in order to sift out the first $K$-NN of $E(i_{pq})$ in the *reference* database. These are subject to further processing in the final refinement step.
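To make the training procedure concrete, the sketch below implements the triplet ranking loss with the cosine-based distance adopted in the implementation details ($d(X,Y)=1-X \cdot Y$ on unit-norm descriptors) and one training iteration built from a pair of *reference* images. This is a minimal PyTorch sketch under stated assumptions: `embedder` (any CNN producing descriptors) and `augment` (the function $A$) are hypothetical placeholders, and $\alpha=0.1$ is the value reported later in the paper.

```python
import torch
import torch.nn.functional as F

def cosine_distance(x, y):
    # d(X, Y) = 1 - X . Y, computed batch-wise on unit-norm descriptors.
    return 1.0 - (x * y).sum(dim=1)

def triplet_ranking_loss(emb_a, emb_p, emb_n, alpha=0.1):
    # L = max(0, d(E(i_a), E(i_p)) - d(E(i_a), E(i_n)) + alpha)
    d_ap = cosine_distance(emb_a, emb_p)
    d_an = cosine_distance(emb_a, emb_n)
    return F.relu(d_ap - d_an + alpha).mean()

def train_step(embedder, optimizer, augment, ref_p, ref_n):
    # One iteration: pick two products, use their reference images as
    # i_p and i_n, and synthesize the anchor as i_a = A(i_p).
    i_a = augment(ref_p)                      # anchor from the positive
    emb = embedder(torch.stack([i_a, ref_p, ref_n]))
    emb = F.normalize(emb, dim=1)             # unit-norm embeddings
    loss = triplet_ranking_loss(emb[0:1], emb[1:2], emb[2:3])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```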
Refinement {#ssec:refinement}
----------

The aim of the final refinement is to remove false detections and re-rank the first K-NN found in the previous step in order to fix possible recognition mistakes. Since the initial ranking is obtained by comparing descriptors computed on whole images, a meaningful re-ranking of the first K-NN may be achieved by looking at peculiar image details that may have been neglected while comparing global descriptors and yet be crucial to differentiate a product from others looking very similar. Thus, both the *query* and each of the first K-NN *reference* images are described by a set of local features ${F_1,F_2,...,F_k}$, each consisting of a spatial position $(x_i,y_i)$ within the image and a compact descriptor $f_i$. Given these features, we look for similarities between descriptors extracted from *query* and *reference* images, so as to compute a set of matches. Matches are then weighted based on the distance in the descriptor space, $d(f_i,f_j)$, and on a geometric consistency criterion relying on the unit-norm vector, $\vec{v}$, from the spatial location of a feature to the image center. In particular, given a match, $M_{ij}=(F^q_i,F^r_j)$, between feature $i$ of the *query* image and feature $j$ of the *reference* image, we compute the following weight: $$W_{ij}=\frac{(\vec{v}^{\,q}_i \cdot \vec{v}^{\,r}_j)+1}{d(f^q_i,f^r_j)+\epsilon}$$ where $\cdot$ marks the scalar product between vectors and $\epsilon$ is a small number to avoid potential divisions by zero. Intuitively, $W_{ij}$ is bigger for matching features which share the same relative position with respect to the image center (high $(\vec{v}^{\,q}_i \cdot \vec{v}^{\,r}_j)$) and have descriptors close in the feature space (small $d(f^q_i,f^r_j)$). Finally, the first K-NN are re-ranked according to the sum of the weights $W_{ij}$ computed for the matches between the local features. In the implementation details we will show how good local features can be obtained at zero computational cost as a by-product of our learned global image descriptor. This refinement technique will be referred to as ***+lf***.

A simple additional refinement step consists in filtering out wrong recognitions by the *distance ratio* criterion [@lowe2004distinctive] (i.e., by thresholding the ratio of the distances in feature space between the *query* descriptor and its 1-NN and 2-NN). If the ratio is above a threshold, $\tau_{d}$, the recognition is deemed ambiguous and discarded. In the following, we will denote this refinement technique as ***+th***.

Finally, as commercial product databases typically provide a multilevel classification of the items (e.g., at least instance and category level), we propose a re-ranking and filtering method specific to the grocery domain where, as pointed out by [@george2014recognizing], products belonging to the same macro category are typically displayed close to one another on the shelf. In particular, given the candidate regions extracted from the *query* image and their corresponding sets of K-NN, we consider the 1-NN of the region proposals extracted with a high confidence ($>0.1$) by the *Detector* in order to find the main macro category of the image. Then, in case the majority of detections votes for the same macro category, it is safe to assume that the pictured shelf contains almost exclusively items of that category and thus filter the K-NN for all candidate regions accordingly. It is worth observing how this strategy implicitly leverages on those products that are easier to identify (i.e., the high-confidence detections) to increase the chance of correctly recognizing the harder ones. We will refer to this refinement strategy as ***+mc***.
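To clarify how the K-NN search and the first two refinement strategies interact, the following sketch chains them: cosine-distance K-NN, the ***+th*** distance-ratio filter, and the ***+lf*** re-ranking score based on $W_{ij}$. It is a simplified NumPy sketch: the greedy one-to-one matching of local features and the use of the cosine distance for the local descriptors are our assumptions, as the text does not specify the matcher.

```python
import numpy as np

def knn_search(query_desc, ref_descs, k=5):
    # ref_descs: (N, D) matrix of unit-norm reference descriptors.
    # Cosine distance d(X, Y) = 1 - X . Y, as in the paper.
    dists = 1.0 - ref_descs @ query_desc
    order = np.argsort(dists)[:k]
    return order, dists[order]

def passes_ratio_test(knn_dists, tau_d=0.9):
    # +th: discard ambiguous recognitions whose 1-NN/2-NN distance
    # ratio exceeds the threshold tau_d.
    return (knn_dists[0] / knn_dists[1]) <= tau_d

def rerank_score(query_feats, ref_feats, eps=1e-8):
    # +lf: sum of W_ij over matches. Each feature is a pair (descriptor
    # f, unit vector v toward the image center); greedy nearest matching
    # of each query feature to one reference feature is assumed here.
    score = 0.0
    for f_q, v_q in query_feats:
        f_r, v_r = max(ref_feats, key=lambda fr: float(np.dot(f_q, fr[0])))
        w = (np.dot(v_q, v_r) + 1.0) / ((1.0 - np.dot(f_q, f_r)) + eps)
        score += w
    return score
```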
Experimental Results {#sec:experimental}
====================

To validate the performance of our product recognition pipeline we take into account two possible use cases dealing with different final users:

- **Customer**: the system should be deployed for a guided or partially automated shopping experience (e.g., product localization inside the shop, augmented reality overlays, or support to the visually impaired). As proposed in [@george2014recognizing], in this use case the goal is to detect *at least one* instance of each visible type of product displayed in a shelf picture.

- **Management**: the system will be used to partially automate the management of a shop (e.g., automatic inventory and restocking). Here, the goal is to recognize *all* visible product instances displayed in a shelf picture.

Datasets and Evaluation Metrics {#ssec:datasets}
-------------------------------

![In (a) the system should identify at least one instance for each product type, while in (b) it should find and correctly localize all the displayed product instances.[]{data-label="fig:bboxes"}](bb_zurigo.jpg "fig:"){width="22.00000%"}
![image](bb_iciap.jpg "fig:"){width="22.00000%"}

(a) Customer [@george2014recognizing]; (b) Management [@tonioni2017product]

For our experimental evaluation we rely on the publicly available *Grocery Products* dataset [@george2014recognizing], which features more than 8400 grocery products organized in hierarchical classes, with each product described by exactly one *reference* image. The dataset also contains 680 in-store (*query*) images that display items belonging to the *Food* subclass of the whole dataset. The annotations released by the authors for the *query* images allow for evaluating performance in the *Customer* use case, as they consist in bounding boxes drawn around spatial clusters of instances of the same products. To test our system also in the *Management* use case, we deploy the annotations released by the authors of [@tonioni2017product], which consist of boxes around each product instance for a subset of 70 in-store pictures of the *Grocery Products* dataset. The figure above shows examples of the two kinds of annotations used to evaluate the system in the two different use cases.

To compare our work with previously published results we use the metrics proposed by the authors in [@george2014recognizing]: mean average precision (*mAP*, the approximation of the area under the Precision-Recall curve for the detector) and Product Recall (*PR*, the average product recall across all the test images). As for the *Customer* scenario, we also report the mean average multi-label classification accuracy (*mAMCA*).
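As a sanity check on how we read these metrics, the snippet below computes the *PR* figure from per-image sets of ground-truth and recognized product labels. This is only our interpretation expressed in code: the exact matching protocol (e.g., the bounding-box intersection rules stated in each use-case subsection) is defined by the text, and the helper names are hypothetical.

```python
def product_recall(per_image_gt, per_image_pred):
    """PR: product recall averaged over all query images.

    Both arguments are lists (one entry per query image) of sets of
    product identifiers: ground-truth products and recognized products.
    """
    recalls = []
    for gt, pred in zip(per_image_gt, per_image_pred):
        if gt:  # skip images without annotated products
            recalls.append(len(gt & pred) / len(gt))
    return sum(recalls) / len(recalls)

# Example: two query images; the first misses one of three products.
print(product_recall([{"a", "b", "c"}, {"d", "e"}],
                     [{"a", "b"}, {"d", "e"}]))  # -> (2/3 + 1) / 2
```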
To train the *Detector* we acquired multiple videos with a tablet mounted on a cart facing the shelves and manually labeled a subset of sampled frames to create a training set of 1247 images. Thus, our videos were acquired in different stores with respect to those depicted in the *Grocery Products* dataset and feature different products on different shelves, vouching for the generalization ability of our system.

Implementation Details {#ssec:ideal_study}
----------------------

For all our tests we have used as *Detector* the state-of-the-art one-stage object detector known as yolo\_v2 (shortened to *yolo*) [@redmon2016yolo9000]. We chose this network because it grants real-time performance on a GPU and because the original implementation is publicly available. Starting from the publicly available weights[^2], we have fine-tuned the network on our 1247 in-store images for 80000 steps, keeping the hyperparameters suggested by the original authors.

The backbone network for our *Embedder* is a VGG\_16 [@Simonyan14c] pre-trained on the Imagenet-1000 classification task (weights publicly available[^3]). From this network we obtain global image descriptors by computing *MAC* features [@tolias2015particular] on the conv4\_3 convolutional layer and applying L2 normalization to obtain unit-norm embedding vectors. To carry out the comparison between descriptors, both at training and test time, we used as distance function $d(X,Y)=1-X \cdot Y$ with $X,Y \in \mathcal{D}$ (i.e., 1 minus the cosine similarity between the two descriptors). To better highlight the advantage of learning an ad-hoc descriptor, in the following we will report experiments using general-purpose descriptors obtained without fine-tuning the *Embedder* with the suffix $\_gd$ (general descriptor), while descriptors obtained after fine-tuning as described in the Recognition subsection will be denoted by the suffix $\_ld$ (learned descriptor). To train the *Embedder* we use the *reference* images of *Grocery Products* dealing with the products belonging to the *Food* subclass (i.e., 3288 different products, each with exactly one training image). To enrich the dataset and create the anchor images, $i_a$, we randomly perform the following augmentation functions $A$: blur by a Gaussian kernel with random $\sigma$, random crop, and random brightness and saturation changes. These augmentations were engineered to transform the *reference* images in a way that renders them similar to the proposals cropped from the *query* images. The hyper-parameters obtained by cross validation for the training process are as follows: $\alpha=0.1$ for the triplet loss, learning rate $lr=0.000001$, ADAM optimizer and fine-tuning for 10000 steps with batch size 24.

We propose a novel kind of local features for the *+lf* refinement: as *MAC* descriptors are obtained by applying a max-pool operation over all the activations of a convolutional layer, by changing the size and stride of the pool operation it is possible to obtain a set of local descriptors, with the associated location being the center of the pooled area reprojected into the original image. By leveraging on this intuition, we can obtain in a single forward computation both a global descriptor for the initial K-NN search and a set of local features to be deployed in the refinement step. For our tests we choose a kernel size equal to $16$ and a stride equal to $2$, so as to have 64 features per *reference* image.
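The sketch below illustrates this shared computation: one forward pass up to conv4\_3 yields both the global MAC descriptor and the grid of local MAC features. It is a PyTorch sketch under our assumptions; in particular, `fmap` stands for the conv4\_3 activation of a cropped and resized input, and the reprojection of window centers to image coordinates (via the layer's cumulative stride) is only outlined in the comments.

```python
import torch
import torch.nn.functional as F

def mac_global(fmap):
    # Global MAC: max over the whole (1, C, H, W) activation map of the
    # chosen layer, followed by L2 normalization to a unit-norm vector.
    desc = F.adaptive_max_pool2d(fmap, 1).flatten(1)  # (1, C)
    return F.normalize(desc, dim=1)

def mac_locals(fmap, kernel=16, stride=2):
    # Local MAC features from the same forward pass: smaller pooling
    # windows give one descriptor per window. Each window center,
    # reprojected through the network's cumulative stride, provides the
    # (x_i, y_i) location of the local feature in the original image.
    pooled = F.max_pool2d(fmap, kernel_size=kernel, stride=stride)
    descs = pooled.flatten(2).transpose(1, 2)         # (1, h*w, C)
    return F.normalize(descs, dim=2)
```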
Customer Use Case {#ssec:customer_fine}
-----------------

In this sub-section we evaluate the effectiveness of our system in the *Customer* scenario. To measure performance we rely on the annotations displayed in (a) of the figure in the Datasets subsection and score a correct recognition when the product has been correctly identified and its bounding box has a non-empty intersection with that provided as ground-truth. We compare our method with already published work tested on the same dataset: FV+RANSAC (Fisher Vector classification re-ranked with RANSAC) [@george2014recognizing], RF+PM+GA (category prediction with Random Forests, dense Pixel Matching and a Genetic Algorithm) [@george2014recognizing] and FM+HO (local feature matching optimized with Hough) [@yoruk2016efficient]. We report the results obtained by the different methods according to the three metrics. As [@yoruk2016efficient] does not provide the mAP figure but only the values of precision and recall, for FM+HO we report an approximate mAP computed by multiplying precision and recall. Using our trained *yolo* network for product detection, we report the results obtained by either deploying a general-purpose VGG-based descriptor (*yolo\_gd*) or learning an ad-hoc embedding for grocery products (*yolo\_ld*). Moreover, we report the results achieved with the different refinement strategies presented in the Refinement subsection.

The results show that our pipeline can provide a higher recall than previous methods even with a general-purpose image descriptor (*yolo\_gd*), although with a somewhat lower precision, as demonstrated by the slightly inferior mAP score. However, our complete proposal relies on learning an ad-hoc descriptor for grocery products (*yolo\_ld*), which yields a significant performance improvement, as vouched by an average gain of about 6% in terms of both Recall and mAP. Wrongly classified proposals can be discarded to further improve accuracy by the threshold refinement strategy (*yolo\_ld+th*, with $\tau_d=0.9$), thereby increasing the mAMCA from 16.32% to 28.74%. Re-ranking based on the proposed local features (*yolo\_ld+lf*) turns out to be an effective approach to improve both precision and recall, as demonstrated by a gain of about 5% in mAP and PR with respect to the pipeline without final refinement (*yolo\_ld*). The category-based re-ranking strategy (*yolo\_ld+mc*) seems to fix some of the recognition mistakes and improve the recognition rate with respect to *yolo\_ld*, providing gains in all metrics. Finally, by mixing all the refinement strategies to obtain our overall pipeline (*yolo\_ld+lf-mc-th*), we neatly get the best trade-off between precision and recall, as vouched by the 57.07% PR and 36.02% mAP, about 14% and 12.5% better than previously published results, respectively, with a mAMCA turning out nearly on par with the best previous result.

![Precision-recall curves obtained in the *customer* use case by the *yolo\_ld* system when trying to recognize either the individual products or just their category.[]{data-label="fig:catInstance"}](category_instance.pdf){width="40.00000%"}

We found that casting recognition as a similarity search through learned global descriptors has the nice property that, even when the 1-NN does not correspond to the right product, it usually corresponds to items belonging to the correct category (cereals, coffee, …). We believe this behavior is due to items belonging to the same category sharing peculiar visual patterns that are effectively captured by the descriptor and help to cluster items of the same category nearby in descriptor space (e.g., coffee cups often displayed on coffee packages, or flowers on infusions).
To highlight this generalization property, we perform here an additional test in the Customer scenario by considering a recognition as correct if the category of the 1-NN match is the same as that of the annotated bounding box. Accordingly, we compare the performance of *yolo\_ld* when trying to recognize either the individual products or their category. The results of this experiment are reported as Precision-Recall curves in the figure above. The large difference between the two curves shows that the system very often mistakes items at the product level while correctly recognizing their category. Finally, it is worth pointing out that our method not only provides a significant performance improvement with respect to previously published results but also turns out to be remarkably fast. Indeed, our whole pipeline can be run on a GPU in less than one second per image.

![Precision-Recall curves for our full pipeline in the *Management* use case. full@180 denotes performing recognition on the small *reference* database of [@tonioni2017product] ($\sim 180$ entries), full@3200 against all the products in the *Food* category of *Grocery Products* ($\sim 3200$).[]{data-label="fig:zmiePrecRec"}](zmie.pdf){width="40.00000%"}

Management Use Case {#ssec:management_fine}
-------------------

![image](qualita_1.png){width="32.00000%"} (a)
![image](qualita_2.png){width="32.00000%"} (b)
![image](qualita_3.png){width="32.00000%"} (c)

The experiments presented in this Section concern the *Management* use case, a task requiring correct recognition of all the individual products displayed on shelves. Thus, we rely on the annotations of type (b) introduced above and consider a recognition as correct when the item has been correctly recognized and the intersection over union (IoU) between the predicted and ground-truth bounding boxes is higher than 0.5. To the best of our knowledge, the only published results on the *Grocery Products* dataset that deal with recognition of all the individual products are reported in [@tonioni2017product]. However, in [@tonioni2017product] the authors address a specific task referred to as *Planogram Compliance*, which consists in checking the compliance between the actual product disposition and the planned one. Accordingly, the pipeline proposed in [@tonioni2017product] includes an initial unconstrained product recognition stage, which addresses the same settings as our *Management* use case, followed by a second stage that deploys the exact knowledge of the planned product disposition in order to improve recognition accuracy and detect compliance issues (e.g., missing or misplaced products). Therefore, we compare our proposal to the most effective configuration of the first stage of the pipeline presented in [@tonioni2017product], referred to hereinafter as *FS*, which is based on matching BRISK local features [@leutenegger2011brisk] followed by Hough Voting and pose estimation through RANSAC. To compare our pipeline with *FS*, we use the annotations provided by the authors and perform recognition against the smaller *reference* database of 182 products used in [@tonioni2017product]. The results of this comparison are reported below.
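Before discussing those results, note that the IoU criterion above is simple to state in code; the following check is a minimal sketch (box representation and names are our own, not the paper's):

```python
# IoU-based correctness check for the Management use case
# (boxes as (x1, y1, x2, y2); names are illustrative).
def iou(a, b):
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def correct_recognition(pred_box, pred_id, gt_box, gt_id, thr=0.5):
    # Management use case: right product AND IoU above 0.5 with the ground truth.
    return pred_id == gt_id and iou(pred_box, gt_box) > thr
```

Note how this is stricter than the Customer-use-case criterion, which only requires a non-empty intersection with the ground-truth box for at least one instance per product.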
Firstly, it is worth pointing out how, despite the task being inherently more difficult than in the *Customer* use case, we record higher recognition performance. We ascribe this mainly to the smaller subset of in-store images used for testing (i.e., 70 vs. 680), as well as to these images featuring mainly rigid packaged products, which are easier to recognize than deformable packages. Once again, in our pipeline, the use of a learned descriptor (*yolo\_ld*) provides a substantial performance gain with respect to a general-purpose descriptor (*yolo\_gd*), as the mAP improves from 66.95% to 74.32% and the PR from 78.89% to 84.75%. The different refinement strategies provide advantages similar to those discussed for the *Customer* use case, with the best improvement yielded by re-ranking recognitions based on the local features extracted from the *Embedder* network (*yolo\_ld+lf*). The optimal trade-off between precision and recall is achieved again by deploying all the refinement strategies together (*yolo\_ld+lf-mc-th*), which provided a mAP and PR as high as 76.93% and 84.75%, respectively (i.e., both about 10% better than the previously published results on this dataset).

We also report results aimed at assessing the scalability of the methods with respect to the number of products in the *reference* database. We carried out an additional experiment by performing the recognition of each item detected within the 70 *query* images against all the 3200 products of the "Food" category in *Grocery Products* rather than the smaller subset of 182 products proposed in [@tonioni2017product]. By comparing the two sets of results, we can observe how, unlike *FS*, our method can scale nicely from a few to thousands of different products: our full method *yolo\_ld+lf-mc-th* loses only 3.43% mAP upon scaling up the *reference* database quite significantly, whilst the performance drop for *FS* is as large as 19.05%. In the figure above we also plot the precision-recall curves obtained by our full pipeline (*yolo\_ld+lf-mc-th*) using the smaller (full@180) and larger (full@3200) sets of reference products. The curves clearly show how our pipeline can deliver almost the same performance in the two setups, which vouches for the ability of our proposal to scale smoothly to the recognition of thousands of different products. As far as recognition time is concerned, our pipeline scales fairly well regardless of the size of the *reference* database, because the NN search, even if exhaustive, amounts to a negligible fraction of the overall computation: the difference in inference time between recognizing against 180 and 3200 products is less than a tenth of a second.

Qualitative Results {#sec:qualitative}
===================

The images above report some qualitative results obtained by our pipeline. Picture (a) shows the recognition results on an image taken quite far from the shelf and featuring a lot of different items; (b) deals with some successful recognitions in a close-up *query* image, where only a few items are visible at once. Finally, (c) refers to the recognition of products featuring deformable and highly reflective packages, which are quite challenging to recognize due to the appearance of the items within the *query* images turning out significantly different from that in the available *reference* images. Yet, in (c) our system was able to find at least one item for each product type (i.e., as required in the *Customer* use case).

Conclusion {#sec:conclusion}
==========

In this paper we have proposed a fast and effective approach to the problem of recognizing grocery products on store shelves.
Our proposal addresses the task in three main steps: class-agnostic object detection to identify the individual items appearing in a shelf image, recognition through K-NN similarity search based on a global image descriptor, and a final refinement to further boost performance. All three steps deploy modern deep learning techniques, as we detect items by a state-of-the-art CNN (*Detector*), learn the image descriptor by another CNN trained to disentangle the appearance of grocery products (*Embedder*) and extract the local cues key to refinement as MAC features computed alongside the global embedding. The experiments prove that our pipeline compares favourably to the state of the art on the public dataset available for performance assessment while being remarkably fast. Yet, we plan to investigate how to further improve the speed at test time (e.g., to enable execution on a low-cost and/or mobile device). To this end, we envisage devising a unified CNN architecture acting as both *Detector* and *Embedder*. Furthermore, we are currently investigating the use of generative models (e.g., GANs) to augment the number of samples per product available to train the *Embedder*. A generative model could also be trained to render *reference* images more similar to proposals cropped from *query* images in order to shrink the gap between the training and testing domains.

[^1]: The details concerning the adopted augmentation function are reported in .

[^2]: <https://github.com/pjreddie/darknet>

[^3]: <https://github.com/tensorflow/models/tree/master/research/slim>
{ "pile_set_name": "ArXiv" }
{ "word": "Ink", "definitions": [ "A coloured fluid or paste used for writing, drawing, printing, or duplicating.", "Publicity in the written media.", "A tattoo or tattoos.", "A black liquid ejected by a cuttlefish, octopus, or squid to confuse a predator." ], "parts-of-speech": "Noun" }
{ "pile_set_name": "Github" }
Q: After each push to Heroku I get 404 errors on images

I am having issues with my Rails app on Heroku: code-dojo.herokuapp.com. After every push to Heroku, any images I uploaded with the Carrierwave gem return a 404 error message. Do I need to precompile this folder or point to it? Does Heroku replace this folder with a blank one? Should I create my app with all the images on localhost and then push the database?

A: Heroku has a read-only filesystem. The following types of behaviors are not supported:

Caching pages in the public directory
Saving uploaded assets to local disk (e.g. with attachment_fu or paperclip)
Writing full-text indexes with Ferret
Writing to a filesystem database like SQLite or GDBM
Accessing a git repo for an app like git-wiki
{ "pile_set_name": "StackExchange" }
Martin Walter

Martin Walter (born October 23, 1983) is a Czech-born German professional ice hockey defenceman. He is currently an unrestricted free agent. He most recently played for Grizzly Adams Wolfsburg in the Deutsche Eishockey Liga (DEL).

References

External links

Category:1983 births Category:Living people Category:German ice hockey defencemen Category:Hamburg Freezers players Category:Thomas Sabo Ice Tigers players Category:Grizzlys Wolfsburg players
{ "pile_set_name": "Wikipedia (en)" }
Photos: Famous pardons

In his final days in office, President Barack Obama pardoned retired Gen. James Cartwright, former vice chairman of the US Joint Chiefs of Staff. Cartwright pleaded guilty in federal court in October 2016, admitting he lied to investigators in 2012 when questioned about whether he leaked top secret information to journalists about US efforts to sabotage Iran's nuclear program.

Willie "Big Mac" McCovey, a baseball Hall of Famer and former San Francisco Giants player, also received a pardon from Obama in January 2017. McCovey, now 79, was sentenced in 1996 to two years' probation and a $5,000 fine for tax evasion.

Obama pardoned Ian Schrager, a New York hotelier and club owner famous for the parties at his clubs Studio 54 and Palladium. Schrager was convicted of filing fake tax returns between 1977 and 1978, and was sentenced to 20 months in prison and a $20,000 fine. The 71-year-old thanked Obama, saying he had tried "to lead a good and productive life" since his conviction.

Before he was "Iron Man," actor Robert Downey Jr. had multiple run-ins with the law. He served one year and three months in prison for a 1996 conviction on drug and weapons charges. California Gov. Jerry Brown granted Downey a full and unconditional pardon on Christmas Eve 2015. He said Downey had "lived an honest and upright life, exhibited good moral character and conducted himself as a law-abiding citizen."

In late 2014, outgoing Arkansas Gov. Mike Beebe formally announced his intention to pardon his son, Kyle, who served three years of supervised probation after being convicted of possession of marijuana with intent to sell.

Anthony McCray is one of eight men convicted of killing their wives or girlfriends who were pardoned by Mississippi Gov. Haley Barbour. They had served in the governor's mansion, where the most well-behaved of convicts in Mississippi get to work and are commonly pardoned by the governor.

For Ricky Walters, aka Slick Rick, a pardon from New York Gov. David Paterson in 2008 ended his fear of being deported back to his native Britain. The hip-hop star had served six years in prison on an attempted murder and weapons charge, but faced deportation because of a federal statute to deport resident aliens convicted of violent felonies or weapons charges.

Billionaire investor and commodities trader Marc Rich, who violated the embargo on Iran, was pardoned by President Bill Clinton. The controversial pardon even came despite the fact that Rich fled to Switzerland and was on the FBI's most wanted list. Clinton issued about 450 pardons and commutations during his presidency.

Rich's wasn't the only controversial pardon issued by Clinton. Clinton also pardoned a dozen members of the nationalist terrorist group FALN, several of whom were expected to serve out their terms until their death.

Clinton's controversial pardon streak continued with former Rep. Mel Reynolds of Illinois, who was convicted of corruption and statutory rape of a 16-year-old campaign volunteer.

On his last day in office, Clinton pardoned his half-brother Roger Clinton, who was convicted of dealing cocaine.

President Ronald Reagan's secretary of defense secured a presidential pardon from President George H.W. Bush in 1992. Caspar Weinberger had been indicted on perjury and obstruction of justice charges related to the Iran-Contra scandal. He was one of several officials involved in the affair whom Bush pardoned.

All Vietnam-era draft dodgers were unconditionally pardoned by President Jimmy Carter in 1977, indemnifying hundreds of thousands who evaded or attempted to evade the draft. The blanket pardon was one of Carter's top campaign promises.

Carter also used his presidential power to pardon famed musician and activist Peter Yarrow, who had been convicted of taking "indecent liberties" with a 14-year-old fan.

President Richard Nixon avoided being indicted in the Watergate scandal after his former vice president and successor, President Gerald Ford, pardoned him for crimes he "committed or may have committed." His pardon came about a month after he resigned from office in wake of the scandal.

Call it good karma. Before Nixon got his own pardon, he pardoned several others, including infamous union leader Jimmy Hoffa in 1971. Hoffa had been convicted of jury tampering and fraud. But the pardon didn't keep him out of trouble, as Hoffa vanished in 1974. His body was never found.
{ "pile_set_name": "Pile-CC" }
// // main.m // ViroRenderer // // Created by Raj Advani on 10/13/15. // Copyright © 2015 Raj Advani. All rights reserved. // #import <Cocoa/Cocoa.h> int main(int argc, const char * argv[]) { return NSApplicationMain(argc, argv); }
{ "pile_set_name": "Github" }
<?xml version="1.0" encoding="ISO-8859-1" ?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <taglib xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd" version="2.1"> <tlib-version>1.0</tlib-version> <short-name>Ignored</short-name> <tag> <name>Foo</name> <tag-class>org.apache.tomcat.ignored.Anything.class</tag-class> <body-content>empty</body-content> </tag> </taglib>
{ "pile_set_name": "Github" }
List of United States tornadoes in April 2017

This page documents all tornadoes confirmed by various weather forecast offices of the National Weather Service in the United States during April 2017.

United States yearly total

April
April 2 event
April 3 event
April 4 event
April 5 event
April 6 event
April 9 event
April 10 event
April 12 event
April 13 event
April 14 event
April 15 event
April 16 event
April 18 event
April 19 event
April 20 event
April 21 event
April 22 event
April 25 event
April 26 event
April 27 event
April 28 event
April 29 event
April 30 event

See also
Tornadoes of 2017
List of United States tornadoes from January to March 2017
List of United States tornadoes in May 2017

Notes

References

Category:2017 natural disasters in the United States Category:2017-related lists Category:Tornadoes of 2017 Tornadoes 2017, 04
{ "pile_set_name": "Wikipedia (en)" }
A federal judge had tough questions for the lawyers leading the effort by President Trump and his businesses to block a House subpoena of Trump's accounting firm for his financial records. At a hearing on Tuesday in D.C., U.S. District Judge Amit Mehta grilled Trump's lawyer William Consovoy about his arguments that the subpoena exceeded Congress' constitutional authorities. At one point, Mehta asked Consovoy if it was his view that Congress' investigations into Whitewater and Watergate were beyond the scope of its authority. Consovoy stammered a bit before telling the judge he would need to look more closely at the bases of those investigations.

Tuesday's proceeding was the first public hearing in a matter related to the multi-front showdown between President Trump and House Democrats over Congress' oversight abilities. But it's likely to be the first of many. In this case, private attorneys working on behalf of Trump and his business sued to stop their accounting firm from complying with a House subpoena for business records. The House Democrats have intervened to defend their subpoena. The accounting firm, Mazars, has not taken a position in the matter. Judge Mehta indicated at Tuesday's hearing that he would decide the matter quickly based on what had already been argued. It is almost guaranteed, however, that whatever his decision is, it will be appealed to a higher court.

Trump has claimed in legal filings that the subpoena should be blocked because it serves "no legitimate legislative purpose." House Democrats argued that Trump is misconstruing their investigative authorities. Outside legal scholars have mostly sided with Democrats in their analysis of the case so far. Trump and his family brought a similar lawsuit in New York against their banks Deutsche Bank and Capital One.

On Tuesday, Consovoy argued that Congress was inappropriately seeking to take on the role of law enforcement in its investigation of Trump. "They have made clear this is not about legislation," he said. Mehta asked Consovoy if he was asking the judge to look beyond the facially acceptable reasons for Congress' investigations and to get to their motives, something the judge was "expressly prohibited" from doing. Consovoy claimed he was not asking the court to look for "secret motives." "We think that when you look at those letters in the full context," Consovoy said, referring to the letters that the committee sent regarding the document requests, "this is law enforcement."

The judge, however, seemed wary of the argument that Congress could only investigate matters it could specifically tie to legislation. He asked Consovoy whether Congress "has a function of informing itself and informing the public" of wrongdoing, such as corruption. Judge Mehta also grilled Consovoy on the relevant case law and asked the lawyer specifically whether there was a case more recent than one decided in 1880 that found a congressional subpoena overstepped its authority. After some quibbling, Consovoy went on to concede that the 1880 case, Kilbourn v. Thompson, was the most recent one dealing with that particular issue, but he stressed that the case was "good law" and relevant to the Trump lawsuit.

When it was the House's turn in front of the judge, Mehta asked their lawyer Doug Letter about the lack of an official statement or resolution defining their accusations.
"It really does open the door, it seems to me, to the accusations… valid or not," that this was about an effort to get into the President's private affairs for political reasons, the judge said. The judge also asked Letter if there were any limits, under his theory of Congress' investigative authority, to what the committee can investigate about the president's private life. Letter said he could think of some hypotheticals — like a subpoena for a president's diary from when he was seven years old — that would stretch his legal arguments, but "fortunately, for the House, they are nowhere even close to that."

Letter argued that it was within Congress' interest to investigate whether the President broke any laws, and stressed that didn't mean Congress was trying to prosecute the President. He put forth the hypothetical that a president, before taking office, may have committed bank fraud by lying on a loan application, and that he might now be "beholden" to a bank or foreign government that knows he broke the law.

Mehta last week indicated that he planned to consolidate some of the procedural steps in the case — putting it on a fast track for resolution. Trump's lawyers objected to the move on Monday, in a filing that asked that the judge either postpone Tuesday's hearing or reverse his decision to consolidate the matters that he planned to hear on Tuesday. House Democrats opposed Trump's request. Mehta declined to cancel Tuesday's hearing but said on Monday he'd hear Trump's objections to the consolidation move. At the beginning of the hearing he made clear that he was not going to rule from the bench on Tuesday. However, he also quizzed Consovoy about what more Trump's legal team needed to do to complete the record so that the judge could make a ruling on the merits of the case. The judge ultimately denied Consovoy's request for additional briefing in the case, and instead said he would keep the record open until May 18 for any additional information that the parties wanted to submit. "We're not going to drag this out," the judge said.
{ "pile_set_name": "OpenWebText2" }
Monday, September 14, 2015

Changes

September is a month of new beginnings for many of us who head back to school during this month. But, boy, has September been a month of change for me a few years running.

September 2013: I was teaching second grade in the district I had worked for since I started my career as a teacher. No big changes other than a new class and a new superintendent.

September 2014: We welcomed Ella Grace into the world and EVERYTHING changed {for the better}. I put teaching on the back burner and became a stay-at-home mom for the entire school year.

September 2015: I started a new job, in a new district, with a new title. {and my baby became a toddler...sigh...where does the time go?}

That's right, if you follow me on Instagram, you *may* have seen my subtle announcement that I have accepted a different position in a different district and I am pretty darn excited about it! I will now be teaching Language Arts to grades K-2 as a pull-out teacher. My position is pretty new to the district and school where I now work. Because of this, I'm not quite sure what my responsibilities will be, what programs I will be using, etc. I'm not quite sure what lies ahead, but I do know I made the right decision.

I had contemplated not going back to work at all, and actually, for about two weeks over the summer that was the case. I resigned my former position without having anything else lined up. We were fine with me staying home for a few years, but there was a small voice in my head telling me I wanted to be back in the classroom sooner rather than later. I started searching for part-time teaching positions and I found the posting for the district where I now work. I am hopeful that my new position will offer me a deeper understanding of literacy and language arts in the primary grades, which will in turn lead me to creating resources for a variety of grade levels. I am also hopeful it will create the work-home balance I was searching for when I applied for the job. Because the job is part-time, I get to devote more of myself to my family, friends, and home. I am excited to hopefully have the best of both worlds.

Because of these changes, my blog is undergoing a little bit of change as well. I will of course be blogging about all the things my new position entails: lessons, funny stories, products, and pictures. And I also hope to blog a little bit more about my personal life as well: parenting a toddler {again, sigh}, house/home/decor, favorite things, and just any little thing I want to!

Congratulations! I am so proud of you for recognizing the need for balance in your life. Your baby will only be little once, you can't ever "go back" or "undo". You are a gift to all teachers through your wonderful products. Selfishly, I hope that part of you continues, but I will be the first to understand if it takes a back burner to life. Enjoy! And best wishes!!
{ "pile_set_name": "Pile-CC" }
Complementary and conventional cancer care: the integration of two cultures. There is now increasing evidence that many cancer patients are seeking unorthodox forms of support and treatment. These range from measures to enhance the quality of life, such as counselling and relaxation therapy, through to alternative cancer remedies, such as extreme diets with detoxification by coffee enemas. Much of complementary care is provided outside the hospital setting, often without the knowledge of the health care professionals involved in a patient's treatment. The Bristol Cancer Help Centre was founded in 1980 to provide an alternative approach to cancer treatment. It became the leading British centre for such therapies, although, from its outset, it had an uneasy relationship with conventional medicine. Some clinicians were highly critical of its perceived methods even though very few actually visited the centre to examine the evolving programmes of care offered. In 1988, a group of oncologists from Hammersmith Hospital visited the Bristol Centre. A joint development programme was conceived in which the lessons from Bristol were integrated into a busy academic oncology unit prior to the design and construction of a new cancer centre. Many problems emerged in trying to merge the two cultures, one driven by technology, the other by human need. Several other oncology units are adopting complementary strategies within their services. Here we describe our joint experience and outline the Hammersmith supportive care model currently in use.
{ "pile_set_name": "PubMed Abstracts" }
Coronary heart disease is the leading cause of death in the United States. The elderly, those older than 65 years of age, are more likely to have coronary disease and this problem will become even more pervasive as the population ages. Stenting of coronary blockages has become a conventional technique in both the emergency setting of a heart attack as well as more routine therapy to reduce chest pain. In general, there are two types of coronary stents: drug-coated (or drug-eluting stents, DES) and non-drug-coated (or bare metal stents, BMS). Placement of either type of stent requires treatment with clot-preventing medications such as aspirin and clopidogrel or prasugrel to keep them open. The latter two are newer types of medications that work in conjunction with aspirin to reduce the function of cells that help form blood clots in the body (platelets), thereby preventing clot formation on stents (stent thrombosis). Currently, the American College of Cardiology / American Heart Association recommend continuation of these medications for at least one year following DES and for at least one month following BMS placement. Initial studies comparing DES to BMS showed that by reducing scar formation around stents, DES are less likely to close up and require repeat procedures. Recruitment of the elderly in these studies was limited and subjects were generally healthier when compared to the community-dwelling elderly. Despite the lack of such information, the elderly routinely receive DES during cardiac procedures, requiring the obligatory one-year course of anti-platelet drugs. On average, the elderly are more likely to be taking multiple medications, have a lower body weight, and have worse kidney function than the relatively healthier population in randomized trials, thereby putting them at higher risk of drug-drug interactions and unwanted bleeding. Bleeding, which was originally considered just a nuisance in the early days of stenting, has more recently been recognized to increase mortality in patients with coronary heart disease. The balance of the possible increased risk of repeat procedures with the use of BMS and the potentially increased risk of bleeding with long-term use of dual antiplatelet therapy needs further investigation. The Dual Antiplatelet (DAPT) Study will randomize >20,000 patients to either 12 or 30 months of dual antiplatelet drugs. This trial is unique in that it will study all-comers, there is no upper age limit, and it will collect data on bleeding, interruptions of medications, and adverse cardiac events. We intend to study the DAPT Study elderly subset and compare differences in effectiveness and risk of dual antiplatelet therapy and stent types. Data on the rate of heart attacks, stroke, repeat procedures, and bleeding from a state-mandated registry which collects information on all stenting procedures in Massachusetts will also be analyzed for 3- and 4-year follow-up. Information obtained from this project will provide guidance to physicians caring for elderly patients toward stenting and medical treatment of coronary heart disease, particularly in regards to the type and duration of antiplatelet drugs.

PUBLIC HEALTH RELEVANCE: An important aspect of the management of coronary heart disease entails placement of drug-coated and uncoated stents in the heart arteries to open up blockages. Medications required to prevent the stents from clotting can also increase the risk of bleeding, and the safety of these stents and medications in the elderly is not entirely clear based on currently available data.
The goal of our project is to study the safety and efficacy of these medications and stents in the elderly as part of a 20,000-patient randomized trial of these clot-prevention medications and a statewide registry that maintains information on all coronary stenting procedures done in Massachusetts.
{ "pile_set_name": "NIH ExPorter" }
2013/1/23 Joachim Wilke <joachim.wilke at gmail.com>:
> 2013/1/22 Klaus Schmidinger <Klaus.Schmidinger at tvdr.de>:
>> On 20.01.2013 20:24, Andreas Brachold wrote:
>>> I think the includedir= parameter is missing from vdr.pc, if $(INCDIR) is not a standard directory.
>>
>> If you have an installed version of VDR on your system (i.e. there is a vdr.pc file in /usr/share/pkgconfig) and you 'make' a plugin from within the plugin's source directory, the information stored in that vdr.pc file will be used.
>
> I think what Andreas means is that the INCDIR of the *installed* VDR headers is currently not included in the CXXFLAGS in vdr.pc. This causes plugins to not compile by themselves even if VDR has been installed (using make install) before.

But it works without any problems here. You don't need to add /usr/include or /usr/local/include to the CXXFLAGS because they are always included. I can't imagine a reason why the includedir should not be a "standard directory".
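For readers not familiar with pkg-config, what Andreas is asking for would look roughly like the sketch below. This is a hypothetical vdr.pc, not VDR's actual file; the paths and version are illustrative assumptions:

    # Hypothetical vdr.pc (illustrative values, not VDR's real file)
    prefix=/usr/local
    includedir=${prefix}/include/vdr

    Name: VDR
    Description: Video Disk Recorder
    Version: 1.7.38
    # An out-of-tree plugin build would then pick up the header path via:
    #   pkg-config --cflags vdr
    Cflags: -I${includedir}

Klaus' point is that when INCDIR is a standard directory such as /usr/include, the -I flag is redundant because the compiler searches it anyway.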
{ "pile_set_name": "Pile-CC" }
Simon Completes London Marathon for EWB-UK

This year's Flora London Marathon saw Simon Hills, 24, struggle through in the name of Engineers Without Borders – UK. The money raised by Simon will help to fund EWB-UK's projects both in the UK and across the world, and ensure the longevity of the charity's efforts.

Simon is an engineering graduate who lives in Surrey and trains on the beautiful Epsom Downs. Having already survived an 'Iron Man' challenge and with plans to complete a triple marathon this summer, he is confident that he will do his sponsors proud and complete this marathon with a smile on his face. Simon chose to support EWB-UK after maintaining an active interest in the charity, and particularly their placement opportunities abroad, for a number of years, and they are proud to be represented by him.

Although developing some unsightly and painful blisters due to not being able to train as hard as he would've liked, Simon finished the course in a fantastic time of 3:52:28, beating his colleague by a single second at the finish line.

Simon gives us this review of how the race went...

The race started slowly due to the sheer number of runners and didn't begin to spread out until well beyond the Cutty Sark at 10km. I joined forces with an unknown runner named Leroy at around 14 miles who was running for a muscular dystrophy charity. We pushed each other right through to the end and managed to keep around 8:30 mile pace throughout. Everything was going a bit too well until mile 24 when, after seeing my fiancée in the crowd, I did some showboating and landed hard on my left foot, causing a massive blister on the sole of my foot to explode... Possibly the most disgusting feeling ever, like stamping on a giant slug in bare feet! The showboating also sparked off some cramp in both quad muscles, meaning that the last 2.2 miles was agony. Leroy helped me through the pain and due to my stubborn refusal to slow down we both ran the distance. The crowd was electric throughout and almost became overwhelming towards the end; as everything got a bit blurry I felt like an animal in a zoo with everyone banging on the glass... under extreme tiredness, your mind can do very strange things.

A massive congratulations and thanks to Simon for completing the race on behalf of everyone here at EWB-UK.
{ "pile_set_name": "Pile-CC" }
Rhelena of Crochet Pattern Bonanza AND CrochetN'Crafts

When choosing guest bloggers for my blog Cre8tion Crochet, I primarily work with blogs that I either really love or with bloggers who are open to sharing and helping each other. Rhelena of Crochet Pattern Bonanza fits into both categories.

Crochet Pattern Bonanza is a pattern directory run by Rhelena that offers free crochet patterns. Crochet Pattern Bonanza is the sister site to CrochetN'Crafts, which is where Rhelena primarily shares her own patterns, as well as patterns from other crochet designers.

I "met" Rhelena when she approached me about putting some of my patterns on Crochet Pattern Bonanza. We have since formed a great friendship. I truly believe in reciprocity and so does Rhelena. We have both tried to help each other along the way. Rhelena is also a fantastic crochet designer. I love her patterns.

This pattern is offered exclusively to fans of Cre8tion Crochet. This gorgeous Picot Arm Warmer and Picot Cowl are definitely going on my list of things to crochet.

Click HERE for FREE Crochet Pattern Picot Cowl and matching Picot Arm Warmers

Let's get to know a little more about Rhelena and her sites Crochet Pattern Bonanza and CrochetN'Crafts

1- How long have you been crocheting?
I was taught the basics of crocheting when I was about 9 or 10 and crocheted on and off throughout my childhood and teenage years. It wasn't until my early 20s that I got serious about it and taught myself to read patterns. In 2006 I started playing around with my own designs and started publishing them online in 2008.

2- How long have you been blogging?
I started my first website in late 2008 after I lost my job due to health reasons.

3- What are your favorite items to make?
When I first got serious about crocheting I enjoyed making thread doilies. I made them for all my coffee tables and dressers. Now I enjoy anything that allows my mind to be free, such as blankets, bags, hats and scarves. I also enjoy making tops for myself and hope to get into designing more of those in the near future.

4- What is your favorite thing about crocheting and or blogging?
My favorite thing about crocheting is that it helps me to relax while giving me a sense of accomplishment. As for having a website, it gives me the hope of earning a living one day by doing what I enjoy most.

5- What are your favorite yarns (brands or types) to work with?
It depends on how I look at it in terms of price and what I enjoy working with most. But I tend to favor Patons Canadiana – The New Generation and Aunt Lydia's Size 3 Crochet Thread because they are reasonably priced and result in beautiful fabrics.

6- What was your inspiration for your pattern?
A lot of my inspirations come from what I actually need for myself, or from requests. It forces me to brainstorm and often one project idea will lead to another. I keep a list of things I need/want to make for myself, and I usually take it from there as time and inspiration allows. Also, a lot of my inspiration comes from other designs. That's one of the beauties of owning a crochet pattern directory as it allows me to see many inspirational designs.

7- Tell me a little something about your personal life… kids, family, pets, etc.
Currently I live by myself in a small apartment. I'm trying to overcome a life-long illness with natural remedies and cleansing. So naturally, solitude is one of the best friends I could ask for at this time. Also, I love to do yoga and meditation.

8- Tell us a little bit about Crochet Pattern Bonanza and CrochetN'Crafts.
When were they created?
CrochetN'Crafts was created in late 2008. That's where I have published the majority of my free patterns. All patterns are free at this time, but I hope to design some for sale in the future so as not to put all my eggs in one basket. As for Crochet Pattern Bonanza, a free crochet pattern directory, it was started in 2011.

9- What type of patterns are you looking for?
I'm looking for all types of crochet patterns as long as they are free and the designer/publisher is allowing me to use the image in a thumbnail link.

11- How do fellow bloggers get their patterns featured on your sites?
To get listed on the directory, simply send me an email or fill out the submission form on my directory. You'll find my contact information under the "contact" and "submit link" tabs at the top of the page. You can write up a short summary of yourself or your website and I'll put you in the "featured designers" spot for a day.

I hope you enjoyed getting to know Rhelena as much as I have. If you are a crocheter you will want to visit Crochet Pattern Bonanza for lots of free patterns. Be sure to take a peek at CrochetN'Crafts which features more of Rhelena's original designs.
{ "pile_set_name": "OpenWebText2" }
1. Field of the Invention This invention relates to a tinted, green colored soda-lime-silica glass having a low luminous transmittance that makes it desirable for use as a privacy glazing in vehicles, such as the side and rear windows in vans. As used herein, the term "green colored" is meant to include glasses that have a dominant wavelength of about 480 to 510 nanometers (nm) and may also be characterized as green blue, green yellow, or green gray in color. In addition, the glass should exhibit lower infrared and ultraviolet radiation transmittance when compared to typical green glasses used in automotive applications and be compatible with float glass manufacturing methods. 2. Technical Considerations and Prior Art Various dark tinted, infrared and ultraviolet radiation absorbing glass compositions are known in the art. The primary colorant in typical dark tinted automotive privacy glasses is iron, which is usually present in both the Fe.sub.2 O.sub.3 and FeO forms. Some glasses use cobalt, selenium and, optionally, nickel in combination with iron to further control infrared and ultraviolet radiation and color, for example as disclosed in U.S. Pat. No. 4,873,206 to Jones; U.S. Pat. No. 5,278,108 to Cheng, et al.; U.S. Pat. No. 5,308,805 to Baker, et al.; U.S. Pat. No. 5,393,593 to Gulotta, et al.; U.S. Pat. No. 5,545,596 and U.S. Pat. No. 5,582,455 to Casariego, et al.; and European Patent application no. 0 705 800. Others also include chromium with this combination of colorants as disclosed in U.S. Pat. No. 4,104,076 to Pons; U.S. Pat. No. 4,339,541 to Dela Ruye; U.S. Pat. No. 5,023,210 to Krumwiede, et al; and U.S. Pat. No. 5,352,640 to Combes, et al.; European Patent application no. 0 536 049; French Patent 2,331,527 and Canadian Patent 2,148,954. Still, other glasses may include additional materials, such as disclosed in WO 96/00194, which teaches the inclusion of fluorine, zirconium, zinc, cerium, titanium and copper in the glass composition and requires that the sum of the alkaline earth oxides be less than 10 wt. % of the glass. In producing infrared and ultraviolet radiation absorbing glasses, the relative amounts of iron and other additives must be closely monitored and controlled within an operating range to provide the desired color and spectral properties. It would be desirable to have a dark tinted green colored glass that may be used as a privacy glazing for vehicles to complement the green colored glasses typically used in automobiles and vans that exhibits superior solar performance properties and is compatible with commercial float glass manufacturing techniques.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Losing php empty post entries

I have a strange situation on one of my servers. Whenever I post a form that contains an empty value for a given post key, when I try to read that key from the $_POST array it is not set as expected.

I expect: isset($_POST[$k]) == true
But I get: isset($_POST[$k]) == false

I have not been able to find evidence of this problem anywhere else on the web. I have two servers; it happens on one of them but not on the other. I have no idea if it could be related to my version of PHP, or Apache, or some configuration file.

Test scenario:

<?php print_r($_POST['param']); ?>
<form action="posttest.php" method="post">
<input type="text" name="param[text1]" value="1"><br>
<input type="text" name="param[text2]" value=""><br>
<input type="text" name="param[text3]" value="3"><br>
<input type="text" name="param[text4]" value=""><br>
<input type="text" name="param[text5]" value="4"><br>
<button type="submit">Send</button>
</form>

Server A (the good one)
Data: PHP Version 5.4.45
Echoes this:

[param] => Array
(
    [text1] => 1
    [text2] =>
    [text3] => 3
    [text4] =>
    [text5] => 4
)

Server B (the faulty one)
Data: PHP Version 5.6.20
Echoes this:

[param] => Array
(
    [text1] => 1
    [text3] => 3
    [text5] => 4
)

I don't know what more info to add to the question, so if you have a clue and need more information please let me know and I will update the question.

A: The problem has disappeared; it went away after running EasyApache on my WHM console. Maybe it was a misconfiguration or a bug in the php binary I was using, who knows.
{ "pile_set_name": "StackExchange" }
A new approach for enhanced multiplication of arbuscular mycorrhizal fungi and isolation of ITS regions from Glomus deserticola and Laccaria fraterna. Rootlets induced from the petiole base of L. purpureus using IAA and kinetin were used for enhanced multiplication of the arbuscular mycorrhizal (AM) fungus G. deserticola. Using conserved short arbitrary oligonucleotides as specific primers, we amplified the ITS region, a molecular marker for fungal identification, from the genomic DNA extracted from cultured spores of G. deserticola and from the genomic DNA extracted from the mycelium of L. fraterna. The capacity for fungal colonization and subsequent spore formation of G. deserticola, compared with the natural root system, was evaluated. This technology would provide a simple way to multiply AM fungi and to produce spores without microbial contamination, useful for further molecular characterization.
{ "pile_set_name": "PubMed Abstracts" }
[The effect of spinal anesthesia on pulmonary function tests in old patients]. Pulmonary function test (PFT) results are mainly dependent on age, sex, height, weight, disturbances of pulmonary mechanics and cooperation of the subjects. The position and anesthesia type may also influence the PFT results. In this study we aimed to evaluate spirometric changes in old and young patients undergoing spinal anesthesia. Fifty patients undergoing spinal anesthesia were randomized into two groups: Group 1 (n= 25) aged 60-85 years and group 2 (n= 25) aged 20-59 years. After electrocardiography, noninvasive blood pressure and peripheral oxygen saturation (SpO2) monitoring, spinal anesthesia using 0.5% hyperbaric bupivacaine injected at the L3-4 intervertebral space was applied. Sensory block levels, hemodynamics and PFTs such as forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), peak expiratory flow (PEF), and forced expiratory flow at 25-75% of the pulmonary volume (FEF25-75) were measured before spinal anesthesia and at the 10th, 40th and 100th minutes after it, in the supine and 30-degree head-up position, using a hand-held spirometer. The Wilcoxon paired test was used to compare the PFT changes of the subjects. The decreases in mean arterial blood pressure and in the spirometric measurements of FVC, FEV1 and FEF25-75 with respect to baseline values at the 40th minute were significant in old patients in whom the spinal anesthesia level was above Th6, whereas in young patients the changes were not significant. The possibility of PFT decrements should be taken into account in old patients scheduled for spinal anesthesia, and attention should be paid to high-level spinal blocks in risk-group patients.
{ "pile_set_name": "PubMed Abstracts" }
Merusk
Reply #1 on: December 12, 2008, 04:11:14 PM

I can't say I disagree with the assessment.

Ed 12/23: The Times fails on all kinds of levels here. Wow.

Ingmar
Reply #2 on: December 12, 2008, 04:23:18 PM

What a bunch of crap (he said while posting from work...)

Hawkbit
Reply #3 on: December 12, 2008, 04:24:29 PM

Generally, I don't tell people that I game on a computer. I usually chat with people irl about consoles and their games, but there has always been a stigma about PC gamers that sticks with people. It's like telling them you're into BDSM or something... once they know they'll always look at you differently. It sounds stupid, but it's the truth.

Nebu
Reply #4 on: December 12, 2008, 04:31:49 PM

If they were smart, they'd avoid people in fantasy leagues too.

Lucas
Reply #5 on: December 12, 2008, 04:35:05 PM

Quote from: Hawkbit on December 12, 2008, 04:24:29 PM
> It's like telling them you're into BDSM or something... once they know they'll always look at you differently. It sounds stupid, but it's the truth.

Hehe, yes: in the past, I tried to speak with some of my close friends (none of them is into gaming) about videogames, and when I tell them I still play regularly, try out different gaming genres and stuff, *that* look starts surfacing on their faces and it ends with: "yeah, but c'mon, that is kids' stuff, I sure played them but I stopped when I was around twelve, you need to grow up!". Alright.

sam, an eggplant
Reply #6 on: December 12, 2008, 05:14:44 PM

Shrug. They recently fired someone with a MMO addiction. It doubtless impacted his performance at work. I conduct interviews and hire/fire all the time and I don't ask (or care) what people do when they're not working. It's none of my business so long as it doesn't impact their performance, and I haven't seen anything to indicate that MMOs are worse than spending all your money/time on filthy streetwalkers ("setting up whore websites for freebies"), smoking a shitton of pot, or singing in a church choir. Actually the choir guy was the worst, because the fucker kept asking to leave early to make rehersals. I liked the whoremonger and pothead while the churchmonkey, umm, decided to quit.

Rasix
Reply #8 on: December 12, 2008, 05:27:27 PM

Quote from: Lucas on December 12, 2008, 04:35:05 PM
> Hehe, yes: in the past, I tried to speak with some of my close friends (none of them is into gaming) about videogames, and when I tell them I still play regularly, try out different gaming genres and stuff, *that* look starts surfacing on their faces and it ends with: "yeah, but c'mon, that is kids' stuff, I sure played them but I stopped when I was around twelve, you need to grow up!". Alright.

Say anything other than Madden, Halo, or "the Wii" and most people will look at you like you've got some sort of contagious nerd super-flu. I don't even mention MMOs. Those creep out even the few "gamers" I know at work. I've had some people that know I play MMOs talk to me about WoW due to its cultural relevance. It's mostly in hushed tones and away from other people like they're buying weed off me or something.

Most people are going to have things that make them shitty hires. It's called vetting and people management. Try it!

Ingmar
Reply #9 on: December 12, 2008, 05:31:58 PM

Quote from: Rasix on December 12, 2008, 05:27:27 PM
> Say anything other than Madden, Halo, or "the Wii" and most people will look at you like you've got some sort of contagious nerd super-flu. I don't even mention MMOs. Those creep out even the few "gamers" I know at work. I've had some people that know I play MMOs talk to me about WoW due to its cultural relevance. It's mostly in hushed tones and away from other people like they're buying weed off me or something.

At least it isn't D&D?

NiX
Reply #10 on: December 12, 2008, 05:53:58 PM

Up here that would be a very terrible thing to say to a candidate. Discrimination for the win.

Teleku
Reply #11 on: December 12, 2008, 06:33:56 PM

He only said WoW because that's the big name one that everybody and their mother plays. The basic jist though is "Don't hire people who play MMO'S". Which I'm not sure I can disagree with based on the reasoning they are applying.

Ghambit
Reply #12 on: December 12, 2008, 06:40:04 PM

If I had a company of my own I'd probably only HIRE PC gamers, and probably give the MMO peeps management positions.

Seriously though; at least when hiring gamerz you know where and what your workers are doing. It's sort of like having a pseudo-RFID tag on everyone, which I find empowering (plus zombies make good slaves). I've found that "normal folk" tend to phuck up WAAAY worse when they've got nothing to do with their time except work.

So no, he's not right. Not by a longshot.

Quote from: sam, an eggplant on December 12, 2008, 05:14:44 PM
> I conduct interviews and hire/fire all the time and I don't ask (or care) what people do when they're not working. It's none of my business so long as it doesn't impact their performance, and I haven't seen anything to indicate that MMOs are worse than spending all your money/time on filthy streetwalkers ("setting up whore websites for freebies"), smoking a shitton of pot, or singing in a church choir. Actually the choir guy was the worst, because the fucker kept asking to leave early to make rehersals. I liked the whoremonger and pothead while the churchmonkey, umm, decided to quit.

exactly

Ratman_tf
Reply #13 on: December 12, 2008, 06:47:00 PM

My gaming habits come up in interviews since I'm into testing. But I'm wondering why anyone would bring up WoW in any other kind of job interview? Most people probably don't even understand what the hell you'd be talking about. I DPS for DKP on teh mobs, lol!

Ghambit
Reply #14 on: December 12, 2008, 07:30:22 PM

Quote from: Ratman_tf on December 12, 2008, 06:47:00 PM
> My gaming habits come up in interviews since I'm into testing. But I'm wondering why anyone would bring up WoW in any other kind of job interview? Most people probably don't even understand what the hell you'd be talking about. I DPS for DKP on teh mobs, lol!

Kinda sad that most managers dont know what WoW is, given the share they've got of our GDP and how successful their business model was/is. Ironically, the only businesses that seem to be working lately are drugs, oil, and WoW in the USA. Naturally, all 3 are pretty well hated-on, but all 3 still go strong.

Tale
Reply #15 on: December 12, 2008, 08:07:19 PM

Quote from: Teleku on December 12, 2008, 06:33:56 PM
> He only said WoW because that's the big name one that everybody and their mother plays. The basic jist though is "Don't hire people who play MMO'S". Which I'm not sure I can disagree with based on the reasoning they are applying.

But the genre is unknown outside gaming circles. In most people's minds, WoW is a kind of game, not a kind of MMO. It is the first game of its type that most people are aware of, and that's if they've even heard of WoW at all. That's why he only said WoW.

Quote from: Ratman_tf on December 12, 2008, 06:47:00 PM
> My gaming habits come up in interviews since I'm into testing. But I'm wondering why anyone would bring up WoW in any other kind of job interview? Most people probably don't even understand what the hell you'd be talking about. I DPS for DKP on teh mobs, lol!

Correct. I did not specifically bring it up. But it wasn't a job interview, we were just doing lunch. He had a new iPhone 3G, we started talking technology and games, and I happened to mention I considered myself to have played too many online games several years ago. When I said "the ones that came before World of Warcraft", he had heard of WoW so he told me what employers had told him about its players. And I thought "I'll post that on f13".

If I had a reason to bring something like that up in an interview, it would be to demonstrate my broad awareness of what's going on culturally. My employability in online media is partly based on having been an oldschool internet user. It gives me a bigger bag of tricks to engage today's online users and make them click on stuff. So having done the MMO addiction thing and got out before most of today's WoW players got in, I might have some insight into "what the kids are doing". But that's not a particularly strong thing to bring up, especially as WoW players are probably too busy to click on my content.

(edit - whew - post restored via Google cache after accidental edit)

Slyfeind
Reply #16 on: December 12, 2008, 08:37:00 PM

Those darn kids, next thing you know they'll be hanging in pool halls, and saying things like "sure" and "swell!" (The ladies gasp and swoon) "Ooooooo!" We got trouble! (Trouble! Trouble!) Right here in River City! (Right here in River City!) With a capital T and that rhymes with P and that stands for pool! THAT STANDS FOR POOL! I SAY YOUR YOUNG MEN WILL BE FRITTERIN AWAY!!!!11 FUCKING FRITTEREN OMFG WATCHTHETAIL50DKPMINUS111!!!!!!11111

Numtini
Reply #17 on: December 12, 2008, 08:47:20 PM

I tend to not be obsessed by money, though that is tempered by always being lucky enough to end up with enough. But really? I wouldn't want a job with a company like that. Life is too short.

angry.bob
Reply #20 on: December 13, 2008, 01:25:34 AM

Quote from: Nebu on December 12, 2008, 04:31:49 PM
> If they were smart, they'd avoid people in fantasy leagues too.

Supertruth. Fantasy Leagues are a fucking blight upon all of mankind. Every IT shop I've worked in, people have spent the bulk of their day fucking around on the web with their teams and looking up stats for trades or whatever stupid shit you do with those things. Even though sports fans are usually far more retarded about their mancrush obsession than videogamers are, it's a "normal" obsession.
When I point out that videogamers at least interact with their medium and most sports enthusiasts pay tons of money to sit on their ass doing nothing but eating shit food while they pretend they're the "XXth player" on the team when in reality no one on the team really gives a fuck about the city they "represent" in some quasi-tribal pseudo warfare, they just shrug and go back to trying to find player's medical reports or whatever. I will make the qualification that despite the universally low quality of professional athletes, Lebron James is constantly doing really great things here in Akron locally and seems to have a strong sense of connection to the city, especially to children's causes. I'm glad I changed my mind at the last minute about running onto the court when he was in high school and beating on his knees and hips with a claw hammer. Supertruth. Fantasy Leagues are a fucking blight upon all of mankind. Every IT shop I've worked in, people have spent the bulk of their day fucking around on the web with their teams and looking up stats for trades or whatever stupid shit you do with those things. Even though sports fans are usually far more retarded about their mancrush obsession than videogamers are, it's a "normal" obsession. When I point out that videogamers at least interact with their medium and most sports enthusiasts pay tons of money to sit on their ass doing nothing but eating shit food while they pretend they're the "XXth player" on the team when in reality no one on the team really gives a fuck about the city they "represent" in some quasi-tribal pseudo warfare, they just shrug and go back to trying to find player's medical reports or whatever.I will make the qualification that despite the universally low quality of professional athletes, Lebron James is constantly doing really great things here in Akron locally and seems to have a strong sense of connection to the city, especially to children's causes. I'm glad I changed my mind at the last minute about running onto the court when he was in high school and beating on his knees and hips with a claw hammer. Wovon man nicht sprechen kann, darüber muß man schweigen. Murgos Posts: 7474 Terracotta ArmyPosts: 7474 Reply #22 on: December 13, 2008, 09:21:59 AM My office is very nerd-centric. Many people actually know what WoW is and it's not hard to find someone actively playing, usually because they are surfing WoW forums all day. However, you can start a fantasy sports conversation with anyone. "You have all recieved youre last warning. I am in the process of currently tracking all of youre ips and pinging your home adressess. you should not have commencemed a war with me" - Aaron Rayburn FatuousTwat Posts: 2223 Terracotta ArmyPosts: 2223 Reply #24 on: December 13, 2008, 05:29:19 PM I spend most of my free time either gaming or reading sci-fi or fantasy. So, pretty much nothing to bring up in any social situation, unless I want a bunch of morons staring at me. Has anyone really been far even as decided to use even go want to do look more like? Malakili Posts: 10596 Terracotta ArmyPosts: 10596 Reply #25 on: December 13, 2008, 05:29:52 PM Quote from: Slyfeind on December 12, 2008, 08:37:00 PM Those darn kids, next thing you know they'll be hanging in pool halls, and saying things like "sure" and "swell!" (The ladies gasp and swoon) "Ooooooo!" We got trouble! (Trouble! Trouble!) Right here in River City! (Right here in River City!) With a capital T and that rhymes with P and that stands for pool! THAT STANDS FOR POOL! 
I SAY YOUR YOUNG MEN WILL BE FRITTERIN AWAY!!!!11 FUCKING FRITTEREN OMFG WATCHTHETAIL50DKPMINUS111!!!!!!11111 This just made my entire day better. This just made my entire day better. Tale Posts: 8021 sıɥʇ ǝʞıן sʞןɐʇ Terracotta ArmyPosts: 8021 Reply #26 on: December 13, 2008, 09:22:42 PM Quote from: Slyfeind on December 12, 2008, 08:37:00 PM Those darn kids, next thing you know they'll be hanging in pool halls, and saying things like "sure" and "swell!" (The ladies gasp and swoon) "Ooooooo!" We got trouble! (Trouble! Trouble!) Right here in River City! (Right here in River City!) With a capital T and that rhymes with P and that stands for pool! THAT STANDS FOR POOL! I SAY YOUR YOUNG MEN WILL BE FRITTERIN AWAY!!!!11 FUCKING FRITTEREN OMFG WATCHTHETAIL50DKPMINUS111!!!!!!11111 Yeah, I meant to say, that was awesome. Yeah, I meant to say, that was awesome. Tale Posts: 8021 sıɥʇ ǝʞıן sʞןɐʇ Terracotta ArmyPosts: 8021 Reply #27 on: December 13, 2008, 09:24:34 PM Quote from: Tarami on December 13, 2008, 05:22:31 PM I can't say I disagree. I wouldn't have hired me in 2006. Would you hire you now? I mean, has it made you less employable for life? Or did you go back to being as employable as before, or more so? Would you hire you now? I mean, has it made you less employable for life? Or did you go back to being as employable as before, or more so? sam, an eggplant Terracotta Army Posts: 1518 Reply #29 on: December 13, 2008, 11:47:26 PM Shrug. Everybody does something when they're not working. Just as an example, (and to be clear I do not do this, it's very illegal) I would love to avoid hiring married people with young children. Nothing sucks more time than kids, and children are always prioritized over work. I've had major problems with parents in the past. Major, major problems. If it weren't illegal, I'd hire the unkempt surly gamer with a neckbeard over the married professional guy with a lovely wife and infant at home any day of the week. Any day, any way. If only it were possible. I also wouldn't hire anyone over 50, women, or cripples. Old people leave at 4:59:59.999, women get married and quit working or take long maternity leaves or sue your ass for harassment, and no matter what they may think, being unable to hobble with that crutch faster than 0.4MPH does impair your ability to do a white collar job, Quasimodo. But hey, all illegal. sinij Posts: 2597 Terracotta ArmyPosts: 2597 Reply #30 on: December 14, 2008, 04:45:06 AM Just like I wouldn't talk with anyone who is not involved in it about my sex life, I wouldn't discuss my gaming habits. For example I wouldn't discuss WoW at my BDSM meets, they would think I am a pervert. « Last Edit: December 14, 2008, 04:52:57 AM by sinij » Eternity is a very long time, especially towards the end.
Initial contact is free of charge in order to understand your situation. Of course, everything we discuss is always confidential!

I apologize if there is redundant information on other pages, but I am revising about 6 different websites with lots of different information, and I will edit when I have the time. I believe that I have addressed immediate concerns so you can understand your situation. This will be geared to auto theft claims, but I consult on all personal property claims such as renters, homeowners, and boat claims. I have also been involved in commercial business claims with successful outcomes. I average a 90 to 95% success rate in which the claims are settled with my intervention.

One might think that with this success, I have an insurance sales or underwriting background. No, I do not have such experience. Some might think I have some type of legal background. That would be correct, but not in the standardized way like being a lawyer or a paralegal. My experience has come the hard way: serving as an expert witness giving hundreds and hundreds of hours of sworn testimony, combined with the fact that I have been involved in every area of auto theft investigations for over 20 years, working with Special Investigation Units across the country for just about any auto insurer out there. I have learned a lot, which I exploit for my clients, and I have taught these investigators for years. I have formed great relationships with many of these people. Of course there are some who hate me, because I may destroy the case they were building for denial.

When an insured suffers a loss, in the beginning there is very little concern. After all, they have insurance; there should be a few questions about the theft event, but now it's time to pay up within 30 days! Yeah, those were the good ole days, but it is not like that anymore.

When you submit your claim you will be required to give a recorded statement. This will be basic information about the theft and details surrounding it. They want accountability for all keys. Who else uses the vehicle? Any enemies? Any broken glass at the theft scene? Did the vehicle have an alarm? I highly suggest that you listen to each question carefully before answering. It simply does not look good for you to have to go back and change your answers. I realize you may be in shock because someone invaded your personal space by stealing your property. Most people do not understand the emotions that pop up in these situations unless they have been in the same situation at one time in their lives.

For example: you tell the claims rep that the car was parked at 6 pm. You went back to the car at 7 pm and you found it was no longer there. You speak with the friends you met that day, and although the meeting was at 6, you were an hour late and did not arrive until 7 pm. This means your timeline is totally off, and an investigator will find out. Instead of having to correct an answer that the insurance company will look at as a misrepresentation of material fact, confirm with witnesses or friends what time the vehicle was stolen before you give your statement.

These investigations are not to find out who stole the vehicle! You need to be aware: this is an insurance fraud investigation, and you are the target! The first knee-jerk reaction is to retain an attorney. That is the last thing you would ever want to do in an SIU investigation. This situation does not merit an attorney, because it slows the investigation to a snail's pace. Now you no longer have the option to speak with the investigator.
Everything has to go through the attorney, and anyone who deals with attorneys knows they are difficult to contact. Another downside of using an attorney is that a huge majority know less than you about auto theft claims protocol, putting you in the position of paying to train them! Auto theft claims are much different than personal injury or slip and fall. They require a specialist like me who has 28 years of experience in these claims. If you hire me to consult you through this witch hunt, I make this much less of a burden on your shoulders, and the chances of getting the claim settled go up exponentially, because their ultimate goal is to deny the claim. This would be denied by the investigator, but just follow the money. Who pays the investigator to investigate? The insurance carrier! If all claims are paid, what is the sense of paying the investigator?

You were probably denied because of the vehicle you drive. Most vehicles equipped with an anti-theft transponder are deemed unstealable by the insurance companies. This is due to the free training they get from their forensic experts. There are major problems on that front that I will go into. Basically, these experts are not W-2 employees of the insurance company and are termed independent. They have the audacity to accuse the insured of fraud!

You really can't blame the investigator or the company. They don't know you, and you could certainly well be submitting a false claim. That happens a lot! It's not always professional crooks submitting false claims. It could be a neighbor who fell on hard times, possibly losing his job, owing more than the car is worth (under water), or maybe the wife went through an emergency operation. Bottom line: whatever the reason, here they are trying to illicitly collect that claim money. Although I feel bad for whatever financial situation they got themselves into, it is still a criminal act and I feel they should be prosecuted. I have seen juries overlook the crime and insert emotion into their verdict in these hardship situations, and that is something no one has control of. There are also the professional scammers submitting bogus claims, or at the very least ripping off the buyer of a car whose theft claim, submitted after owning it for 3 years, turned out to be an inadvertently fraudulent claim.

My purpose here is not to attack the companies for investigating claims. Without the investigators, none of us would be able to afford comprehensive insurance. Before I go on as to the consulting, I want to put in context what these investigators deal with day in and day out, and you will realize the investigation is nothing personal. But some investigators get overzealous, and in many cases they were deliberately fed deceptive information by their so-called "forensic" experts, whose reports are responsible for the subsequent investigation and potential denial of the claim. I assure you, this is not an accident or a simple mistake. These experts know the investigator is totally reliant on their bogus conclusions, and they figure they will face no opposition. When I have cases opposing them as an expert, I slice and dice these clowns and illustrate to a jury their misrepresentations.

As the insured being investigated, it may be because the forensic expert inferred you are a liar and the car was last driven with one of your keys, even though it wasn't. What? The investigator is supposed to believe you over their hand-picked independent forensic experts? Good luck on that!
Insurance Investigations

Some quick context of what investigators are up against: 10 years prior to a theft claim, a guy got a really good deal on a Ferrari. He purportedly paid $50k for a car valued over $100k at the time. I don't know about you, but I would be thinking this deal is too good to be true and I would have run away as fast as possible, but that is me. The owner's justification for this price was that the car did have a salvage/rebuilt title, devaluing the vehicle by 35%, so at that rate the vehicle was worth about $65k retail. $15k is one heck of a markdown. Something wasn't right. The owner told me that the guy was his friend and was a dealer buying cars from the auctions all the time. In fact, he had what he thought to be receipts for the replacement parts to rebuild the car.

Fast forward 10 years. The buddy he bought the car from had died 5 years previous. The Ferrari got stolen. The car had been insured for 10 years by the same major carrier. The car was not recovered; however, the investigator reached out to their forensic experts to author a report about how the car was impossible to steal because of its sophisticated anti-theft transponder system requiring the owner's key to start the engine. In case you did not realize it, these experts implicated the owner's involvement in the theft of the vehicle. They inspected absolutely no physical evidence and went solely off the description and operation theory about the design of the system in a service manual. The "fact" (assumption) was that the vehicle was driven at the time of the theft. Towing could not even be eliminated, yet here the Certified Forensic Locksmith, as an agent (subcontractor vendor), is accusing the insured of a crime with absolutely no physical evidence to support the accusation! Unfortunately, this happens nationwide far too often.

Unopposed, the insured's lawyer would not know how to refute the expert. Who would the jury believe? The Certified Forensic Locksmith, of course, because the jury would put this expert in a professional category on account of the self-described term "forensic" in his title. Many members of juries, from what I have been told, are fascinated by TV shows like Forensic Files and CSI and put these experts in that real-life TV category. Unfortunately, I have run into far too many incompetent attorneys confused by it all without thinking of the basics! The report and conclusions should be thrown out of court automatically because they are giving testimony on physical evidence they never viewed! Duh! This blows past attorneys all the time. Instead, they are more concerned about whether that anti-theft system can be defeated. Of course it can, but what is the point? You are feeding into their bull crap when they actually were testifying on evidence they never had! Yeah, and these attorneys are not ashamed to charge lots of bucks for their idiocy! This insured was dragged through an investigation because of these moron forensic experts, over a potential fraud that did not happen.

A day before the examination under oath, the insured called me back very concerned. I asked him what the problem was. He noticed the title for the Ferrari was a duplicate. For 10 years he had the title to look at, and the day before the EUO he tells me the title is a duplicate! Now, it could be the original salvage title got lost and a duplicate was issued. It could be there was a lien on the salvage title and someone "washed it" with a duplicate. At any rate, it was a problem.
We also don’t know when it happened through which state. The car was bought in Illinois but he had it registered in Florida where he moved to and his dead buddy had handled all of the paperwork. I had spent 4 hours on the phone with him scripting him as to how to honestly answer all the questions. I gave him questions he never saw coming so he was fully prepared. I told him just like all my clients, my phone is open for any hicups during the EUO, take a break and call me if you have a problem question. And last but not least, call me after the EUO immediately in case we have to clarify something. Well, I hadn’t heard from him. I don’t know how many times I tried to reach him, but on the fourth day, he called me! I asked him what happened, everything should have gone very smooth. I had him covered to attack the forensic report and by all my work, the claim should have been paid. He told me he went out and got drunk for 3 days. I didn’t get it until he told me what went down. He told me the Insurance company told him if he admitted today that the car was Ferrari was stolen they would not press charges for receing stolen property. He did so accordingly. Had he called me during the EUO, I would have told him what to say and do and chances are results would have been much more favorable to him. After all the carrier could have checked this out 10 years ago but didn’t and for 10 years had their hands out for the premium every year. In my view the insurce carrier aided and abetted in the posession of the stolen peoperty at the very least! Anyway, that is a claim that went wrong for everyone involved. At least the insurance company did not have to pay. The vehicle was never recovered. The insured didn’t have to contest the denial and get stuck with an incompetent attorney not having to cross examine the expert on their report based on no physical evidence. The weirdest thing was the insured suspected forany reasons he told me afterward he had bought a stolen Ferrari. I sure don’t have $50k to throw away, but evidently the insured did. That consultant claim is one out of 4 that did not get to as planned. Over 10 years, I have hundreds and hundreds of claims with I succesfully consulted in. Claims the insurance company had altready hadthe claim scheduled for denial. The problem here was not me but information the insured did not tell me about. The Cloned Car That Was A Ghost A cloned car does not exist and in merely a mirage. In this case I was serving as an expert witness. The insured bought the car from a Florida dealer. The vehicle was .Corverte. it was being transported from NYC to Florida. The Vette had a NY title. Because being from out of state, the vehicle required inspection. Vehicle was recorded as being inspected in Florida. Tirle was transferred from New York to Florida and car was registered and insured to the owner. He took photos of the Vette. The outside was green and interior was black. The insured evidently enjoyed driving this car for 3 years. It was stolen from his driveway one night. He did everything right. He reported the theft the cops. He submitted a claim to his insurer. He went over a no f above quizzing neighbors if they heard it saw anything coming from his driveway. No one saw or heard anything. Since this vehicle was equipped with an anti-theft transponder Systems, it drew scrutiny because these vehicles according to insurance investigators can’t be stolen without the use of the owner’s key. The insured, because of the transponder was now under investigation. 
NICB (the National Insurance Crime Bureau) was now involved, because a national check of the VIN showed it was listed in Iowa. NICB went to Iowa to check on this. They interviewed the owner, who had bought the car new. He was asked if the car had been to New York. He said no. Had the car been out of state? He confirmed it had been to Illinois for a car show. NICB took photos of the car, which happened to be the same colors the insured had in his photos. He was asked if he ever had the car for sale on Craigslist or anywhere else. His answer was no. Next, the NICB attempted to contact the seller in New York, finding the phone number was not valid anymore. They went to the address the seller had listed. That was a vacant block where houses no longer stood on the road. They contacted the New York Department of Motor Vehicles and gave them their summary about the Vette
Liquid-liquid extraction, also referred to as solvent extraction, processes are widely used to purify products in which the product contaminants are preferentially soluble in one of the liquid phases and the desired product is preferentially soluble in the other phase which is immiscible with the first liquid phase. The commercial extractors employing solvent extraction include both unagitated and agitated columns, as well as mixer-settlers, all of which rely on the transfer of the component to be separated (the consolute component) from one liquid phase through the interface with the other (second) liquid phase and into the second phase. It is common for gelation and/or emulsification between the first and second liquid phases to develop when so influenced by the consolute component. Illustrative of industrial applications of solvent extraction are separation of aromatic and aliphatic hydrocarbons, desulfurization of hydrocarbons, butadiene separation, caprolactam extraction, manufacture of acetic acid, pharmaceutical processes such as for antibiotics and vitamins, phenol production, manufacture of synthetic elastomers, and inorganic metal extraction applications such as is used in conversion of ilmenite to titanium dioxide. Oftentimes these extraction processes utilize a hydrocarbon such as hexane as one liquid component and water as the other liquid component, relying on the solubility of the consolute component in water for its removal from, and resultant purification of, the desired product which is soluble in the hexane. Such a liquid-liquid extraction procedure is commonly used in the production of saturated or unsaturated, amorphous, elastomeric olefin copolymers. These include copolymers of ethylene with propylene, and terpolymers of ethylene with propylene and with a cyclic or acyclic nonconjugated diolefin, such as 1,5-cyclooctadiene, 1,4-hexadiene, 6-methyltetrahydroindene, methylene-norbornene, ethylidene-norbornene, etc. These copolymers and terpolymers are usually produced by solution processes, in the presence of catalytic systems consisting of compounds of the transition metals of Group V of the Periodic Table of Elements, preferably vanadium compounds, such as vanadium triacetylacetonate and vanadium oxychloride, and of organometallic compounds of metals of Groups I to III of the Periodic Table of Elements, preferably aluminum compounds. When the process provides the polymers of appropriate molecular weight, it is useful to quench the polymerization by introducing a quenching agent such as water. In such processes, it is necessary to purify the polymeric compounds by removing the catalyst residues therefrom because, besides contributing to the formation of ash content in the polymeric product, the presence in the polymer of these residues promotes the occurrence of oxidative degradation processes with consequent loss of quality of the polymeric product. It is known from the prior art to purify elastomeric products in solution in suitable organic solvents, by a solvent extraction process involving washing the polymer solutions with aqueous solutions of suitable extracting agents which are capable of forming water-soluble compounds with the catalyst residues and subsequently separating the aqueous phase from the organic phase.
Unfortunately, quenching with water and/or washing with aqueous solutions of extracting agents, the presence of complexing agents, the pH of the system, the viscosity of one or both phases, and the efficacy of mixing can result in emulsification and/or gelation of one or more of the components and/or poor contact between the extracting agent and the catalyst residues being extracted. Various specific approaches to removing catalyst residue from the polymers are hereafter set forth: U.S. Pat. No. 3,337,514, to Knabeachuh et al, requires contacting a solution of an alpha-olefin copolymer with steam, then with aqueous mineral acid, then with water under turbulent conditions, and separating the copolymer solution from the aqueous phases; U.S. Pat. No. 3,740,381 approaches purification by washing with an aqueous solution of extracting agent in the presence of a propylene monomer and a solvent for the polymer product; U.S. Pat. No. 3,804,815 treats the polymer product with aqueous caustic, followed by filtration and removal of the aqueous phase; and U.S. Pat. No. 4,016,349 utilizes polar liquid extractants (such as water) supported on a finely divided solid material such as diatomaceous silica, silica gel, alumina or molecular sieves, which material is then separated from the polymer solution. There is also a need to prevent gel/emulsion formation in liquid-liquid extraction processes, particularly those in which water is one of the immiscible phases, and a need for a simple and efficient process for removing metal catalyst residues from alpha-olefin hydrocarbon polymer solutions, and especially a process suitable for continuous catalyst residue removal.
Q: What does this error in Sass mean? "Illegal nesting: Only properties may be nested beneath properties."

This is my code:

html, body {
    width: 100%;
    height: 100%;
    padding: 0;
    margin: 0;
}

body {
    font-family: 'Open Sans';
}

.navigation {
    padding: 0;
    margin: 0;
    background: #333;
    position: fixed;
    top: 0;
    z-index: 999;
    width: 100%
    li {
        display: inline;
        padding: 5px 10px;
        a {
            color: #e1e1e1;
            text-decoration: none;
            a:hover { color: lighten(#e1e1e1, 20%); }
        }
    }
}

But whenever I build it and refresh the webpage I get this error:

Syntax error: Illegal nesting: Only properties may be nested beneath properties. on line 23 of style.scss

Here is my SCSS code with line numbers:

18: z-index: 999;
19: width: 100%
20: li {
21:     display: inline;
22:     padding: 5px 10px;
23:     a {
24:         color: #e1e1e1;
25:         text-decoration: none;
26:         a:hover { color: lighten(#e1e1e1, 20%); }
27:     }
28: }

I guess it's the anchor tag that's creating the problem, but I don't understand why. Theoretically it should work.

A: You're missing a semicolon:

.navigation {
    padding: 0;
    margin: 0;
    background: #333;
    position: fixed;
    top: 0;
    z-index: 999;
    width: 100%;  <=== right here

A: Although this isn't the correct answer for this specific question, others who stumble upon this article because they encounter the same error message ("Illegal nesting: Only properties may be nested beneath properties.") may find the cause to be related to using the scss_lint gem. In particular, if you've upgraded from scss-lint to scss_lint, you may need to add require: false to your Gemfile to get things working again.

Reference: https://github.com/brigade/scss-lint/issues/496
Docs: https://github.com/brigade/scss-lint#installation
Q: ICommand vs IValueConverter

I am new to WPF. I am currently developing an application in WPF where I had to enable/disable a button based on a value from a database. I found solutions on the net to do so using Command as well as Converters. So which one is the better solution, and why?

A: When working with buttons it would be best to use the command implementation, since it is built in and you can provide a command parameter for the predicate. The converter is something you would need to write, instantiate, and place in each spot you would want to use it.

To summarize, a command with CanExecute would be more reusable and maintainable.
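For illustration, here is a minimal sketch of the command approach. The RelayCommand helper below is a common hand-rolled pattern, not part of WPF itself, and the SaveCommand / IsRecordEditable names are hypothetical stand-ins for whatever your view model exposes:

using System;
using System.Windows.Input;

// Minimal ICommand wrapper: pairs an Execute delegate with a CanExecute predicate.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        // A null predicate means "always enabled".
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Let WPF's CommandManager requery CanExecute on focus/input changes,
    // so a bound Button enables/disables itself automatically.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

// In the view model (the flag would come from your database lookup):
// SaveCommand = new RelayCommand(p => Save(), p => IsRecordEditable);

// In XAML, binding the command is all that is needed; no converter required:
// <Button Content="Save" Command="{Binding SaveCommand}" />

When the button is bound to such a command, WPF calls CanExecute and greys the button out on its own, which is why the answer above favors a command over an IValueConverter on IsEnabled for this case.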
Defiant Comics

Defiant Comics was a comic book publishing imprint of Enlightened Entertainment Partners, LP. Defiant was established in 1993 by former Marvel Comics and Valiant Comics editor-in-chief Jim Shooter.

Publication history

Defiant was founded in the wake of Jim Shooter's departure from Valiant. After attempting, unsuccessfully, to retain his partial ownership of Voyager Communications (Valiant's parent company), Shooter founded a new company that included some Valiant artists and writers on its staff. He formed a business venture with The River Group to help finance Defiant.

In early 1993, Defiant announced that its first title, Plasm, would be released as a series of trading cards that could be put together in an album to form "issue #0". Upon hearing the news, Marvel Comics threatened a lawsuit against Defiant, claiming the new title violated a Marvel UK trademark for their book/character Plasmer. Though Defiant changed the title to Warriors of Plasm, Marvel continued its lawsuit. While the court eventually ruled in favor of Defiant, the legal process depleted the company's capital, having cost over $300,000 in legal fees. Defiant ceased publication in Summer 1995.

Announced plans

Shooter had originally planned to publish an intracompany "crossover" featuring all the characters and titles in the self-contained Defiant universe, similar to the Secret Wars crossover miniseries he had done at Marvel and the Unity crossover miniseries he had also completed before his dismissal at Valiant. To have been titled "Schism", the crossover was intended to take place in a four-issue miniseries, with the regular ongoing titles retelling the parts relevant to the respective characters of each. Only two crossover-related issues (Dogs of War No. 5 and Warriors of Plasm #13) were published before the company went out of business. The plots for the miniseries were eventually posted online.

Titles

Initial
Dark Dominion
The Good Guys
Warriors of Plasm (originally Plasm)

Second wave
Charlemagne
Dogs of War
Prudence & Caution
War Dancer

One-shots
The Birth of The Defiant Universe
Glory
Great Grimmax
The Origin of The Defiant Universe
Splatterball

Graphic novels
Warriors of Plasm – Home For the Holidays (officially titled as Warriors of Plasm Graphic Novel #1 in the comic's legal indicia)

Trade paperback
Warriors of Plasm: The Collected Edition (Feb. 1994) (reprints Warriors of Plasm #0–4 and Splatterball)
If you’re looking for home gym reviews, this Muorka Pair of Adjustable Barbell Stands Racks Bench Press Stands Rack Sturdy Steel Squat Dumbbell Racks is the cheapest price on the web we have found. Many good reviews already prove the quality of this product. The Muorka Pair of Adjustable Barbell Stands Racks Bench Press Stands Rack Sturdy Steel Squat Dumbbell Racks is equipped with a large number of features that make it a great product. The most sold product... Read More
Graphic scenes show Neda – her name means "the call" – walking with her father among demonstrators, then, separately, the moment she was shot, as well as attempts to save her life.
Hunter S. Thompson & His 1958 Vancouver Sun Application In 1958, the great Hunter S. Thompson applied for a reporter spot at the Vancouver Sun. The paper reprinted his cover letter, reminding us all that even the most original writers were once looking for work. Here’s more from the cover letter: “I’ve taken some writing courses from Columbia in my spare time, learned a hell of a lot about the newspaper business, and developed a healthy contempt for journalism as a profession. As far as I’m concerned, it’s a damned shame that a field as potentially dynamic and vital as journalism should be overrun with dullards, bums, and hacks, hag-ridden with myopia, apathy, and complacence, and generally stuck in a bog of stagnant mediocrity. If this is what you’re trying to get The Sun away from, then I think I’d like to work for you.” Last year, this GalleyCat editor took Thompson‘s novel, The Rum Diary, along on a vacation to Puerto Rico. The video essay embedded above traced Thompson’s footsteps around 1959, when he struggled as a reporter at a Puerto Rican newspaper–one year after his Vancouver application. (Via Bud Parr)
// Boost.Geometry (aka GGL, Generic Geometry Library) // Copyright (c) 2014-2015, Oracle and/or its affiliates. // Contributed and/or modified by Adam Wulkiewicz, on behalf of Oracle // Use, modification and distribution is subject to the Boost Software License, // Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #ifndef BOOST_GEOMETRY_ALGORITHMS_DETAIL_SIGNED_SIZE_TYPE_HPP #define BOOST_GEOMETRY_ALGORITHMS_DETAIL_SIGNED_SIZE_TYPE_HPP #include <cstddef> #include <boost/type_traits/make_signed.hpp> namespace boost { namespace geometry { typedef boost::make_signed<std::size_t>::type signed_size_type; }} // namespace boost::geometry #endif // BOOST_GEOMETRY_ALGORITHMS_DETAIL_SIGNED_SIZE_TYPE_HPP
1938 in music This is a list of notable events in music that took place in the year 1938. Specific locations 1938 in British music 1938 in Norwegian music Specific genres 1938 in country music 1938 in jazz Events January 16 Benny Goodman plays the first jazz concert at Carnegie Hall in New York City, considered a legitimisation of the genre. It is recorded live and issued in 1950 as The Famous 1938 Carnegie Hall Jazz Concert. Béla Bartók's Sonata for two pianos and percussion is premièred in Basel. First recording of Mahler's Symphony No. 9, a live performance by the Vienna Philharmonic conducted by Bruno Walter at the Musikverein, the same location, conductor and orchestra that had presented the première 26 years earlier, but now in the face of the Anschluss. May 12 – Arthur Honegger's oratorio Jeanne d'Arc au Bûcher is premièred in Basel, with Ida Rubinstein as Jeanne. June 5 – Glenn Gould plays in public for the first time at a church service held at the Business Men's Bible Class in Uxbridge, Ontario to a congregation of about two thousand people. September 22 – Anton Webern's String Quartet is premièred in Pittsfield, Massachusetts. October 5 – Ralph Vaughan Williams' Serenade to Music is premièred at the Royal Albert Hall in London to mark the 50th anniversary of conductor Henry Wood's first concert. October 31 – Sister Rosetta Tharpe makes her first recording. Composer Ralph Vaughan Williams begins an affair with Ursula Wood. Roy Acuff and the Crazy Tennesseans win a contract with the Grand Ole Opry. Pete Seeger drops out of college to begin his career as a folk singer. Jelly Roll Morton speaks, sings, and plays piano for an eight-hour Library of Congress recorded sound documentary produced by Alan Lomax. Fred Buscaglione and Leo Chiosso meet. The Andrews Sisters enjoy their first major hit with "Bei Mir Bist Du Schoen". John Serry Sr. appears with Shep Fields in Paramount Pictures film extravaganza The Big Broadcast of 1938. Roman Catholic hymnal Kirchenlied first published in Germany. Albums Released Carnegie Hall Jazz Concert – Benny Goodman Biggest hit songs The following songs achieved the highest chart positions in the limited set of charts available for 1938. Top hit recordings "A Gypsy Told Me" by Ted Weems And His Orchestra With Perry Como "A-Tisket, A-Tasket" by Ella Fitzgerald with Chick Webb "Begin the Beguine" by Artie Shaw "Bei Mir Bist Du Schoen" by The Andrews Sisters "Cry, Baby, Cry" by Larry Clinton "Don't Be That Way" by Benny Goodman "I Won't Tell a Soul (I Love You)" by Andy Kirk and his Twelve Clouds of Joy "I've Got a Pocketful of Dreams" by Bing Crosby "Jumpin' at the Woodside" by Count Basie "Music, Maestro, Please" by Tommy Dorsey "My Reverie" by Larry Clinton "Roll 'Em Pete" by Big Joe Turner And Pete Johnson "Thanks for the Memory" recorded by Shep Fields Bob Hope And Shirley Ross "Ti-Pi-Tin" by Horace Heidt "Walking in The Kings Highway" by The Carter Family Top Christmas hits "Don't Wait 'Till The Night Before Christmas" – Eddy Duchin and His Orchestra Top blues records "Sunnyland" – Sonny Boy Williamson Published popular music "And the Angels Sing" w. Johnny Mercer m. Ziggy Elman "At Long Last Love" w.m. Cole Porter "At The Roxy Music Hall" w. Lorenz Hart m. Richard Rodgers "Back Bay Shuffle" m. Artie Shaw & Teddy McRae "Be A Good Scout" w. Harold Adamson m. Jimmy McHugh Introduced by Deanna Durbin in the film That Certain Age "Big Noise From Winnetka" w.m. Ray Bauduc, Bob Crosby, Bob Haggart & Gil Rodin "The Biggest Aspidistra In The World" w.m. 
Thomas Connor, W. G. Haines & James S. Hancock "Bolero at the Savoy" w.m. Charles Carpenter, Gene Krupa & James Mundy "Boom!" w.m. E. Ray Goetz & Charles Trenet "Boomps-A-Daisy" w.m. Annette Mills "Boum !" w.m. Charles Trenet "Change Partners" w.m. Irving Berlin introduced by Fred Astaire in Carefree "Cherokee" m. Ray Noble "Cinderella, Stay In My Arms" w. Jimmy Kennedy m. Michael Carr "Colorado Sunset" w. L. Wolfe Gilbert m. Con Conrad "Daydreaming (All Night Long)" w. Johnny Mercer m. Harry Warren. Introduced by Rudy Vallee in the film Gold Diggers in Paris. "Dearest Love" w.m. Noël Coward "Deep In A Dream" w. Eddie DeLange m. Jimmy Van Heusen "Do You Wanna Jump Children?" w.m. Jimmy Van Heusen, Willie Bryant & Victor Selsman "Doin' the Jive" m.w. Glenn Miller & Chummy MacGregor "Don't Be That Way" w. Mitchell Parish m. Edgar Sampson & Benny Goodman "Don't Let That Moon Get Away" w. Johnny Burke m. James V. Monaco "Don't Worry 'Bout Me" w. Ted Koehler m. Rube Bloom "Double Trouble" w. Leo Robin m. Ralph Rainger & Richard A. Whiting "Exhibition Swing" m. Chalmers Wood "F.D.R. Jones" w.m. Harold Rome. Introduced by Rex Ingram in the revue Sing Out the News. "Falling In Love With Love" w. Lorenz Hart m. Richard Rodgers. Introduced by Muriel Angelus in the musical The Boys from Syracuse. Performed in the 1940 film by Allan Jones. "Ferdinand the Bull" w. Larry Morey m. Albert Hay Malotte. Performed by Sterling Holloway in the animated film of the same name. "Flat Foot Floogie (with a Floy Floy)" w.m. Slim Gaillard, Slam Stewart & Bud Green "Get Out of Town" w.m. Cole Porter from the musical Leave It to Me! "Hawaiian War Chant" w. (Eng) Ralph Freed m. Johnny Noble & Prince Leleiohaku "Heart And Soul" w. Frank Loesser m. Hoagy Carmichael "Hi-Yo Silver" Erickson, De Leath "Hold Tight – Hold Tight" w.m. Leonard Kent, Edward Robinson, Leonard Ware, Jerry Brandow & Willie Spotswood "Hong Kong Blues" w.m. Hoagy Carmichael "Hooray for Hollywood" w. Johnny Mercer m. Richard A. Whiting "I Can Dream, Can't I?" w. Irving Kahal m. Sammy Fain. Performed by Tamara in the 1938 musical Right This Way "I Hadn't Anyone Till You" w.m. Ray Noble "I Have Eyes" w. Leo Robin m. Ralph Rainger "I Let a Song Go Out of My Heart" w. Henry Nemo, John Redmond & Irving Mills m. Duke Ellington "I Love To Whistle" Harold Adamson, Jimmy McHugh "I Married An Angel" w. Lorenz Hart m. Richard Rodgers "I Must See Annie Tonight" w.m. Cliff Friend & Dave Franklin "I'll Be Seeing You" w. Irving Kahal m. Sammy Fain "I'll Tell the Man in the Street" w. Lorenz Hart m. Richard Rodgers. Introduced by Vivienne Segal and Walter Slezak in the musical I Married an Angel. Performed in the film version by Nelson Eddy. "I'm Gonna Lock My Heart" w. Jimmy Eaton m. Terry Shand "I'm In Love With Vienna" w. Oscar Hammerstein II m. Johann Strauss II "In A Little Toy Sailboat" Mandell, Littau "In My Little Red Book" w.m. Ray Bloch, Nat Simon & Al Stillman. "I've Got a Pocketful of Dreams" w. Johnny Burke m. James V. Monaco "Jeepers Creepers" w. Johnny Mercer m. Harry Warren. Introduced by Louis Armstrong in the film Going Places. "Joseph! Joseph!" w.(Eng) Sammy Cahn & Saul Chaplin m. Nellie Casman & Samuel Steinberg "Jumpin' at the Woodside" m. Count Basie "Just Let Me Look at You" w. Dorothy Fields m. Jerome Kern. Introduced by Irene Dunne in the film Joy of Living "Knees Up Mother Brown" w.m. Harris Weston & Bert Lee "Love Walked In" w. Ira Gershwin m. George Gershwin "March of the Bob Cats" m. The Bob Cats "Mister Crosby And Mister Mercer" w. 
Johnny Mercer m. Ed Gallagher & Al Shean "Moments Like This" w. Frank Loesser m. Burton Lane. Introduced by Florence George in the film College Swing. "Most Gentlemen Don't Like Love" w.m. Cole Porter. Introduced by Sophie Tucker in the musical Leave It to Me! "Music, Maestro, Please" w. Herb Magidson m. Allie Wrubel "My Heart Belongs to Daddy" w.m. Cole Porter. Introduced by Mary Martin in the musical Leave It to Me!. Miss Martin also performed it in the 1946 film Night and Day. Marilyn Monroe sang the song in the 1960 film Let's Make Love. "My Heart Is Taking Lessons" w. Johnny Burke m. James V. Monaco. Introduced by Bing Crosby in the film Doctor Rhythm "My Heaven On Earth" w. Charles Tobias m. Phil Baker & Samuel Pokrass. Introduced by Gertrude Niesen in the film Start Cheering "My Own" w. Harold Adamson m. Jimmy McHugh from the film That Certain Age "My Reverie" w.m. Larry Clinton "Nice People" w.m. Nat Mills & Fred Malcolm "Nightmare" m. Artie Shaw "Now it Can Be Told" w.m. Irving Berlin "Oh! Ma-Ma!" w. (Eng) Lew Brown & Rudy Vallee m. Paolo Citorello "One Day When We Were Young" w. Oscar Hammerstein II m. Johann Strauss II arr. Tiomkin "The One I Love Will Come Along Some Day" w. Gus Kahn m. Bronislaw Kaper & Walter Jurmann. Introduced by Allan Jones in the film Everybody Sing "Paradise In The Moonlight" w.m. Gene Autry & Fred Rose from the film Western Jamboree "Penny Serenade" w. Hal Halifax m. Melle Weersma "Please Be Kind" w.m. Sammy Cahn & Saul Chaplin "Ride, Tenderfoot, Ride" w. Johnny Mercer m. Richard A. Whiting "Rockin' The Town" w. Ted Koehler m. Johnny Green from the film Start Cheering "San Antonio Rose" m. Bob Wills "Says My Heart" w. Frank Loesser m. Burton Lane. Introduced by Harriet Hilliard with Harry Owens & his Orchestra in the film Cocoanut Grove "Sent for You Yesterday, and Here You Come Today" w.m. Count Basie, Eddie Durham & Jimmy Rushing "September Song" w. Maxwell Anderson m. Kurt Weill "Shadows On The Moon" w. Gus Kahn m. Sigmund Romberg from the film The Girl Of The Golden West "Sing for Your Supper" w. Lorenz Hart m. Richard Rodgers. Introduced by Marcy Westcott, Muriel Angelus and Wynn Murray in the musical The Boys from Syracuse. Performed in the 1940 film by Martha Raye. "Small Fry" w. Frank Loesser m. Hoagy Carmichael. Introduced by Bing Crosby, Fred MacMurray and Donald O'Connor in the film Sing You Sinners. "Sold American" m. Glenn Miller & Chummy MacGregor "Spring Is Here" w. Lorenz Hart m. Richard Rodgers. Introduced by Dennis King and Vivienne Segal in the musical I Married an Angel "Start Cheering" w. Milton Drake m. Ben Oakland. Introduced by Gertrude Niesen in the film Start Cheering. "The Stately Homes Of England" w.m. Noël Coward "Thanks for the Memory" w. Leo Robin m. Ralph Rainger "That Certain Age" w. Harold Adamson m. Jimmy McHugh from the film That Certain Age "This Can't Be Love" w. Lorenz Hart m. Richard Rodgers "This Is My Night To Dream" w. Johnny Burke m. James V. Monaco "Ti-Pi-Tin" w. (English) Raymond Leveen, w. (Spanish) María Grever, m. María Grever "Two Sleepy People" w. Frank Loesser m. Hoagy Carmichael "The Umbrella Man" w. James Cavanaugh m. Vincent Rose & Larry Stock "Undecided" w. Sid Robin m. Charlie Shavers "We're Off to See the Wizard" w. E. Y. Harburg m. Harold Arlen "When I Strut Away In My Cutaway" w.m. Jimmy Durante from the film Start Cheering "Where Are the Songs We Sung?" w.m. Noël Coward "Where the Dog Sits on the Tuckerbox" w.m. Jack O'Hagan "While a Cigarette Was Burning" w.m. 
Charles Kenny & Nick Kenny "Who Are We to Say? (Obey Your Heart)" w. Gus Kahn m. Sigmund Romberg from the film The Girl Of The Golden West "You Couldn't Be Cuter" w. Dorothy Fields m. Jerome Kern Introduced by Irene Dunne in the film Joy of Living "You Go To My Head" w. Haven Gillespie m. J. Fred Coots "You Leave Me Breathless" w. Ralph Freed m. Friedrich Hollaender. Introduced by Fred MacMurray in the film Cocoanut Grove. "You Must Have Been A Beautiful Baby" w. Johnny Mercer m. Harry Warren "You're A Sweet Little Headache" w. Leo Robin m. Ralph Rainger "You're As Pretty As A Picture" w. Harold Adamson m. Jimmy McHughfrom the film That Certain Age "You're What's The Matter With Me" w.m. Jimmy Kennedy and Michael Carr. Introduced by Harry Richman and Evelyn Dall in the film Kicking the Moon Around. Classical music Premieres Compositions Jean Absil – Concerto for Piano and Orchestra No. 1 Alan Bush – Piano Concerto, Op. 18, with baritone and male choir in last movement Aaron Copland – Billy the Kid (ballet) Hanns Eisler – Roman Cantata George Enescu – Orchestral Suite No. 3 "Villageoise" in D major, Op. 27 Hamilton Harty – The Children of Lir Roy Harris – Symphony No. 3 Herbert Howells – Hymnus Paradisi Janis Ivanovs – Symphony No. 3 Frank Martin – Sonata da chiesa Carl Orff – Carmina Burana Walter Piston – Symphony No. 1 Silvestre Revueltas – Sensemayá (Canto para matar una culebra [Chant for the Killing of a Snake]) Dmitri Shostakovich – String Quartet No. 1 Ernst Toch – Cantata of Bitter Herbs Heitor Villa-Lobos – String Quartet No. 6 (Quarteto brasileiro no. 2) Leó Weiner – Divertimento for Strings No. 2 Opera Jenő Ádám – Mária Veronika Paul Frederic Bowles – Denmark Vesey Paul Hindemith – Mathis der Maler Dmitri Kabalevsky – Colas Breugnon Jeronimas Kacinskas – Nonet Ernst Krenek – Karl V (composed 1931–33), Neues Deutsches Theater in Prague, 22 June 1938 Mark Lothar – Tailor Wibbel Douglas Stuart Moore – The Devil and Daniel Webster Jazz Musical theater The Boys From Syracuse (Richard Rodgers and Lorenz Hart) – Broadway production opened at the Alvin Theatre on November 23 and ran for 235 performances Great Lady Broadway production opened at the Majestic Theatre on December 1 and ran for only 20 performances Maritza aka Countess Maritza, London production opened at the Palace Theatre on July 6 The Fleet's Lit Up, London opened at the London Hippodrome and ran for 191 performances Hellzapoppin', Broadway revue opened at the 46th Street Theatre on September 22 and ran for 1404 performances I Married An Angel, Broadway production opened at the Sam S. Shubert Theatre on May 11 and ran for 338 performances Knickerbocker Holiday, Broadway production opened at the Ethel Barrymore Theatre on October 19 and ran for 168 performances Leave It to Me!, Broadway production opened at the Imperial Theatre on November 9 and ran for 291 performances Nine Sharp, London production opened at The Little Theatre on January 26 and ran for 405 performances Operette, London production opened at His Majesty's Theatre on March 16 Right This Way, Broadway production opened at the 46th Street Theatre on January 5 and ran for 14 performances Sing Out The News, Broadway revue opened at the Music Box Theatre on September 24 and ran for 105 performances These Foolish Things London revue opened at the Palladium on September 29 You Never Know, Broadway production opened at the Winter Garden Theatre on September 21 and ran for 78 performances Musical films The Big Broadcast of 1938, starring W.C. 
Fields, Bob Hope, Dorothy Lamour and Martha Raye <ref>[https://www.imdb.com/title/tt0029912/ The Big Broadcast of 1938 on imdb.com]</ref> Carefree, starring Fred Astaire and Ginger Rogers Champagnegaloppen, starring Svend Methling and Valdemar Møller. Cocoanut Grove, starring Fred MacMurray, Harriet Hilliard, Ben Blue and Eve Arden. Cowboy from Brooklyn, starring Dick Powell and Priscilla Lane Doctor Rhythm, starring Bing Crosby, Mary Carlisle and Beatrice Lillie. Dos amigos y un amor, directed by Lucas Demare Es leuchten die Sterne, starring Ernst Fritz Fürbringer and Fridtjof Mjøen Freshman Year, starring Constance Moore, William Lundigan and Dixie Dunbar. Directed by Frank McDonald. The Girl Of The Golden West, starring Jeanette MacDonald and Nelson Eddy Going Places',' starring Dick Powell, Anita Louise, Allen Jenkins and Ronald Reagan and featuring Louis Armstrong and Maxine Sullivan Gold Diggers in Paris, starring Rudy Vallee, Rosemary Lane, Hugh Herbert and Allen Jenkins. Directed by Ray Enright. The Great Waltz',' released November 4 starring Luise Rainer and Miliza Korjus. Oscar Hammerstein II contributed new English lyrics to the music of Johann Strauss IIHappy Landing, starring Sonja Henie, Don Ameche and Ethel Merman and featuring the Raymond Scott Quintet Hold That Co-ed, starring John Barrymore, George Murphy and Joan Davis Honeysuckle, starring Hugo del Carril and Libertad LamarqueIt's in the Air, starring George Formby, Polly Ward and Jack Hobbs. Directed by Anthony Kimmins. Jettatore, starring Tito Lusiardo, directed by Luis Bayon Herrera Joy of Living, starring Irene Dunne and Douglas Fairbanks, Jr. Kicking the Moon Around, starring Bert Ambrose, Evelyn Dall, Harry Richman and Florence Desmond. Kilómetro 111, starring Delia Garcés and Pepe Arias, directed by Mario Soffici La Route Enchantée, starring Charles Trenet, directed by Pierre Caron La Valentina starring Jorge Negrete and Esperanza Baur Love Finds Andy Hardy starring Mickey Rooney and Judy Garland Mad About Music, starring Deanna Durbin. Directed by Norman Taurog. My Irish Molly, directed by Alex Bryce, starring Binkie Stuart, Tom Burke and Maureen O'Hara My Lucky Star, starring Sonja Henie, Richard Greene, Joan Davis and Art Jarrett Napoli d'altri tempi, starring Vittorio De Sica, Emma Gramatica and Elisa Cegani. Outside of Paradise, starring Phil Regan and Penny SingletonRadio City Revels, released February 11, starring Bob Burns, Jack Oakie and Kenny Baker and featuring Jane Froman performing with Hal Kemp's orchestra. Romance in the Dark, starring Gladys Swarthout, John Boles, John Barrymore and Claire Dodd. Directed by H. C. Potter. Sally, Irene and Mary, starring Alice Faye, Tony Martin, Fred Allen, Jimmy Durante, Joan Davis and Marjorie Weaver Sing You Sinners, starring Bing Crosby, Fred MacMurray and Donald O'Connor. The Singing Cop, starring Keith Falkner, Marta Labarr, Ivy St Helier and Bobbie Comber Start Cheering, released March 3, starring Jimmy Durante, Gertrude Niesen and the Three Stooges. Sweethearts, starring Jeanette MacDonald and Nelson Eddy That Certain Age, released October 7, starring Deanna Durbin. 
That Certain Age, released October 7, starring Deanna Durbin. Songs by (lyrics) Harold Adamson and (music) Jimmy McHugh
Tropic Holiday, released July 1, starring Bob Burns, Dorothy Lamour, Ray Milland and Martha Raye
Volga-Volga, starring Lyubov Orlova, directed by Grigori Aleksandrov
We're Going to Be Rich, starring Gracie Fields, Victor McLaglen and Brian Donlevy

Births
January 6 – Adriano Celentano, singer-songwriter
January 11 – Narvel Felts, country singer
January 13
Daevid Allen, Australian musician (d. 2015)
Paavo Heininen, Finnish composer
Shivkumar Sharma, santoor player
January 14
Jack Jones, singer
Allen Toussaint, songwriter and record producer (d. 2015)
January 18 – Hargus "Pig" Robbins, session piano player
January 21 – Wolfman Jack, DJ (d. 1995)
January 25
Etta James, blues singer (d. 2012)
Vladimir Vysotsky, singer, songwriter, poet and actor (d. 1980)
February 11
Edith Mathis, Swiss operatic soprano
Bobby "Boris" Pickett, singer ("Monster Mash") (d. 2007)
February 16 – John Corigliano, composer
February 22 – Bobby Hendricks, R&B singer (The Drifters)
February 27
Mobarak Hossain Khan, surbahar player and musicologist (d. 2019)
Jake Thackray, singer-songwriter (d. 2002)
March 2
Simon Estes, operatic bass
Lawrence Payton, Motown tenor (The Four Tops) (d. 1997)
March 3 – Douglas Leedy, composer (d. 2015)
March 9 – Lill-Babs, pop singer (d. 2018)
March 12 – Dimitri Terzakis, composer
March 13
Hans-Joachim Hespos, composer
Jean-Claude Risset, composer (d. 2016)
March 18 – Charley Pride, country singer
March 25 – Hoyt Axton, country singer-songwriter and actor (d. 1999)
April 2 – Booker Little, jazz trumpeter and composer (d. 1961)
April 3 – Jeff Barry, songwriter
April 4 – Declan Mulligan, rock guitarist (The Beau Brummels)
April 7
Spencer Dryden, rock drummer (Jefferson Airplane, The Dinosaurs) (d. 2005)
Freddie Hubbard, jazz trumpeter (d. 2008)
April 13 – Frederic Rzewski, composer
April 19 – Jonathan Tunick, composer
April 26
Duane Eddy, guitarist
Maurice Williams, doo-wop vocalist (Maurice Williams and the Zodiacs)
April 29 – Klaus Voormann, rock guitarist, producer and sleeve designer (Manfred Mann)
May 4 – Tyrone Davis, singer (d. 2005)
May 10
Henry Fambrough, R&B vocalist (The Spinners)
Maxim Shostakovich, orchestral conductor
May 11 – Bruce Langhorne, guitarist (d. 2017)
May 13
Frankie Smith, African American doo-wop bass vocalist (The Monotones) (d. 2000)
Lucille Starr, French-Canadian singer
May 15 – Lenny Welch, singer
May 26 – Teresa Stratas, operatic soprano
May 27 – Elizabeth Harwood, operatic soprano (d. 1990)
May 28 – Prince Buster, ska musician (d. 2016)
June 9 – Charles Wuorinen, composer
June 13 – Gwynne Howell, opera singer
June 14 – Julie Felix, folk singer
June 15 – Jean-Claude Eloy, composer
June 20 – Mickie Most, record producer (d. 2003)
June 23 – Alan Vega, American rock singer, musician (Suicide) (d. 2016)
June 24
Edmund Falkiner, jazz saxophonist (d. 1997)
Edoardo Vianello, Italian singer and composer
June 26 – Billy Davis Jr., pop singer (The 5th Dimension)
July 1 – Pandit Hariprasad Chaurasia, bansuri player
July 4 – Bill Withers, singer-songwriter
July 5 – Ronnie Self, American singer-songwriter (d. 1981)
July 9 – Paul Chihara, American composer
July 14 – Tommy Vig, Hungarian composer, arranger and vibraphonist
July 17 – Stanley Bronstein (Elephant's Memory Band)
July 27 – Isabelle Aubret, singer
July 28 – George Cummings, rock guitarist and songwriter (Dr. Hook & The Medicine Show)
July 31 – Bonnie Brown (The Browns) (d. 2016)
August 8 – Jacques Hétu, composer (d. 2010)
August 13 – Dave "Baby" Cortez, pop keyboard player
August 23 – Roger Greenaway, singer-songwriter (David & Jonathan)
August 24
David Freiberg, rock musician (Quicksilver Messenger Service)
Mason Williams, guitarist and composer
August 26 – Jet Black (The Stranglers)
August 28 – Clem Cattini (The Tornados)
September 3 – Larry Grossman, composer of Broadway musicals
September 6 – Joan Tower, composer and singer
September 19 – Zygmunt Krauze, pianist and composer
September 21
Atli Heimir Sveinsson, composer (d. 2019)
Yuji Takahashi, composer
September 28 – Ben E. King, singer (d. 2015)
October 3 – Eddie Cochran, singer (d. 1960)
October 15
Marv Johnson, singer (d. 1993)
Fela Kuti, Afrobeat multi-instrumentalist (d. 1997)
October 16 – Nico, singer-songwriter, actress and model (d. 1988)
October 18 – Ronnie Bright, The Coasters (d. 2015)
November 2 – Jay Black (Jay and the Americans)
November 4 – Harry Elston (Friends Of Distinction)
November 6
Jim Pike (The Lettermen) (d. 2019)
P.J. Proby, singer
November 7 – Dee Clark, soul singer (d. 1990)
November 16 – Troy Seals, singer, songwriter
November 17 – Gordon Lightfoot, singer-songwriter
November 19 – Hank Medress (The Tokens) (d. 2007)
December 1 – Sandy Nelson, drummer
December 5 – J. J. Cale, singer-songwriter (d. 2013)
December 8 – Bernie Krause, bioacoustician
December 10 – Yuri Temirkanov, conductor
December 12 – Connie Francis, singer
December 18 – Chas Chandler, musician, record producer and manager (d. 1996)
December 20 – John Harris Harbison, composer
December 24 – Mesías Maiguashca, composer
December 28 – Charles Neville (The Neville Brothers) (d. 2018)
date unknown
Fanta Damba, jalimosolu singer
Abdul Jabbar, singer (d. 2017)
Deaths
January 19 – Rosa Mayreder, feminist writer, artist and musician, 79
January 20 – Nikolai Zhilyayev, musicologist, 56
January 29 – Carl Venth, violinist and composer, 77
February 4 – Dominique Heckmes, composer and music critic, 59
February 25 – Růžena Maturová, operatic contralto, 68
February 27 – Gianni Bettini, phonograph maker (born 1860)
March 2 – Ben Harney, ragtime composer & entertainer, 65
March 12 – Lyda Roberti, actress and singer, 31 (heart attack)
March 18 – Cyril Rootham, composer, 62
April 5 – Reine Davies, actress and singer, 51 (heart attack)
April 8 – Joe "King" Oliver, jazz trumpeter & band leader, 52
April 12 – Feodor Chaliapin, operatic bass, 65
April 18 – Richard Runciman Terry, musicologist, 72
May 7 – Papa Charlie Jackson, blues musician, 50
June 26 – James Weldon Johnson, US songwriter, author, diplomat and educationalist, 67
July 27 – James Thornton, English-born US songwriter and vaudeville comedian, 76
August 14 – Landon Ronald, pianist and composer, 65
August 16 – Robert Johnson, blues musician, 27 (suspected strychnine poisoning)
August 30 – James Scott, ragtime composer, 53
September 4 – Oreste Candi, violin-maker, 72
September 8 – Agustín Magaldi, tango singer, 39
September 12 – Mary Elizabeth Turner Salter, American soprano singer and composer, 82
September 28 – Con Conrad, songwriter, 47
October 22 – May Irwin, vaudeville star, 76
October 23 – Fred Barnes, music hall entertainer, 53 (tuberculosis)
October 27
Alma Gluck, soprano, 54 (liver failure)
Khadija Gayibova, Azerbaijani pianist, 45 (executed)
November 21 – Leopold Godowsky, pianist and composer, 68
December 10 – Mario Pilati, composer, 35
December 21 – James Milton Black, hymn-writer and choir-master, 82
date unknown
Minnie Egener, operatic mezzo-soprano (born 1881)
Attilio Salvaneschi, operatic tenor (born 1873)
probable – Oskar Böhme, trumpeter and composer (born 1870)

References

Category:20th century in music
Category:Music by year
{ "pile_set_name": "Wikipedia (en)" }
Knowledge-based authentication (KBA) has gained prominence as a user authentication method for electronic transactions. This paper presents a Bayesian network model of KBA grounded in probabilistic reasoning and information theory. The probabilistic semantics of the model parameters naturally lead to the definitions of two key KBA metrics—guessability and memorability. The statistical modeling approach allows parameter estimation using methods such as the maximum likelihood estimator (MLE). The information-theoretic view helps to derive the closed-form solutions to estimating the guessability and guessing entropy metrics. The results related to KBA metrics and the models under different attacking strategies and factoid distributions are unified under a game-theoretic framework that yields lower and upper bounds of optimal guessability. The paper also proposes a methodology for implementing a Bayesian network-based KBA system. Further, an empirical evaluation of the relative merits of two Bayesian network structures for KBA, the Naive Bayes (NB) and the Tree Augmented Naive Bayes (TAN), confirms the hypothesis that the TAN structure is superior in terms of authentication accuracy and error rates. The results of the theoretical analysis and the empirical study provide insights into the KBA design problem and establish a foundation for future research in the KBA area.
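For readers unfamiliar with the two metrics, the standard information-theoretic definitions they build on can be stated compactly. The LaTeX sketch below gives those textbook forms (Massey's guessing entropy and the success rate of an optimal sequential guesser), not the paper's own closed-form estimators, and it assumes the candidate factoid values are indexed in decreasing order of probability:

% Standard definitions, assuming p_1 >= p_2 >= ... >= p_m over m factoid values.
% Guessability after k guesses: the probability mass an optimal guesser
% covers with its k most likely candidates.
\[
  G_k \;=\; \sum_{i=1}^{k} p_i
\]
% Guessing entropy (Massey): the expected number of sequential guesses
% an optimal guesser needs before hitting the true value.
\[
  E[G] \;=\; \sum_{i=1}^{m} i \, p_i
\]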
{ "pile_set_name": "Pile-CC" }
<?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Qt 4.6: sslclient.cpp Example File (network/securesocketclient/sslclient.cpp)</title> <link href="classic.css" rel="stylesheet" type="text/css" /> </head> <body> <table border="0" cellpadding="0" cellspacing="0" width="100%"> <tr> <td align="left" valign="top" width="32"><a href="http://qt.nokia.com/"><img src="images/qt-logo.png" align="left" border="0" /></a></td> <td width="1">&nbsp;&nbsp;</td><td class="postheader" valign="center"><a href="index.html"><font color="#004faf">Home</font></a>&nbsp;&middot; <a href="classes.html"><font color="#004faf">All&nbsp;Classes</font></a>&nbsp;&middot; <a href="functions.html"><font color="#004faf">All&nbsp;Functions</font></a>&nbsp;&middot; <a href="overviews.html"><font color="#004faf">Overviews</font></a></td></tr></table><h1 class="title">sslclient.cpp Example File<br /><span class="small-subtitle">network/securesocketclient/sslclient.cpp</span> </h1> <pre><span class="comment"> /**************************************************************************** ** ** Copyright (C) 2009 Nokia Corporation and/or its subsidiary(-ies). ** All rights reserved. ** Contact: Nokia Corporation (qt-info@nokia.com) ** ** This file is part of the examples of the Qt Toolkit. ** ** $QT_BEGIN_LICENSE:LGPL$ ** Commercial Usage ** Licensees holding valid Qt Commercial licenses may use this file in ** accordance with the Qt Commercial License Agreement provided with the ** Software or, alternatively, in accordance with the terms contained in ** a written agreement between you and Nokia. ** ** GNU Lesser General Public License Usage ** Alternatively, this file may be used under the terms of the GNU Lesser ** General Public License version 2.1 as published by the Free Software ** Foundation and appearing in the file LICENSE.LGPL included in the ** packaging of this file. Please review the following information to ** ensure the GNU Lesser General Public License version 2.1 requirements ** will be met: http://www.gnu.org/licenses/old-licenses/lgpl-2.1.html. ** ** In addition, as a special exception, Nokia gives you certain additional ** rights. These rights are described in the Nokia Qt LGPL Exception ** version 1.1, included in the file LGPL_EXCEPTION.txt in this package. ** ** GNU General Public License Usage ** Alternatively, this file may be used under the terms of the GNU ** General Public License version 3.0 as published by the Free Software ** Foundation and appearing in the file LICENSE.GPL included in the ** packaging of this file. Please review the following information to ** ensure the GNU General Public License version 3.0 requirements will be ** met: http://www.gnu.org/copyleft/gpl.html. ** ** If you have questions regarding the use of this file, please contact ** Nokia at qt-info@nokia.com. 
** $QT_END_LICENSE$ ** ****************************************************************************/</span> #include &quot;certificateinfo.h&quot; #include &quot;sslclient.h&quot; #include &quot;ui_sslclient.h&quot; #include &quot;ui_sslerrors.h&quot; #include &lt;QtGui/QScrollBar&gt; #include &lt;QtGui/QStyle&gt; #include &lt;QtGui/QToolButton&gt; #include &lt;QtNetwork/QSslCipher&gt; SslClient::SslClient(QWidget *parent) : QWidget(parent), socket(0), padLock(0), executingDialog(false) { form = new Ui_Form; form-&gt;setupUi(this); form-&gt;hostNameEdit-&gt;setSelection(0, form-&gt;hostNameEdit-&gt;text().size()); form-&gt;sessionOutput-&gt;setHtml(tr(&quot;&amp;lt;not connected&amp;gt;&quot;)); connect(form-&gt;hostNameEdit, SIGNAL(textChanged(QString)), this, SLOT(updateEnabledState())); connect(form-&gt;connectButton, SIGNAL(clicked()), this, SLOT(secureConnect())); connect(form-&gt;sendButton, SIGNAL(clicked()), this, SLOT(sendData())); } SslClient::~SslClient() { delete form; } void SslClient::updateEnabledState() { bool unconnected = !socket || socket-&gt;state() == QAbstractSocket::UnconnectedState; form-&gt;hostNameEdit-&gt;setReadOnly(!unconnected); form-&gt;hostNameEdit-&gt;setFocusPolicy(unconnected ? Qt::StrongFocus : Qt::NoFocus); form-&gt;hostNameLabel-&gt;setEnabled(unconnected); form-&gt;portBox-&gt;setEnabled(unconnected); form-&gt;portLabel-&gt;setEnabled(unconnected); form-&gt;connectButton-&gt;setEnabled(unconnected &amp;&amp; !form-&gt;hostNameEdit-&gt;text().isEmpty()); bool connected = socket &amp;&amp; socket-&gt;state() == QAbstractSocket::ConnectedState; form-&gt;sessionBox-&gt;setEnabled(connected); form-&gt;sessionOutput-&gt;setEnabled(connected); form-&gt;sessionInput-&gt;setEnabled(connected); form-&gt;sessionInputLabel-&gt;setEnabled(connected); form-&gt;sendButton-&gt;setEnabled(connected); } void SslClient::secureConnect() { if (!socket) { socket = new QSslSocket(this); connect(socket, SIGNAL(stateChanged(QAbstractSocket::SocketState)), this, SLOT(socketStateChanged(QAbstractSocket::SocketState))); connect(socket, SIGNAL(encrypted()), this, SLOT(socketEncrypted())); connect(socket, SIGNAL(sslErrors(QList&lt;QSslError&gt;)), this, SLOT(sslErrors(QList&lt;QSslError&gt;))); connect(socket, SIGNAL(readyRead()), this, SLOT(socketReadyRead())); } socket-&gt;connectToHostEncrypted(form-&gt;hostNameEdit-&gt;text(), form-&gt;portBox-&gt;value()); updateEnabledState(); } void SslClient::socketStateChanged(QAbstractSocket::SocketState state) { if (executingDialog) return; updateEnabledState(); if (state == QAbstractSocket::UnconnectedState) { form-&gt;hostNameEdit-&gt;setPalette(QPalette()); form-&gt;hostNameEdit-&gt;setFocus(); form-&gt;cipherLabel-&gt;setText(tr(&quot;&lt;none&gt;&quot;)); if (padLock) padLock-&gt;hide(); socket-&gt;deleteLater(); socket = 0; } } void SslClient::socketEncrypted() { if (!socket) return; <span class="comment">// might have disconnected already</span> form-&gt;sessionOutput-&gt;clear(); form-&gt;sessionInput-&gt;setFocus(); QPalette palette; palette.setColor(QPalette::Base, QColor(255, 255, 192)); form-&gt;hostNameEdit-&gt;setPalette(palette); QSslCipher ciph = socket-&gt;sessionCipher(); QString cipher = QString(&quot;%1, %2 (%3/%4)&quot;).arg(ciph.authenticationMethod()) .arg(ciph.name()).arg(ciph.usedBits()).arg(ciph.supportedBits());; form-&gt;cipherLabel-&gt;setText(cipher); if (!padLock) { padLock = new QToolButton; padLock-&gt;setIcon(QIcon(&quot;:/encrypted.png&quot;)); #ifndef QT_NO_CURSOR 
padLock-&gt;setCursor(Qt::ArrowCursor); #endif padLock-&gt;setToolTip(tr(&quot;Display encryption details.&quot;)); int extent = form-&gt;hostNameEdit-&gt;height() - 2; padLock-&gt;resize(extent, extent); padLock-&gt;setSizePolicy(QSizePolicy::Fixed, QSizePolicy::Ignored); QHBoxLayout *layout = new QHBoxLayout(form-&gt;hostNameEdit); layout-&gt;setMargin(form-&gt;hostNameEdit-&gt;style()-&gt;pixelMetric(QStyle::PM_DefaultFrameWidth)); layout-&gt;setSpacing(0); layout-&gt;addStretch(); layout-&gt;addWidget(padLock); form-&gt;hostNameEdit-&gt;setLayout(layout); connect(padLock, SIGNAL(clicked()), this, SLOT(displayCertificateInfo())); } else { padLock-&gt;show(); } } void SslClient::socketReadyRead() { appendString(QString::fromUtf8(socket-&gt;readAll())); } void SslClient::sendData() { QString input = form-&gt;sessionInput-&gt;text(); appendString(input + &quot;\n&quot;); socket-&gt;write(input.toUtf8() + &quot;\r\n&quot;); form-&gt;sessionInput-&gt;clear(); } void SslClient::sslErrors(const QList&lt;QSslError&gt; &amp;errors) { QDialog errorDialog(this); Ui_SslErrors ui; ui.setupUi(&amp;errorDialog); connect(ui.certificateChainButton, SIGNAL(clicked()), this, SLOT(displayCertificateInfo())); foreach (const QSslError &amp;error, errors) ui.sslErrorList-&gt;addItem(error.errorString()); executingDialog = true; if (errorDialog.exec() == QDialog::Accepted) socket-&gt;ignoreSslErrors(); executingDialog = false; <span class="comment">// did the socket state change?</span> if (socket-&gt;state() != QAbstractSocket::ConnectedState) socketStateChanged(socket-&gt;state()); } void SslClient::displayCertificateInfo() { CertificateInfo *info = new CertificateInfo(this); info-&gt;setCertificateChain(socket-&gt;peerCertificateChain()); info-&gt;exec(); info-&gt;deleteLater(); } void SslClient::appendString(const QString &amp;line) { QTextCursor cursor(form-&gt;sessionOutput-&gt;textCursor()); cursor.movePosition(QTextCursor::End); cursor.insertText(line); form-&gt;sessionOutput-&gt;verticalScrollBar()-&gt;setValue(form-&gt;sessionOutput-&gt;verticalScrollBar()-&gt;maximum()); }</pre> <p /><address><hr /><div align="center"> <table width="100%" cellspacing="0" border="0"><tr class="address"> <td width="40%" align="left">Copyright &copy; 2009 Nokia Corporation and/or its subsidiary(-ies)</td> <td width="20%" align="center"><a href="trademarks.html">Trademarks</a></td> <td width="40%" align="right"><div align="right">Qt 4.6.0</div></td> </tr></table></div></address></body> </html>
{ "pile_set_name": "Github" }
Lance Banning

Lance Banning (January 24, 1942 – January 31, 2006) was an American historian who specialized in studying the politics of the United States' founding fathers. He taught mostly at the University of Kentucky.

Life
Banning was a native of Kansas City, Missouri. He graduated from the University of Missouri, and earned a master's and PhD from Washington University in St. Louis. He taught at Brown University and the University of Kentucky, served as the Leverhulme Visiting Professor at the University of Edinburgh, and in 1997 taught at the University of Groningen.

He was among the scholars commissioned by the newly formed Thomas Jefferson Heritage Society in 1999 to review materials about Thomas Jefferson and Sally Hemings, after the 1998 DNA study was published indicating a match between the Jefferson male line and a descendant of Eston Hemings, the youngest son. The commission concluded there was not sufficient evidence that Jefferson was the father of Hemings' children, and proposed instead his younger brother Randolph Jefferson, who had never seriously been put forward until after the 1998 DNA study.

Legacy and honors
1997 Merle Curti Award for his Sacred Fire of Liberty: James Madison
1997, Fulbright Fellowship at the University of Groningen in the Netherlands.
National Endowment for the Humanities fellowship
1979 Guggenheim Fellowship
National Humanities Center

Works

Criticism
"A review of "Negro President": Jefferson and the Slave Power, by Garry Wills", The Claremont Institute, August 31, 2004

References

Category:1942 births
Category:2006 deaths
Category:20th-century American historians
Category:University of Missouri alumni
Category:Washington University in St. Louis alumni
Category:Brown University faculty
Category:University of Kentucky faculty
Category:University of Groningen faculty
Category:Academics of the University of Edinburgh
Category:Guggenheim Fellows
Category:Writers from Kansas City, Missouri
{ "pile_set_name": "Wikipedia (en)" }
There are several errors in Table 2. Please refer to the corrected version of Table 2 here:

![](pone.a8dfd4ee-f4b7-443e-bf78-ebb0dab4e55b.g001){#pone-a8dfd4ee-f4b7-443e-bf78-ebb0dab4e55b-g001}

**Competing Interests:** No competing interests declared.
{ "pile_set_name": "PubMed Central" }
Q: Why are browsers not caching these static files? Here's an example JavaScript file request/response:

Request URL: http://local/index.js?time=1367958844038
Request Method: GET
Status Code: 200 OK

Request Headers
Accept: */*
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Connection: keep-alive
DNT: 1
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31

Response Headers
cache-control: max-age=31536000
content-encoding: gzip
content-type: application/javascript
expires: Wed, 07 May 2014 20:34:04 GMT
last-modified: Tue, 07 May 2013 20:34:04 GMT
transfer-encoding: chunked

As you can see, the server responds with cache-control, expires and even last-modified, but every time I reload, whether with F5 or by pressing Enter in the location bar, the request looks the same (I'd expect the browser to send If-Modified-Since, etc.). This happens in Chrome and Firefox at least.

A: Probably because the URL's time parameter changes with every request. Since the URL is different, the browser can't use the previously cached response.
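To illustrate the answer's point: if the time query parameter is removed so the URL stays stable between loads, the browser can reuse or revalidate its cached copy. A hypothetical exchange, with header values patterned on the capture above rather than taken from a real trace, would look like this:

Request URL: http://local/index.js
Request Method: GET
If-Modified-Since: Tue, 07 May 2013 20:34:04 GMT

Status Code: 304 Not Modified
cache-control: max-age=31536000

Within the max-age window a plain navigation may not hit the network at all; an F5 reload typically forces the conditional request shown here, which costs a round trip but no re-download.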
{ "pile_set_name": "StackExchange" }
Maintain stock and cleanliness of stations for all meal periods with necessary equipment including silverware, linen and condiments. Take care of guests during their breakfast experience and replenish as necessary. Transport all dirty tableware from dining room to dishwashing area for proper cleaning. Comply with attendance rules and be available to work on a regular basis. Perform any other job related duties as assigned.

About Us
Prestigious, but not pretentious, Hotel Republic stands out among downtown San Diego hotels. Appealing to the social discoverer and the savvy business traveler, Hotel Republic warmly extends to you a Southern California welcome. Hotel Republic serves as a vibrant new social hub with a signature business beat. Hotel Republic is a bold beacon for trendsetters, food lovers, and well-connected locals who cultivate network connections and well-earned time to play. We strive to deliver our guests warm, Southern California hospitality, and our associates are hired for being individuals who have a passion for taking care of our guests. We offer competitive benefits including health benefits, an employee meal per shift, discounted parking, incentives, and more.

Requirements
Qualifications
Knowledge of the appropriate table settings and service ware.
Ability to grasp, lift and/or carry, or otherwise transport up to 50 lbs with or without reasonable accommodations.
Ability to move or push goods on a hand cart/truck weighing a maximum of 150 lbs with or without reasonable accommodations.
Effective verbal and written communication skills.
Ability to adapt communication style to suit different audiences, such as effectively communicating with supervisors, coworkers, the public, etc.

About Us
HEI views our associates as our greatest resource, and we hire and develop the best. We have a culture of winning; we embrace change, thrive on challenge and always seek to surpass our personal bests. Defined by our unique business platform, our environment is fast-paced, expansive and ever-evolving, yet stable, consistent, enduring. It is a high-performance culture where innovation and entrepreneurial spirit are evident; initiative and results are rewarded. Our people know they're in on the ground floor of a company with all the cornerstones of success in place: experienced leaders, steady investors, a strong track record and long-term career potential. And in turn, each of our associates thrives because HEI's distinctly advantaged infrastructure creates a supportive, educational culture as well as unlimited opportunities for personal and professional success on every level. An engaging, high-energy environment, HEI presents a rare career opportunity: the chance to help shape a powerful leader in the hospitality industry.
{ "pile_set_name": "Pile-CC" }
The early development of nerve-muscle synapses is characterized by three major events: (a) the accumulation of junctional acetylcholine receptors (AChRs), (b) the localization of synaptic acetylcholinesterase (AChE), and (c) the elimination of extrajunctional AChRs. This last event is mechanistically distinct from the accumulation of junctional AChRs and seems to result from the suppression of AChR synthesis in extrajunctional regions of muscle cells caused by muscle contraction. The final result is a muscle cell with a highly elaborated apparatus specialized for efficient synaptic transmission. Experiments in this proposal are designed to understand at the molecular level the way in which neurons and muscle cells communicate to establish this apparatus. The proposal is divided into three major sections. The first will make use of immunocytochemistry and antibody microinjection to demonstrate a functional association between the presence of a newly discovered muscle component (a 37 kilodalton nonmyofibrillar tropomyosin) and the ability to cluster AChRs. This molecule was first identified by its absence from virally transformed muscle cells which are unable to cluster AChRs at all. The second section of this proposal describes similar techniques to probe other cytoskeletal elements involved in clustering and subsequent structural changes in the muscle cell. These studies are based on the observation from this laboratory that clustering causes a subset of organelles, including myonuclei and the Golgi apparatus, to assume a constant sub-cluster localization. The final section is a study of changes in the levels of AChRs and AChE caused by the increase in muscle cell Ca2+ which occurs during contraction. Also, experiments are designed to test the hypothesis that regional differences in the amount of Ca2+ released during contraction or in the levels of particular Ca2+-binding proteins underlie the ability of these cells to specify where particular macromolecules are synthesized. Ca2+ concentration will be measured using the Ca2+-sensitive fluorescent dye fura-2 and optical image processing. Ca2+-binding proteins will be investigated using biochemical and immunological techniques. These experiments should add considerably to our knowledge of how neurons influence properties of their target cells and may contribute to an understanding of a variety of developmental and neurological disorders. In addition, certain results may provide information useful in comprehending cell transformation.
{ "pile_set_name": "NIH ExPorter" }
Q: Android - Create tabs inside fragment without v4 support library

I am trying to create tabs without using the support library for my application, but I haven't found even a simple working example/tutorial anywhere. Since my entire application uses android.app.Fragment, I cannot change all of the existing fragments to android.support.v4.app.Fragment. Creating a tab within a fragment using the support library is a piece of cake, but I am wondering whether we can create one without android.support.v4.app.Fragment.

A: This can be achieved easily; follow these steps:

1. Copy the official source code of FragmentTabHost to your project, and change the following 3 class references:

android.support.v4.app.Fragment to android.app.Fragment
android.support.v4.app.FragmentManager to android.app.FragmentManager
android.support.v4.app.FragmentTransaction to android.app.FragmentTransaction

2. Use the modified version of FragmentTabHost in your project, and forget the android.support.v4 library.

My edited version: FragmentTabHost

Sample code: (shows the first tab; after a 5 sec delay you will see the second tab. It runs successfully in Android Studio.)

MainActivity.java

public class MainActivity extends Activity {

    FragmentTabHost mTabHost;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Wire the tab host to the platform FragmentManager (not the v4 one)
        mTabHost = (FragmentTabHost) findViewById(R.id.tabhost);
        mTabHost.setup(this, getFragmentManager(), R.id.container);

        // Add each tab
        mTabHost.addTab(mTabHost.newTabSpec("first").setIndicator("first"),
                BlankFragment1.class, null);
        mTabHost.addTab(mTabHost.newTabSpec("second").setIndicator("second"),
                BlankFragment2.class, null);

        // Switch to the second tab programmatically after 5 seconds
        mTabHost.postDelayed(new Runnable() {
            @Override
            public void run() {
                mTabHost.setCurrentTabByTag("second");
            }
        }, 5000);
    }
}

activity_main.xml

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="@dimen/activity_horizontal_margin"
    android:paddingRight="@dimen/activity_horizontal_margin"
    android:paddingTop="@dimen/activity_vertical_margin"
    android:paddingBottom="@dimen/activity_vertical_margin"
    tools:context=".MainActivity">

    <com.example.bill.myapplication.FragmentTabHost
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/tabhost">

        <LinearLayout
            android:orientation="vertical"
            android:layout_width="match_parent"
            android:layout_height="match_parent">

            <TabWidget
                android:id="@android:id/tabs"
                android:orientation="horizontal"
                android:layout_width="0dp"
                android:layout_height="0dp"/>

            <FrameLayout
                android:id="@android:id/tabcontent"
                android:layout_width="0dp"
                android:layout_height="0dp"/>

            <FrameLayout
                android:id="@+id/container"
                android:layout_width="match_parent"
                android:layout_height="0dp"
                android:layout_weight="1"/>
        </LinearLayout>
    </com.example.bill.myapplication.FragmentTabHost>
</RelativeLayout>
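The answer references BlankFragment1 and BlankFragment2 without showing them. A minimal hypothetical sketch of one is below (the layout resource name is an assumption); the key point is that it extends android.app.Fragment, not the v4 class:

import android.app.Fragment;   // platform Fragment, not android.support.v4
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

// Hypothetical tab content; R.layout.fragment_blank1 is an assumed layout resource.
public class BlankFragment1 extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        // Inflate this tab's layout without attaching it to the container;
        // FragmentTabHost takes care of adding the view itself.
        return inflater.inflate(R.layout.fragment_blank1, container, false);
    }
}

BlankFragment2 would be identical apart from the class name and layout.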
{ "pile_set_name": "StackExchange" }
[Device] Name=Logitech M720 DeviceMatch=usb:046d:405e;bluetooth:046d:b015 Driver=hidpp20
{ "pile_set_name": "Github" }
Q: Finding a closed form expression for the following recurrence relation.

Given that:
$S_0 = 0$
$S_1 = 3$
$S_n = S_{n-1} + 6S_{n-2}$ for $n ≥ 2$

What are the steps I would take to find the closed-form expression for this recurrence relation?

A: What you are being asked for is an equation that has $S_n$ on the left side and a formula in $n$ on the right side containing just familiar functions like polynomials and exponentials, and only finitely many of them. First, write it like this for simplicity: $S_0=0$, $S_1=3$, $S_n=S_{n-1}+6S_{n-2}$. Now, let's suppose there is a solution of the form $S_n=x^n$ for some $x$. Then the equation says $x^n=x^{n-1}+6x^{n-2}$, which simplifies to $x^2-x-6=0$, which undoubtedly you would be able to solve. You'll get two values of $x$ that work, let's call them $x_1$ and $x_2$, and then anything of the form $$\alpha x_1^n+\beta x_2^n$$ will work also, for any numbers $\alpha$ and $\beta$.
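Carrying the answer's method through for this particular recurrence gives a worked completion (just the steps the answer describes, written out):

% Characteristic equation and its roots:
\[
  x^2 - x - 6 = (x-3)(x+2) = 0 \quad\Rightarrow\quad x_1 = 3,\; x_2 = -2.
\]
% Fit the general solution S_n = \alpha 3^n + \beta(-2)^n to the initial values:
\[
  S_0 = \alpha + \beta = 0, \qquad S_1 = 3\alpha - 2\beta = 3
  \quad\Rightarrow\quad \alpha = \tfrac{3}{5},\; \beta = -\tfrac{3}{5}.
\]
% Closed form (check: S_2 = \tfrac{3}{5}(9-4) = 3 = S_1 + 6S_0):
\[
  S_n = \frac{3}{5}\left(3^n - (-2)^n\right).
\]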
{ "pile_set_name": "StackExchange" }
#if FEAT_COMPILER //#define DEBUG_COMPILE using System; using System.Threading; using ProtoBuf.Meta; using ProtoBuf.Serializers; #if FEAT_IKVM using Type = IKVM.Reflection.Type; using IKVM.Reflection; using IKVM.Reflection.Emit; #else using System.Reflection; using System.Reflection.Emit; #endif namespace ProtoBuf.Compiler { internal struct CodeLabel { public readonly Label Value; public readonly int Index; public CodeLabel(Label value, int index) { this.Value = value; this.Index = index; } } internal class CompilerContext { public TypeModel Model { get { return model; } } #if !(FX11 || FEAT_IKVM) readonly DynamicMethod method; static int next; #endif internal CodeLabel DefineLabel() { CodeLabel result = new CodeLabel(il.DefineLabel(), nextLabel++); return result; } internal void MarkLabel(CodeLabel label) { il.MarkLabel(label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine("#: " + label.Index); #endif } #if !(FX11 || FEAT_IKVM) public static ProtoSerializer BuildSerializer(IProtoSerializer head, TypeModel model) { Type type = head.ExpectedType; CompilerContext ctx = new CompilerContext(type, true, true, model); ctx.LoadValue(Local.InputValue); ctx.CastFromObject(type); ctx.WriteNullCheckedTail(type, head, null); ctx.Emit(OpCodes.Ret); return (ProtoSerializer)ctx.method.CreateDelegate( typeof(ProtoSerializer)); } /*public static ProtoCallback BuildCallback(IProtoTypeSerializer head) { Type type = head.ExpectedType; CompilerContext ctx = new CompilerContext(type, true, true); using (Local typedVal = new Local(ctx, type)) { ctx.LoadValue(Local.InputValue); ctx.CastFromObject(type); ctx.StoreValue(typedVal); CodeLabel[] jumpTable = new CodeLabel[4]; for(int i = 0 ; i < jumpTable.Length ; i++) { jumpTable[i] = ctx.DefineLabel(); } ctx.LoadReaderWriter(); ctx.Switch(jumpTable); ctx.Return(); for(int i = 0 ; i < jumpTable.Length ; i++) { ctx.MarkLabel(jumpTable[i]); if (head.HasCallbacks((TypeModel.CallbackType)i)) { head.EmitCallback(ctx, typedVal, (TypeModel.CallbackType)i); } ctx.Return(); } } ctx.Emit(OpCodes.Ret); return (ProtoCallback)ctx.method.CreateDelegate( typeof(ProtoCallback)); }*/ public static ProtoDeserializer BuildDeserializer(IProtoSerializer head, TypeModel model) { Type type = head.ExpectedType; CompilerContext ctx = new CompilerContext(type, false, true, model); using (Local typedVal = new Local(ctx, type)) { if (!type.IsValueType) { ctx.LoadValue(Local.InputValue); ctx.CastFromObject(type); ctx.StoreValue(typedVal); } else { ctx.LoadValue(Local.InputValue); CodeLabel notNull = ctx.DefineLabel(), endNull = ctx.DefineLabel(); ctx.BranchIfTrue(notNull, true); ctx.LoadAddress(typedVal, type); ctx.EmitCtor(type); ctx.Branch(endNull, true); ctx.MarkLabel(notNull); ctx.LoadValue(Local.InputValue); ctx.CastFromObject(type); ctx.StoreValue(typedVal); ctx.MarkLabel(endNull); } head.EmitRead(ctx, typedVal); if (head.ReturnsValue) { ctx.StoreValue(typedVal); } ctx.LoadValue(typedVal); ctx.CastToObject(type); } ctx.Emit(OpCodes.Ret); return (ProtoDeserializer)ctx.method.CreateDelegate( typeof(ProtoDeserializer)); } #endif internal void Return() { Emit(OpCodes.Ret); } static bool IsObject(Type type) { #if FEAT_IKVM return type.FullName == "System.Object"; #else return type == typeof(object); #endif } internal void CastToObject(Type type) { if(IsObject(type)) { } else if (type.IsValueType) { il.Emit(OpCodes.Box, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Box + ": " + type); #endif } else { il.Emit(OpCodes.Castclass, MapType(typeof(object))); #if DEBUG_COMPILE 
Helpers.DebugWriteLine(OpCodes.Castclass + ": " + type); #endif } } internal void CastFromObject(Type type) { if (IsObject(type)) { } else if (type.IsValueType) { switch (MetadataVersion) { case ILVersion.Net1: il.Emit(OpCodes.Unbox, type); il.Emit(OpCodes.Ldobj, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Unbox + ": " + type); Helpers.DebugWriteLine(OpCodes.Ldobj + ": " + type); #endif break; default: #if FX11 throw new NotSupportedException(); #else il.Emit(OpCodes.Unbox_Any, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Unbox_Any + ": " + type); #endif break; #endif } } else { il.Emit(OpCodes.Castclass, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Castclass + ": " + type); #endif } } private readonly bool isStatic; #if !SILVERLIGHT private readonly RuntimeTypeModel.SerializerPair[] methodPairs; internal MethodBuilder GetDedicatedMethod(int metaKey, bool read) { if (methodPairs == null) return null; // but if we *do* have pairs, we demand that we find a match... for (int i = 0; i < methodPairs.Length; i++ ) { if (methodPairs[i].MetaKey == metaKey) { return read ? methodPairs[i].Deserialize : methodPairs[i].Serialize; } } throw new ArgumentException("Meta-key not found", "metaKey"); } internal int MapMetaKeyToCompiledKey(int metaKey) { if (metaKey < 0 || methodPairs == null) return metaKey; // all meta, or a dummy/wildcard key for (int i = 0; i < methodPairs.Length; i++) { if (methodPairs[i].MetaKey == metaKey) return i; } throw new ArgumentException("Key could not be mapped: " + metaKey, "metaKey"); } #else internal int MapMetaKeyToCompiledKey(int metaKey) { return metaKey; } #endif private readonly bool nonPublic, isWriter; internal bool NonPublic { get { return nonPublic; } } #if !(SILVERLIGHT || PHONE8) private readonly string assemblyName; internal CompilerContext(ILGenerator il, bool isStatic, bool isWriter, RuntimeTypeModel.SerializerPair[] methodPairs, TypeModel model, ILVersion metadataVersion, string assemblyName) { if (il == null) throw new ArgumentNullException("il"); if (methodPairs == null) throw new ArgumentNullException("methodPairs"); if (model == null) throw new ArgumentNullException("model"); if (Helpers.IsNullOrEmpty(assemblyName)) throw new ArgumentNullException("assemblyName"); this.assemblyName = assemblyName; this.isStatic = isStatic; this.methodPairs = methodPairs; this.il = il; nonPublic = false; this.isWriter = isWriter; this.model = model; this.metadataVersion = metadataVersion; } #endif #if !(FX11 || FEAT_IKVM) private CompilerContext(Type associatedType, bool isWriter, bool isStatic, TypeModel model) { if (model == null) throw new ArgumentNullException("model"); #if FX11 metadataVersion = ILVersion.Net1; #else metadataVersion = ILVersion.Net2; #endif this.isStatic = isStatic; this.isWriter = isWriter; this.model = model; nonPublic = true; Type[] paramTypes; Type returnType; if (isWriter) { returnType = typeof(void); paramTypes = new Type[] { typeof(object), typeof(ProtoWriter) }; } else { returnType = typeof(object); paramTypes = new Type[] { typeof(object), typeof(ProtoReader) }; } int uniqueIdentifier; #if PLAT_NO_INTERLOCKED uniqueIdentifier = ++next; #else uniqueIdentifier = Interlocked.Increment(ref next); #endif method = new DynamicMethod("proto_" + uniqueIdentifier.ToString(), returnType, paramTypes, associatedType.IsInterface ? 
typeof(object) : associatedType, true); this.il = method.GetILGenerator(); } #endif private readonly ILGenerator il; private void Emit(OpCode opcode) { il.Emit(opcode); #if DEBUG_COMPILE Helpers.DebugWriteLine(opcode.ToString()); #endif } public void LoadValue(string value) { if (value == null) { LoadNullRef(); } else { il.Emit(OpCodes.Ldstr, value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldstr + ": " + value); #endif } } public void LoadValue(float value) { il.Emit(OpCodes.Ldc_R4, value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldc_R4 + ": " + value); #endif } public void LoadValue(double value) { il.Emit(OpCodes.Ldc_R8, value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldc_R8 + ": " + value); #endif } public void LoadValue(long value) { il.Emit(OpCodes.Ldc_I8, value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldc_I8 + ": " + value); #endif } public void LoadValue(int value) { switch (value) { case 0: Emit(OpCodes.Ldc_I4_0); break; case 1: Emit(OpCodes.Ldc_I4_1); break; case 2: Emit(OpCodes.Ldc_I4_2); break; case 3: Emit(OpCodes.Ldc_I4_3); break; case 4: Emit(OpCodes.Ldc_I4_4); break; case 5: Emit(OpCodes.Ldc_I4_5); break; case 6: Emit(OpCodes.Ldc_I4_6); break; case 7: Emit(OpCodes.Ldc_I4_7); break; case 8: Emit(OpCodes.Ldc_I4_8); break; case -1: Emit(OpCodes.Ldc_I4_M1); break; default: if (value >= -128 && value <= 127) { il.Emit(OpCodes.Ldc_I4_S, (sbyte)value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldc_I4_S + ": " + value); #endif } else { il.Emit(OpCodes.Ldc_I4, value); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldc_I4 + ": " + value); #endif } break; } } MutableList locals = new MutableList(); internal LocalBuilder GetFromPool(Type type) { int count = locals.Count; for (int i = 0; i < count; i++) { LocalBuilder item = (LocalBuilder)locals[i]; if (item != null && item.LocalType == type) { locals[i] = null; // remove from pool return item; } } LocalBuilder result = il.DeclareLocal(type); #if DEBUG_COMPILE Helpers.DebugWriteLine("$ " + result + ": " + type); #endif return result; } // internal void ReleaseToPool(LocalBuilder value) { int count = locals.Count; for (int i = 0; i < count; i++) { if (locals[i] == null) { locals[i] = value; // released into existing slot return; } } locals.Add(value); // create a new slot } public void LoadReaderWriter() { Emit(isStatic ? OpCodes.Ldarg_1 : OpCodes.Ldarg_2); } public void StoreValue(Local local) { if (local == Local.InputValue) { byte b = isStatic ? (byte) 0 : (byte)1; il.Emit(OpCodes.Starg_S, b); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Starg_S + ": $" + b); #endif } else { #if !FX11 switch (local.Value.LocalIndex) { case 0: Emit(OpCodes.Stloc_0); break; case 1: Emit(OpCodes.Stloc_1); break; case 2: Emit(OpCodes.Stloc_2); break; case 3: Emit(OpCodes.Stloc_3); break; default: #endif OpCode code = UseShortForm(local) ? OpCodes.Stloc_S : OpCodes.Stloc; il.Emit(code, local.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": $" + local.Value); #endif #if !FX11 break; } #endif } } public void LoadValue(Local local) { if (local == null) { /* nothing to do; top of stack */} else if (local == Local.InputValue) { Emit(isStatic ? OpCodes.Ldarg_0 : OpCodes.Ldarg_1); } else { #if !FX11 switch (local.Value.LocalIndex) { case 0: Emit(OpCodes.Ldloc_0); break; case 1: Emit(OpCodes.Ldloc_1); break; case 2: Emit(OpCodes.Ldloc_2); break; case 3: Emit(OpCodes.Ldloc_3); break; default: #endif OpCode code = UseShortForm(local) ? 
OpCodes.Ldloc_S : OpCodes.Ldloc; il.Emit(code, local.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": $" + local.Value); #endif #if !FX11 break; } #endif } } public Local GetLocalWithValue(Type type, Compiler.Local fromValue) { if (fromValue != null) { return fromValue.AsCopy(); } // need to store the value from the stack Local result = new Local(this, type); StoreValue(result); return result; } internal void EmitBasicRead(string methodName, Type expectedType) { MethodInfo method = MapType(typeof(ProtoReader)).GetMethod( methodName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance); if (method == null || method.ReturnType != expectedType || method.GetParameters().Length != 0) throw new ArgumentException("methodName"); LoadReaderWriter(); EmitCall(method); } internal void EmitBasicRead(Type helperType, string methodName, Type expectedType) { MethodInfo method = helperType.GetMethod( methodName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static); if (method == null || method.ReturnType != expectedType || method.GetParameters().Length != 1) throw new ArgumentException("methodName"); LoadReaderWriter(); EmitCall(method); } internal void EmitBasicWrite(string methodName, Compiler.Local fromValue) { if (Helpers.IsNullOrEmpty(methodName)) throw new ArgumentNullException("methodName"); LoadValue(fromValue); LoadReaderWriter(); EmitCall(GetWriterMethod(methodName)); } private MethodInfo GetWriterMethod(string methodName) { Type writerType = MapType(typeof(ProtoWriter)); MethodInfo[] methods = writerType.GetMethods(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static); foreach (MethodInfo method in methods) { if(method.Name != methodName) continue; ParameterInfo[] pis = method.GetParameters(); if (pis.Length == 2 && pis[1].ParameterType == writerType) return method; } throw new ArgumentException("No suitable method found for: " + methodName, "methodName"); } internal void EmitWrite(Type helperType, string methodName, Compiler.Local valueFrom) { if (Helpers.IsNullOrEmpty(methodName)) throw new ArgumentNullException("methodName"); MethodInfo method = helperType.GetMethod( methodName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static); if (method == null || method.ReturnType != MapType(typeof(void))) throw new ArgumentException("methodName"); LoadValue(valueFrom); LoadReaderWriter(); EmitCall(method); } public void EmitCall(MethodInfo method) { Helpers.DebugAssert(method != null); CheckAccessibility(method); OpCode opcode = (method.IsStatic || method.DeclaringType.IsValueType) ? OpCodes.Call : OpCodes.Callvirt; il.EmitCall(opcode, method, null); #if DEBUG_COMPILE Helpers.DebugWriteLine(opcode + ": " + method + " on " + method.DeclaringType); #endif } /// <summary> /// Pushes a null reference onto the stack. Note that this should only /// be used to return a null (or set a variable to null); for null-tests /// use BranchIfTrue / BranchIfFalse. 
/// </summary> public void LoadNullRef() { Emit(OpCodes.Ldnull); } private int nextLabel; internal void WriteNullCheckedTail(Type type, IProtoSerializer tail, Compiler.Local valueFrom) { if (type.IsValueType) { Type underlyingType = null; #if !FX11 underlyingType = Helpers.GetUnderlyingType(type); #endif if (underlyingType == null) { // not a nullable T; can invoke directly tail.EmitWrite(this, valueFrom); } else { // nullable T; check HasValue using (Compiler.Local valOrNull = GetLocalWithValue(type, valueFrom)) { LoadAddress(valOrNull, type); LoadValue(type.GetProperty("HasValue")); CodeLabel @end = DefineLabel(); BranchIfFalse(@end, false); LoadAddress(valOrNull, type); EmitCall(type.GetMethod("GetValueOrDefault", Helpers.EmptyTypes)); tail.EmitWrite(this, null); MarkLabel(@end); } } } else { // ref-type; do a null-check LoadValue(valueFrom); CopyValue(); CodeLabel hasVal = DefineLabel(), @end = DefineLabel(); BranchIfTrue(hasVal, true); DiscardValue(); Branch(@end, false); MarkLabel(hasVal); tail.EmitWrite(this, null); MarkLabel(@end); } } internal void ReadNullCheckedTail(Type type, IProtoSerializer tail, Compiler.Local valueFrom) { #if !FX11 Type underlyingType; if (type.IsValueType && (underlyingType = Helpers.GetUnderlyingType(type)) != null) { if(tail.RequiresOldValue) { // we expect the input value to be in valueFrom; need to unpack it from T? using (Local loc = GetLocalWithValue(type, valueFrom)) { LoadAddress(loc, type); EmitCall(type.GetMethod("GetValueOrDefault", Helpers.EmptyTypes)); } } else { Helpers.DebugAssert(valueFrom == null); // not expecting a valueFrom in this case } tail.EmitRead(this, null); // either unwrapped on the stack or not provided if (tail.ReturnsValue) { // now re-wrap the value EmitCtor(type, underlyingType); } return; } #endif // either a ref-type of a non-nullable struct; treat "as is", even if null // (the type-serializer will handle the null case; it needs to allow null // inputs to perform the correct type of subclass creation) tail.EmitRead(this, valueFrom); } public void EmitCtor(Type type) { EmitCtor(type, Helpers.EmptyTypes); } public void EmitCtor(ConstructorInfo ctor) { if (ctor == null) throw new ArgumentNullException("ctor"); CheckAccessibility(ctor); il.Emit(OpCodes.Newobj, ctor); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Newobj + ": " + ctor.DeclaringType); #endif } public void EmitCtor(Type type, params Type[] parameterTypes) { Helpers.DebugAssert(type != null); Helpers.DebugAssert(parameterTypes != null); if (type.IsValueType && parameterTypes.Length == 0) { il.Emit(OpCodes.Initobj, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Initobj + ": " + type); #endif } else { ConstructorInfo ctor = Helpers.GetConstructor(type, parameterTypes, true); if (ctor == null) throw new InvalidOperationException("No suitable constructor found for " + type.FullName); EmitCtor(ctor); } } #if !(PHONE8 || SILVERLIGHT || FX11) BasicList knownTrustedAssemblies, knownUntrustedAssemblies; #endif bool InternalsVisible(Assembly assembly) { #if PHONE8 || SILVERLIGHT || FX11 return false; #else if (Helpers.IsNullOrEmpty(assemblyName)) return false; if (knownTrustedAssemblies != null) { if (knownTrustedAssemblies.IndexOfReference(assembly) >= 0) { return true; } } if (knownUntrustedAssemblies != null) { if (knownUntrustedAssemblies.IndexOfReference(assembly) >= 0) { return false; } } bool isTrusted = false; Type attributeType = MapType(typeof(System.Runtime.CompilerServices.InternalsVisibleToAttribute)); if(attributeType == null) return false; #if 
FEAT_IKVM foreach (CustomAttributeData attrib in assembly.__GetCustomAttributes(attributeType, false)) { if (attrib.ConstructorArguments.Count == 1) { string privelegedAssembly = attrib.ConstructorArguments[0].Value as string; if (privelegedAssembly == assemblyName) { isTrusted = true; break; } } } #else foreach(System.Runtime.CompilerServices.InternalsVisibleToAttribute attrib in assembly.GetCustomAttributes(attributeType, false)) { if (attrib.AssemblyName == assemblyName) { isTrusted = true; break; } } #endif if (isTrusted) { if (knownTrustedAssemblies == null) knownTrustedAssemblies = new BasicList(); knownTrustedAssemblies.Add(assembly); } else { if (knownUntrustedAssemblies == null) knownUntrustedAssemblies = new BasicList(); knownUntrustedAssemblies.Add(assembly); } return isTrusted; #endif } internal void CheckAccessibility(MemberInfo member) { if (member == null) { throw new ArgumentNullException("member"); } if (!NonPublic) { bool isPublic; switch (member.MemberType) { case MemberTypes.TypeInfo: // top-level type isPublic = ((Type)member).IsPublic || InternalsVisible(((Type)member).Assembly); break; case MemberTypes.NestedType: Type type = (Type)member; do { isPublic = type.IsNestedPublic || type.IsPublic || ((type.DeclaringType == null || type.IsNestedAssembly || type.IsNestedFamORAssem) && InternalsVisible(type.Assembly)); } while (isPublic && (type = type.DeclaringType) != null); // ^^^ !type.IsNested, but not all runtimes have that break; case MemberTypes.Field: FieldInfo field = ((FieldInfo)member); isPublic = field.IsPublic || ((field.IsAssembly || field.IsFamilyOrAssembly) && InternalsVisible(field.DeclaringType.Assembly)); break; case MemberTypes.Constructor: ConstructorInfo ctor = ((ConstructorInfo)member); isPublic = ctor.IsPublic || ((ctor.IsAssembly || ctor.IsFamilyOrAssembly) && InternalsVisible(ctor.DeclaringType.Assembly)); break; case MemberTypes.Method: MethodInfo method = ((MethodInfo)member); isPublic = method.IsPublic || ((method.IsAssembly || method.IsFamilyOrAssembly) && InternalsVisible(method.DeclaringType.Assembly)); if (!isPublic) { // allow calls to TypeModel protected methods, and methods we are in the process of creating if( #if !SILVERLIGHT member is MethodBuilder || #endif member.DeclaringType == MapType(typeof(TypeModel))) isPublic = true; } break; case MemberTypes.Property: isPublic = true; // defer to get/set break; default: throw new NotSupportedException(member.MemberType.ToString()); } if (!isPublic) { switch (member.MemberType) { case MemberTypes.TypeInfo: case MemberTypes.NestedType: throw new InvalidOperationException("Non-public type cannot be used with full dll compilation: " + ((Type)member).FullName); default: throw new InvalidOperationException("Non-public member cannot be used with full dll compilation: " + member.DeclaringType.FullName + "." + member.Name); } } } } public void LoadValue(FieldInfo field) { CheckAccessibility(field); OpCode code = field.IsStatic ? 
OpCodes.Ldsfld : OpCodes.Ldfld; il.Emit(code, field); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + field + " on " + field.DeclaringType); #endif } #if FEAT_IKVM public void StoreValue(System.Reflection.FieldInfo field) { StoreValue(MapType(field.DeclaringType).GetField(field.Name, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)); } public void StoreValue(System.Reflection.PropertyInfo property) { StoreValue(MapType(property.DeclaringType).GetProperty(property.Name, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)); } public void LoadValue(System.Reflection.FieldInfo field) { LoadValue(MapType(field.DeclaringType).GetField(field.Name, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)); } public void LoadValue(System.Reflection.PropertyInfo property) { LoadValue(MapType(property.DeclaringType).GetProperty(property.Name, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)); } #endif public void StoreValue(FieldInfo field) { CheckAccessibility(field); OpCode code = field.IsStatic ? OpCodes.Stsfld : OpCodes.Stfld; il.Emit(code, field); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + field + " on " + field.DeclaringType); #endif } public void LoadValue(PropertyInfo property) { CheckAccessibility(property); EmitCall(Helpers.GetGetMethod(property, true, true)); } public void StoreValue(PropertyInfo property) { CheckAccessibility(property); EmitCall(Helpers.GetSetMethod(property, true, true)); } internal void EmitInstance() { if (isStatic) throw new InvalidOperationException(); Emit(OpCodes.Ldarg_0); } internal static void LoadValue(ILGenerator il, int value) { switch (value) { case 0: il.Emit(OpCodes.Ldc_I4_0); break; case 1: il.Emit(OpCodes.Ldc_I4_1); break; case 2: il.Emit(OpCodes.Ldc_I4_2); break; case 3: il.Emit(OpCodes.Ldc_I4_3); break; case 4: il.Emit(OpCodes.Ldc_I4_4); break; case 5: il.Emit(OpCodes.Ldc_I4_5); break; case 6: il.Emit(OpCodes.Ldc_I4_6); break; case 7: il.Emit(OpCodes.Ldc_I4_7); break; case 8: il.Emit(OpCodes.Ldc_I4_8); break; case -1: il.Emit(OpCodes.Ldc_I4_M1); break; default: il.Emit(OpCodes.Ldc_I4, value); break; } } private bool UseShortForm(Local local) { #if FX11 return locals.Count < 256; #else return local.Value.LocalIndex < 256; #endif } #if FEAT_IKVM internal void LoadAddress(Local local, System.Type type) { LoadAddress(local, MapType(type)); } #endif internal void LoadAddress(Local local, Type type) { if (type.IsValueType) { if (local == null) { throw new InvalidOperationException("Cannot load the address of a struct at the head of the stack"); } if (local == Local.InputValue) { il.Emit(OpCodes.Ldarga_S, (isStatic ? (byte)0 : (byte)1)); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldarga_S + ": $" + (isStatic ? 0 : 1)); #endif } else { OpCode code = UseShortForm(local) ? OpCodes.Ldloca_S : OpCodes.Ldloca; il.Emit(code, local.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": $" + local.Value); #endif } } else { // reference-type; already *is* the address; just load it LoadValue(local); } } internal void Branch(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Br_S : OpCodes.Br; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void BranchIfFalse(CodeLabel label, bool @short) { OpCode code = @short ? 
OpCodes.Brfalse_S : OpCodes.Brfalse; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void BranchIfTrue(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Brtrue_S : OpCodes.Brtrue; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void BranchIfEqual(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Beq_S : OpCodes.Beq; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void TestEqual() { Emit(OpCodes.Ceq); } internal void CopyValue() { Emit(OpCodes.Dup); } internal void BranchIfGreater(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Bgt_S : OpCodes.Bgt; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void BranchIfLess(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Blt_S : OpCodes.Blt; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal void DiscardValue() { Emit(OpCodes.Pop); } public void Subtract() { Emit(OpCodes.Sub); } public void Switch(CodeLabel[] jumpTable) { Label[] labels = new Label[jumpTable.Length]; #if DEBUG_COMPILE StringBuilder sb = new StringBuilder(OpCodes.Switch.ToString()); #endif for (int i = 0; i < labels.Length; i++) { labels[i] = jumpTable[i].Value; #if DEBUG_COMPILE sb.Append("; ").Append(i).Append("=>").Append(jumpTable[i].Index); #endif } il.Emit(OpCodes.Switch, labels); #if DEBUG_COMPILE Helpers.DebugWriteLine(sb.ToString()); #endif } internal void EndFinally() { il.EndExceptionBlock(); #if DEBUG_COMPILE Helpers.DebugWriteLine("EndExceptionBlock"); #endif } internal void BeginFinally() { il.BeginFinallyBlock(); #if DEBUG_COMPILE Helpers.DebugWriteLine("BeginFinallyBlock"); #endif } internal void EndTry(CodeLabel label, bool @short) { OpCode code = @short ? OpCodes.Leave_S : OpCodes.Leave; il.Emit(code, label.Value); #if DEBUG_COMPILE Helpers.DebugWriteLine(code + ": " + label.Index); #endif } internal CodeLabel BeginTry() { CodeLabel label = new CodeLabel(il.BeginExceptionBlock(), nextLabel++); #if DEBUG_COMPILE Helpers.DebugWriteLine("BeginExceptionBlock: " + label.Index); #endif return label; } #if !FX11 internal void Constrain(Type type) { il.Emit(OpCodes.Constrained, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Constrained + ": " + type); #endif } #endif internal void TryCast(Type type) { il.Emit(OpCodes.Isinst, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Isinst + ": " + type); #endif } internal void Cast(Type type) { il.Emit(OpCodes.Castclass, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Castclass + ": " + type); #endif } public IDisposable Using(Local local) { return new UsingBlock(this, local); } private class UsingBlock : IDisposable{ private Local local; CompilerContext ctx; CodeLabel label; /// <summary> /// Creates a new "using" block (equivalent) around a variable; /// the variable must exist, and note that (unlike in C#) it is /// the variables *final* value that gets disposed. If you need /// *original* disposal, copy your variable first. /// /// It is the callers responsibility to ensure that the variable's /// scope fully-encapsulates the "using"; if not, the variable /// may be re-used (and thus re-assigned) unexpectedly. 
/// </summary> public UsingBlock(CompilerContext ctx, Local local) { if (ctx == null) throw new ArgumentNullException("ctx"); if (local == null) throw new ArgumentNullException("local"); Type type = local.Type; // check if **never** disposable if ((type.IsValueType || type.IsSealed) && !ctx.MapType(typeof(IDisposable)).IsAssignableFrom(type)) { return; // nothing to do! easiest "using" block ever // (note that C# wouldn't allow this as a "using" block, // but we'll be generous and simply not do anything) } this.local = local; this.ctx = ctx; label = ctx.BeginTry(); } public void Dispose() { if (local == null || ctx == null) return; ctx.EndTry(label, false); ctx.BeginFinally(); Type disposableType = ctx.MapType(typeof (IDisposable)); MethodInfo dispose = disposableType.GetMethod("Dispose"); Type type = local.Type; // remember that we've already (in the .ctor) excluded the case // where it *cannot* be disposable if (type.IsValueType) { ctx.LoadAddress(local, type); switch (ctx.MetadataVersion) { case ILVersion.Net1: ctx.LoadValue(local); ctx.CastToObject(type); break; default: #if FX11 throw new NotSupportedException(); #else ctx.Constrain(type); break; #endif } ctx.EmitCall(dispose); } else { Compiler.CodeLabel @null = ctx.DefineLabel(); if (disposableType.IsAssignableFrom(type)) { // *known* to be IDisposable; just needs a null-check ctx.LoadValue(local); ctx.BranchIfFalse(@null, true); ctx.LoadAddress(local, type); } else { // *could* be IDisposable; test via "as" using (Compiler.Local disp = new Compiler.Local(ctx, disposableType)) { ctx.LoadValue(local); ctx.TryCast(disposableType); ctx.CopyValue(); ctx.StoreValue(disp); ctx.BranchIfFalse(@null, true); ctx.LoadAddress(disp, disposableType); } } ctx.EmitCall(dispose); ctx.MarkLabel(@null); } ctx.EndFinally(); this.local = null; this.ctx = null; label = new CodeLabel(); // default } } internal void Add() { Emit(OpCodes.Add); } internal void LoadLength(Local arr, bool zeroIfNull) { Helpers.DebugAssert(arr.Type.IsArray && arr.Type.GetArrayRank() == 1); if (zeroIfNull) { Compiler.CodeLabel notNull = DefineLabel(), done = DefineLabel(); LoadValue(arr); CopyValue(); // optimised for non-null case BranchIfTrue(notNull, true); DiscardValue(); LoadValue(0); Branch(done, true); MarkLabel(notNull); Emit(OpCodes.Ldlen); Emit(OpCodes.Conv_I4); MarkLabel(done); } else { LoadValue(arr); Emit(OpCodes.Ldlen); Emit(OpCodes.Conv_I4); } } internal void CreateArray(Type elementType, Local length) { LoadValue(length); il.Emit(OpCodes.Newarr, elementType); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Newarr + ": " + elementType); #endif } internal void LoadArrayValue(Local arr, Local i) { Type type = arr.Type; Helpers.DebugAssert(type.IsArray && arr.Type.GetArrayRank() == 1); type = type.GetElementType(); Helpers.DebugAssert(type != null, "Not an array: " + arr.Type.FullName); LoadValue(arr); LoadValue(i); switch(Helpers.GetTypeCode(type)) { case ProtoTypeCode.SByte: Emit(OpCodes.Ldelem_I1); break; case ProtoTypeCode.Int16: Emit(OpCodes.Ldelem_I2); break; case ProtoTypeCode.Int32: Emit(OpCodes.Ldelem_I4); break; case ProtoTypeCode.Int64: Emit(OpCodes.Ldelem_I8); break; case ProtoTypeCode.Byte: Emit(OpCodes.Ldelem_U1); break; case ProtoTypeCode.UInt16: Emit(OpCodes.Ldelem_U2); break; case ProtoTypeCode.UInt32: Emit(OpCodes.Ldelem_U4); break; case ProtoTypeCode.UInt64: Emit(OpCodes.Ldelem_I8); break; // odd, but this is what C# does... 
case ProtoTypeCode.Single: Emit(OpCodes.Ldelem_R4); break; case ProtoTypeCode.Double: Emit(OpCodes.Ldelem_R8); break; default: if (type.IsValueType) { il.Emit(OpCodes.Ldelema, type); il.Emit(OpCodes.Ldobj, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldelema + ": " + type); Helpers.DebugWriteLine(OpCodes.Ldobj + ": " + type); #endif } else { Emit(OpCodes.Ldelem_Ref); } break; } } internal void LoadValue(Type type) { il.Emit(OpCodes.Ldtoken, type); #if DEBUG_COMPILE Helpers.DebugWriteLine(OpCodes.Ldtoken + ": " + type); #endif EmitCall(MapType(typeof(System.Type)).GetMethod("GetTypeFromHandle")); } internal void ConvertToInt32(ProtoTypeCode typeCode, bool uint32Overflow) { switch (typeCode) { case ProtoTypeCode.Byte: case ProtoTypeCode.SByte: case ProtoTypeCode.Int16: case ProtoTypeCode.UInt16: Emit(OpCodes.Conv_I4); break; case ProtoTypeCode.Int32: break; case ProtoTypeCode.Int64: Emit(OpCodes.Conv_Ovf_I4); break; case ProtoTypeCode.UInt32: Emit(uint32Overflow ? OpCodes.Conv_Ovf_I4_Un : OpCodes.Conv_Ovf_I4); break; case ProtoTypeCode.UInt64: Emit(OpCodes.Conv_Ovf_I4_Un); break; default: throw new InvalidOperationException("ConvertToInt32 not implemented for: " + typeCode); } } internal void ConvertFromInt32(ProtoTypeCode typeCode, bool uint32Overflow) { switch (typeCode) { case ProtoTypeCode.SByte: Emit(OpCodes.Conv_Ovf_I1); break; case ProtoTypeCode.Byte: Emit(OpCodes.Conv_Ovf_U1); break; case ProtoTypeCode.Int16: Emit(OpCodes.Conv_Ovf_I2); break; case ProtoTypeCode.UInt16: Emit(OpCodes.Conv_Ovf_U2); break; case ProtoTypeCode.Int32: break; case ProtoTypeCode.UInt32: Emit(uint32Overflow ? OpCodes.Conv_Ovf_U4 : OpCodes.Conv_U4); break; case ProtoTypeCode.Int64: Emit(OpCodes.Conv_I8); break; case ProtoTypeCode.UInt64: Emit(OpCodes.Conv_U8); break; default: throw new InvalidOperationException(); } } internal void LoadValue(decimal value) { if (value == 0M) { LoadValue(typeof(decimal).GetField("Zero")); } else { int[] bits = decimal.GetBits(value); LoadValue(bits[0]); // lo LoadValue(bits[1]); // mid LoadValue(bits[2]); // hi LoadValue((int)(((uint)bits[3]) >> 31)); // isNegative (bool, but int for CLI purposes) LoadValue((bits[3] >> 16) & 0xFF); // scale (byte, but int for CLI purposes) EmitCtor(MapType(typeof(decimal)), new Type[] { MapType(typeof(int)), MapType(typeof(int)), MapType(typeof(int)), MapType(typeof(bool)), MapType(typeof(byte)) }); } } internal void LoadValue(Guid value) { if (value == Guid.Empty) { LoadValue(typeof(Guid).GetField("Empty")); } else { // note we're adding lots of shorts/bytes here - but at the IL level they are I4, not I1/I2 (which barely exist) byte[] bytes = value.ToByteArray(); int i = (bytes[0]) | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24); LoadValue(i); short s = (short)((bytes[4]) | (bytes[5] << 8)); LoadValue(s); s = (short)((bytes[6]) | (bytes[7] << 8)); LoadValue(s); for (i = 8; i <= 15; i++) { LoadValue(bytes[i]); } EmitCtor(MapType(typeof(Guid)), new Type[] { MapType(typeof(int)), MapType(typeof(short)), MapType(typeof(short)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)), MapType(typeof(byte)) }); } } internal void LoadValue(bool value) { Emit(value ? OpCodes.Ldc_I4_1 : OpCodes.Ldc_I4_0); } internal void LoadSerializationContext() { LoadReaderWriter(); LoadValue((isWriter ? 
typeof(ProtoWriter) : typeof(ProtoReader)).GetProperty("Context")); } private readonly TypeModel model; internal Type MapType(System.Type type) { return model.MapType(type); } private readonly ILVersion metadataVersion; public ILVersion MetadataVersion { get { return metadataVersion; } } public enum ILVersion { Net1, Net2 } internal bool AllowInternal(PropertyInfo property) { return nonPublic ? true : InternalsVisible(property.DeclaringType.Assembly); } } } #endif
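A standalone sketch for context (my addition; it is not part of the library source above): the same short-form integer-load selection implemented by LoadValue(ILGenerator, int) can be exercised end-to-end by emitting a tiny DynamicMethod. Short forms such as Ldc_I4_0..Ldc_I4_8 and Ldc_I4_S exist purely to shrink the emitted IL; the 5-byte Ldc_I4 always works.

using System;
using System.Reflection.Emit;

static class LdcDemo
{
    // Mirrors the dispatch in LoadValue above: pick the smallest opcode
    // encoding that can represent the constant.
    static void LoadInt32(ILGenerator il, int value)
    {
        switch (value)
        {
            case -1: il.Emit(OpCodes.Ldc_I4_M1); break;
            case 0: il.Emit(OpCodes.Ldc_I4_0); break;
            case 1: il.Emit(OpCodes.Ldc_I4_1); break;
            default:
                if (value >= sbyte.MinValue && value <= sbyte.MaxValue)
                    il.Emit(OpCodes.Ldc_I4_S, (sbyte)value); // 2-byte form
                else
                    il.Emit(OpCodes.Ldc_I4, value);          // 5-byte form
                break;
        }
    }

    static void Main()
    {
        var dm = new DynamicMethod("ReturnConst", typeof(int), Type.EmptyTypes);
        ILGenerator il = dm.GetILGenerator();
        LoadInt32(il, 42);
        il.Emit(OpCodes.Ret);
        var f = (Func<int>)dm.CreateDelegate(typeof(Func<int>));
        Console.WriteLine(f()); // prints 42
    }
}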
{ "pile_set_name": "Github" }
2015–16 Scottish Professional Football League

Statistics of the Scottish Professional Football League in the 2015–16 season.

Scottish Premiership

Scottish Championship

Scottish League One

Scottish League Two

Award winners

See also
2015–16 in Scottish football

References

Category:Scottish Professional Football League seasons
{ "pile_set_name": "Wikipedia (en)" }
Q: Bash Functions - Create a variable with parameter's label I'm trying to create a function which will create a variable with the tag $1 (first param) and with the value $3 (third param). Here is my code: function var { if [ "$2" = "=" ]; then $1 = $3 fi } Now, I believe the reason this doesn't work is because the way I wrote this function makes bash try to overwrite the value of $1 with that of $3. Is there a work-around for this? This is what I'm aiming to do: var i = 1 var greeting = "hello" and both of these variables would be assigned their value. A: Don't use eval, use declare with the -g flag (create global variables) function var { if [ "$2" = "=" ]; then declare -g "$1=$3" fi } var foo = bar echo $foo bar
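A minimal alternative sketch (my addition, not from the answer above): printf -v writes straight into the variable named by its first argument, so it also avoids eval. It assumes Bash 3.1+ (where printf -v exists), whereas declare -g needs Bash 4.2+.

function var {
    if [ "$2" = "=" ]; then
        # printf -v assigns to the variable named by $1; since nothing is
        # declared local here, the assignment is visible to the caller.
        printf -v "$1" '%s' "$3"
    fi
}

var greeting = "hello"
echo "$greeting"   # prints: hello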
{ "pile_set_name": "StackExchange" }
<!DOCTYPE html>
<html>
<body>

<p>The following button shows an alert box.</p>
<button onclick="myAlertFunction()">Click me</button>
<br>

<p>The following button shows a confirm box.</p>
<button onclick="myConfirmFunction()">Click me</button>
<br>

<p>The following button shows a prompt box.</p>
<button onclick="myPromptFunction()">Click me</button>
<br>

<p id="demo"></p>

<script>
function myAlertFunction() {
  alert('Custom dialog for JS Alert functions');
}

function myConfirmFunction() {
  confirm("Press a button!");
}

function myPromptFunction() {
  var person = prompt("Please enter your name", "Harry Potter");
  if (person != null) {
    document.getElementById("demo").innerHTML =
      "Hello " + person + "! How are you today?";
  }
}
</script>

</body>
</html>
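One extension worth noting (a sketch of mine, not part of the original page): confirm() returns a boolean, which the example above discards. A variant that actually uses the result could read:

<script>
function myConfirmFunction() {
  // confirm() returns true for OK and false for Cancel.
  var ok = confirm("Press a button!");
  document.getElementById("demo").innerHTML =
    ok ? "You pressed OK!" : "You pressed Cancel.";
}
</script>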
{ "pile_set_name": "Github" }
1 (e) 1/32 (f) -0.2 d Which is the third smallest value? (a) -2.13 (b) 2/27 (c) 0.5 (d) -3 b Which is the biggest value? (a) 463 (b) -3 (c) 2/3 a Which is the third smallest value? (a) -1/2 (b) 0.4 (c) -0.09 (d) 2 (e) -0.42 c What is the third biggest value in -4, 2/5, -11, -7, -0.5, -2/5? -0.5 Which is the smallest value? (a) -2.1 (b) -2 (c) 4/55 (d) 3 a What is the biggest value in 0.7, 1/6, 4, 0? 4 Which is the biggest value? (a) 4 (b) -2701 (c) -5 a Which is the third biggest value? (a) 0.3 (b) -1313 (c) -37 (d) -1/8 c Which is the biggest value? (a) -1 (b) -2 (c) 2/29 (d) -71 (e) -3/8 (f) -0.4 c What is the fourth smallest value in -0.4, 0.5, 0.4, 555, 44? 44 What is the smallest value in 2/95, -0.027, 6/5? -0.027 What is the second smallest value in 187, 5, 2? 5 Which is the third biggest value? (a) 5 (b) -2 (c) -5 (d) 106 (e) 1 e Which is the third smallest value? (a) -4 (b) 0.19 (c) 0.02 (d) -5/4 (e) -3 (f) -2 f Which is the biggest value? (a) -43 (b) 5 (c) -3/16 b What is the third smallest value in -347, -2, 93/2? 93/2 Which is the fifth biggest value? (a) -2/19 (b) 14/197 (c) -14/9 (d) 5 (e) 0.3 c Which is the fourth biggest value? (a) 19/3 (b) -2/21 (c) 1 (d) 1/8 (e) -25 (f) -2 b Which is the fourth biggest value? (a) 0.3 (b) 101 (c) -0.5 (d) -813/2 d Which is the fifth biggest value? (a) -92 (b) 5 (c) 2/115 (d) -62 (e) -0.1 a Which is the fourth biggest value? (a) -2 (b) 0.6 (c) 6 (d) 5 (e) 0.197 (f) -0.07 e What is the third smallest value in 2/63, 0.1, 6, 4, -18/23, -0.5? 2/63 Which is the second smallest value? (a) -3/8 (b) 2 (c) 0 (d) -5 (e) -0.9 e Which is the fifth smallest value? (a) 0.3 (b) 2/31 (c) 3.19 (d) -3 (e) -0.4 c What is the third biggest value in -5, 3/7, 10499, -0.14? -0.14 What is the third smallest value in 2/17, -5, 2/21, 3, -1/112, -0.0025? -0.0025 What is the fourth smallest value in 9579, -4, -5, 4? 9579 Which is the third smallest value? (a) 85 (b) -4 (c) 2 (d) 94 (e) -3 c Which is the second biggest value? (a) 7069 (b) 2 (c) -2 b What is the second biggest value in 0.4, 2, 11.545, -3? 2 What is the biggest value in 5, -12519, -3? 5 Which is the fourth biggest value? (a) -0.05 (b) -8 (c) -1 (d) 3 (e) 1/5 (f) -14/11 c Which is the fifth smallest value? (a) -0.3 (b) -1/4 (c) 3 (d) -1 (e) 4 (f) 1/4 c What is the third smallest value in -414, -0.22, -0.6, -3? -0.6 What is the biggest value in 4/7, 2, 1/72, -4, 1/5? 2 Which is the third biggest value? (a) -2/13 (b) 3.6 (c) 4 (d) 36 b Which is the third biggest value? (a) -2/3 (b) -0.1 (c) 5 (d) -2005 a What is the smallest value in -0.3, 19/5, -182? -182 Which is the fourth smallest value? (a) 0.4 (b) -1/30 (c) 4 (d) -56 (e) -0.3 a Which is the fifth smallest value? (a) 15/14 (b) -5 (c) -0.2 (d) 0.1 (e) -4 (f) 0 d What is the second biggest value in -1695, -2, 1/19? -2 Which is the fourth smallest value? (a) 101 (b) 6 (c) 2/7 (d) -0.4 a Which is the second smallest value? (a) -3 (b) 1.2 (c) 31 (d) 0.156 d What is the second biggest value in -0.04, -39/8, 0.09? -0.04 What is the second smallest value in -1.18, 22/9, 2/5? 2/5 What is the second smallest value in -0.4, -17003, 1/4, -6, 0.1, 3? -6 Which is the fourth smallest value? (a) -2.34 (b) -9 (c) 3 (d) -14 c What is the biggest value in -2, -3, 3, 4, 16, -0.2? 16 What is the fifth smallest value in -1, -0.2, -4, 12/313, 4? 4 What is the second smallest value in -3647, 5, -5? -5 Which is the second biggest value? (a) 2 (b) 62 (c) -0.1 (d) -3 (e) 5 e Which is the second smallest value? 
(a) 1/6 (b) -15 (c) -880 (d) 2 (e) -0.02 b What is the third biggest value in -0.1, 2, 4, 0.2, 50, -0.0495? 2 Which is the third smallest value? (a) 0.2 (b) -0.6 (c) 0 (d) 2 (e) 21 a Which is the third smallest value? (a) -4 (b) -0.4 (c) 56 (d) -792 (e) 5 b What is the smallest value in -0.13, 3/259, -5? -5 What is the second smallest value in 0, -11, 10.5, 1, -0.2? -0.2 What is the third biggest value in 271, 0.5, -3, 0.1? 0.1 What is the fourth biggest value in 4, -3, -1/8, -1, -23? -3 What is the third biggest value in -3309/5, 10, 1/8? -3309/5 Which is the smallest value? (a) -2/3 (b) 3 (c) 2/9 (d) 320 (e) 13 a What is the second smallest value in -3, 9, -1/10, -2/2677? -1/10 What is the biggest value in -7, 207.6, -0.2? 207.6 What is the biggest value in 3, -0.2, 4/159, -5, 37? 37 Which is the third smallest value? (a) -4 (b) 96 (c) 0.17 b What is the third biggest value in -18, 170, -0.03? -18 Which is the biggest value? (a) -0.417 (b) -31/4 (c) -6 a What is the fourth smallest value in -7/5, 2.312, 2, 1? 2.312 What is the fourth smallest value in 0.03, -3/4, -5, 4, -2.054? 0.03 Which is the second biggest value? (a) -3 (b) -2679 (c) 0.2 (d) -0.039 (e) 6/5 c Which is the second biggest value? (a) 1/316 (b) 34 (c) -0.3 (d) 0.3 d Which is the second smallest value? (a) 5 (b) -54/95 (c) 0 c Which is the third biggest value? (a) 21.657 (b) 2/9 (c) -2/7 c Which is the smallest value? (a) 0 (b) -4 (c) 95 b What is the second smallest value in 3/7, 0.03, -759? 0.03 Which is the fifth smallest value? (a) 61 (b) 1.6 (c) -0.2 (d) -3/8 (e) 20 a Which is the second smallest value? (a) 1/5 (b) 1 (c) -2/9 (d) 1462 a What is the third biggest value in 5, -1/437, 1? -1/437 Which is the smallest value? (a) 0.1 (b) -5 (c) 23 (d) 1/7 (e) 4 (f) 2 b Which is the smallest value? (a) 0.17 (b) 14.757 (c) 0.8 a What is the second smallest value in 0.2, 2, -49, -0.4, -0.5? -0.5 Which is the fourth biggest value? (a) -1/4 (b) -2/13 (c) -0.5 (d) 0.5 (e) 7 (f) -2/5 a What is the second smallest value in -3/10, -1/3, -1/4, -72/25, 1/4? -1/3 Which is the smallest value? (a) -0.5 (b) 3 (c) -3/23 (d) 34 (e) -0.2 (f) -3 f Which is the fourth biggest value? (a) 3 (b) -3/2 (c) 2/191 (d) -5 (e) 4 (f) -1/4 f What is the biggest value in -2/13, 2/3, 15.7, -10, 14? 15.7 Which is the third biggest value? (a) 2/5 (b) -1/3211 (c) -35 c Which is the third smallest value? (a) -4/45 (b) 4 (c) -1.6 (d) -1/2 a Which is the third smallest value? (a) 55 (b) 1 (c) 2 (d) -0.1 (e) 4 (f) -19 b Which is the sixth biggest value? (a) -0.04 (b) -0.1 (c) -0.5 (d) -1 (e) -0.3 (f) 95 d What is the smallest value in 3, -936, 5, 4? -936 Which is the third biggest value? (a) 4.7 (b) 0.05 (c) 58 (d) -0.2 (e) 3 e Which is the second biggest value? (a) -3 (b) -12 (c) -3/2 (d) 41 (e) 5 (f) -5 e Which is the fifth smallest value? (a) -3 (b) 3/4 (c) -100/9 (d) 5 (e) 2 (f) 12 d What is the second smallest value in -2/7, 4, 19, 69, 2, -3? -2/7 Which is the third biggest value? (a) -213 (b) -0.4 (c) 555/4 a What is the fifth biggest value in -4, 41/2, -78, 5, 0? -78 Which is the smallest value? (a) 0.17 (b) 0.4 (c) 48 a Which is the fourth biggest value? (a) -3.16 (b) -4 (c) -0.1 (d) 0.5 (e) 2/43 a What is the third smallest value in -1, -5058, 1, -2, 2/11? -1 What is the smallest value in -4, 1, -748, -1/4? -748 Which is the smallest value? (a) 14/6723 (b) 2 (c) -2 c What is the fourth smallest value in -1, -2/3, -5, 284.89, 1/5? 1/5 What is the smallest value in -1.08, 0, -140? -140 Which is the second smallest value? 
(a) -0.7 (b) 6/7 (c) -1 (d) -20 (e) 0.1 c What is the smallest value in 11.3121, -1/18, -3? -3 What is the second biggest value in -5, -6, 5, -0.06? -0.06 Which is the biggest value? (a) -4 (b) -2 (c) 3 (d) 0.41 (e) 24 (f) -13 e Which is the fourth biggest value? (a) 2/15 (b) 3/5 (c) 0.4 (d) -12396 (e) 0 e Which is the biggest value? (a) 5 (b) 4.821 (c) 16 c Which is the biggest value? (a) -1/3 (b) 1152 (c) 0.5 (d) 0.23 b What is the third biggest value in 5430/221, 0.06, -0.2? -0.2 What is the biggest value in 54/11, 3, 1/48, -0.4, -5? 54/11 What is the biggest value in -8, -1/3, 5, 2, 3, 4/19? 5 Which is the fourth smallest value? (a) -20 (b) 3 (c) -3 (d) 0.0295 b Which is the fourth biggest value? (a) -2 (b) -0.4 (c) -29/5 (d) 1 (e) -243 c What is the smallest value in 20, -44/15, 2/3? -44/15 What is the third biggest value in -64/5, 0.6, -11.71? -64/5 What is the fourth small
{ "pile_set_name": "DM Mathematics" }
The Pentagon quietly announced on Friday that US military troops are on the ground in yet another Middle Eastern country – this time in Yemen – and have been there for the last two weeks. That’s how US wars get started these days: no public debate, no congressional authorization, no presidential address; just an after-the-fact pre-weekend news dump. This time, instead of Isis, the US military “advisers” (don’t call them “boots on the ground”!) are supposedly assisting Yemeni forces fighting a resurgent al-Qaida organization, the same terrorist group that the US has helped strengthen over the past year by giving Saudi Arabia all sorts of support for their appalling and destructive war against Yemen.

The US has sold the Saudis billions of dollars’ worth of weapons, handed them surveillance data from drones for picking targets and given military assistance with other logistics in their fight with Iran-linked Houthi rebels who stormed the presidential palace last year. As the Intercept reported on Sunday, the Saudis have “used US-produced aircraft, laser-guided bombs, and internationally banned cluster bombs to target and destroy schools, markets, power plants, and a hospital, resulting in thousands of civilian deaths.”

While the Saudi bombing campaign hasn’t defeated the Houthi rebels or solidified the Yemeni government’s grip on power, it has been a boon to al-Qaida, which has quietly grown in stature and size inside the country as the US media has stayed mostly focused on Isis. Reuters published an important investigation last month in which its journalists detailed how the US-backed war “has helped al-Qaida in the Arabian Peninsula (AQAP) to become stronger than at any time since it first emerged almost 20 years ago.” But the investigation wasn’t picked up at all by US television media. As Reuters reported:

Once driven to near irrelevance by the rise of Islamic State abroad and security crackdowns at home, al-Qaida in Yemen now openly rules a mini-state with a war chest swollen by an estimated $100m in looted bank deposits and revenue from running the country’s third largest port.

This should have been entirely predictable: since the bloodshed in Yemen started, it has been obvious to experts in the region that the Saudi bombing campaign would only turn the Yemeni population against the United States, create sympathy for al-Qaida and a breeding ground for terrorists. So now the US apparently has to – or at least is choosing to – help fight a separate military campaign within the country’s civil war, in an attempt to tamp down the terrorist organization that the initial Saudi bombing campaign helped bolster.

The fact that the Obama administration has not only stayed largely silent on the war but actively supported it behind the scenes should be one of the true scandals of its foreign policy posture toward the Middle East, yet you can bet the vast majority of Americans have no idea it’s happening. As far as I can tell, not a single question has been put to any of the presidential candidates about US policy in Yemen, despite an obsession with covering the “war on terror” and its latest metamorphosis with Isis. You can expect that the US troops, or “advisers”, who are supposedly “assisting” Yemeni and Emirati forces will soon morph into a direct fighting force, just like when the Pentagon was saying the same thing about troops in Iraq and Syria just months ago, only to later announce that some troops are now, in fact, fighting themselves.
And we will spin around, once again, in the never-ending cycle that we refuse to escape in the now entirely predictable “war on terror”.
{ "pile_set_name": "OpenWebText2" }
1. Field of the Invention

The present invention pertains to an exhaust gas control system for an internal combustion engine that purifies exhaust gas by adding urea water, pumped out from a urea water tank, to the exhaust gas.

2. Description of Related Art

A urea SCR (Selective Catalytic Reduction) apparatus is a known exhaust gas control system for internal combustion engines. A urea SCR apparatus adds urea water to the exhaust gas. The urea water is thermally decomposed and hydrolyzed by the heat of the exhaust gas, generating ammonia. With the generated ammonia serving as a reduction agent, nitrogen oxides (NOx) in the exhaust gas undergo a reduction reaction on a selective reduction catalyst, decomposing the NOx into water and nitrogen. Such a urea SCR apparatus is provided with a urea water tank for storing urea water, and a urea water addition valve for adding urea water pumped out from the urea water tank to the exhaust gas. Urea water freezes at −11°C or lower. During cold periods, the urea water in the urea water tank may freeze and can no longer be supplied to the urea water addition valve. In the urea SCR apparatus of Japanese Patent Application Publication No. 2008-115784 (JP 2008-115784A), the urea water tank is provided with a heater, so that frozen urea water in the tank can be melted by heating. When the internal combustion engine is left stopped for a long time under cold conditions, the urea water in the urea water tank freezes entirely. The internal combustion engine may then be started in this state. In that case, heating with the heater begins after the engine starts operating. The urea water gradually melts from the periphery of the heater, and the melted urea water is pumped out with a pump and supplied to the urea water addition valve. However, when urea water is consumed faster than the heater can melt it, the melted urea water is exhausted and can no longer be added to the exhaust gas. Moreover, once all the urea water at the periphery of the heater has been pumped out, a void space is left between the remaining frozen urea water and the periphery of the heater within the urea water tank. In this condition, the heat of the heater is hardly transmitted to the frozen urea water, delaying the resumption of both the melting of the urea water and its addition to the exhaust gas.
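For reference, the chemistry this background describes can be written out explicitly (these are the standard urea-SCR reactions, added here for clarity; they are not quoted from the patent):

(NH2)2CO -> NH3 + HNCO               (thermal decomposition of urea)
HNCO + H2O -> NH3 + CO2              (hydrolysis of isocyanic acid)
4 NH3 + 4 NO + O2 -> 4 N2 + 6 H2O    (standard SCR reduction of NOx)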
{ "pile_set_name": "USPTO Backgrounds" }
" vim suffers: exec vam#DefineAndBind('s:c','g:vim_tiny_cmd', '{}') fun! tiny_cmd#Put(a) let new = get(s:c,'next',0) +1 let s:c['next'] = new let s:c[new] = a:a return new endf fun! tiny_cmd#Get(nr) return s:c[a:nr] endf " Get and remove item fun! tiny_cmd#Pop(nr) let r = s:c[a:nr] | unlet s:c[a:nr] | return r endf
{ "pile_set_name": "Github" }
Jacobsville

Jacobsville may refer to:

Places
Jacobsville, Evansville, a neighborhood of Evansville, Indiana
Jacobsville, Maryland
Jacobsville, Michigan

Other
Jacobsville Finnish Lutheran Church
Jacobsville Sandstone
{ "pile_set_name": "Wikipedia (en)" }
Home to wholesale decorating showrooms, art galleries, and a technology incubator, Chicago’s 4 million square foot Merchandise Mart is preparing to take on yet another role as a giant projection screen. According to the Chicago Tribune, the iconic 1928 Art Deco building will host projected, multimedia art along its nearly three-acre, river-facing southern facade beginning in 2018. The possibility of utilizing the Merchandise Mart as an artistic canvas was first revealed back in 2014 as part of a preliminary study conducted by the Mayor’s Office and Choose Chicago to light the Chicago River and boost tourism. Known as the Lighting Framework Plan (LFP) initiative, the document also looked at creative ways to illuminate other waterfront structures such as Chicago’s bridges, Civic Opera House, and Lower Wacker Drive. New York-based architecture firm A+I and creative partner Obscura Digital are currently evaluating lighting options for the building’s 25-story river frontage. Obscura has previous experience with similar lighting projects, including St. Peter's Basilica in Vatican City and New York’s Empire State Building. While next year’s lighting of the Merchandise Mart complements the city’s commitment to invest more in public art, the Tribune reports that the project will be funded through private, rather than public, channels.
{ "pile_set_name": "OpenWebText2" }
Sick and Wrong When we sent our son, Waylon, to school, we knew that eventually some kid would tell him that it’s sick and wrong to be fat. Apparently, some denizen of the playground has taken it upon himself to inform Waylon that he has “fat cheeks.” Now, whenever Waylon looks in the mirror, he sucks in his cheeks like a six-year-old Zoolander. Suddenly he shuns his puffy coat. The other night at dinner, when I told him he needed to eat his chicken for a little protein and fat, he looked at me with panic in his eyes. “I don’t want to be fat!” I find these developments more than a little disturbing. Before Waylon was born, my wife, Katy, and I made one solemn vow: no fat talk in front of the kid. Whatever our private struggles, we promised to abstain from negative body talk about ourselves, other people, and especially our son. It may seem strange that we prioritized body-positive parenting before, say, saving for Waylon’s college fund. But it’s all a question of context. To say that our families were fat phobic is a little like saying that Fred Phelps has a problem with gay people. Katy grew up hearing “lose some weight” as the one-size-fits-all response to every dilemma. When she became addicted to speed in the eighties, her mother was initially blinded by her miraculous weight loss. There’s actually a picture of Katy in the family photo album, looking skeletal and vacant, with the breezy caption “a size six!!!!” My parents’ attitudes toward weight were similarly disordered. They approached dieting with punitive, penitential fervor. At one point, when I was 13, my dad was exercising two hours a day and subsisting entirely on raisins, grapes, and bagels. “You don’t want to be unattractive,” he’d say when he dropped in between workouts to admonish me and my sister for eating junk. “Unattractive” was the code word for fat. Its connotations were lazy, undisciplined, stupid, feminine, and self-indulgent. These kinds of messages, which mistakenly equate physical attributes with moral qualities, were shaming and insidious. Luckily, I had one natural ally in sniffing out hypocrisy: my metabolism. I have a fairly fast metabolism. Whether I eat a lot or a little, whether I eat healthy food or junk, my weight stays within the same 10-pound range. By the time I reached adulthood, I had realized that the social approbation I received for being thin had nothing to do with self-discipline or moral righteousness. It was just genes and pure, dumb luck. Thus, when Waylon was born, Katy and I were determined to disrupt old family patterns. As a baby, Waylon demonstrated a marked preference for well-cushioned bodies. This was most apparent when we traveled to France and decided to save money by holding him in our laps for the 11-hour flight. When I held him, Waylon would toss-and-turn, trying to find a comfy way to rest his head against my bony clavicles. Again and again, he’d give up and reach for Katy’s more comfortable belly. As a toddler, Waylon preferred to rest on Katy’s belly while we read bedtime stories. He could fit his body between the crook of her neck and the cradle of her hips. Before he tottered off to bed, he’d squeeze her and bestow rows of tiny kisses. “Belly, I love you! You are the most comfortable belly in the whole world.” Of course, raising a fat accepting child turned out to be easier said than done. Although Katy and I had vowed to eschew negative body talk, that didn’t mean that we’d successfully jettisoned all of our negative baggage about our bodies. 
One summer, when I ventured to the pool in a new two-piece bathing suit, Waylon patted my midsection. “Hey,” he said, in a tone of pleasant surprise, “your belly looks kind of fat in that.” I resisted the urge to shroud myself in a giant beach towel, but I can’t say that my reply, “thanks a lot,” wasn’t shrouded in sarcasm. When you’ve grown up in a fat phobic family, it’s pretty hard to leave all those old habits behind. I came up listening to my mother bemoan her wide thighs and child-bearing hips. I know I’ve slipped up once or twice and said disparaging things about my own body within earshot of my son. And although we try our best to love the bodies we’ve got, it’s not like we don’t watch what we eat. Katy’s a performer, and she has a target weight that makes her feel more comfortable on stage. When Waylon first realized that she was dieting for an upcoming show, he was absolutely stricken. “Please Mommy,” he begged, “please don’t get rid of your fat.” It was perhaps the first time in her life that Katy had to assure someone that her diet would not be too successful. But the most challenging thing about raising a fat accepting child has been helping him make sense of the social stigma attached to fat in our culture. The necessity of introducing some context became clear when Waylon was three and we took him to our favorite pizza place. The waiter came to our table, and Waylon greeted him with a cheerful “Hi Fat!” The young man blushed and avoided my eyes for the rest of the evening, which was excruciating. I wanted to tell him that Waylon’s words weren’t meant to wound, but I doubted my ability to explain our parenting philosophy quickly and convincingly enough to avoid causing the man further mortification. When I talked to Waylon about that incident, I tried to explain it in children’s terms. Being fat doesn’t make someone bad, but calling someone fat can make that person feel bad. It’s complicated and contradictory, but I think he gets it. Just to make sure, I backed it up with some good, old-fashioned parental guilt: “If I ever hear you call someone fat in a mean way, I will be very, very upset,” I told him. “I know, I know,” he said, in the impatient voice he uses when I tell him something obvious. These days, when I see Waylon sucking in his lovely round cheeks, I wonder if we’ve succeeded at all. It’s easy to see his self-consciousness about his appearance as an external manifestation of my inner demons, a reflection of my own not-fully-expurgated fat phobia. I think, if only I hadn’t said that thing about my butt, if only we’d never told him that Katy was dieting… But part of me recognizes that there’s no way to shelter Waylon from the prejudices in the world around him. And fat is hardly the only issue where there’s a gap between our family worldview and the ideology of the larger culture. The other day, one of his classmates told him it was “strange” that he had two moms. Yes, we had to tell him, kids are going to say that. Not everybody knows gay people. Some families don’t know that it’s okay to be different. As a parent, I have to trust that Waylon can encounter other people’s assumptions without losing touch with our family’s core values: justice, compassion, and self-acceptance. But today he still doesn’t want to wear his bulky winter coat. Still, yesterday he hugged Katy’s middle and said, “you are the best belly in the whole world.” I hope that, eventually, the same love will extend to his cheeks, his belly, and every other part of his beautiful, perfect body. 
2 responses to “Sick and Wrong” I hope it gets better for you. My 4-year old son has come home from daycare so far with the “F” bomb and the “N” word. Really … at pre-school already? It sucks to think of all the subtle and outright hatefulness our little ones will experience as they go through school, and we can only protect them so much. I suppose we can just try to prepare them for the ugliness of the world by nurturing the pureness of their young souls and hope they stay somewhat innocent. My fingers are crossed anyway.
{ "pile_set_name": "Pile-CC" }
[Everted intestinal sac method for quickly finding the absorbed ingredients of Wuzhuyu decoction]. To establish a method for quickly finding the absorbed ingredients of Wuzhuyu decoction, in order to select indices for controlling its quality. The absorption of three concentrations of Wuzhuyu decoction was investigated with the in vitro everted intestinal sac model. The intestinal bag fluid of the jejunum and ileum was collected at different times, and eight ingredients, namely evodiamine (Ev), rutaecarpine (Ru), limonin (Li), ginsenoside-Rb1, -Rg1, -Re (Rb1, Rg1, Re), isorhamnetin-3-O-beta-D-glucosyl(6''-->1'")-alpha-L-rhamnoside (Irs) and 6-gingerol (6-Gi), were detected by HPLC as the representative constituents in the samples. All eight ingredients except Ru could be detected in the samples, but Ev could not be detected in the high-concentration samples. The ratios between the absorbed ingredients differed from those in Wuzhuyu decoction. The in vitro everted intestinal sac can absorb the ingredients of Wuzhuyu decoction selectively. Compared with the ileum, the jejunum provides more absorption information, and faster; the best test time is 60-90 min.
{ "pile_set_name": "PubMed Abstracts" }
It’s common for major corporations to sponsor political conventions to buy favor with political parties. But what about when the convention nominates a presidential candidate who’s an out-and-out racist? That’s a deal breaker, right? For some big tech companies, apparently not. Facebook recently announced that it will provide funding and other support for the Donald Trump-led Republican National Convention. And Google will be the event’s official livestream provider via YouTube. These companies need to find their moral compass and divest from hate. [[{“type”:”media”,”view_mode”:”media_original”,”fid”:”612662″,”attributes”:{“alt”:””,”class”:”media-image”,”height”:”456″,”style”:”width: 600px; height: 456px;”,”typeof”:”foaf:Image”,”width”:”600″}}]] "Trumped into a Corner," an OtherWords cartoon by Khalil Bendib Trump’s violent rhetoric has inflamed a national atmosphere that’s already hostile toward Latino, Muslim, and black communities, as well as women and people with disabilities. He’s called for the mass deportation of undocumented immigrants, promised to build a wall on the U.S.-Mexico border, and vowed to ban all Muslims from entering the United States. Trump has also incited actual physical violence against people of color, and refused to denounce the white supremacist organizations that openly support him. If that weren’t enough, Trump’s also threatened to shut down the open internet, censoring the dissident voices standing up against his hate and racism. He’s called for greater surveillance of communities of color, and has encouraged violence against protesters and journalists. In short, Trump’s campaign isn’t “business as usual”—and corporations shouldn’t treat it as such. That’s why the racial justice group ColorOfChange has launched a campaign called Divest from Hate. They’re urging major tech companies not to bankroll a platform for hate while Trump continues to incite violence against marginalized communities. Other groups, including my own, have joined the effort to push tech companies to pull their support from the Republican convention, including both direct financial donations and in-kind contributions. This isn’t about left or right, but right and wrong. People of color make up a large portion of the users of services like YouTube and Facebook. These companies are essentially profiting off the very communities that Trump’s rallying against. Erin Egan, a Facebook vice president for publicity, claims that the company’s involvement in the convention will “facilitate an open dialogue among voters, candidates, and elected officials.” But throwing a coronation ball for Trump and his white supremacist supporters has nothing to do with democracy. It’s important to note that these companies have taken stands on other political issues. Both Google and Facebook recently spoke out against North Carolina’s transphobic “bathroom bill.” And earlier this year, Facebook CEO Mark Zuckerberg circulated an internal memo calling out employees who crossed out the words “Black Lives Matter” on the signature wall at the company’s headquarters. He called the behavior “malicious” and “unacceptable.” Now it’s time for Facebook and Google to take another stand against hate—and to join companies like Coca-Cola, Hewlett-Packard, and Microsoft that have already scaled back or cut their support to the Republican convention.
{ "pile_set_name": "OpenWebText2" }
Q: How can you restore health in Sword & Sworcery? Playing through Sword and Sworcery on the iPad - I've reached the point where I'm battling the first trigon. It defeated me - so I was down to 1 star of health. How can I regain some health? I've gone through my stash of mushrooms and can't leave the current screen. A: Method 1: during a fight, hold up the shield for a while and you'll see a circle closing in on you - when it closes in, you gain an extra star. A: You can restore health between battles by sitting down (such as in Logfella's cabin) or by eating a mushroom.
{ "pile_set_name": "StackExchange" }
Q: How Class/Static Methods get invoked While reading The Ruby Programming Language I came to know that class methods are invoked on an object that has the same name as the class. I found that Objective-C also does a similar thing, from a blog. There, classes happen to be instances of meta-classes. As another major object-oriented language, I would love to know how Java implements/achieves this. Edit I appreciate the answers from everyone. But my question wasn't about how to invoke a class method, as many answers were answering that. Apologies if my question was not well framed or gave you a wrong idea.
A: In the Java language, static methods are invoked on the class instead of an object, e.g. System.currentTimeMillis. So conceptually this is very similar to Ruby, ObjC, Smalltalk, etc. Unlike Ruby and Objective-C, there is no object instance that those methods get invoked upon: the Java bytecode has a special bytecode instruction that invokes the static method; this bytecode instruction does not use an object pointer from the stack:
INVOKESTATIC "java/lang/System" "currentTimeMillis" "()J"
When using reflection, this special handling of static methods is reflected by the fact that you don't need to specify an object that the method is called upon. Instead, you can supply null as the target of the call.
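A minimal sketch of that last reflection point (standard java.lang.reflect usage; the class name here is my own):

import java.lang.reflect.Method;

public class StaticInvokeDemo {
    public static void main(String[] args) throws Exception {
        // Resolve the static method on the Class object itself.
        Method m = System.class.getMethod("currentTimeMillis");
        // No receiver is needed for a static method: pass null as the target.
        long now = (long) m.invoke(null);
        System.out.println(now);
    }
}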
{ "pile_set_name": "StackExchange" }
Rating: 9.5 / 10

"You’d think, to hear some people talk, That lads go West with sobs and curses, And sullen faces white as chalk, Hankering for wreaths and tombs and hearses. But they’ve been taught the way to do it Like Christian soldiers; not with haste And shuddering groans; but passing through it With due regard for decent taste." — Siegfried Sassoon, How to Die (1918)

War has always intrigued me; I keep returning to questions of its existence and purpose while looking into the dehumanization that the cyclical hell of history has brought upon mankind. Dehumanization, on one hand, is the treatment of troops as mere pawns on a blood-smeared checkerboard; on the other, it shows in the crumbling morals of those pawns, trained to accept their designations as cogs in the global wheel of organised pandemonium rather than as living, thinking, feeling beings in a functional society. Melville's Army of Shadows, despite being a major overhaul of its source literature, is a very sombre take on warfare that exposes this latter slaughter of inner humanity through a small set of fringe participants.

The French Resistance is outnumbered and outgunned, and in spite of the crew's fullest efforts, its members land in unexpected predicaments through no fault of their own. The visual aesthetic, in pastel palettes of muted cobalt, cyan and teal, accompanied by foggy, bleak and quiet streets, manifests the seeping doubts and internal turmoil eroding the crew bit by bit. Mistrust turns corrosive and is compounded by executions based on probabilistic guessing. No decisions are objectively clear; no member can be deemed objectively correct.

Having been a fighter in the French Resistance himself, and clearly not interested in the combative aspects of the war or of the Resistance, Melville paints a picture of the human mind, a depiction of its solemn misery in the midst of chaos. Such intentions are best comprehended in the scene where Philippe Gerbier attends a ball in London: soldiers dancing away with their ladies, straight out of Byron's "The Eve of Waterloo". Yet there is neither hopeful light nor glory in this; dust falls at the impact of bombs dropping in the vicinity, and, unsure of what to do, Gerbier is seen to be genuinely scared for the first time in the film, mirroring its central theme of moral uncertainty. The British men continue to dance away. This is normal; this too shall pass.

Concluding with a hastily carried out execution of a supposedly compromised agent based on a plausible "theory", Melville deconstructs the fatality of war through intimately personal subjects. It is not people getting squashed like ants in a siege; it is the banality of war-induced chaos striking characters we know, and the reward for their courage, determination and moral fortitude is as bleak as their losing race against failure and their own mortality. A low-key, suspenseful, gritty masterpiece.
{ "pile_set_name": "OpenWebText2" }
---
author:
- 'Quanshi Zhang, Ruiming Cao, Ying Nian Wu, and Song-Chun Zhu *Fellow, IEEE*'
bibliography:
- 'TheBib.bib'
title: Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering
---

Introduction
============

Convolutional neural networks [@CNN; @CNNImageNet; @ResNet; @DenseNet] (CNNs) have achieved superior performance in many visual tasks, such as object detection and segmentation. However, in real-world applications, current neural networks still suffer from low interpretability of their middle-layer representations and data-hungry learning methods. Thus, the objective of this study is to mine thousands of *latent patterns* from the mixed representations in conv-layers. Each latent pattern corresponds to a constituent region or a contextual region of an object part.

We use an interpretable graphical model, namely an And-Or graph (AOG), to organize latent patterns hidden in conv-layers. The AOG maps implicit latent patterns to explicit object parts, thereby explaining the hierarchical representation of objects. We use very few (*e.g.* 3–20) part annotations to mine latent patterns and construct the AOG to ensure high learning efficiency. As shown in Fig. \[fig:rawMapToModel\], compared to ordinary CNN representations where each filter encodes a mixture of textures and parts (evaluated by [@Interpretability]), we extract clear object-part representations from CNN features. Our weakly-supervised learning method enables people to model objects or object parts on-the-fly, thereby ensuring broad applicability.

![image](rawMapToModel.pdf){width="\linewidth"}

**And-Or graph representations:** As shown in Fig. \[fig:rawMapToModel\], the AOG represents a semantic hierarchy on the top of conv-layers, which consists of four layers, *i.e.* the *semantic part*, *part templates*, *latent patterns*, and *CNN units*. In the AOG, AND nodes represent compositional regions of a part, and OR nodes represent a list of alternative template/deformation candidates for a local region.

- Layer 1: the top *semantic part* node is an OR node, whose children represent template candidates for the part.

- Layer 2: a *part template* in the second layer describes a certain part appearance with a specific pose, *e.g.* a black sheep head from a side view. A part template is an AND node, which uses its children latent patterns to encode its constituent regions.

- Layer 3: a *latent pattern* in the third layer represents a constituent region of a part (*e.g.* an eye in the head part) or a contextual region (*e.g.* the neck region *w.r.t.* the head). A latent pattern is an OR node, which naturally corresponds to a group of units within the feature map of a certain CNN filter. The latent pattern selects one of its children *CNN units* as the configuration of the geometric deformation.

- Layer 4: terminal nodes are *CNN units*, *i.e.* raw activation units on feature maps of a CNN filter.

In this hierarchy, the AOG maps implicit latent patterns in raw CNN feature maps to explicit semantic parts. We can use the AOG to localize object parts and their constituent regions for hierarchical object parsing. The AOG is interpretable and can be used for communication with human users.

**Weakly-supervised learning via active question-answering:** We propose a new active learning strategy to build an AOG in a weakly-supervised manner. As shown in Fig.
\[fig:QA\], we use an active question-answering (QA) process to mine latent patterns from raw feature maps and gradually grow the AOG.

![image](QA.pdf){width="0.99\linewidth"}

The input is a pre-trained CNN and its training samples (*i.e.* object images without part annotations). The QA method actively discovers the missing patterns in the current AOG and asks human users to label object parts for supervision. In each step of the QA, we use the current AOG to localize a certain semantic part among all unannotated images. Our method actively identifies object images that cannot fit well to the AOG, *i.e.* images whose object parts the current AOG cannot explain. Our method estimates the potential gain of asking about each of the unexplained objects, thereby determining an optimal sequence of questions for QA. Note that the QA is implemented based on a pre-defined ontology, instead of using open-ended questions or answers. As in Fig. \[fig:QA\], the user is asked to provide five types of answers (*e.g.* labeling the correct part position when the AOG cannot accurately localize the part), in order to guide the growth of the AOG. Given each specific answer, our method may either refine the AOG branch of an existing part template or construct a new AOG branch for a new part template.

Based on human answers, we mine latent patterns for new AOG branches as follows. We require the new latent patterns

- to represent a region highly related to the annotated object parts,

- to frequently appear in unannotated objects,

- to consistently keep stable spatial relationships with other latent patterns.

Similar requirements were originally proposed in studies of pursuing AOGs, which mined hierarchical object structures from Gabor wavelets on edges [@MiningAOG] and HOG features [@OurICCV15AoG]. We extend such ideas to feature maps of neural networks.

The active QA process mines object-part patterns from the CNN with less human supervision. There are three mechanisms to ensure the stability of weakly-supervised learning.

- Instead of learning all representations from scratch, we transfer patterns in a pre-trained CNN to the target object part, which boosts the learning efficiency. Because the CNN has been trained using numerous images, latent patterns in the AOG are supposed to consistently describe the same part region among different object images, instead of over-fitting to part annotations obtained during the QA process. For example, we use the annotation of a specific tiger head to mine latent patterns. The mined patterns are not over-fitted to the head annotation, but represent generic appearances of different tiger heads. In this way, we can use very few (*e.g.* 1–3) part annotations to extract latent patterns for each part template.

- It is important to maintain the generality of the pre-trained CNN during the learning procedure. *I.e.* we do not change/fine-tune the original convolutional weights within the CNN when we grow new AOGs. This allows us to continuously learn new semantic parts from the same CNN, without model drift.

- The active QA strategy reduces the excessive use of human labor for annotating object parts that are already well explained by the current AOG.

In addition, we use object-level annotations for pre-training, considering the following two facts: 1) Only a few datasets [@SemanticPart; @CUB200] provide part annotations, and most benchmark datasets [@PascalVOC; @ImageNet; @MSCOCO] mainly have annotations of object bounding boxes.
2) More crucially, real-world applications may focus on various object parts on-the-fly, and it is impractical to annotate a large number of parts for each specific task.

This paper makes the following three contributions.

1) From the perspective of object representations, we semanticize a pre-trained CNN by mining reliable latent patterns from the noisy feature maps of the CNN. We design an AOG to represent the semantic hierarchy inside conv-layers, which associates implicit neural patterns with explicit semantic parts.

2) From the perspective of learning strategies, based on the clear semantic structure of the AOG, we present an active QA method to learn each part template of the object sequentially, thereby incrementally growing AOG branches on a CNN to enrich part representations in the AOG.

3) In experiments, our method exhibits superior performance to other baselines for weakly-supervised part localization. For example, our method with 11 part annotations outperformed fast-RCNNs with 60 annotations on the Pascal VOC Part dataset.

A preliminary version of this paper appeared in [@CNNAoG] and [@DeepQA].

Related work
============

**CNN visualization:** Visualization of filters in a CNN is a direct way of exploring the pattern hidden inside a neural unit. Many visualization methods have been used in the literature. Gradient-based visualization [@CNNVisualization_1; @CNNVisualization_2; @CNNVisualization_3] estimates the input image that maximizes the activation score of a neural unit. Dosovitskiy *et al.* [@FeaVisual] proposed up-convolutional nets to invert feature maps of conv-layers to images. Unlike gradient-based methods, up-convolutional nets cannot mathematically ensure that the visualization result reflects actual neural representations. In recent years, [@olah2017feature] provided a reliable tool to visualize filters in different conv-layers of a CNN. Zhou *et al.* [@CNNSemanticDeep] proposed a method to accurately compute the image-resolution receptive field of neural activations in a feature map. Theoretically, the actual receptive field of a neural activation is smaller than that computed using the filter size. The accurate estimation of the receptive field is crucial to understanding a filter's representations. Unlike network visualization, our mining of part representations from conv-layers is another way to interpret CNN representations.

**Active network diagnosis:** Going beyond "passive" visualization, some methods "actively" diagnose a pre-trained CNN to obtain an insightful understanding of CNN representations. [@CNNAnalysis_1] explored semantic meanings of convolutional filters. [@CNNAnalysis_2] evaluated the transferability of filters in intermediate conv-layers. [@CNNAnalysis_3; @CNNVisualization_5] computed feature distributions of different categories in the CNN feature space. Methods of [@visualCNN_grad; @visualCNN_grad_2] propagated gradients of feature maps *w.r.t.* the CNN loss back to the image, in order to estimate the image regions that directly contribute to the network output. [@trust] proposed a LIME model to extract image regions that are used by a CNN to predict a label (or an attribute). Network-attack methods [@pixelAttack; @CNNInfluence; @CNNAnalysis_1] diagnosed network representations by computing adversarial samples for a CNN.
In particular, influence functions [@CNNInfluence] were proposed to compute adversarial samples, provide plausible ways to create training samples to attack the learning of CNNs, fix the training set, and further debug representations of a CNN. [@banditUnknown] discovered knowledge blind spots (unknown patterns) of a pre-trained CNN in a weakly-supervised manner. Zhang *et al.* [@CNNBias] developed a method to examine representations of conv-layers and automatically discover potential, biased representations of a CNN due to dataset bias. Furthermore, [@wu2007compositional; @yang2009evaluating; @wu2011numerical] mined the local, bottom-up, and top-down information components used by a model for prediction.

**CNN semanticization:** Compared to the diagnosis of CNN representations, semanticization of CNN representations is closer to the spirit of building interpretable representations. Hu *et al.* [@LogicRuleNetwork] designed logic rules for network outputs, and used these rules to regularize neural networks and learn meaningful representations. However, this study did not obtain semantic representations in intermediate layers. Some studies extracted neural units with certain semantics from CNNs for different applications. Given feature maps of conv-layers, Zhou *et al.* [@CNNSemanticDeep; @CNNSemanticDeep2] extracted scene semantics. Simon *et al.* mined objects from feature maps of conv-layers [@ObjectDiscoveryCNN_2], and learned explicit object parts [@CNNSemanticPart]. Unlike the above research, we aim to explore the entire semantic hierarchy hidden inside the conv-layers of a CNN. Because the AOG structure [@MumfordAOG; @MiningAOG] is suitable for representing the semantic hierarchy of objects, our method uses an AOG to represent the CNN. In our study, we use semantic-level QA to incrementally mine object parts from the CNN and grow the AOG. Such a "white-box" representation of the CNN also guides further active QA. With clear semantic structures, the AOG makes it easier to transfer CNN patterns to other part-based tasks.

**Unsupervised/active learning:** Many methods have been developed to learn object models in an unsupervised or weakly-supervised manner. Methods of [@Gpt_WeaklyCNN; @WeaklyMIL; @OurICCV15AoG; @ObjectDiscoveryCNN_2] learned with image-level annotations, without labeled object bounding boxes. [@UnsuperCNN; @ChoDiscovery] did not require any annotations during the learning process. [@OnlineMetric] collected training data online from videos to incrementally learn models. [@Language2VideoAlign; @Language2ActionAlign] discovered objects and identified actions from language instructions and videos. Inspired by active learning [@Active4; @i13; @Active2], the idea of learning from question-answering has been used to learn object models [@KB_Fei_Annotation; @KB_Fei_InteractionLabel; @TuQA]. Branson *et al.* [@ActivePart] used human-computer interactions to label object parts and learn part models. Instead of directly building new models from active QA, our method uses the QA to mine AOG part representations from CNN representations.

**AOG for knowledge transfer:** Transferring hidden patterns in a CNN to other tasks is important for neural networks. Typical research includes end-to-end fine-tuning and transferring CNN representations between different categories [@CNNAnalysis_2; @CNNSemantic] or datasets [@UnsuperTransferCNN].
In contrast, we believe that a good explanation and a transparent representation of parts will create new possibilities for transferring part features. As in [@AllenAoG; @MiningAOG], the AOG is suitable for representing the semantic hierarchy, which enables semantic-level interactions between humans and neural networks.

**Modeling "objects" vs. modeling "parts" in un-/weakly-supervised learning:** Generally speaking, in the scenario of un-/weakly-supervised learning, it is usually more difficult to model object parts than to represent entire objects. For example, object discovery [@ObjectDiscoveryCNN_1; @ObjectDiscoveryCNN_2; @ObjectDiscoveryCNN_3] and co-segmentation [@InteractiveCoseg] only require image-level labels without object bounding boxes. Object discovery is mainly implemented by identifying common foreground patterns against the noisy background, and people usually consider closed boundaries and common object structure as strong priors for object discovery. In contrast to objects, it is difficult to mine a true part parsing of objects without sufficient supervision. Up to now, there has been no reliable way to distinguish semantically meaningful parts from other potential divisions of object parts in an unsupervised manner. In particular, some parts (*e.g.* the abdomen) do not have shape boundaries that determine their spatial extent.

**Part localization/detection vs. semanticizing CNN patterns:** Two key points differentiate our study from conventional part-detection approaches. First, most detection methods deal with classification problems, whereas, inspired by graph mining [@OurICCV15AoG; @OurSAPPAMI; @OurCVPR14Graph], we mainly focus on a mining problem; *i.e.* we aim to discover meaningful latent patterns that clarify CNN representations. Second, instead of summarizing common knowledge from massive annotations, our method requires very limited supervision to mine latent patterns.

Method
======

The overall objective is to sequentially minimize the following three loss terms.
$${Loss}={Loss}^{\textrm{CNN}}+{Loss}^{\textrm{QA}}+{Loss}^{\textrm{AOG}}
\label{eqn:obj}$$
${Loss}^{\textrm{CNN}}$ denotes the classification loss of the CNN. ${Loss}^{\textrm{QA}}$ is referred to as the loss for active QA. Given the current AOG, we use ${Loss}^{\textrm{QA}}$ to actively determine a sequence of questions about objects that cannot be explained by the current AOG, and we ask people to annotate bounding boxes of new object parts for supervision. ${Loss}^{\textrm{AOG}}$ is designed to learn an AOG for the CNN; it penalizes 1) the incompatibility between the AOG and the CNN feature maps of unannotated objects and 2) part-location errors *w.r.t.* the annotated ground-truth part locations.

It is essential to determine the optimization sequence for the three losses in the above equation. We propose to first learn the CNN by minimizing ${Loss}^{\textrm{CNN}}$ and then build an AOG based on the learned CNN. We use active QA to obtain new part annotations and use these annotations to grow the AOG, by optimizing ${Loss}^{\textrm{QA}}$ and ${Loss}^{\textrm{AOG}}$ alternately. We introduce the details of the three losses in the following subsections.

Learning convolutional neural networks
--------------------------------------

To simplify the exposition, in this research we consider a CNN for single-category classification, *i.e.* identifying object images of a specific category from random images. We use the log logistic loss to learn the CNN:
$${Loss}^{\textrm{CNN}}=\mathbb{E}_{I\in{\bf I}}\big[{Loss}(\hat{y}_{I},y^{*}_{I})\big]$$
where $\hat{y}_{I}$ and $y^{*}_{I}$ denote the predicted and ground-truth labels of an image $I$. If the image $I$ belongs to the target category, then $y^{*}_{I}=+1$; otherwise, $y^{*}_{I}=-1$.
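As a concrete reading of this loss, the sketch below implements a batch-averaged log logistic loss in NumPy. The numerically stabilized `logaddexp` formulation is our assumption, since the paper only gives the expectation above.

```python
import numpy as np

def log_logistic_loss(y_hat, y_true):
    """Mean log logistic loss over a batch.

    y_hat:  real-valued CNN outputs \\hat{y}_I, shape (n,)
    y_true: ground-truth labels y*_I in {+1, -1}, shape (n,)
    Computes mean of log(1 + exp(-y* * y_hat)), via logaddexp for stability.
    """
    z = np.asarray(y_true) * np.asarray(y_hat)
    return float(np.mean(np.logaddexp(0.0, -z)))

# Toy check: confident correct predictions give a small loss.
print(log_logistic_loss(np.array([3.0, -2.5]), np.array([1, -1])))
```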
Learning And-Or graphs
----------------------

We are given a pre-trained CNN and its training images without part annotations. We use an active QA process to obtain a small number of annotations of object-part bounding boxes, which will be introduced in Section \[sec:QA\]. Based on these inputs, in this subsection we focus on the approach for learning an AOG to represent the object part.

### And-Or graph representations

Before introducing the learning of AOGs, we first briefly overview the structure of the AOG and the part parsing (inference) based on the AOG. As shown in Fig. \[fig:rawMapToModel\], an AOG represents the semantic structure of a part at four layers.

  Layer   Name             Node type
  ------- ---------------- ---------------
  1       semantic part    OR node
  2       part template    AND node
  3       latent pattern   OR node
  4       neural unit      Terminal node

In the AOG, each OR node encodes a list of alternative appearance (or deformation) candidates as children. Each AND node uses its children to represent its constituent regions. More specifically, the top node is an OR node, which represents a certain semantic part, *e.g.* the head or the tail. The semantic-part node encodes a number of part templates as children. Each part template corresponds to a specific part appearance from a certain perspective. During the inference process, the semantic part (an OR node) selects the best part template among all template candidates to represent the object. The part template in the second layer is an AND node, which uses its children latent patterns to represent a constituent region or a contextual region *w.r.t.* the part template. The part template encodes the spatial relationships between its children. The latent pattern in the third layer is an OR node, whose receptive field is a square block within the feature map of a specific convolutional filter. The latent pattern takes the neural units inside its receptive field as children. Because the latent pattern may appear at different locations in the feature map, it uses these neural units to represent its deformation candidates. During the inference process, the latent pattern selects the most strongly activated child unit as its deformation configuration.

Given an image $I$[^1], we use the CNN to compute feature maps of all conv-layers on image $I$. Then, we can use the AOG for hierarchical part parsing; *i.e.* we use the AOG to semanticize the feature maps and to localize the target part and its constituent regions in different layers. The parsing result is illustrated as red lines in Fig. \[fig:rawMapToModel\]. From a top-down perspective, the parsing procedure 1) identifies a part template for the semantic part; 2) parses an image region for the selected part template; and 3) for each latent pattern under the part template, selects a neural unit within a specific deformation range to represent this pattern.
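Before turning to the node-wise score definitions, the four-layer hierarchy can be written down as a small set of container classes. The sketch below is a minimal structural reading of the table above; all class and field names are our own illustrative choices, not identifiers from the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NeuralUnit:                 # layer 4, terminal node
    position: Tuple[int, int]     # (row, col) in the feature map

@dataclass
class LatentPattern:              # layer 3, OR node
    conv_layer: int               # L_u: which conv-layer the pattern lives in
    channel: int                  # D_u: slice of that conv-layer's feature map
    center: Tuple[int, int]       # \overline{p}_u: center of the deformation range
    units: List[NeuralUnit] = field(default_factory=list)  # deformation candidates

@dataclass
class PartTemplate:               # layer 2, AND node
    scale: float                  # fixed scale of the parsed region
    patterns: List[LatentPattern] = field(default_factory=list)

@dataclass
class SemanticPart:               # layer 1, OR node (the root)
    templates: List[PartTemplate] = field(default_factory=list)
```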
**OR nodes:** Both the top semantic-part node and the latent-pattern nodes in the third layer are OR nodes. The parsing process assigns each OR node $u$ an image region $\Lambda_{u}$ and an inference score $S_{u}$, where $S_{u}$ measures the fitness between the parsed region $\Lambda_{u}$ and the sub-AOG under $u$. The computation of $\Lambda_{u}$ and $S_{u}$ for all OR nodes shares the same paradigm:
$$S_{u}=\max_{v\in Child(u)}S_{v},\qquad\Lambda_{u}=\Lambda_{\hat{v}}$$
where $u$ has $m$ child nodes $Child(u)=\{v_{1},v_{2},\ldots,v_{m}\}$; $S_{v}$ denotes the inference score of the child $v$, and $\Lambda_{v}$ is referred to as the image region assigned to $v$. The OR node selects the child with the highest score, $\hat{v}={\arg\!\max}_{v\in Child(u)}S_{v}$, as the true parsing configuration. Node $\hat{v}$ propagates its image region to the parent $u$. More specifically, we introduce the detailed settings for the different OR nodes.

- The OR node of the top semantic part contains a list of alternative part templates. We use $top$ to denote the top node of the semantic part. The semantic part chooses a part template to describe each input image $I$.

- The OR node of each latent pattern $u$ in the third layer naturally corresponds to a square deformation range within the feature map of a convolutional filter of a conv-layer. All neural units within the square are used as deformation candidates of the latent pattern. For simplification, we set a constant deformation range (with a center $\overline{{\bf p}}_{u}$ and a scale of $\frac{h}{3}\times\frac{w}{3}$ in the feature map, where $h$ and $w$ ($h=w$) denote the height and width of the feature map) for each latent pattern. $\overline{{\bf p}}_{u}$ is a parameter that needs to be learned. Deformation ranges of different patterns in the same feature map may overlap. Given the parsing configurations of its children neural units as input, the latent pattern selects the child with the highest inference score as its true deformation configuration.

**AND nodes:** Each part template is an AND node, which uses its children (latent patterns) to represent its constituent or contextual regions. We use $v$ and $Child(v)=\{u_{1},u_{2},\ldots,u_{m}\}$ to denote the part template and its children latent patterns. We learn the average displacement from $\Lambda_{u}$ to $\Lambda_{v}$ among different images, denoted by $\Delta{\bf p}_{u}$, as a parameter of the AOG. Given the parsing results of the children latent patterns, we use the image region of each child node $\Lambda_{u}$ to infer the region for the parent $v$ based on their spatial relationships. Just like in a deformable part model, the parsing of $v$ is given as
$$S_{v}=\sum_{u\in Child(v)}\big[S_{u}+S^{\textrm{inf}}(\Lambda_{u}|\Lambda_{v})\big],\qquad\Lambda_{v}=f(\Lambda_{u_{1}},\ldots,\Lambda_{u_{m}})$$
where we use the parsing results of the children nodes to infer the parent part template $v$. $S^{\textrm{inf}}(\Lambda_{u}|\Lambda_{v})$ denotes the spatial compatibility between $\Lambda_{u}$ and $\Lambda_{v}$ *w.r.t.* their average displacement $\Delta{\bf p}_{u}$; please see the appendix for its exact form. For the region parsing of the part template $v$, we need to estimate two terms, *i.e.* the center position ${\bf p}_{v}$ and the scale $scale_{v}$ of $\Lambda_{v}$. We learn a fixed scale for each part template, which will be introduced in Section \[sec:learnAOG\]. In this way, we can simply implement region parsing by computing the region position that maximizes the inference score, ${\bf p}_{v}=f(\Lambda_{u_{1}},\Lambda_{u_{2}},\ldots,\Lambda_{u_{m}})={\arg\!\max}_{{\bf p}_{v}}S_{v}$.

**Terminal nodes (neural units):** Each terminal node under a latent pattern represents a deformation candidate of the latent pattern. The terminal node has a fixed image region, *i.e.* we propagate the neural unit's receptive field back to the image plane as its image region. We compute a neural unit's inference score based on both its neural response value and its displacement *w.r.t.* its parent latent pattern; please see the appendix for details. Based on the above node definitions, we can use the AOG to parse each given image $I$ by dynamic programming in a bottom-up manner.
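A minimal sketch of this bottom-up dynamic program follows. It keeps only the max-over-units (OR) and sum-over-patterns (AND) structure and the $h/3$ deformation window; the spatial-compatibility term $S^{\textrm{inf}}$ and the displacement-dependent unit scores from the appendix are omitted, so this is a simplification rather than the full inference.

```python
import numpy as np

def parse_semantic_part(templates, feature_maps):
    """Simplified AOG parsing.

    templates:    list of part templates; each template is a list of latent
                  patterns, each a dict {"layer": k, "channel": d, "center": (r, c)}.
    feature_maps: dict mapping conv-layer index -> array of shape (channels, h, w).
    Returns the index of the best part template and its score.
    """
    best_idx, best_score = None, -np.inf
    for idx, patterns in enumerate(templates):
        s_template = 0.0                          # AND node: sum over children
        for p in patterns:
            fmap = feature_maps[p["layer"]][p["channel"]]
            h, w = fmap.shape
            half = max(min(h, w) // 6, 1)         # deformation range ~ h/3 x w/3
            r, c = p["center"]
            window = fmap[max(r - half, 0): r + half + 1,
                          max(c - half, 0): c + half + 1]
            s_template += window.max()            # OR node: best deformation candidate
        if s_template > best_score:
            best_idx, best_score = idx, s_template
    return best_idx, best_score

# Toy example: two templates over a single 8-channel conv-layer.
rng = np.random.default_rng(0)
fmaps = {4: rng.standard_normal((8, 14, 14))}
templates = [[{"layer": 4, "channel": 0, "center": (7, 7)},
              {"layer": 4, "channel": 3, "center": (4, 9)}],
             [{"layer": 4, "channel": 5, "center": (10, 3)}]]
print(parse_semantic_part(templates, fmaps))
```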
### Learning And-Or graphs {#sec:learnAOG}

The core of learning AOGs is to distinguish reliable latent patterns from the noisy neural responses in conv-layers and to select the reliable latent patterns to construct the AOG.

**Training data:** Let ${\bf I}^{\textrm{obj}}\subset{\bf I}$ denote the set of object images of a target category. During the active question-answering, we obtain bounding boxes of the target object part in a small number of images, ${\bf I}^{\textrm{ant}}=\{I_1,I_2,\ldots,I_{M}\}\subset{\bf I}^{\textrm{obj}}$, among all objects. The other images without part annotations are denoted by ${\bf I}^{\textrm{unant}}={\bf I}^{\textrm{obj}}\setminus{\bf I}^{\textrm{ant}}$. In addition, the question-answering process collects a number of part templates. Thus, for each image $I\in{\bf I}^{\textrm{ant}}$, we annotate $(\Lambda_{top}^{*},v^{*})$, where $\Lambda_{top}^{*}$ denotes the ground-truth bounding box of the part in $I$, and $v^{*}\in Child(top)$ specifies the ground-truth template for the part.

**Which AOG parameters to learn:** We can use the human annotations to define the first two layers of the AOG. If human annotators specify a total of $m$ different part templates during the annotation process, we directly connect the top node with $m$ part templates as children. For each part template $v\in Child(top)$, we fix a constant scale for its region $\Lambda_{v}$; *i.e.* if there are $n$ ground-truth part boxes labeled for $v$, we compute the average scale among the $n$ part boxes as the constant scale $scale_{v}$. Thus, the key to AOG construction is to mine the children latent patterns for each part template $v$. We need to mine latent patterns from a total of $K$ conv-layers. We select $n_{k}$ latent patterns from the $k$-th ($k=1,2,\ldots,K$) conv-layer, where $K$ and $\{n_{k}\}$ are hyper-parameters. Each latent pattern $u$ in the $k$-th conv-layer corresponds to a square deformation range located in the $D_{u}$-th slice of the conv-layer's feature map, and $\overline{\bf p}_{u}$ denotes the center of the range. As analyzed in the appendix, we only need to estimate the parameters $D_{u}$ and $\overline{\bf p}_{u}$ for $u$.

**How to learn:** Just like the pattern pursuit in Fig. \[fig:rawMapToModel\], we mine the latent patterns by estimating their best locations $D_{u},\overline{\bf p}_{u}\in{\boldsymbol\theta}$ that minimize the following loss function, where ${\boldsymbol\theta}$ denotes the parameter set of the AOG.
$$\begin{split}
{Loss}^{\textrm{AOG}}=\;&\mathbb{E}_{I\in{\bf I}^{\textrm{ant}}}\big[-S_{top}+L(\Lambda_{top},\Lambda_{top}^{*})\big]\\
&+\lambda^{\textrm{unant}}\,\mathbb{E}_{I\in{\bf I}^{\textrm{obj}}}\big[-S^{\textrm{unant}}_{\textrm{AOG}}+L^{\textrm{unant}}({\boldsymbol\Lambda}_{\textrm{AOG}})\big]
\end{split}
\label{eqn:LossAOG}$$
First, let us focus on the first half of the equation, which learns from part annotations.
$S_{top}$ and $L(\Lambda_{top},\Lambda_{top}^{*})$ denote the final inference score of the AOG on image $I$ and the loss of part localization, respectively. Given the annotations $(\Lambda_{top}^{*},v^{*})$ on $I$, we get
$$\begin{split}
&S_{top}=\max_{v\in Child(top)}S_{v}\approx S_{v^{*}}\\
&L(\Lambda_{top},\Lambda_{top}^{*})=-\lambda_{v^{*}}\Vert{\bf p}_{top}-{\bf p}^{*}_{top}\Vert
\end{split}$$
where we approximate the ground-truth part template $v^{*}$ as the selected part template. We ignore the small probability of the AOG assigning an annotated image an incorrect part template, to simplify the computation. The part-localization loss $L(\Lambda_{top},\Lambda_{top}^{*})$ measures the localization error between the parsed part region ${\bf p}_{top}$ and the ground truth ${\bf p}^{*}_{top}={\bf p}(\Lambda_{top}^{*})$.

The second half of Equation (\[eqn:LossAOG\]) learns from objects without part annotations.
$$\begin{split}
S^{\textrm{unant}}_{\textrm{AOG}}&={\sum}_{u\in Child(v^{*})}S^{\textrm{unant}}_{u}\\
L^{\textrm{unant}}({\boldsymbol\Lambda}_{\textrm{AOG}})&={\sum}_{u\in Child(v^{*})}\lambda^{\textrm{close}}\Vert\Delta{\bf p}_{u}\Vert^2
\end{split}
\label{sec:unsuper}$$
The first term, $S^{\textrm{unant}}_{\textrm{AOG}}$, denotes the inference score at the level of latent patterns without ground-truth annotations of object parts; please see the appendix for the computation of $S^{\textrm{unant}}_{u}$. The second term, $L^{\textrm{unant}}({\boldsymbol\Lambda}_{\textrm{AOG}})$, penalizes latent patterns that are far from their parent $v^{*}$; this loss encourages the assigned neural unit to be close to its parent latent pattern. We assume that 1) latent patterns that frequently appear among unannotated objects may potentially represent stable part appearance and should have higher priorities, and that 2) latent patterns spatially closer to their parent part templates are usually more reliable.

When we set $\lambda_{v^{*}}$ to a constant $\lambda^{\textrm{inf}}\sum_{k=1}^{K}n_{k}$, we can transform the learning objective in Equation (\[eqn:LossAOG\]) as follows:
$$\forall v\in Child(top),\quad\max_{{\boldsymbol\theta}_{v}}{\bf L}_{v},\qquad{\bf L}_{v}=\sum_{u\in Child(v)}Score(u)
\label{eqn:subAOG}$$
where $Score(u)=\mathbb{E}_{I\in{\bf I}_{v}}[S_{u}+S^{\textrm{inf}}(\Lambda_{u}|\Lambda^{*}_{v})]+\lambda^{\textrm{unant}}\,\mathbb{E}_{I'\in{\bf I}^{\textrm{obj}}}[S^{\textrm{unant}}_{u}-\lambda^{\textrm{close}}\Vert\Delta{\bf p}_{u}\Vert^2]$. ${\boldsymbol\theta}_{v}\subset{\boldsymbol\theta}$ denotes the parameters for the sub-AOG of the part template $v$. We use ${\bf I}_{v}\subset{\bf I}^{\textrm{ant}}$ to denote the subset of images that are annotated with $v$ as the ground-truth part template.

**Learning the sub-AOG for each part template:** Based on Equation (\[eqn:subAOG\]), we can mine the sub-AOG for each part template $v$, using this template's annotations on the images $I\in{\bf I}_{v}\subset{\bf I}^{\textrm{ant}}$, as follows. 1) We first enumerate all possible latent patterns corresponding to the $k$-th CNN conv-layer ($k=1,\ldots,K$), by sampling all pattern locations *w.r.t.* $D_{u}$ and $\overline{\bf p}_{u}$. 2) Then, we sequentially compute $\Lambda_{u}$ and $Score(u)$ for each latent pattern. 3) Finally, we sequentially select a total of $n_{k}$ latent patterns; in each step, we select $\hat{u}={\arg\!\max}_{u\in Child(v)}\Delta{\bf L}_{v}$, *i.e.* we select the latent patterns with top-ranked values of $Score(u)$ as children of part template $v$.
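Because ${\bf L}_{v}$ is additive over the selected patterns, this three-step procedure reduces to a score-and-rank loop. Below is a sketch under the assumption that a caller supplies `score_u` implementing $Score(u)$ for a candidate location; enumeration over channels $D_{u}$ and centers $\overline{\bf p}_{u}$ is shown for a single conv-layer.

```python
import numpy as np

def mine_latent_patterns(score_u, num_channels, map_size, n_k):
    """Greedily pick the n_k top-scoring latent-pattern locations in one conv-layer.

    score_u:      callable (channel, (row, col)) -> Score(u); assumed given.
    num_channels: number of filters in the conv-layer.
    map_size:     (height, width) of the conv-layer's feature map.
    """
    h, w = map_size
    candidates = [(d, (r, c)) for d in range(num_channels)
                  for r in range(h) for c in range(w)]       # step 1: enumerate
    scored = [(score_u(d, p), d, p) for d, p in candidates]  # step 2: score
    scored.sort(key=lambda t: t[0], reverse=True)            # step 3: rank
    return [(d, p) for _, d, p in scored[:n_k]]

# Toy run with a random scoring function.
rng = np.random.default_rng(1)
print(mine_latent_patterns(lambda d, p: rng.random(), 4, (6, 6), n_k=3))
```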
Learning via active question-answering {#sec:QA}
--------------------------------------

We propose a new learning strategy, *i.e.* active QA, which is more efficient than conventional batch learning. The QA-based learning algorithm actively detects blind spots in the feature representations of the model and asks questions to obtain supervision. In general, blind spots in the AOG include 1) neural-activation patterns in the CNN that have not been encoded in the AOG and 2) inaccurate latent patterns in the AOG. The unmodeled neural patterns potentially reflect new part templates, while inaccurate latent patterns correspond to sub-optimized part templates. As an interpretable representation of object parts, the AOG can represent these blind spots using linguistic descriptions. We design five types of answers to project these blind spots onto semantic details of objects. Our method selects and asks a series of questions, and we collect answers from human users in order to incrementally grow new AOG branches that explain new part templates and to refine the existing AOG branches of part templates.

Our approach repeats the following QA process. As shown in Fig. \[fig:QA\], at first, we use the current AOG to localize object parts on all unannotated objects of a category. Based on the localization results, the algorithm selects and asks about the object $I$ from which the AOG can obtain the most information gain. A question $q=(I,\hat{v},\Lambda_{\hat{v}})$ requires people to determine whether our approach predicts the correct part template $\hat{v}$ and parses a correct region $\Lambda_{top}=\Lambda_{\hat{v}}$ for the part. Our method expects one of the following answers. **Answer 1:** the part detection is correct. **Answer 2:** the current AOG predicts the correct part template in the parse graph, but it does not accurately localize the part. **Answer 3:** neither the part template nor the part location is correctly estimated. **Answer 4:** the part belongs to a new part template. **Answer 5:** the target part does not appear in the image. In particular, upon receiving Answers 2–4, our method asks people to annotate the target part; upon receiving Answer 3, our method additionally asks people to specify the part template and whether the object is flipped. Our method uses the new part annotations to refine (for Answers 2–3) or create (for Answer 4) the AOG branch of the annotated part template based on Equation (\[eqn:LossAOG\]).
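The five answer types translate into a small dispatch routine. The sketch below only illustrates this control flow; `ask_for_box`, `ask_for_template`, and the two update routines are hypothetical stand-ins for the human interface and for the optimization of Equation (\[eqn:LossAOG\]).

```python
from types import SimpleNamespace

def handle_answer(answer_type, aog, image,
                  ask_for_box, ask_for_template,
                  refine_branch, create_branch):
    """Dispatch on the five QA answer types (all callbacks assumed given)."""
    if answer_type in (1, 5):       # correct detection / part absent: no update
        return
    box = ask_for_box(image)        # Answers 2-4 all yield a new part box
    if answer_type == 2:            # right template, wrong location
        refine_branch(aog, aog.predicted_template, box)
    elif answer_type == 3:          # wrong template and wrong location
        template, flipped = ask_for_template(image)
        refine_branch(aog, template, box, flipped=flipped)
    elif answer_type == 4:          # a new part template
        create_branch(aog, box)

# Minimal demo with no-op callbacks.
aog = SimpleNamespace(predicted_template="head-frontal")
handle_answer(2, aog, image=None,
              ask_for_box=lambda img: (10, 10, 50, 50),
              ask_for_template=lambda img: ("head-side", False),
              refine_branch=lambda *a, **k: None,
              create_branch=lambda *a, **k: None)
```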
### Question ranking

The core of the QA-based learning is to select the sequence of questions that reduces the uncertainty of part localization the most. Therefore, in this section, we design a loss function that measures the incompatibility between the AOG and the real part appearances in object samples. Our approach predicts the potential gain (decrease of the loss) of asking about each object. Objects with large gains usually correspond to CNN neural activations that are not yet well explained. Note that annotating a part in one object may also help localize parts on other objects, thereby leading to a large gain. Thus, we use a greedy strategy to select a sequence of questions $\Omega=\{q_{i}|i=1,2,\ldots\}$, *i.e.* asking about the object that produces the most gain in each step.

For each object image $I$, we use ${\bf P}(y|I)$ and ${\bf Q}(y|I)$ to denote the prior distribution and the estimated distribution of an object part on $I$, respectively. A label $y\in\{+1,-1\}$ indicates whether $I$ contains the target part. The AOG estimates the probability of object $I$ containing the target part as ${\bf Q}(y=+1|I)=\frac{1}{Z}\exp[\beta S_{top}]$, where $Z$ and $\beta$ are scaling parameters (see Section \[sec:implement\] for details), and ${\bf Q}(y=-1|I)=1-{\bf Q}(y=+1|I)$. Let ${\bf I}^{\textrm{ant}}$ denote the set of objects that have been asked about during previous QA. For each asked object $I\in{\bf I}^{\textrm{ant}}$, we set its prior distribution to ${\bf P}(y=+1|I)=1$ if $I$ contains the target part, and ${\bf P}(y=+1|I)=0$ otherwise. For each un-asked object $I\in{\bf I}^{\textrm{unant}}$, we set its prior distribution based on the statistics of previous answers, ${\bf P}(y=+1|I)=\mathbb{E}_{I'\in{\bf I}^{\textrm{ant}}}{\bf P}(y=+1|I')$. Therefore, we formulate the loss function as the KL divergence between the prior distribution ${\bf P}$ and the estimated distribution ${\bf Q}$:
$$\begin{split}
{Loss}^{\textrm{QA}}={\bf KL}({\bf P}\Vert{\bf Q})=&\sum_{I\in{\bf I}^{\textrm{obj}}}\sum_{y}{\bf P}(y,I)\log\frac{{\bf P}(y,I)}{{\bf Q}(y,I)}\\
=&\lambda\sum_{I\in{\bf I}^{\textrm{obj}}}\sum_{y}{\bf P}(y|I)\log\frac{{\bf P}(y|I)}{{\bf Q}(y|I)}
\end{split}$$
where ${\bf P}(y,I)={\bf P}(y|I)P(I)$, ${\bf Q}(y,I)={\bf Q}(y|I)P(I)$, and $\lambda=P(I)=1/\vert{\bf I}^{\textrm{obj}}\vert$ is a constant prior probability for $I$. We keep modifying both the prior distribution ${\bf P}$ and the estimated distribution ${\bf Q}$ during the QA process.

Let the algorithm select an unannotated object $\tilde{I}\in{\bf I}^{\textrm{unant}}={\bf I}^{\textrm{obj}}\setminus{\bf I}^{\textrm{ant}}$ and ask people to label its part. The annotation would encode the part representation of $\tilde{I}$ into the AOG and significantly change the estimated distribution for objects that are similar to $\tilde{I}$. For each object $I'\in{\bf I}^{\textrm{obj}}$, we predict its estimated distribution after a new part annotation as
$$\begin{split}
\tilde{\bf Q}(y=+1|I')=&\frac{1}{Z}\exp[\beta S_{top,I'}^{\textrm{new}}|_{\tilde{I}}]\\
S_{top,I'}^{\textrm{new}}|_{\tilde{I}}=&S_{top,I'}+\Delta S_{top,\tilde{I}}\,e^{-\alpha\cdot dist(I',\tilde{I})}
\end{split}
\label{eqn:predict}$$
where $S_{top,I'}$ indicates the current AOG's inference score $S_{top}$ on image $I'$, and $S_{top,I'}^{\textrm{new}}|_{\tilde{I}}$ denotes the predicted inference score on $I'$ when people annotate $\tilde{I}$. We assume that if object $I'$ is similar to object $\tilde{I}$, the inference score of $I'$ will increase by an amount similar to that of $\tilde{I}$. $\Delta S_{top,\tilde{I}}=\mathbb{E}_{I\in{\bf I}^{\textrm{ant}}}S_{top,I}-S_{top,\tilde{I}}$ denotes the score increase of $\tilde{I}$, and $\alpha$ is a scalar weight. We formulate the appearance distance between $I'$ and $\tilde{I}$ as $dist(I',\tilde{I})=1-\frac{\phi(I')^{T}\phi(\tilde{I})}{\vert\phi(I')\vert\cdot\vert\phi(\tilde{I})\vert}$, where $\phi(I')={\bf M}\,{\bf f}_{I'}$. ${\bf f}_{I'}$ denotes the features of $I'$ at the top conv-layer after the ReLU operation, and ${\bf M}$ is a diagonal matrix representing the prior reliability of each feature dimension[^2]. In addition, if $I'$ and $\tilde{I}$ are assigned different part templates by the current AOG, we set an infinite distance between $I'$ and $\tilde{I}$, which achieves better performance.
Based on Equation (\[eqn:predict\]), we can predict the change of the KL divergence after the new annotation on $\tilde{I}$ as
$$\Delta{\bf KL}(\tilde{I})=\lambda\sum_{I\in{\bf I}^{\textrm{obj}}}\sum_{y}{\bf P}(y|I)\log\frac{\tilde{\bf Q}(y|I)}{{\bf Q}(y|I)}
\label{eqn:delta}$$
Thus, in each step, our method selects and asks about the object that decreases the KL divergence the most:
$$\hat{I}={\arg\!\max}_{I\in{\bf I}^{\textrm{unant}}}\Delta{\bf KL}(I)
\label{eqn:select}$$
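Under the simplification used in the implementation (see Section \[sec:implement\]), where ${\bf P}(y=-1|I)$ is ignored so that $Z$ and $\beta$ drop out, the KL reduction of Equation (\[eqn:delta\]) is proportional to $\sum_{I}{\bf P}(+|I)\,\Delta S_{top,\tilde{I}}\,e^{-\alpha\cdot dist(I,\tilde{I})}$. The sketch below implements that simplified selection rule; the feature re-weighting matrix ${\bf M}$ and the infinite distance across part templates are omitted, so treat it as an approximation.

```python
import numpy as np

def cosine_distance(f_a, f_b):
    """dist(I', I~) = 1 - cos(phi(I'), phi(I~)), here on raw conv features."""
    denom = np.linalg.norm(f_a) * np.linalg.norm(f_b) + 1e-12
    return 1.0 - float(f_a @ f_b) / denom

def select_question(s_top, p_pos, asked, feats, alpha=4.0):
    """Pick the unasked object whose annotation reduces KL(P||Q) the most.

    s_top: inference scores S_top per object, shape (n,)
    p_pos: prior P(y=+1|I) per object, shape (n,)
    asked: set of indices already asked about (assumed non-empty here)
    feats: per-object feature vectors, shape (n, d)
    """
    n = len(s_top)
    mean_asked = np.mean([s_top[i] for i in asked])
    best_gain, best_j = -np.inf, None
    for j in range(n):
        if j in asked:
            continue
        delta_s = mean_asked - s_top[j]   # predicted score increase of object j
        gain = sum(p_pos[i] * delta_s *
                   np.exp(-alpha * cosine_distance(feats[i], feats[j]))
                   for i in range(n))
        if gain > best_gain:
            best_gain, best_j = gain, j
    return best_j

# Toy run over five objects, two of which were already asked about.
rng = np.random.default_rng(2)
print(select_question(rng.random(5), np.ones(5), {0, 1}, rng.random((5, 16))))
```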
**QA implementations:** In the beginning, for each object $I$, we initialize ${\bf P}(y=+1|I)=1$ and ${\bf Q}(y=+1|I)=0$. Then, our approach selects and asks about an object $\hat{I}$ based on Equation (\[eqn:select\]), and uses the answer to update ${\bf P}$. If a new object part is labeled during the QA process, we apply Equation (\[eqn:LossAOG\]) to update the AOG: if people label a new part template, our method grows a new AOG branch to encode this template; if people annotate a part for an existing part template, our method updates the corresponding AOG branch. Then, we compute the new distribution ${\bf Q}$ based on the updated AOG. In this way, the above QA procedure gradually grows the AOG.

Experiments
===========

Implementation details {#sec:implement}
----------------------

We used the 16-layer VGG network (VGG-16) [@VGG], which was pre-trained for object classification using 1.3M images in the ImageNet ILSVRC 2012 dataset [@ImageNet]. Then, for each testing category, we further fine-tuned the VGG-16 using object images of this category to classify target objects from random images. We selected the last nine conv-layers of the VGG-16 as valid conv-layers and extracted neural units from these conv-layers to build the AOG.

**Active question-answering:** Three parameters are involved in our active-QA method, *i.e.* $\alpha$, $\beta$, and $Z$. Because most objects of a category contain the target part, we ignored the small probability ${\bf P}(y=-1|I)$ in Equation (\[eqn:delta\]) to simplify the computation. As a result, $Z$ is eliminated in Equation (\[eqn:delta\]), and the constant weight $\beta$ does not affect the object-selection results in Equation (\[eqn:select\]). We set $\alpha=4.0$ in our experiments.

**Learning AOGs:** Multiple latent patterns corresponding to the same convolutional filter may have similar positions $\overline{\bf p}_{u}$, and their deformation ranges may highly overlap. Thus, within each small range of $\epsilon\times\epsilon$ in the filter's feature map, we selected the latent pattern with the highest $Score(u)$ and removed the other nearby patterns, to obtain a sparse AOG. Besides, for each part template $v$, we estimated $n_{k}$ latent patterns in the $k$-th conv-layer. We assumed that the scores of all latent patterns in the $k$-th conv-layer follow the distribution $Score(u)\sim\alpha\exp[-(\xi\cdot{rank})^{0.5}]+\gamma$, where $rank$ denotes the score rank of $u$. We set $n_{k}=\lceil0.5/\xi\rceil$, which yielded the best AOG.

Datasets
--------

Because the evaluation of part localization requires ground-truth annotations of part positions, we used the following three benchmark datasets to test our method: the PASCAL VOC Part dataset [@SemanticPart], the CUB200-2011 dataset [@CUB200], and the ILSVRC 2013 DET Animal-Part dataset [@CNNAoG]. Just like in [@SemanticPart; @CNNAoG], we selected animal categories, which prevalently contain non-rigid shape deformation, for testing. *I.e.* we selected six animal categories—*bird, cat, cow, dog, horse*, and *sheep*—from the PASCAL Part Dataset. The CUB200-2011 dataset contains 11.8K images of 200 bird species; we followed [@ActivePart; @CNNSemanticPart; @CNNAoG] and used all these images as a single bird category for learning. The ILSVRC 2013 DET Animal-Part dataset [@CNNAoG] contains part annotations for 30 animal categories among all the 200 categories of the ILSVRC 2013 DET dataset [@ImageNet].

![image](energyCurve.pdf){width="\linewidth"}

(Caption of Table \[tab:imgnet\]: the 2nd column shows the number of part annotations for training; the 3rd column indicates whether the baseline used all object-box annotations in the category to pre-fine-tune a CNN before learning the part (object-box annotations are more numerous than part annotations).)

Baselines
---------

We used the following thirteen baselines for comparison. The first two baselines were based on the Fast-RCNN [@FastRCNN]; for a fair comparison, we fine-tuned the Fast-RCNN with a loss for detecting a single class/part. The first baseline, namely *Fast-RCNN (1 ft)*, fine-tuned the VGG-16 using part annotations to detect parts on well-cropped objects. To enable a fairer comparison, the second baseline used two-stage fine-tuning, namely *Fast-RCNN (2 fts)*: it first fine-tuned the VGG-16 using the numerous object-box annotations in the target category, and then fine-tuned the VGG-16 using a few part annotations. The third baseline was proposed in [@CNNSemanticPart], namely *CNN-PDD*; *CNN-PDD* selected a filter in a CNN (pre-trained using the ImageNet ILSVRC 2012 dataset) to represent the part on well-cropped objects. We then slightly extended [@CNNSemanticPart] as the fourth baseline, *CNN-PDD-ft*, which first fine-tuned the VGG-16 using object bounding boxes and then applied [@CNNSemanticPart] to learn object parts. The strongly supervised DPM (*SS-DPM-Part*) [@SSDPM] and the approach of [@PLDPM] (*PL-DPM-Part*) were the fifth and sixth baselines; these methods learned DPMs for part localization. The graphical model proposed in [@SemanticPart] was selected as the seventh baseline, namely *Part-Graph*. The eighth baseline was the interactive learning method for part localization [@ActivePart] (*Interactive-DPM*). When training samples are scarce, "simple" methods are usually less sensitive to the over-fitting problem; thus, we designed the next four baselines as follows. We first fine-tuned the VGG-16 using object bounding boxes, and collected image patches from the cropped objects based on selective search [@SelectiveSearch]. We used the VGG-16 to extract *fc7* features from these image patches. The baselines *fc7+linearSVM* and *fc7+RBF-SVM* used a linear SVM and an RBF-SVM, respectively, to detect object parts. The baselines *VAE+linearSVM* and *CoopNet+linearSVM* used features of the VAE network [@VAE] and the CoopNet [@CoopNet], respectively, instead of *fc7* features, for part detection. The last baseline [@CNNAoG] learned AOGs without QA (*AOG w/o QA*); for this baseline, we randomly selected objects and annotated their parts for training. Both object annotations and part annotations were used to learn the models in all thirteen baselines (including those without fine-tuning). *Fast-RCNN (1 ft)* and *CNN-PDD* used the cropped objects as the input of the CNN; *SS-DPM-Part*, *PL-DPM-Part*, *Part-Graph*, and *Interactive-DPM* used object boxes and part boxes to learn models.
*CNN-PDD-ft*, *Fast-RCNN (2 fts)*, and the methods based on *fc7* features used object bounding boxes for fine-tuning.

Evaluation metric
-----------------

As discussed in [@SemanticPart; @CNNAoG], a fair evaluation of part localization requires removing the factors of object detection. Thus, we used ground-truth object bounding boxes to crop objects as testing images. Given an object image, some competing methods (*e.g.* *Fast-RCNN (1 ft)*, *Part-Graph*, and *SS-DPM-Part*) estimate several bounding boxes for the part with different confidences. We followed [@CNNSemanticPart; @SemanticPart; @ObjectDiscoveryCNN_1; @CNNAoG] and took the most confident bounding box per image as the part-localization result. Given the part-localization results of a category, we applied the *normalized distance* [@CNNSemanticPart] and the *percentage of correctly localized parts* (PCP) [@fineGrained1; @fineGrained2; @fineGrained3] to evaluate localization accuracy. We measured the distance between the predicted part center and the ground-truth part center, and normalized this distance by the diagonal length of the object to obtain the normalized distance. For the PCP, we used the typical criterion of "$IoU\geq0.5$" [@FastRCNN] to identify correct part localizations.
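Both metrics are straightforward to compute. The sketch below is one standard implementation, assuming axis-aligned boxes given as (x1, y1, x2, y2).

```python
import numpy as np

def normalized_distance(pred_center, gt_center, object_diagonal):
    """Center-to-center distance divided by the object's diagonal length."""
    d = np.linalg.norm(np.asarray(pred_center, float) - np.asarray(gt_center, float))
    return d / object_diagonal

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def correctly_localized(pred_box, gt_box, threshold=0.5):
    """PCP criterion: a localization counts as correct if IoU >= 0.5."""
    return iou(pred_box, gt_box) >= threshold

print(normalized_distance((50, 40), (56, 48), object_diagonal=200.0))
print(correctly_localized((10, 10, 60, 60), (20, 15, 70, 65)))
```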
(Captions of Tables \[tab:VOC\] and \[tab:cub200\]: see Table \[tab:imgnet\] for the introduction of the 2nd and 3rd columns; the 3rd and 4th columns show the number of part annotations and the average number of questions for training; a further column indicates whether the baseline used all object annotations (more numerous than part annotations) in the category to pre-fine-tune a CNN before learning the part.)

Experimental results
--------------------

We learned AOGs for the head, the neck, and the nose/muzzle/beak parts of the six animal categories in the Pascal VOC Part dataset. For the ILSVRC 2013 DET Animal-Part dataset and the CUB200-2011 dataset, we learned an AOG for the head part[^3] of each category, because all categories in these two datasets contain the head part. We did not train the human annotators; shape differences between two part templates were often very vague, so an annotator could assign a part to either part template.

![image](results.pdf){width="\linewidth"}

![image](visualization_QA.pdf){width="\linewidth"}

![Image patches corresponding to different latent patterns.[]{data-label="fig:patches"}](patches.pdf){width="0.8\linewidth"}

Table \[tab:stat\] shows how the AOG grew as people annotated more parts during the QA process. Given the AOGs learned for the PASCAL VOC Part dataset, we computed the average number of children of each node in different AOG layers. The AOG mainly grew by adding new branches to represent new part templates; the refinement of an existing AOG branch did not significantly change the node number of the AOG.

Fig. \[fig:energyCurve\] analyzes the activation states of latent patterns in AOGs that were learned with different numbers of part annotations. Given a testing image $I$ for part parsing, we only focused on the inferred latent patterns and neural units, *i.e.* the latent patterns and their inferred neural units under the selected part template. Let ${\bf V}$ and ${\bf V'}\subset{\bf V}$ denote all units in a specific conv-layer and the inferred units, respectively. $a_{v}$ denotes the activation score of $v\in{\bf V}$ after the ReLU operation, normalized by the average activation level of $v$'s corresponding feature map *w.r.t.* different images. Thus, in Fig. \[fig:energyCurve\](left), we computed the ratio of the inferred activation energy as $\frac{\sum_{v\in{\bf V'}}a_{v}}{\sum_{v\in{\bf V}}a_{v}}$. For each inferred latent pattern $u$, $a_{u}$ denotes the activation score of its selected neural unit[^4]. Fig. \[fig:energyCurve\](middle) measures the relative magnitude of the inferred activations, computed as $\frac{\mathbb{E}_{u\in{\bf U}}[a_{u}]}{\mathbb{E}_{v\in{\bf V}}[a_{v}]}$. Fig. \[fig:energyCurve\](right) shows the ratio of latent patterns that are strongly activated; we used a threshold $\tau=\mathbb{E}_{v\in{\bf V}}[a_{v}]$ to identify strong activations, *i.e.* we computed the activation ratio as $\mathbb{E}_{u\in{\bf U}}[{\bf 1}(a_{u}>\tau)]$.
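These three statistics reduce to simple ratios over activation arrays. A sketch follows, treating the inferred units ${\bf V'}$ and the patterns' selected units ${\bf U}$ interchangeably, which ignores the footnoted case of two patterns sharing a unit.

```python
import numpy as np

def activation_statistics(a_all, a_inferred):
    """The three quantities plotted in Fig. [fig:energyCurve].

    a_all:      normalized activations a_v of all units V in a conv-layer, shape (n,)
    a_inferred: activations a_u of the units selected by inferred patterns, shape (m,)
    """
    energy_ratio = a_inferred.sum() / a_all.sum()          # left panel
    relative_magnitude = a_inferred.mean() / a_all.mean()  # middle panel
    tau = a_all.mean()                                     # strong-activation threshold
    strong_ratio = float(np.mean(a_inferred > tau))        # right panel
    return energy_ratio, relative_magnitude, strong_ratio

# Toy run on synthetic activations.
rng = np.random.default_rng(3)
a_all = np.abs(rng.standard_normal(512))
print(activation_statistics(a_all, a_all[:40] + 0.5))
```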
Curves in Fig. \[fig:energyCurve\] are reported as the average performance over images in the CUB200-2011 dataset.

Fig. \[fig:visualization\] visualizes latent patterns in the AOG based on the technique of [@FeaVisual]. More specifically, Fig. \[fig:patches\] lists image patches with high inference scores inferred by different latent patterns in the AOG; it shows that each latent pattern consistently corresponds to a specific part shape across different images. Fig. \[fig:results\] shows part-localization results based on AOGs.

Tables \[tab:imgnet\], \[tab:VOC\], and \[tab:cub200\] compare the part-localization performance of the different baselines on the three benchmark datasets using the normalized-distance metric. Tables \[tab:VOC\] and \[tab:cub200\] show both the number of part annotations and the number of questions. Fig. \[fig:curve\] shows the performance of localizing the head part on objects in the PASCAL VOC Part Dataset when people annotated different numbers of parts for training. Table \[tab:pcp\] lists the part-localization performance evaluated by the PCP metric. In particular, the method *Ours+fastRCNN* combined our method and the fast-RCNN to refine part-localization results[^5]. Our method learned AOGs with only about $1/6$–$1/2$ of the part annotations, yet exhibited performance superior to the second-best baseline.

![Part localization performance on the Pascal VOC Part dataset.[]{data-label="fig:curve"}](curve.pdf){width="\linewidth"}

Justification of the methodology
--------------------------------

We have three reasons to explain the good performance of our method. First, **generic information**: the latent patterns in the AOG were pre-fine-tuned using massive object images of a category, instead of being learned from a few part annotations. Thus, these patterns reflected generic part appearances and did not over-fit to a few part annotations. Second, **less model drift:** instead of learning new CNN parameters, our method used the limited part annotations only to mine the related patterns that represent the part concept. In addition, during active QA, Equation (\[eqn:predict\]) usually selected objects with common poses for QA, *i.e.* objects sharing common latent patterns with many other objects. Thus, the learned AOG suffered less from the model-drift problem. Third, **high QA efficiency:** our QA process balanced both the commonness and the accuracy of a part template in Equation (\[eqn:predict\]). In early QA steps, our approach was prone to asking about new part templates, because objects with un-modeled part appearance usually had low inference scores. In later QA steps, common part appearances had been modeled, and our method gradually shifted to asking about objects belonging to existing part templates, in order to refine the AOG. Our method did not waste much labor on labeling objects that had already been well modeled or that had strange appearances.

Summary and discussion
======================

In this paper, we have proposed a method that bridges and solves the following three crucial issues in computer vision simultaneously.

- Removing noisy representations in conv-layers of a CNN and using an AOG model to reveal the semantic hierarchy of objects hidden in the CNN.

- Enabling people to communicate directly with neural representations in the intermediate conv-layers of a CNN for model learning, based on the semantic representation of the AOG.

- Weakly-supervised transfer of object-part representations from a pre-trained CNN, to model object parts at the semantic level and boost learning efficiency.

Our method incrementally mines object-part patterns from the conv-layers of a pre-trained CNN and uses an AOG to encode the mined semantic hierarchy. The AOG semanticizes neural units in the intermediate feature maps of the CNN by associating these units with semantic parts. We have proposed an active QA strategy to learn such an AOG model in a weakly-supervised manner. We have tested the proposed method on a total of 37 categories in three benchmark datasets. Our method outperformed the other baselines in the application of weakly-supervised part localization. For example, our method with 11 part annotations performed better than fast-RCNN with 60 part annotations on the Pascal VOC Part dataset in Fig. \[fig:curve\].

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported by ONR MURI project N00014-16-1-2007, DARPA XAI Award N66001-17-2-4029, and NSF IIS 1423305.

**Quanshi Zhang** received the B.S. degree in machine intelligence from Peking University, China, in 2009, and the M.S. and Ph.D. degrees from the Center for Spatial Information Science, the University of Tokyo, Japan, in 2011 and 2014, respectively. In 2014, he joined the University of California, Los Angeles, as a post-doctoral associate. He is now an associate professor at Shanghai Jiao Tong University. His research interests include computer vision, machine learning, and robotics.

**Ruiming Cao** received the B.S. degree in computer science from the University of California, Los Angeles, in 2017. He is now a master's student at the University of California, Los Angeles. His research mainly focuses on computer vision.

**Ying Nian Wu** received a Ph.D. degree from Harvard University in 1996. He was an Assistant Professor at the University of Michigan between 1997 and 1999 and an Assistant Professor at the University of California, Los Angeles, between 1999 and 2001. He became an Associate Professor at the University of California, Los Angeles, in 2001, and he has been a Professor there since 2006. His research interests include statistics, machine learning, and computer vision.

**Song-Chun Zhu** received a Ph.D. degree from Harvard University, and is a professor with the Department of Statistics and the Department of Computer Science at UCLA. His research interests include computer vision, statistical modeling and learning, cognition and AI, and visual arts. He has received a number of honors, including the Marr Prize in 2003 with Z. Tu et al. for image parsing, the Aggarwal Prize from the International Association for Pattern Recognition in 2008, twice Marr Prize honorary nominations in 1999 for texture modeling and in 2007 for object modeling with Y.N.
Wu et al., a Sloan Fellowship in 2001, the US NSF CAREER Award in 2001, and the US ONR Young Investigator Award in 2001. He is a Fellow of the IEEE.

And-Or graph representations {#and-or-graph-representations-1 .unnumbered}
============================

Parameters for latent patterns {#parameters-for-latent-patterns .unnumbered}
------------------------------

We use the notation ${\bf p}_{u}$ to denote the central position of an image region $\Lambda_{u}$. For simplification, all position variables ${\bf p}_{u}$ are measured in image coordinates, by propagating the position of $\Lambda_{u}$ to the image plane. Each latent pattern $u$ is defined by its location parameters $\{L_{u},D_{u},\overline{\bf p}_{u},\Delta{\bf p}_{u}\}\subset{\boldsymbol\theta}$, where ${\boldsymbol\theta}$ is the set of AOG parameters. This means that a latent pattern $u$ uses a square within the $D_{u}$-th channel of the $L_{u}$-th conv-layer's feature map as its deformation range, with the center position of the square given by $\overline{\bf p}_{u}$. When latent pattern $u$ is extracted from the $k$-th conv-layer, $u$ has the fixed value $L_{u}=k$. $\Delta{\bf p}_{u}$ denotes the average displacement between $u$ and $u$'s parent part template $v$ among various images, and it is used to compute $S^{\textrm{inf}}(\Lambda_{u}|\Lambda_{v})$. Given the parameter $\overline{\bf p}_{u}$, the displacement $\Delta{\bf p}_{u}$ can be estimated as
$$\Delta{\bf p}_{u}=\overline{\bf p}^{*}_{v}-\overline{\bf p}_{u}$$
where $\overline{\bf p}^{*}_{v}$ denotes the average position of all ground-truth parts annotated for part template $v$. As a result, for each latent pattern $u$, we only need to learn its channel $D_{u}\in{\boldsymbol\theta}$ and its central position $\overline{\bf p}_{u}\in{\boldsymbol\theta}$.

Scores of terminal nodes {#scores-of-terminal-nodes .unnumbered}
------------------------

The inference score for each terminal node $v^{\textrm{unt}}$ under a latent pattern $u$ is formulated as
$$\begin{aligned}
&S_{v^{\textrm{unt}}}=S_{v^{\textrm{unt}}}^{\textrm{rsp}}+S_{v^{\textrm{unt}}}^{\textrm{loc}}+S_{v^{\textrm{unt}}}^{\textrm{pair}}\\
&S_{v^{\textrm{unt}}}^{\textrm{rsp}}=\left\{\begin{array}{ll}\lambda^{\textrm{rsp}}X(v^{\textrm{unt}}),& X(v^{\textrm{unt}})>0\\ \lambda^{\textrm{rsp}}S_{none},& X(v^{\textrm{unt}})\leq0\end{array}\right.\\
&S_{v^{\textrm{unt}}}^{\textrm{pair}}=-\lambda^{\textrm{pair}}\underset{u_{\textrm{upper}}\in\textrm{Neighbor}(u)}{\mathbb{E}}\Vert[{\bf p}_{v^{\textrm{unt}}}-{\bf p}_{u_{\textrm{upper}}}]-[\overline{\bf p}_{u_{\textrm{upper}}}-\overline{\bf p}_{u}]\Vert\end{aligned}$$
The score $S_{v^{\textrm{unt}}}$ consists of the following three terms. 1) $S_{v^{\textrm{unt}}}^{\textrm{rsp}}$ denotes the response value of the unit $v^{\textrm{unt}}$ when we input image $I$ into the CNN; $X(v^{\textrm{unt}})$ denotes the normalized response value of $v^{\textrm{unt}}$, and $S_{none}=-3$ is set for non-activated units. 2) When the parent $u$ selects $v^{\textrm{unt}}$ as its location inference (*i.e.* $\Lambda_{u}\leftarrow\Lambda_{v^{\textrm{unt}}}$), $S_{v^{\textrm{unt}}}^{\textrm{loc}}$ measures the deformation level between $v^{\textrm{unt}}$'s location ${\bf p}_{v^{\textrm{unt}}}$ and $u$'s ideal location $\overline{\bf p}_{u}$.
3) $S_{v^{\textrm{unt}}}^{\textrm{pair}}$ indicates the spatial compatibility between neighboring latent patterns: we model the pairwise spatial relationship between latent patterns in the upper conv-layer and those in the current conv-layer. For each $v^{\textrm{unt}}$ (with its parent $u$) in conv-layer $L_{u}$, we select the 15 nearest latent patterns in conv-layer $L_{u}+1$, *w.r.t.* $\Vert\overline{\bf p}_{u}-\overline{\bf p}_{u_{\textrm{upper}}}\Vert$, as the neighboring latent patterns. We set the constant weights $\lambda^{\textrm{rsp}}=1.5$, $\lambda^{\textrm{loc}}=1/3$, $\lambda^{\textrm{pair}}=10.0$, $\lambda^{\textrm{unant}}=5.0$, and $\lambda^{\textrm{close}}=0.4$ for all categories. Based on the above design, we first infer the latent patterns corresponding to high conv-layers, and use the inference results to select units in low conv-layers. During the learning of AOGs, we define $S^{\textrm{unant}}_{u}=S_{\hat{v}^{\textrm{unt}}}^{\textrm{rsp}}+S_{\hat{v}^{\textrm{unt}}}^{\textrm{loc}}$ to measure the latent-pattern-level inference score in Equation (5), where $\hat{v}^{\textrm{unt}}$ denotes the neural unit assigned to $u$.

Scores of AND nodes {#scores-of-and-nodes .unnumbered}
-------------------

$$S^{\textrm{inf}}(\Lambda_{u}|\Lambda_{v})=-\lambda^{\textrm{inf}}\min\{\Vert{\bf p}(\Lambda_{u})+\Delta{\bf p}_{u}-{\bf p}(\Lambda_{v})\Vert^2,d^2\}$$

where we set $d=37$ pixels and $\lambda^{\textrm{inf}}=5.0$.

[^1]: Because the CNN has demonstrated superior performance in object detection, we assume that the target object can be well detected by the pre-trained CNN. As in [@SemanticPart], we regard object detection and part localization as two separate processes for evaluation. Thus, to simplify the learning scenario, we crop $I$ to contain only the object, resize it to the input size of the CNN, and focus exclusively on the part-localization task.

[^2]: ${\bf M}_{ii}\propto\exp[\mathbb{E}_{I\in{\bf I}}S_{v^{\textrm{unt}}_{i}}]$, where $v^{\textrm{unt}}_{i}$ is the neural unit corresponding to the $i$-th element of ${\bf f}_{I'}$.

[^3]: It is the "forehead" part for birds in the CUB200-2011 dataset.

[^4]: Two latent patterns may select the same neural unit.

[^5]: We used the part boxes annotated during the QA process to learn a fast-RCNN for part detection. Given the inference result $\Lambda_{v}$ of part template $v$ on image $I$, we define a new inference score for localization refinement: $S_{v}^{\textrm{new}}(\Lambda_{v}^{\textrm{new}})=S_{v}+\lambda_1\Phi(\Lambda^{\textrm{new}}_{v})+\lambda_2\frac{\Vert{\bf p}_{v}-{\bf p}_{v}^{\textrm{new}}\Vert}{2\sigma^2}$, where $\sigma=70$ pixels, $\lambda_1=5$, and $\lambda_2=10$. $\Phi(\Lambda^{\textrm{new}}_{v})$ denotes the fast-RCNN's detection score for the patch $\Lambda^{\textrm{new}}_{v}$.
{ "pile_set_name": "ArXiv" }
---
abstract: 'Multileaved comparison methods generalize interleaved comparison methods to provide a scalable approach for comparing ranking systems based on regular user interactions. Such methods enable the increasingly rapid research and development of search engines. However, existing multileaved comparison methods that provide reliable outcomes do so by degrading the user experience during evaluation. Conversely, current multileaved comparison methods that maintain the user experience cannot guarantee correctness. Our contribution is two-fold. First, we propose a theoretical framework for systematically comparing multileaved comparison methods using the notions of *considerateness*, which concerns maintaining the user experience, and *fidelity*, which concerns reliable correct outcomes. Second, we introduce a novel multileaved comparison method, , that performs comparisons based on document-pair preferences, and prove that it is *considerate* and has *fidelity*. We show empirically that, compared to previous multileaved comparison methods, is more *sensitive* to user preferences and *scalable* with the number of rankers being compared.'
author:
- Harrie Oosterhuis
- Maarten de Rijke
bibliography:
- 'cikm2017-lambda-multileave.bib'
title: |
  Sensitive and Scalable Online Evaluation
  with Theoretical Guarantees
---

**Acknowledgments.** This research was supported by Ahold Delhaize, Amsterdam Data Science, the Bloomberg Research Grant program, the Criteo Faculty Research Award program, the Dutch national program COMMIT, Elsevier, the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement nr 312827 (VOX-Pol), the Microsoft Research Ph.D. program, the Netherlands Institute for Sound and Vision, the Netherlands Organisation for Scientific Research (NWO) under project nrs 612.001.116, HOR-11-10, CI-14-25, 652.002.001, 612.001.551, 652.001.003, and Yandex. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
{ "pile_set_name": "ArXiv" }
A man was found hanging from an overpass in an apparent suicide on Friday morning in New York's Central Park. The unnamed 45-year-old man was spotted by a passer-by at the park around 5:45 a.m. The man was reportedly hanging from a long cord with a bag around his head at the Winterdale Arch, located in the northwest area of the park at Central Park West and West 82nd Street. "He hung himself. Right off the railing at the edge of the arch," a parks worker told the New York Post. "I saw a long orange cord. He had tied a plastic bag around his head, too. … It was awful to see." Police also reportedly found a suicide note with the body but chose not to publicize its contents. The identity of the man was not revealed, pending notification of kin. An autopsy will be performed to determine the exact cause of his death.
{ "pile_set_name": "OpenWebText2" }