11 and 168. 21 Calculate the greatest common factor of 57876 and 1428456. 636 What is the highest common divisor of 18517760 and 8160? 2720 What is the greatest common factor of 3160 and 78120? 40 Calculate the greatest common factor of 200 and 2816304. 8 What is the greatest common factor of 1550 and 2372825? 25 Calculate the greatest common factor of 6520 and 1536112. 1304 What is the greatest common factor of 51610026 and 488? 122 What is the greatest common factor of 16072 and 242648? 392 What is the greatest common divisor of 35524265 and 7840? 245 What is the greatest common divisor of 5292 and 35127? 27 What is the greatest common divisor of 9671106 and 6? 6 What is the greatest common factor of 47664 and 17962708? 1324 What is the highest common factor of 49260510 and 180? 90 What is the greatest common divisor of 93660 and 251195? 35 What is the highest common factor of 714071 and 4559? 47 Calculate the greatest common divisor of 906373 and 169. 13 Calculate the highest common factor of 5390 and 145600. 70 Calculate the highest common factor of 12540 and 3175470. 570 Calculate the greatest common divisor of 16 and 121892172. 4 What is the greatest common factor of 648 and 11877273? 81 Calculate the highest common factor of 52260 and 7236. 804 Calculate the greatest common factor of 1842358806 and 9. 9 Calculate the greatest common factor of 1215 and 12689595. 135 What is the greatest common factor of 196485 and 14045? 5 What is the highest common factor of 1073273 and 6256125? 5561 What is the highest common factor of 33183 and 77382? 27 What is the highest common divisor of 368203 and 59279? 187 What is the greatest common divisor of 42039879 and 1218? 609 What is the highest common divisor of 12075866 and 88? 22 What is the greatest common factor of 3599 and 1396107? 61 What is the highest common divisor of 843195202 and 27736? 6934 Calculate the highest common divisor of 575 and 198526225. 575 Calculate the highest common factor of 4015963 and 280630. 
1477 Calculate the greatest common divisor of 3616 and 182029440. 3616 What is the highest common factor of 6762754 and 62? 2 Calculate the highest common factor of 3720304 and 17228. 236 Calculate the highest common divisor of 717623 and 44321. 943 What is the highest common divisor of 26912 and 6741920? 928 Calculate the greatest common factor of 254461659 and 645. 129 What is the highest common divisor of 11463542 and 1376? 86 Calculate the greatest common factor of 844033 and 26331. 131 What is the greatest common factor of 351457739 and 142? 71 Calculate the greatest common factor of 10731 and 19768035. 1533 What is the highest common factor of 3840 and 22845? 15 What is the highest common factor of 15023771 and 35? 7 What is the greatest common divisor of 16 and 690649384? 8 What is the greatest common divisor of 1593 and 81420? 177 Calculate the greatest common divisor of 2556922 and 17752. 634 Calculate the highest common factor of 325525121 and 33027. 11009 Calculate the greatest common divisor of 36805379 and 21203. 3029 What is the greatest common factor of 1094 and 106302886? 1094 Calculate the highest common factor of 21155 and 24467873. 4231 Calculate the highest common factor of 17096548 and 2065. 413 What is the greatest common factor of 38724080 and 48? 16 What is the highest common divisor of 1470 and 24133725? 735 Calculate the greatest common factor of 4159214 and 38. 38 Calculate the highest common divisor of 837 and 8585109. 837 Calculate the greatest common factor of 68 and 188151274. 34 What is the greatest common divisor of 23985 and 5283? 9 Calculate the greatest common divisor of 208 and 443824. 16 What is the greatest common factor of 107933600 and 800? 800 Calculate the greatest common divisor of 41778431 and 59. 59 What is the greatest common factor of 1552606 and 6094? 22 What is the greatest common factor of 211348 and 116? 4 Calculate the greatest common factor of 11428431 and 188865. 
4197 Calculate the highest common divisor of 543102 and 8920074. 1158 What is the highest common factor of 64 and 1450275376? 16 What is the greatest common factor of 2029861 and 128231? 397 Calculate the highest common divisor of 114 and 6096777. 57 Calculate the highest common divisor of 515762858 and 117. 13 Calculate the greatest common divisor of 190 and 9372434. 38 Calculate the highest common divisor of 67236882 and 60. 6 Calculate the greatest common divisor of 10499639 and 13. 1 What is the highest common divisor of 422 and 293091238? 422 What is the highest common factor of 1323 and 11701935? 1323 What is the highest common divisor of 62522130 and 165? 165 What is the greatest common divisor of 110 and 1981925? 55 What is the greatest common factor of 1266 and 29479443? 633 Calculate the highest common divisor of 3974586 and 3822. 294 Calculate the highest common factor of 1053247516 and 1868. 1868 Calculate the greatest common divisor of 71793 and 763803. 27 What is the highest common factor of 16068525 and 18645? 165 What is the highest common divisor of 5004 and 13394596? 556 What is the highest common divisor of 10183110 and 3248? 14 What is the highest common divisor of 2175 and 28081512? 87 What is the highest common divisor of 7152899 and 182? 13 What is the highest common factor of 63894 and 3036? 138 Calculate the greatest common divisor of 1063209 and 20094. 591 Calculate the greatest common divisor of 1827 and 2640421. 203 What is the highest common factor of 147240 and 11515804? 1636 What is the highest common factor of 114 and 185089526? 38 Calculate the greatest common divisor of 6 and 1877658. 6 Calculate the greatest common divisor of 1896 and 35750265. 237 What is the highest common divisor of 187560 and 3617640? 360 Calculate the highest common divisor of 65649697 and 113917. 6701 What is the greatest common factor of 17536 and 7818496? 128 Calculate the highest common divisor of 36777 and 69492499. 
943 What is the highest common factor of 30 and 1563190? 10 What is the greatest common factor of 16293260 and 3287? 19 Calculate the greatest common divisor of 13046 and 11578325. 6523 What is the greatest common divisor of 8992 and 164947? 281 Calculate the highest common factor of 349518 and 1794. 78 What is the greatest common divisor of 266 and 4359911? 19 What is the highest common divisor of 4186182 and 28210? 182 Calculate the highest common divisor of 42792750 and 266. 266 Calculate the highest common divisor of 240411200 and 341. 31 Calculate the highest common factor of 332869 and 81841. 367 What is the greatest common divisor of 296 and 6059009? 37 Calculate the highest common factor of 69882567 and 106. 53 Calculate the greatest common factor of 170578 and 101558. 986 Calculate the greatest common factor of 966 and 5608274. 322 Calculate the highest common divisor of 1530265645 and 505. 505 Calculate the greatest common divisor of 6116719 and 80801. 833 Calculate the greatest common factor of 168790 and 650. 10 Calculate the greatest common factor of 346818609 and 19. 19 What is the greatest common factor of 1059 and 38559? 3 What is the highest common divisor of 140 and 10749410? 70 Calculate the greatest common divisor of 63523741 and 2147. 113 What is the highest common factor of 6898345 and 697? 17 Calculate the greatest common factor of 1047 and 1112284289. 349 Calculate the greatest common divisor of 4041337 and 3140. 157 What is the highest common divisor of 2678 and 3357182? 206 What is the greatest common divisor of 1843515 and 1570017? 1731 Calculate the greatest common factor of 898805616 and 10848. 5424 What is the highest common factor of 4787966 and 24734? 298 Calculate the highest common divisor of 21 and 117482463. 21 What is the highest common divisor of 74093965 and 165? 55 What is the highest common factor of 1042846760 and 4? 4 What is the highest common divisor of 55852887 and 7108? 
1777 What is the greatest common divisor of 34398854 and 714? 238 What is the greatest common divisor of 2171 and 255009? 167 Calculate the highest common divisor of 174 and 8358757. 29 What is the highest common factor of 780 and 776994270? 390 What is the greatest common factor of 638 and 1211254? 22 What is the greatest common fa
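Every entry in the run of problems above is a plain greatest-common-divisor computation, so any of them can be spot-checked with Python's standard-library `math.gcd`. A minimal sketch, using three (a, b, answer) triples taken directly from the problems above:

```python
from math import gcd

# (a, b, expected answer) triples copied from the problems above.
problems = [
    (5292, 35127, 27),
    (3840, 22845, 15),
    (1059, 38559, 3),
]

for a, b, expected in problems:
    # math.gcd implements the Euclidean algorithm in C, so even the
    # nine-digit operands in this set check in microseconds.
    assert gcd(a, b) == expected, (a, b, expected)

print("all checks passed")
```

The same loop extends to the full problem set; only the triples listed here were verified by hand.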
2024-04-17T01:27:17.568517
https://example.com/article/9846
The effect of tachykinins on the conducting airways of the rat. We studied the bronchial effects of intravenously administered tachykinins in inbred rats. Substance P and related tachykinins caused a dose-dependent bronchoconstriction. The bronchial reactivity to substance P differed significantly between different inbred rat strains. Substance K, eledoisin and kassinin were more potent than substance P in causing bronchoconstriction. This suggests a predominance in the bronchi of SP-E receptors. The bronchial effects of substance P and eledoisin were largely inhibited by atropine and slightly enhanced by hexamethonium. In addition to a direct effect on airway smooth muscle, tachykinins interfere with the cholinergic airway innervation of the rat at the ganglionic and postganglionic level.
2024-05-08T01:27:17.568517
https://example.com/article/2997
Q: Sorting a list depending on every other element in python If I have a list with elements that follow the structure: [team1, points1, playedgames1, team2, points2, playedgames2, team3, points3, playedgames3] and so on. An example (with 3 teams): ls = ["Milan", 6, 2, "Inter", 3, 2, "Juventus", 5, 2] and I would like it to look like this: ["Inter", 3, 2, "Juventus", 5, 2, "Milan", 6, 2] and so on for more teams. As you can see, the list is now sorted by lowest points first. Essentially, it now is: [team2, points2, playedgames2, team3, points3, playedgames3, team1, points1, playedgames1] because points2 had the lowest value. So: can I sort the list by points while keeping the (team, points, played games) structure intact? Is this possible? The elements are retrieved from a text file. A: Here is one way: >>> ls = ["Milan", 6, 2, "Inter", 3, 2, "Juventus", 5, 2] >>> sorted([ls[i:i+3] for i in range(0, len(ls), 3)], key=lambda x: x[1]) [['Inter', 3, 2], ['Juventus', 5, 2], ['Milan', 6, 2]]
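The answer above returns a list of 3-element lists, while the asker wanted the original flat layout back. A small follow-up sketch (same data as the question, plus one flattening comprehension) closes that gap:

```python
ls = ["Milan", 6, 2, "Inter", 3, 2, "Juventus", 5, 2]

# Chunk into (team, points, playedgames) triples and sort by the
# points field (index 1 of each triple).
chunks = sorted((ls[i:i + 3] for i in range(0, len(ls), 3)),
                key=lambda c: c[1])

# Flatten the sorted triples back into the original single-list layout.
flat = [item for chunk in chunks for item in chunk]

print(flat)
# → ['Inter', 3, 2, 'Juventus', 5, 2, 'Milan', 6, 2]
```

Since the elements come from a text file, remember to convert the points and games fields to `int` when parsing, or the sort will compare strings instead of numbers.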
2024-05-11T01:27:17.568517
https://example.com/article/3563
{ "name": "vue-login", "version": "1.0.0", "description": "A Vue.js project", "author": "ykloveyxk <562872810@qq.com>", "private": true, "scripts": { "dev": "node build/dev-server.js", "build": "node build/build.js" }, "dependencies": { "vue": "^2.1.10", "vue-router": "^2.2.0" }, "devDependencies": { "autoprefixer": "^6.7.2", "axios": "^0.15.3", "babel-core": "^6.22.1", "babel-loader": "^6.2.10", "babel-plugin-transform-runtime": "^6.22.0", "babel-preset-env": "^1.1.8", "babel-preset-stage-2": "^6.22.0", "babel-register": "^6.22.0", "body-parser": "^1.16.1", "chalk": "^1.1.3", "compression": "^1.6.2", "config-lite": "^1.5.0", "connect-history-api-fallback": "^1.3.0", "cookie-parser": "^1.4.3", "copy-webpack-plugin": "^4.0.1", "css-loader": "^0.26.1", "element-ui": "^1.2.1", "eventsource-polyfill": "^0.9.6", "express": "^4.14.1", "extract-text-webpack-plugin": "^2.0.0-rc.3", "file-loader": "^0.10.0", "friendly-errors-webpack-plugin": "^1.1.3", "function-bind": "^1.1.0", "html-webpack-plugin": "^2.28.0", "http-proxy-middleware": "^0.17.3", "jsonwebtoken": "^7.3.0", "moment": "^2.17.1", "mongoose": "^4.8.4", "morgan": "^1.8.1", "node-sass": "^4.5.0", "objectid-to-timestamp": "^1.3.0", "opn": "^4.0.2", "optimize-css-assets-webpack-plugin": "^1.3.0", "ora": "^1.1.0", "rimraf": "^2.6.0", "sass-loader": "^6.0.2", "semver": "^5.3.0", "serve-favicon": "^2.4.0", "sha1": "^1.1.1", "url-loader": "^0.5.7", "vue-loader": "^11.0.0", "vue-style-loader": "^2.0.0", "vue-template-compiler": "^2.1.10", "vuex": "^2.1.3", "webpack": "^2.2.1", "webpack-bundle-analyzer": "^2.2.1", "webpack-dev-middleware": "^1.10.0", "webpack-hot-middleware": "^2.16.1", "webpack-merge": "^2.6.1" }, "engines": { "node": ">= 4.0.0", "npm": ">= 3.0.0" } }
2024-02-26T01:27:17.568517
https://example.com/article/6192
CommonWeal Journalist Yvonne Ridley says a Labour party led by Jeremy Corbyn would bring a much needed shake-up to the stale tit-for-tat of Prime Minister’s Questions THE much talked-about surge and purge of would-be voters in Labour’s leadership contest turns out to have been grossly exaggerated with infiltration by political rivals amounting to no more than 3,100, according to acting leader Harriet Harman who insists the “integrity” of next month’s election is beyond doubt. News reports of a mass Tory infiltration were simply manufactured and come as no surprise, really, for those of us who can now recognise Project Fear-style campaigns from 20 paces. The truth is, for Conservatives anyway, the last person they want challenging the Tory Prime Minister David Cameron from across the despatch box is ‘man of the people’ Jeremy Corbyn. David Cameron has had it too easy for too long since he moved in to Number 10. Labour leaders and their ministers have played a friendly game of verbal ping-pong across the House of Commons and after the faux rigors of Prime Minister’s Question Time, the political gap between those sitting on the front benches has become almost indistinguishable. If anything, PMQs has served to highlight the similarities and not the differences between the views of those who sit on the front benches regardless of their political stripe. While the politicians and the parliamentary sketch writers keep up the pretence of an argy bargy at PMQs, the voting public has recognised it for what it is – a sham. Last year’s Scottish referendum exposed the sham in full for what the London-based parliament had become … a cosy home for the Tories, Labour and Liberal Democrats who not only acted in union but promoted the idea of being ‘Better Together’. The General Election outcome earlier this year only caught the media off guard because the pollsters had got it so badly wrong.
Looking back, it was hardly a surprise in England that disillusioned voters either stayed at home or voted Tory given that the policies of all three main parties were pretty much indistinguishable. Why vote ToryLite when you could have the real thing? In Scotland the story was different as independence voters were still infuriated by the treachery they witnessed on the night of the referendum when Tory and Labour party members openly hugged, kissed and danced with joy, followed up a few hours later by a smug Cameron who basically ditched ‘the Vow’ by tying Scottish powers to the ‘English question’, or Evel as it is now called. Young voters, in particular, were incensed at the perceived treachery, and far from disengaging became more politically active while even dyed-in-the-wool unionists were uncomfortable with the all-party love-in. Suddenly the SNP was seen as a party voters could trust. The same people from working class communities who voted Ukip out of frustration are now flocking to listen to Jeremy Corbyn, who has an entirely different message but spoken in an easy to understand manner free from condescension and patronising tones. Not only that, he’s a really nice bloke, even his enemies concede. The Corbyn message is one of hope and compassion. Suddenly it’s fine to be kind and be human again and this is not going to play well across the House of Commons despatch box because it will expose the UK Government for what it is: a thoroughly wretched and unpleasant regime which is looked upon in despair by the rest of Europe which now calls England ‘the nasty country’. The ‘Jez We Can’ message has also captivated the youth who are now becoming as politically engaged as their Scottish counterparts.
In Corbyn, Cameron will be confronted by an effective opposition voice and he will no longer be able to claim he has the mandate of the people – although the SNP 56 regularly remind him of that when they collectively shout “not in Scotland” as he tries to tell the House he has the backing of the nation. If Corbyn wins – and nothing can be taken for granted – he will not only shake up the Labour party from within but he will wreck the cosy, incestuous little club which has developed on all of the front benches. The task of leading the Labour party in opposition will be helped by the presence of the SNP 56 who, like Corbyn, promote an anti nuke, anti-austerity and pro-welfare message. Who knows, even the rows of cynical and clinical professional Labour MPs who have invested their careers in the party may enjoy being part of an effective opposition. Once they see the smug look of satisfaction being wiped off the faces of the Conservatives by a truly ‘nice guy’ their resentment towards Corbynomics might subside. Respect is never given unconditionally, it is earned and I believe Corbyn will win round most MPs in his party. Those who resist should be honest enough to cross the rubicon and join the Tories. Just as the arrival of the SNP 56 has enlivened PMQs in recent months, a Corbyn-led opposition will make that 30-minute slot on Wednesdays in Westminster much more challenging for a government which has been allowed to become increasingly less accountable. No wonder the Eton mob fears a Corbyn victory. Picture courtesy of Tony Webster
2024-05-12T01:27:17.568517
https://example.com/article/9884
Disclosed are manure amendment compositions containing a dry or liquid mixture of (1) alum mud and at least one member selected from the group consisting of acid (e.g., sulfuric), bauxite, and mixtures thereof, or (2) bauxite and at least one member selected from the group consisting of acid (e.g., sulfuric), alum mud, and mixtures thereof, which when added to animal manure will form a treated manure product having improved environmental, health and/or animal performance. Methods of treating animal manure, said methods involving contacting said animal manure with an effective treatment amount of the above manure amendment composition or alum mud to form a treated waste product having an improved environmental, health and/or animal performance property. Methods for inhibiting ammonia volatilization from animal manure, said methods involving applying the above manure amendment composition or alum mud to animal manure in an amount sufficient to reduce the pH of said animal manure and thereby reduce ammonia volatilization from said poultry litter or animal manure for at least 24 hours compared to untreated animal manure. Methods for controlling atmospheric ammonia levels in an animal rearing facility, said methods involving applying the above manure amendment composition or alum mud to a portion of a manure receiving surface (e.g., floor such as a dirt floor) in said animal rearing facility in an amount sufficient to reduce the pH of said portion and thereby inhibit ammonia volatilization from said manure receiving surface for at least 24 hours to control atmospheric ammonia levels in said animal rearing facility at or below a selected level, said manure receiving surface comprising previously deposited manure. 
Methods for reducing the amount of phosphorus runoff and/or phosphorus leaching from fields fertilized with animal manure, said methods involving treating animal manure to be used as agriculture fertilizer by admixing said animal manure with the above manure amendment composition or alum mud at a rate sufficient to reduce the water extractable phosphorus in said animal manure and thereafter applying said poultry litter or animal manure to fields (e.g., soil) as an agricultural fertilizer. Methods for reducing the amount of metals runoff and/or leaching from fields fertilized with animal manure, said methods involving treating animal manure to be used as agriculture fertilizer by admixing said animal manure with the above manure amendment composition or alum mud at a rate sufficient to reduce the water extractable metal content in said animal manure; and thereafter applying said animal manure to fields (e.g., soil) as an agricultural fertilizer. Two of the biggest problems associated with animal manure are phosphorus (P) runoff and ammonia (NH3) emissions. Phosphorus runoff and leaching can result in accelerated eutrophication of lakes and rivers since P is normally the limited nutrient for algal production in freshwater systems (Schlinder, D. W., Science, 195: 260-262 (1977)). Phosphorus concentrations in runoff from fields fertilized with poultry litter can be very high, even when litter is applied at low to moderate levels (Edwards, D. R., and T. C. Daniel, Trans. Am. Soc. Agric. Eng., 35:1827-1832 (1992); Edwards, D. R., and T. C. Daniel, Bioresour. Technol., 41:9-33. (1992)). Edwards and Daniel (Edwards, D. R., and T. C. Daniel, J. Environ. Qual., 22:361-365 (1993)) reported that 80-90% of the P in runoff from pastures fertilized with poultry litter is soluble reactive P (SRP), which is the form that is most readily available for algal uptake (Sonzogni, W. C., et al., J. Environ. Qual., 11:555-563 (1982)). 
Several researchers have shown that P runoff and leaching from manure is more closely correlated to the amount of soluble P in the manure than total P (Shreve, B. R., et al., J. Environ. Qual., 24:106-111 (1995); Smith, D. R., et al., J. Environ. Qual., 30:992-998 (2001a); DeLaune, P. B., et al., J. Environ. Qual., 33:728-734 (2004a); DeLaune, P. B., et al., J. Environ. Qual., 33:2192-2200 (2004b)). Runoff water from fields fertilized with poultry litter has also been shown to have high concentrations of metals, such as arsenic, copper and zinc, which may cause water quality problems (Moore, P. A., Jr., T. C. Daniel, J. T. Gilmour, B. R. Shreve, D. R. Edwards, and B. H. Wood, J. Environ. Qual., 27:92-99 (1998)). Ammonia concentrations often exceed 25 ppm in poultry houses, which can reduce poultry performance (Reece, F. N., et al., Poult. Sci., 59:486-488 (1980); Carlile, F. S., World's Poult. Sci. J., 40(2):99-113 (1984); Miles, D. M., et al., Poult. Sci., 83:1650-1654 (2004); Moore, P. A., Jr., et al., J. Environ. Qual., 40:1395-1404 (2011)). High levels of NH3 damage the respiratory tract of chickens, negatively affecting their immune system and making them more susceptible to disease (Anderson, D. P., et al., Avian Dis., 8:369-379 (1964)). This may be more important now than in the past due to the current threat posed by avian influenza. The incidence of airsacculitis has been shown to increase dramatically when broilers are exposed to high NH3 concentrations. Negative impacts on feed conversion and weight gains, along with ocular damage, have been shown to occur when NH3 concentrations in poultry barns are high. These negative impacts of NH3 have generally been reported when in-house concentrations exceed 25 ppm (uL L−1), hence it is recommended that NH3 concentrations be kept below this critical level in poultry barns (Carlile 1984). 
However, it was found that the average NH3 concentration in four poultry houses in NW Arkansas that were continually monitored for one year was 25.1 uL L−1, with much higher levels during winter months, and that over half of the N excreted from the birds at this farm was lost to the atmosphere as NH3 before the litter was removed from the barns (Moore et al. 2011). This not only represents a huge waste of a natural resource (300 million kg N/year in the U.S. alone), but also results in air and water pollution. Approximately 80% of atmospheric NH3 loading in the United States comes from agricultural sources, with poultry responsible for 25% of the total (Batty, R., et al., Developments and selection of ammonia emissions factors: Final report, EC/R Inc., Durham, N.C., EPA Contract Report #68-D3-0034, U.S. Environmental Protection Agency, Research Triangle Park, NC, pp. 111 (1994)). In the 1990s Moore discovered that a simple topical application of alum to poultry litter would reduce P runoff and NH3 volatilization (U.S. Pat. Nos. 5,622,697; 5,914,104; 5,928,403; 5,961,968; 5,865,143; 5,890,454). It was also discovered that AlCl3 could be used for reducing NH3 emissions and P runoff from swine manure (U.S. Pat. Nos. 6,346,240 and 7,011,824). The chloride salt of Al is preferable for liquid manures because sulfate can be reduced to hydrogen sulfide under anaerobic conditions, which may aggravate odor issues. During the past 20 years several studies have shown how alum additions reduce NH3 emissions and P runoff from manure (Moore, P. A., Jr., et al., J. Environ. Qual., 24:294-300 (1995); Moore, P. A., Jr., et al., Poult. Sci., 75:315-320 (1996); Moore, P. A., Jr., et al., Poult. Sci., 78:692-698 (1999); Moore, P. A., Jr., et al., J. Environ. Qual., 29:37-49 (2000); Moore, P. A., Jr., and D. R. Edwards, J. Environ. Qual., 36:163-174 (2007); Smith, D. R., et al., J. Environ. Qual., 30:992-998 (2001a); Smith, D. R., et al., J. Anim. Sci., 82:605-611 (2001b)). 
Additions of alum to poultry litter have also been shown to reduce arsenic, copper and zinc runoff from fields fertilized with litter (Moore, P. A., Jr., et al., J. Environ. Qual., 27: 92-99 (1998)). Moore and Miller (Moore, P. A., Jr., and D. M. Miller, J. Environ. Qual., 23:325-330 (1994)) published the first report that showed chemical amendments, such as alum, could be added to poultry litter to reduce P solubility. Later work by Moore et al. (1995, 1996, 1999, 2000) showed alum additions to poultry litter resulted in improved poultry production and higher crop yields, in addition to environmental benefits such as reduced NH3 emissions and reducing concentrations of P, metals and estrogen in runoff water and reducing P leaching. Alum was also shown to greatly reduce energy costs (e.g., propane) due to reduced ventilation requirements in cooler months as a result of lower in-house NH3 (Moore et al., 1999, 2000). Treating poultry litter with alum significantly reduces pathogens (e.g., Salmonella and Campylobacter) both in the litter and on poultry carcasses (Line, J. E., Poult. Sci., 81:1473-1477 (2002)). A cost/benefit analysis showed that the production benefits due to alum made this BMP (best management practice) very cost effective (Moore et al., 2000). Due to the production and environmental benefits of this BMP, over one billion broiler chickens are currently grown each year in the U.S. with alum (Moore, 2011). However, this only represents about 10% of the industry. The main reason cited by poultry growers and industry personnel for not using alum is cost, which has increased dramatically during the past 20 years. Thus, a need exists to develop a manure amendment that is as effective as alum, for example in reducing NH3 volatilization and P runoff, yet costs much less.
2023-08-05T01:27:17.568517
https://example.com/article/8579
It’s the key player in the new Chanel Nº5 L'EAU (and not only there). The Rosa Centifolia, better known as Rose de Mai or May Rose, is an olfactory heritage that, thanks to its unique character, transcends time, setting trends that endure.
2024-04-28T01:27:17.568517
https://example.com/article/4711
About Me I am a politically-progressive, ethically-herbivorous anthropoid pursuing a paleontology education in the Los Angeles Basin. I am largely nocturnal, have rarely been photographed, and cannot thrive in captivity. 23 November 2011 It's Curtains For The Expensive Tissue Hypothesis From my previous post, you already know what a poor attempt at debunking the paleo diet looks like. Now, I figure I owe you an example of how to do it right. Call this a Thanksgiving present. "Energetics and the evolution of human brain size," published earlier this month in Nature, tests and refutes the expensive tissue hypothesis. It's impressive work, and pretty devastating to the hypothesis that has provided a rhetorical foundation to the paleo diet mythology for over a decade now. Navarrete et al.'s main findings (further details below) are: There is no negative correlation between brain size and gut size in any mammalian taxa, refuting the ETH's prediction to the contrary; There is, however, a strong negative correlation between brain size and adipose tissue deposits; that is, fatter animals have smaller brains than lean ones; and, Humans are seeming exceptions to this rule because our fat deposits don't interfere adversely with our means of locomotion, thus freeing up energy for encephalization that other primates have to use for carrying around all that fat. And the stunning thing about this paper is that the authors didn't simply test the ETH using new data, but also re-tested the data from the original paper using newer statistical methods and controlling for confounding factors that Aiello & Wheeler missed, for whatever reason. Their conclusion: when adiposity, phylogenetic relationships, sample bias and sex differences are controlled for, Aiello & Wheeler's original data don't support their hypothesis any better than the newer data does! In short, the ETH is wrong at the foundation, not just at the margins. 
But, you should still hold your applause for a moment, so we can make clear not only what this paper is, but also what it is not. It is not evidence that pre-humans were strict vegans. It is not evidence that Homo sapiens are natural herbivores. It is not evidence that meat and dairy, in themselves, are intrinsically either good or bad for us. If you're the kind of vegan who looks for an evolutionary hook to hang your fall-from-grace fantasies on, you'll have to look elsewhere. Prehistoric humans and their ancestors ate meat, and sometimes a heck of a lot of it. You'll just have to deal with that. However, the paper is pretty good evidence that meat wasn't essential to our evolution. Meat, it turns out, probably didn't make us smart, after all. At the level of vegan blogosphere debate ammo, that might be cause for some applause. The Original Problem To understand how the ETH came about, how thoroughly Navarrete et al. have undermined it, and on what grounds they have done so, it's probably a good idea to hop in the Wayback Machine and understand what Aiello & Wheeler were trying to explain in the first place. The $64,000 question in paleoanthropology (adjusted for inflation) for the last 80 years or so has been, "why can humans have such freakishly huge brains compared to other primates their size, but still have the same basal metabolic rate?" The question is rooted in a biological principle called Kleiber's Law, which demonstrates that the metabolic rate of most animals scales to the 3/4 power of their mass; this law holds true across the animal kingdom, and appears to function in plants and bacteria, too: even within individual cells themselves! Kleiber's law can be used to estimate the metabolic rate of almost any animal just from its total mass. In short, it shows that animals of roughly the same size will have roughly the same basal metabolic rate (BMR), and that's where the problem with humans comes in. 
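The 3/4-power scaling described above is easy to play with numerically. A minimal sketch (the normalization constant `c` below is an arbitrary assumption; Kleiber's law fixes only the exponent, and the exact constant varies by taxon):

```python
def kleiber_bmr(mass_kg: float, c: float = 70.0) -> float:
    """Predicted basal metabolic rate under Kleiber's law.

    c is an assumed normalization constant; only the 3/4-power
    scaling with mass matters for the argument in the post.
    """
    return c * mass_kg ** 0.75

# Two animals of equal mass get the same predicted total BMR,
# regardless of how differently they allocate it among organs.
assert kleiber_bmr(36.0) == kleiber_bmr(36.0)

# A 16x heavier animal needs only 8x the energy (16**0.75 == 8),
# not 16x: metabolic rate grows more slowly than mass.
print(kleiber_bmr(16.0) / kleiber_bmr(1.0))
```

This is exactly why the equal-mass comparison in the next paragraph bites: an 80-lb australopith and an 80-lb human get the same total energy budget from the law, so a 4-5x costlier brain has to be paid for out of some other tissue.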
It turns out that within an animal, the metabolic rate is not evenly distributed among all its tissues. Some tissues -- brains, hearts, lungs, livers, the GI tract, to name a few -- use more calories than others; they are thus "expensive." Every organ has its own individual metabolic rate. So, even though animals of equal size will have equal overall BMRs, they won't necessarily allocate that energy to their organs in the same way. Let's say you have two species of roughly equal mass. One of them is characterized by a super strong heart, and the other by advanced lung capacity. Hearts and lungs both use a lot of energy, so each species will allocate its overall BMR to its distinct tissues in different ways, but will still have the same total BMR as the other. This means that without a change in overall mass, the strong-hearted species can never have the amazing lungs of the strong breather, and vice versa. Kleiber's law must hold, and to do that, some organs and tissues have to take priority over others. So long as their overall BMRs remain the same, different species of equal mass can display a lot of variation in the ways their individual tissues consume energy. This is the crux of the human brain problem. Using Kleiber's law, Aiello & Wheeler noted that an 80-lb. australopithecine would have had roughly the same BMR as an 80-lb. Homo sapiens, despite the difference in their brain sizes. The human brain would have 4 to 5 times the metabolic cost of the softball-sized australopith brain. So, Aiello & Wheeler reasoned, in order to maintain the BMR predicted by our mass, humans must have made a trade-off between competing tissues at some point in our evolution; i.e., as our brains gobbled up more energy, some other set of tissues had to get less, and thus shrink over evolutionary time. Something had to give. 
After assessing the cost and importance of various tissues within modern humans, Aiello & Wheeler concluded that the human tissue most reduced in comparison to other primates was the GI tract. As our brains got bigger, our guts got smaller. As a result, we had to become dependent on more high-quality, nutrient-dense, easily digested food than other primates to maintain the high cost of our brains, since our reduced guts could no longer handle the sorts of food on which our ancestors had subsisted for millions of years. They proposed that the most likely reliable source of such calories was meat and other animal products. A dramatic increase in animal matter in the hominin diet eased the energy constraints imposed by nature on big brains, and allowed our brains to grow to massive proportions without violating Kleiber's law.

In the popular press and later, in the blogosphere, the shorthand version of the ETH became "meat made us smart," or "meat-eating made us human." But that's not precisely what Aiello & Wheeler were claiming, and the difference between what they claimed and what carnists who cite them claim is crucial to understanding what Navarrete et al. have accomplished with their new paper. For the ETH, meat itself wasn't really the point. Though Aiello & Wheeler proposed it as the probable source of the necessary calories, they hinted that other high-quality foods, like sugary fruits, tubers, or oil-rich nuts and seeds, could also have done the job. A close reading shows that the ETH was fundamentally about total calories, not specific calorie sources. Even so, the prominence of meat-eating in the paper supplied de facto legitimacy to several paleofantasies about the necessity of meat to the human diet, one of which would become the modern paleo-diet movement.
But more fundamental to the ETH than meat-eating -- indeed, the whole point of the paper -- was the claim that Kleiber's law is maintained through a necessary trade-off between expensive tissues within a given organism, in this case Homo sapiens. Increased meat-eating was merely a consequence of this claim, not the foundation of it. And for the last 15 years or so, the argument over whether meat was important to our evolution has obscured the more fundamental -- and eminently more testable -- claim of an expensive tissue trade-off.

Any good hypothesis can produce at least one testable prediction. And the ETH has one, right there for everyone to see (though it's been astonishingly ignored for 15 years). If the ETH is true, we should expect to find a tight negative correlation between brain mass and the mass of other expensive tissues across a range of taxa, not just among primates. And it's this prediction, not whether cavemen were meat-eaters, that Navarrete et al. set out to test.

The Fat Of The Matter

The key way they tested the overall hypothesis across various mammal groups was by controlling for adipose tissue deposits in their calculation of a given animal's mass. In short, they omitted fat deposit mass from all specimens, eliminating it as a variable. This was an important control tactic (and one not used by Aiello & Wheeler in their original paper), because adipose mass varies by season and habitat among many species, and can thus be a major confounding variable. Only by eliminating it altogether and testing brain size against fat-free body mass, the authors reason, could a possible trade-off between tissues be reliably detected. Under these conditions, no negative correlation between brain size and digestive tract mass was found. In fact, no negative correlation was found between brain size and the mass of any expensive tissue.
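As a rough illustration of what such a test looks like (this is not the paper's actual method -- Navarrete et al. used phylogenetically controlled statistics, and the data below are invented), one can regress log brain mass and log organ mass on log fat-free body mass and then check whether the size-corrected residuals correlate negatively:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50  # hypothetical species

# Invented allometric data on a log scale: both organs track body size.
log_ffm = rng.normal(3.0, 1.0, n)                  # log fat-free body mass
log_brain = 0.75 * log_ffm + rng.normal(0.0, 0.1, n)
log_gut = 0.90 * log_ffm + rng.normal(0.0, 0.1, n)

def size_corrected(y, x):
    """Residuals from a least-squares fit of y on x (removes allometry)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r = np.corrcoef(size_corrected(log_brain, log_ffm),
                size_corrected(log_gut, log_ffm))[0, 1]

# The ETH predicts a strongly negative r (big brains <-> small guts);
# the simulated organs here vary independently once body size is removed.
print(f"residual correlation: {r:.2f}")
```

A finding of r near zero (or positive) across many taxa is exactly the kind of result that fails the ETH's prediction.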
The authors did, however, uncover a tight negative correlation between brain size and adipose tissue depots: the fattest species had the smallest brains. Given Kleiber's law, this might at first look like a dilemma: fat tissue doesn't use a whole lot of energy, so why would it constrain brain size? The answer is that it costs an animal a lot of energy to lug the extra weight around, especially while climbing or running. And it's here that humans -- along with whales and seals -- have an advantage: fat stores don't significantly interfere with our ways of getting around. Bipedalism and dorso-ventral flexion (the swimming method used by cetaceans and pinnipeds) are simply more efficient ways of moving.

To understand just how big an impact bipedalism has on human energy expenditure, take a look at the paper's Supplemental Material, and its discussion of the different energy costs that excess fat imposes on humans and chimpanzees. Human foragers spend between 18 and 22 percent of their daily energy on locomotion. Chimps have a comparable but wider range of 16 to 30 percent. But, because of the different ways they move around, a 10 percent increase in body fat deposits for humans means only a 1 percent increase in needed energy, while for chimps it means a 2 to 3 percent increase. In other words, it costs chimps two to three times as much energy to move around the same amount of body fat as a human. Further complicating the matter is that the energy cost of travel while climbing, for primates, is almost directly proportional to body mass. Quadrupedal terrestrial walking and brachiation as modes of transport simply impose higher costs on primates than does efficient bipedalism.
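Plugging the post's own figures into a quick back-of-the-envelope check (the per-10-percent cost numbers come from the summary above, not from independent calculation, and the linear scaling is an assumption for illustration):

```python
def extra_cost_pct(fat_gain_pct: float, cost_per_10pct: float) -> float:
    """Extra daily energy (%) needed to carry additional body fat,
    assuming the cost scales linearly with the amount of fat gained."""
    return fat_gain_pct / 10.0 * cost_per_10pct

human = extra_cost_pct(10.0, 1.0)   # bipedal human: +1% energy per +10% fat
chimp = extra_cost_pct(10.0, 2.5)   # chimp: midpoint of the 2-3% range

# The same fat gain costs a chimp ~2.5x what it costs a human to carry.
print(human, chimp, chimp / human)  # 1.0 2.5 2.5
```

Over evolutionary time, that multiplier is the difference between fat stores being a cheap energy buffer and being a serious drag on the energy budget.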
This energy cost adds up over time (especially evolutionary time), and thus can constrain the total amount of BMR available for encephalization. Thus, because humans save so much energy by being bipedal, they can store relatively large amounts of adipose tissue and still grow big brains.

Digging Up Old Data

If Navarrete et al. had stopped there, they'd have a pretty strong case: the ETH's predicted negative correlation between brain size and organ mass appears not to exist, at least among mammals. But they took their investigation a step further and decided to re-test Aiello and Wheeler's original data set, controlled for several confounding factors that Aiello & Wheeler hadn't accounted for. And that's where the real knock-out punch to the ETH happens.

As detailed in the Supplemental Material, Aiello & Wheeler were working with a data set that had a couple of problems. Namely, it was biased towards catarrhine primates over platyrrhines; it didn't control for sex differences between members of species with marked sexual dimorphism (sexual size dimorphism affects body mass more than brain size), or for differences in the body mass of wild vs. captive specimens of the same species; and it didn't account for phylogenetic relationships between various hominid species (a fact I have pointed out before).

In fairness to Aiello & Wheeler, most of this was beyond their control. Fifteen years ago, for instance, we didn't know that Paranthropus was a sister taxon to Homo rather than a direct ancestor, and the literature on primate body masses simply didn't contain as wide a sampling of platyrrhines as it does today. Aiello & Wheeler did the best they could with what they had. Nevertheless, Navarrete et al. were able to identify and control for these confounders in a new test using the latest phylogenetic statistical methods on the original data sample. And the results did not support Aiello and Wheeler's hypothesis; even their own data failed the ETH in the end.
Taken together with the new authors' own data, these re-testing results have pretty much put the ETH down for the count. If they want to save it, Aiello & Wheeler will have to tackle Navarrete et al. with much more rigorous data and analysis than they used the first time around. Make no mistake, this is a quiet revolution in action. What this means to the vegan blogosphere is that there is now a robust and scientifically credible argument against the claim that meat-eating was essential to our evolution... and the case has nothing to do with animal rights or other aspects of vegan ethics.

That being said, this paper cannot and should not be used as evidence that hominins did not eat meat at all, or that pre-human ancestors were purely frugivorous. If we do that with this paper, we'll be just as guilty of building a paleofantasy as the caveman dieters were when they turned the ETH into their shibboleth. So, while you're dining on Tofurkey or some African pumpkin stew (my planned Thanksgiving meal) this holiday, and obnoxious Uncle Carnist breaks out the old meat-made-us-human canard for the millionth time, feel free to take him to the mat. He's had it coming for years.

47 comments:

Thanks for this! I enjoyed it so much I read it aloud to my boyfriend. I'm part of a CrossFit affiliate and so surrounded by paleo-dieters quite often. This will be nice to have in my arsenal when I'm asked why I'm not in on the crossfitters' diet of choice.

National Geographic had a detailed analysis of Ötzi, the 5,000-year-old ice man. Probably the only stone age man whose stomach contents have been analyzed. It was meat and grain. No doubt meat provides high energy and protein, but they also found that Ötzi would probably have died within ten years of heart attack or stroke. He was about 45 years old, so was having the same problems Americans are having with their high meat diet. Nobody says those ancient heavy meat eaters lived long and healthy lives.
"The China Study" proves that diets heavy in meat and dairy are killing us. We know that grains by themselves don't cause heart disease, because populations that subsist on them, such as the Chinese and Japanese, and for that matter most of the world's traditional populations that haven't switched to a modern diet, do not suffer from heart disease, while Westerners do. We also know that Paleolithic people did consume grains, and in fact modern day hunter-gatherers consume them as well. I give you links that prove it. Therefore, the whole Paleo assumption that "grains aren't Paleo" is total and complete BS.

i believe you are not correct on any of the above accounts - but i really do not want to argue here - that paleolithic people or hunter-gatherers ate "grains" or "bread," as recent headlines touted, is summarily wrong - of the 9 (7 or 9 i believe) organic particles found on the 20K+ year old grindstone, most were either roots, tubers or rhizomes, with only 1 being an old pseudo-grain. NOTHING like modern grains and bread. the chinese and japanese also in no way "subsist" on "grains" but rather have historically eaten rice, often along with fish, meat and vegetables - the fish-eating populations being the healthiest - chinese health is deteriorating rapidly in recent decades due to the switch from nominally healthy rice to devastatingly unhealthy wheat. and please don't quote to me the hugely biased "results" of the China Study until you consider Denise Minger's superb dismantling of ol' Doc Campbell's self-interested unscientific tripe.
I would suggest to you the writings of a REAL PRACTISING doctor (ol' doc campbell never saw a patient) - one who actually treated thousands of patients using low-carb diets over a lifetime of practice, curing almost every chronic condition he came across - an Austrian named Dr Wolfgang Lutz - and his book "Leben Ohne Brot" (Life Without Bread). as far as your links - nature magazine is questionable at best, and don matesz has gone off the deepest of ends after denouncing paleo - his formerly lucid writings have deteriorated into frothy-mouthed vegetarian/vegan rantings of the most absurd kind. and by now you have guessed - i subscribe to a primal/paleo diet outlook, so any more discussion between us is probably quite pointless.

"most of the world's traditional populations that haven't switched to a modern diet," The modern diet is considerably more carbohydrate, especially grain, heavy. Let's not forget Eskimos living on a 100% animal-based diet. There's simply not a single example of a "traditional" society that did not at least attempt to make animal protein a large percentage of their diet. Yes, humans have a lot of dietary flexibility, but that's exactly what makes us such a successful species. If you hadn't eaten for a week you would try to eat anything, including grains, even if they poisoned you. Modern grains are also modified to be easier to process, and are considerably lower in actual nutrient content. This is done to make them more profitable, not better to eat. Grains our ancestors tried to live on would have been processed in a different way (probably fermented? possibly sprouted? It's hard to say, of course), but you cannot in any way compare them. I guess in the same way you can't compare modern meat practices with hunting (i.e., the idea of paleo re-enactment is stupid and pointless); the animals are not the same and they don't eat the "correct" foods. Veganism has *only* the moral ground. From a metabolic viewpoint it is really a fruitless argument.
We simply do not gain enough energy/protein from plants themselves to make them the "ideal" staple in our diet, with the exception of highly processed, questionable food items. Ever tried raw veganism? You have to eat a loooooooooooot to make it work. (That's an interesting debate too: the volume of food one needs to eat to get adequate nutrition on a vegan diet, relative to the population we have.) I'm not even trying to make a counterpoint to veganism as a moral viewpoint, let me make that clear; you are simply wrong about human nutrition. Over thousands of years our digestion and metabolism may (and probably will, thanks to our excessive population) alter to gain more actual nutrition from plant foods, but I imagine it would be at a cost to something else, likely our brains. Oh yeah, regarding heart disease? http://www.sciencedaily.com/releases/2009/06/090625133215.htm

I think the issue in the western world is the focus on eating non-food, high-carbohydrate items. Additionally, there is a huge focus on eating vegetable oils over saturated fats (even plant-based saturated fats are frowned upon, e.g. palm oil). Finally, in addition to the high-GI diet that the western world prescribes, it is still possible to eat a lot of reasonable fats, and this combination is the problem. As your body becomes more inflamed due to excess sugar, indigestible food like grains, etc., and fats that your body has little ability to break down... your body is simply not fed things that it can rebuild itself with.

Great post. This is pretty damning evidence against the "meat made us smart" concept. Unfortunately it's also equally damning against Richard Wrangham's "cooked tubers made us smart" idea as well, which I was quite fond of :)

Anonymous, the stomach contents only confirmed the last couple of meals before Ötzi's demise. Hair analysis suggests that Ötzi's overall diet was almost completely vegan. Was this addressed by National Geographic?
See 32:40 in McDougall's talk: http://www.youtube.com/watch?v=4XVf36nwraw Interestingly, McDougall chose not to mention Ötzi's (poor) health and dental state.

I think it's far-fetched to say that Ötzi's poor health was due to meat. If anything, it might have been excessive reliance on einkorn wheat. By the way, the data from The China Study is all correlational. It really shouldn't be considered "proof" of anything, let alone the idea that meat is harmful.

Thank you for this, Humane Hominid! You got the meaning of our article exactly right and explain it very well. Just one little addition: even if we did NOT control for the amount of fat storage, the ETH would be rejected, as then the correlation between gut mass and brain size would even be positive within mammals, and within primates.

Thanks for your kind words, and for visiting. It's gratifying to know I'm getting some things right, after all.
:)

Sarah, ha ha, you should read it out loud to your CrossFit pals, too. Or maybe just print and post it on the bulletin board.

Will, Ötzi's last meal was red deer meat and possibly cereals; his second-to-last meal was ibex meat, some dicot plants, and possibly cereals. -- http://www.pnas.org/content/99/20/12594.long The claim of his veganism based on hair sample analysis was found to be questionable, at best. All evidence indicates that he was a life-long omnivore. http://rstb.royalsocietypublishing.org/content/355/1404/1843.full.pdf+html

If Ötzi was eating vegan for most of his life (maybe) and then for reasons of scarcity he ate meat and grains, it could be precisely that he was living through tough times. If fruits and veggies were his main food, why would he eat meat and grains (so tough to hunt, eat and digest) unless there was some scarcity? When you are on vacation at the beach, what do you enjoy the most? I do enjoy fruits, salads, coconut water and meat, avocados, etc., of course, when I have them available.

Thanks for the post. This post was passed to me by a vegan friend of mine (as we are debating the health virtues of paleo vs. vegan). If Homo sapiens evolved big brains not because we ate meat but for some other reason, then the fact remains that humans have always eaten meat. This would suggest it is important for our health. Again, I'm not saying it's essential to eat meat for good health, but I don't trust that medical science understands food and nutrition that well (maybe they understand it 50%, say). And for that reason, being vegan raises the risk of missing out nutritionally, because you don't know what you don't know. In addition, on the actual point that humans need to eat meat for large brains: I do think it's interesting that all the most intelligent animals on the planet are carnivores or omnivores.
Humans, whales, dolphins, dogs, all apes & chimps (they do eat meat as well). Many animals classified as herbivores (folivores) also eat meat. A site that outlines the complex eating habits of apes and chimps can be found here: http://beyondveg.com/billings-t/comp-anat/comp-anat-2a.shtml#categ (not strict).

No, the fact that humans have always eaten meat does not suggest that it is important to our health. It might suggest that meat-eating provided human ancestors with a marginal advantage over competitors that improved reproductive fitness. This isn't the same thing as "being important to our health," though. Lots of such adaptations come with profound negative trade-offs (witness: sickle-cell anemia). In other words, meat could have helped us survive and still have been bad for us at the same time. Most wild animals are, it's true, functional omnivores to some degree. This is actually evidence against your position, as such an ancient trait is unlikely to be evolutionarily significant in hominids. To show that it was significant or necessary, you'd have to do a lot more than simply point out its existence. That's only the beginning of an analysis, not the end of one.

I disagree with your assumption "No, the fact that humans have always eaten meat does not suggest that it is important to our health." There is evidence that the whole Homo genus has eaten meat; that is (depending on your source) around 2 million years of meat eating. This is a consistent attribute of the human species. It cannot be compared with a transient genetic mutation such as sickle-cell anemia. On your second point, "Most wild animals are, it's true, functional omnivores to some degree. This is actually evidence against your position, as such an ancient trait is unlikely to be evolutionarily significant in hominids."
Personally I am on the fence as to whether eating meat is a cause or effect of larger brains, but it definitely seems to be related (whales, apes, dolphins, etc., all omnivores/carnivores), and it is true that brains are extremely nutrient dense and the human brain takes up 25% of our waking energy according to Wikipedia: http://en.wikipedia.org/wiki/Brain. So I can see logically how eating meat in a pre-industrial society could help, as in the wild it's a very energy-dense food. That said, nowadays (with our supermarkets) you could eat a bag of sugar and get a lot of energy quite easily. However, a bag of sugar will not contain the same nutritional make-up as that of animal tissue. For me, the health issues today stem from nutrition, not energy. The records show that paleolithic man was generally very healthy, so eating meat worked then. My issue with veganism or vegetarianism is that moving to a diet that does not include meat requires an understanding of what we are missing nutritionally from the meat. I do not believe that medical science understands that topic sufficiently, and therefore you are likely to miss something.

The first part is not an assumption, it's an observation. A history of meat-eating does not suggest that meat-eating is important to our health. Natural selection doesn't care how healthy you are. It just wants you to make sure your kids survive to reproduce. If meat-eating gives you a slight advantage in that task, then it will be preserved, even if it is bad for you from a health perspective. That was the point of the comparison to sickle-cell anemia. Adaptive advantages always come with negative trade-offs. And you missed my second point completely. If meat-eating is something we share in common with other primates, then it wasn't definitive to our own evolution. This is not an opinion, it's how evolution works. Producing a list of traits proves nothing. You have to demonstrate whether the trait in question is ancestral or derived.
If ancestral (that is, shared with so many other previous species that it's nothing special), then it's not helpful to your case. If derived (that is, unique to one taxon, or to a small handful of taxa sharing a common ancestor), then it's potentially important to your case. So, if you mean to suggest that omnivory is a derived rather than ancestral trait of H. sapiens (or cetaceans, for that matter), you're gonna need to present some darn impressive evidence. If you don't understand what I just said, then you don't understand basic evolutionary biology, and really ought to stop citing it to justify your diet philosophy.

Anonymous, the purpose of good health is the thriving and personal evolution, including the growth in consciousness, of the individual. If you think we as human beings are on earth just to make babies, we are not in the same paradigm. Even agriculture had an important role in the evolution of human beings and "materialistic consciousness," but that civilization came at the expense of individual health (meaning joyful, vigorous, long-lasting wellbeing and consciousness of our spiritual nature). So, as Paul said, a trait that may help increase the population of a certain species (like a pest) doesn't necessarily imply a betterment of individual health (whole wellbeing). Did I understand you correctly, Paul? Otherwise, look at those countries where life expectancy is very short but they make lots of babies, so the species will not go extinct. It makes me think of a temporary abundance of predators that will then self-balance with the environment, as in our case, where we are spoiling the earth and will pay the consequences sooner or later. In my personal experience, being raw vegan for 11 years, I am thriving and my brain is working at its best without any meat, grain or cooked food. Not only that, I haven't eaten meat for longer than that, 28 years; it's just that I was still eating cooked food for 17 years before switching to more paleolithic eating. Think about it. Of course, I speak from experience, and I don't see why our ancestors would have needed meat or fire to "individually thrive." I perceive there has been a devolution of health for the sake of the human experience, but that as individuals, we are not obliged to go in that same direction of physical and mental degeneration, trapped as domesticated animals spoiling the earth and being accomplices of suffering.

This is my first post to your blog and I must say I found your posts surprisingly lucid for a vegan advocate. I actually think most of your posts are pretty spot on, but I did want to point out a big problem with your reply to Paul on January 29, 2012 5:08 PM. It is a mistake many vegans unconsciously make due to their classification of all omnivory as basically the same, in the sense that it isn't vegan. Actually, omnivory in humans is both a derived and an ancestral trait of H. sapiens. For example, our basic teeth are an ancestral trait of insectivorous species that changed over time as we evolved to add fruit and later other foods to our diet. Insects are "animals" to a vegan, but the derived traits H. sapiens has to chase down and kill an antelope have nothing to do with the ancestral traits we have for insectivory. By the same token, the derived trait we have for eating starchy foods (an extra abundance of amylase in our saliva compared to other primates) has nothing to do with the ancestral frugivorous traits. Yet to a vegan, whether it is a fruit or a tuber makes little difference. Both are plant foods. So when you said, "So, if you mean to suggest that omnivory is a derived rather than ancestral trait of H. sapiens," you made a meaningless statement. H. sapiens' specific type of omnivory has a wide variety and blend of both ancestral and derived traits.

Part 1/2: I like your article. The only issue which is not clear to me is whether Kleiber's law applies after removing the fat mass or before it. If it applies before, then Navarrete et al. changed the whole basis of the ETH.
Let's go on a tangent and just talk about the physics, using Kleiber's law. Efood is the energy in the food that we eat. Ewaste is the energy that leaves the digestive tract. Egut is the energy that food digestion uses, including the upkeep of the digestive system. Einput = Efood - Ewaste: the energy made available by the digestive system. Eoutput is the energy made available to the rest of the system, not including the gut. Egut = Einput - Eoutput. Note my definition of Egut is different from the ETH's treatment of gut size. Einput is the complete energy used by the system: Einput = Egut + Ebrain + Erest. Now, Kleiber's law states that total energy remains constant for an animal of a given mass. The ETH says that Erest is more or less constant across hominids. I would think the same arguments will apply to other species also: similar brain, heart, etc. Fat is a confounding factor :-). If it is true that Erest is more or less the same across species, then we can simplify: assuming that Erest is constant, brain size is inversely proportional to the fraction of energy utilized by the gut. In other words, brain size is directly proportional to the energy efficiency of the gut. The only point of contention in all of this is whether Erest really is not very variable. Looking at the energy utilization of each tissue, you see that compared to the brain and neurons, the rest consumes very little. The only big consumer other than that is the gut. So I am not sure why Erest would not be nearly constant. Of course, this assumes that Kleiber's law is based on the animal as a whole and does not depend on removal of the fat. I have never heard of that, though. I just think that Navarrete et al. changed the whole basis of the ETH; maybe they didn't understand the ETH properly. Let's just think about it in physical terms. How do you get high gut efficiency? By eating things from which energy can be extracted easily. Things that are very easy to get energy from are sugars, non-resistant starches, and fat.
The rest are all difficult to get at. For fiber and resistant starches we need bacteria, which involves a lot more expense, and for protein we need to convert it to glucose to obtain the energy, which is very thermogenic. One thing is for sure: cooking makes energy a lot more easily available from food. The gut changes in Homo erectus are very noticeable. So Wrangham's thesis is not in danger even if the ETH goes out the window. I am not sure what Navarrete et al. are getting at, but the concept of the ETH makes a lot of sense. If you look at it in the above terms, you will see why herbivores (too much bacterial requirement) have smaller brains compared to carnivores (too much protein-to-glucose conversion), and why omnivores (somewhere in between) have bigger brains than carnivores. It just makes sense. I agree with you that meat is immaterial, although brains and bone marrow were critical in human evolution. That is the most easily available large source of fat. Hominids did spend a large part of their evolution as scavengers. IMO the evolution went like this:

1) Fruits provided simple sugars to early primates.
2) Hunting provided some meat and, importantly, some fat, which may have allowed brain growth. I am not so sure, though.
3) Scavenging for brain and bone marrow. This would be the critical point.
4) Tools allowing better hunting, easier access to fat.
5) Cooking: even better access to fat, and now access to easily digestible starches.
6) Amylase enzyme growth, allowing less reliance on bacteria for extracting energy from starches.

Wrangham's hypothesis makes it difficult to reconcile steps 5 and 6, as they now happened very far apart: 1.8 mya vs. 200 kya. I guess this can only be solved if plants were not very starchy before 200 kya, and humans were instrumental in selecting and propagating starchy plants. If that is the case, something like agriculture started 200 kya. Possibly why sorghum starch was found on Neanderthal teeth.
I sometimes think that maybe coconut played a large role in our evolution. Lots of easily available fat and sugars. Very, very good for us. A large fruit plentiful in Africa, where we evolved. Although the fruit requires tools to get at, so it does not get rid of the scavenger phase. The scavenger phase was before they started using tools.

2/2: Obviously I am an omnivore, and don't side with either the vegans or the so-called carnivores. Fat and starch both are good for health. Not so much protein or fiber, although a little of both (10-15%) is required :-). But micronutrients are the real kings. Avoiding toxins is critical. I follow the PerfectHealthDiet.

Anonymous #umpteen asked: What is the purpose of good health if not increasing the chances of survival and reproduction for the individual and the group? Depends what you mean by "good health." Evolution is a system built around the adequate, not the optimal. Most organisms who survive to reproductive age are already adequately "healthy" enough to make babies. As long as that's the case, nature doesn't care if they're unhealthy in other ways (got a gene that promotes cholesterolemia, sickle cell anemia or schizophrenia? Tough shit, you're good enough to make babies, and that's all evolution cares about).

"Also, you don't get to change the ETH's parameters in order to save it." I'm no expert in biological science, but in physics, hypotheses and theories have their parameters modified or even added and subtracted all the time. When Einstein added the parameter of the cosmological constant to General Relativity in 1917, GR could have been considered more a hypothesis than a theory, as its predictions had not been experimentally tested (the perihelion precession of Mercury had been observed long before GR). http://en.wikipedia.org/wiki/Cosmological_constant

I'll concede this much, praguestepchild: I shouldn't have used the word "parameters." I should have said "foundation."
What these authors have done isn't simply to highlight a flaw in the periphery of the ETH, but to demonstrate that it is completely wrong in its fundamental claim. There is no negative correlation between brain size and gut size, or between brain size and the size of any other expensive tissue. Nitpick: The precession of Mercury's perihelion had been observed prior to GR, but it wasn't explained. In fact, it can't be explained with Newtonian gravitation alone, but GR does explain it. So Mercury's perihelion was a good early test of GR. Other things (such as gravitational lensing) also provided nice tests for GR, and it still works reasonably well except at small scales (there it breaks down). The cosmological constant was added because Einstein's field equations worked out to an expanding universe, and at the time the universe was thought to be static, so he basically went "oh, I must be missing something here" and tacked on a constant to fix things. He later called this his greatest mistake when the universe was found to be expanding, but now that we think the expansion of the universe is accelerating, this constant (or a new theory of gravitation) is probably necessary. I definitely wouldn't say that one famous example indicates that physicists add and remove parameters from equations "all the time" either. "they hinted that other high-quality foods, like sugary fruits, tubers, or oil-rich nuts and seeds, could also have done the job." Yes. But you're missing the key point - animal fat is the most nutrient-dense fuel. Yes, it is possible that humans could have evolved in this way, but unlikely, since it would require, effectively, farming to ensure continual access to such foods. Let's be generous and assume that paleolithic tubers have the same amount of calories as a modern potato, and somehow magically also contain more micronutrients. (I'll also ignore essential fatty acids and the old B vitamin bugbear.) You're talking, say, 200 calories from a single potato.
So in order to survive, the 80 lb human has to find, what, 8 per day? Every day. And remember, humans/pre-humans were definitely social animals, so these tubers need to be found in considerably greater numbers... I'm not denying that "it's possible that a large brain can grow without meat," sure. But show me an intelligent herbivore, or frugivore. Eating the fat and organs of an animal provides a huge amount of calories and nutrition immediately, which is why predators exist in the first place; it's opportunistic eating. It's not a stupid argument that you make, but it's not very strong. Language (and indeed, even being able to recognise that other humans can understand you) had to evolve first as a survival tool, and you don't need to coordinate to find potatoes, you just look. You need to work on your reading comprehension skills. This is not an article arguing that cavemen were vegetarians, or that tubers fueled brain evolution. It's an article arguing that the whole idea that diet of any kind fueled brain evolution is unsupported by current evidence. And BTW, like most other paleodieters, you're getting the brain-meat hypothesis wrong. If reading Aiello's and Wheeler's original paper is too much, I'll spell it out for you. Their hypothesis goes like this:
1) Nature selected for larger brains in hominids.
2) These larger brains placed a greater energy demand on said hominids.
3) Hominid anatomy, at the same time, imposed an energy constraint that would normally prevent this brain enlargement.
4) Nature compensated by selecting, in turn, for hominids who spent less energy on other tissues, freeing it up for the brain.
5) This reduction in energy use in (they hypothesize) the gut led to a reduction in gut mass.
6) This gut reduction + brain enlargement led hominids to seek more calorie-dense foods, including more meat-eating than previous hominids.
Now, if you pay attention to that sequence, you will see that bigger brains came before increased meat-eating, not after it. Hi. I still don't understand the Navarrete study (maybe because my English is not perfect). What actually changes with the ETH? Humans still have a larger brain and shorter gut than, for example, a monkey. Okay, after subtracting fat, the human has a higher BMR, possibly for the brain's needs, but the gut is not the same as a monkey's; it is smaller. How can the correlation between the energy needs of the brain and the gut disappear after subtracting fat? Then the brain must show even bigger numbers in comparison with the fat-free body, because its energy needs don't change. Can you help me understand that? Thanks :) I'm late to this party, but have to comment. This statement of yours says it all: "However, the paper is pretty good evidence that meat wasn't essential to our evolution." I've been saying this for decades. Yes, our very early ancestors ate meat. Some of them probably even hunted for it. But our biology didn't require either the eating of meat or the hunting of it in order to evolve into our modern form, or to ensure our survival. I'm a vegan and my fat butt, big brain, and little gut do just fine on nuts and berries and leaves and grass. That'd be coconuts, avocados, blueberries, spinach, and green onions. Nobody is starving at my house.
2024-05-08T01:27:17.568517
https://example.com/article/1667
New antidepressants: use in high-risk patients. This paper will review evidence on the safety and efficacy of new antidepressants in high-risk patients. Where available, data will be reviewed on the serotonin selective reuptake inhibitors (SSRIs), including fluoxetine, fluvoxamine, paroxetine, sertraline, and citalopram, and on the new reversible inhibitor of monoamine oxidase-A moclobemide.
2024-07-15T01:27:17.568517
https://example.com/article/8870
San Diego Classic Film Calendar Sunday, May 29, 2016 TCMFF 2016 - Day 3 Saturday at the festival started once again with going into Club TCM before it opened to hide a Falcon. Then Jasmine and I met up with my wife Mary and went down to the Egyptian for the 90th Anniversary of Vitaphone. Once again we ran into Joel Williams, who had the number 1 line number for the screening. The screening was awesome. It started with a little bit of history about talking motion pictures. There were a large number of attempts before Vitaphone finally got it right and made it commercially viable. The big problem was synchronizing the sound and the picture, and not just synchronizing but reliably synchronizing. Vitaphone solved this problem by recording the sound at the same time they were filming the actors. The sound was recorded on large-format disc records. Because both the film and the audio were captured at the same time, it was easier to synchronize later. They also spoke about the restoration efforts, as it was a two-fold process: you had to find the film and then find the record that went with the film. Vitaphone shorts were marketed as vaudeville in a can. They would hire the top vaudeville performers of the day and film them to sell to theaters. They went on to show a number of these short vaudeville performances. I've embedded my favorite ones below. I think I was most impressed with Baby Rose Marie. The woman who played Sally Rogers on the old Dick Van Dyke Show, performing at about 5 or 6 years old and just killing it: Shaw and Lee: Conlin and Glass (part of the film): Jasmine's #TCMFF16YO review of 90th Anniversary of Vitaphone: I wish presentations at school could be as cool as this. We hid a Falcon in the Egyptian as we were leaving. On the way out we ran into Kimberly, who is in the number one position for A Face in the Crowd. We went directly to the TCL Chinese IMAX for Dead Men Don't Wear Plaid. We got there pretty early but still had line numbers in the 300s.
I still felt good about that because it is a huge theater, so even though the line ran all the way through the mall and almost back around to the front, I was sure we'd get in. We decided to sit further forward than we normally would for the sake of getting pictures and video. When we sat down, we sat next to an African American woman who I was convinced I had met earlier. I had not. Her name was Beth, and she was someone I had been chatting with on Twitter #TCMParty for a couple of years. The interview with Carl Reiner was after the film, but they had Eddie Muller there to introduce it. He started by asking the question of why they would get him to introduce a parody of a Film Noir. He said that there had always been a grand tradition of doing Film Noir parodies, one that had started right in the middle of the Film Noir era, so he thought that Dead Men Don't Wear Plaid fit in perfectly well with all of that. He also talked about the film from a technical standpoint. The way that modern footage was integrated with vintage films so seamlessly is nothing short of amazing. For me, seeing Dead Men Don't Wear Plaid was mostly about seeing the old movies. I figured that since it had been a lot of years since I'd seen it, I would now know most of the films they were showing. I did, but I had forgotten just how funny a film it is. After the screening Carl Reiner was interviewed by Illeana Douglas. He was hilarious. He had such funny stories. One of the things that came up was that Illeana asked him who the character Alan Brady from The Dick Van Dyke Show was based on. He said that everybody always said that Alan Brady was based on Sid Caesar, from Your Show of Shows, but he was not. He was based on a combination of Phil Silvers and Milton Berle. He took some of the most extreme aspects of their personalities when they were working and added things from his imagination to turn Alan Brady into the monster he was. He also talked about Mel Brooks and The 2000 Year Old Man.
It started as a comedy routine that he and Mel Brooks would do at parties. It became so popular that someone finally convinced them to record it. He said that at one point Cary Grant came up to him and wanted 12 copies of the album. He asked Cary Grant why he wanted the albums. He said that he wanted to take them to England. He later found out that the Queen loved it. He also said that he and Mel Brooks were still very close friends, and Mel comes over to his house several times a week to watch TV together. He also told a really off-color story about George Burns. He had directed George Burns in Oh, God! At the time Carl Reiner was in his sixties, and George Burns was in his eighties. George Burns was known for always having gorgeous women with him wherever he went. Carl Reiner asked him, now that he was in his eighties, what was sex like. George Burns responded that it was like putting an oyster in a slot machine. One of the things that struck me about Carl Reiner was that he was very complimentary of everyone he spoke of. He would be asked about certain entertainers he had worked with in the past, and he would always respond with "oh, he was a genius," "she was wonderful." I got a couple of videos of Carl Reiner, one on Mel Brooks: And another on Edith Head: Jasmine's #TCMFF16YO review of Dead Men Don't Wear Plaid: I would make a joke about Juliet being good at sucking but they already did that 3 times. We didn't stick around for the book signing. But we did stop to hide a Falcon in the TCL Chinese IMAX. By this point Jasmine and I had started referring to the Falcons as MacGuffins just in case somebody would overhear us. We had a bit more of a gap between this screening and the next. We were too early to get line numbers for the next screening, so we decided to grab food again at Johnny Rockets. The hostess asked if we wanted our picture taken for a free postcard. I said, what the heck. She came back with the free postcard and a couple of other pictures that were not free.
I decided to be a sucker and bought a couple of them anyway. This one's free, the others not so much. Before the next screening, we hid another Falcon in the TCL Chinese Multiplex. Next up was The War of the Worlds. I actually wasn't all that psyched about it. Yes, I like the film, but it is not one of my favorites of 1950s sci-fi. The introduction by Ben Burtt was really great; it totally made it worth it. Ben Burtt is a sound effects person who had worked on the Star Wars movies, but he talked about both the visual and sound effects. There were some effects that he wasn't sure about. For example, for the beams that came off the ends of the wings of the ships, no one had documented how they did them. They started playing around with it, and what they think they did was use a Jacob's Ladder, laid down on its side, and then used a fan to blow the sparks and filmed it with a green filter. Since Burtt is mostly a sound effects guy, he talked a lot about the sound effects. One of the sound effects, for some of the weapons, was made by putting a mic at one end of a large spring and hitting it. Since the higher frequency sound waves travel faster down the spring than the lower frequency waves, you get this really cool effect. He had even set up a spring and a microphone and demonstrated it live for the audience. That was way cool. Turns out that despite my somewhat low expectations, this presentation turned out to be one of my favorite things of the Festival. Jasmine's #TCMFF16YO review of The War of the Worlds: Don't fight back against aliens, just hide in a church and wait for it all to blow over. The original plan at this point had Jasmine and my wife Mary going to The King and I and me going to Endless Summer. But it turned out that Mary decided she wanted to see Forbidden Planet by the pool instead. Jasmine had thought that Endless Summer sounded cool but wanted to see The King and I more. However,
since her mom changed her mind, she decided to tag along with me for Endless Summer. Doing my best surf pose before Endless Summer. I had decided that I wanted to wear board shorts for the Endless Summer screening, so I went back to the hotel to change. I stopped in Club TCM and hid the last Falcon of the day. It was a real treat to hear Bruce Brown speak before the screening. Jasmine thought he was "totally cool and chill." If I had any complaint, it was that the person interviewing him probably knew sports, but not necessarily the sport of surfing. He asked Bruce Brown how he got Robert August and Mike Hynson to go with him. Bruce Brown thought this was kind of an odd question. I did too. It's not like these guys had agents or anything like that back then. When Bruce Brown was making surf movies he knew and was friends with almost every major surfer in the world. So I thought Bruce Brown's answer was absolutely hilarious. He said, "Well, I knew them and they could go." The interviewer also asked Bruce Brown what he thought about Endless Summer being set aside for preservation by the Library of Congress. I don't know whether Bruce Brown was not fazed by this or whether he just didn't know that it had even happened. He just kind of blew it off. I kind of got the feeling that Bruce Brown really wasn't used to being interviewed and was a little taken aback by all the attention. He did say he would hang out in the theater afterwards to sign autographs and meet people. Jasmine and I both loved the film. If I had ever seen it in a theater it would have been when I was very, very young and I don't remember it, so getting the chance to see it now was a treat. Afterwards, sure enough, Bruce Brown was out in the lobby shaking hands and posing for pictures. My blurry proof that I met Bruce Brown. After I shook hands with him and took a picture, I thanked him and moved to the side to give other people a chance.
He asked me if I surfed, and I said no, not anymore, but I still skateboarded. He said that was cool, and I got the impression that he would much rather be talking about surfing or skateboarding than be there taking pictures. Jasmine's #TCMFF16YO review of Endless Summer: Radical. Even the director is rad, he wore a freaking Hawaiian shirt and flannel to an interview! Next up, the original plan called for Band of Outsiders, but I thought I was too exhausted to read subtitles and Jasmine was just plain exhausted. She went up to the room while I went out to the pool, caught up with Mary, and watched the last 45 minutes of Forbidden Planet. The last screening of the day was the midnight movie Gog in 3D. In the three years that I've been coming to TCMFF, I had never made it to a midnight movie. Gog in 3D did it. I had made some VIP candy, and I still had some of it left, so I brought it to the theater and gave it out to people who were there for the screening. The introduction talked about the restoration. When you make a 3D film you have two separate pieces of film, one for each eye. The restoration was challenging because they had one good copy for the one eye, but the copy for the other eye was done with film that was printed with a very cheap process, and almost all of the color had faded out of it. When you looked at the two side by side, it almost looked like the one had been filmed in black and white. The film was a hoot. Not good, mind you, but fun all the same. By the time we got back to the hotel everything was closed, so we just went up to bed. End of another great day at TCMFF.
2023-12-14T01:27:17.568517
https://example.com/article/2498
# [Snow Report for Schweitzer Mountain](http://alexa.amazon.com/#skills/amzn1.ask.skill.89af60c2-7235-4af0-92ac-82d09390f280) ![0 stars](../../images/ic_star_border_black_18dp_1x.png)![0 stars](../../images/ic_star_border_black_18dp_1x.png)![0 stars](../../images/ic_star_border_black_18dp_1x.png)![0 stars](../../images/ic_star_border_black_18dp_1x.png)![0 stars](../../images/ic_star_border_black_18dp_1x.png) 0 To use the Snow Report for Schweitzer Mountain skill, try saying... * *Alexa, open Schweitzer Mountain* * *Alexa, ask Schweitzer Mountain for the snow report* * *Alexa, ask Schweitzer Mountain for latest conditions* The Schweitzer Mountain Snow Report brought to you by SnoCountry brings you the latest snow fall, snow surface conditions, base depth, trail, and lift operations information. *** ### Skill Details * **Invocation Name:** schweitzer mountain * **Category:** null * **ID:** amzn1.ask.skill.89af60c2-7235-4af0-92ac-82d09390f280 * **ASIN:** B01MDONY13 * **Author:** SnoCountry * **Release Date:** October 21, 2016 @ 14:17:38 * **In-App Purchasing:** No
2023-08-04T01:27:17.568517
https://example.com/article/8903
Comparison of the properties of AMVISC and Healon. A standardized investigation to compare the biophysical characteristics of two sodium hyaluronate products, AMVISC and Healon, was conducted. Results showed that the two products exhibited similar biophysical properties. AMVISC exhibited an average kinematic viscosity of 41,554 centistokes and a calculated average molecular weight of 2.04 × 10^6 daltons. Healon exhibited an average kinematic viscosity of 47,271 centistokes and a calculated average molecular weight of 2.43 × 10^6 daltons.
2023-10-24T01:27:17.568517
https://example.com/article/1592
Massless gauge bosons other than the photon. Gauge bosons associated with unbroken gauge symmetries, under which all standard model fields are singlets, may interact with ordinary matter via higher-dimensional operators. A complete set of dimension-six operators involving a massless U(1) field, γ′, and standard model fields is presented. The μ → eγ′ decay, primordial nucleosynthesis, star cooling, and other phenomena set lower limits on the scale of chirality-flip operators in the 1-15 TeV range if the operators have coefficients given by the corresponding Yukawa couplings. Simple renormalizable models induce γ′ interactions with leptons or quarks at two loops, and may provide a cold dark matter candidate.
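For concreteness, a chirality-flip dimension-six operator coupling a massless U(1) field to leptons takes the schematic dipole form below. This is an illustrative textbook-style form, not quoted from the paper's operator list; the coefficient matrix c_ij and the scale Λ are placeholders:

```latex
\mathcal{L}_{\text{eff}} \;\supset\; \frac{c_{ij}}{\Lambda^{2}}\,
  \bar{L}_{i}\,\sigma^{\mu\nu}\, e_{Rj}\, H \, F'_{\mu\nu} \;+\; \text{h.c.}
```

After electroweak symmetry breaking the Higgs field is replaced by its vacuum expectation value, and an off-diagonal entry (i ≠ j) generates the μ → eγ′ decay the abstract uses as a constraint; taking c_ij proportional to the corresponding Yukawa couplings matches the stated assumption on the coefficients.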
2024-04-29T01:27:17.568517
https://example.com/article/7940
'Cold' Carriers to up travel time for transplant hearts You cannot blame transplant surgeons for having high BP. After all, speeding in an ambulance across the city, removing a beating heart (or liver or lung) from a brain-dead patient, putting it in a box of saline and rushing back before the organ gets infected or stops beating... and then operating for 6-12 hours, cannot be good for the nerves. The heart transplant team at Global Health City may be able to breathe a little easier soon, when they procure new organ transportation cases that are equipped with cold perfusion technology. “Right now, the organs are carried in sterilised boxes with ice and saline, which keep the organ usable for up to four hours. But the battle against time is a constant worry,” said Dr R Ravi Kumar, senior interventional cardiologist at the hospital. Despite its having the best organ transplant programme in the country, at least 100 harvested organs go to waste every year, rued cardiologist Dr Nandkishore Kapadia at the inauguration of Global’s Heart Failure Clinic on Wednesday. Most of this is because the organ gets infected before it reaches the recipient. In contrast, the cold perfusion boxes will keep feeding the heart cold nutrients that circulate inside the blood-pumping organ, allowing it to stay alive and fresh for longer. “Studies have indicated that these cases will be able to keep harvested cadaver hearts alive for up to nine hours,” he added about the boxes, which have been procured at a cost of ₹60 lakh. While the additional time will make it easier for ambulance drivers who battle the city’s traffic every day, what it really does is open up the possibility of procuring organs from further south. “At the moment, we struggle to procure organs from Tiruchy, which is about three hours away through our ‘green corridor’. With the advantage of these boxes, we may be able to receive organs from places like Coimbatore soon,” said Dr Ravi.
Global’s Heart Failure Clinic brings together the whole gamut of heart services from elective surgeries to emergencies in one department and is among the few of its kind in the country.
2024-04-13T01:27:17.568517
https://example.com/article/1134
Traditionally, hospitals and other generators of infectious waste simply burned medical waste in fossil fuel-fired incinerators or deposited it in municipal solid waste landfill sites. This practice caused several environmental, health and safety problems, evoking public outcry and opposition. One of the major negative consequences of incineration was the chemical emissions and particulate matter generated and released into the atmosphere. Nearly 50% or more of the medical waste stream consists of polyvinyl chloride (PVC) plastic, which has proven to be non-recyclable and, when incinerated, a major source of dioxin, nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), and heavy metals (all internationally regulated air emissions). As air quality standards have become more stringent, previously unregulated medical waste incinerators are being shut down due to the high cost of maintenance, the growing concern over increasing levels of CO2, NOx and CO emissions, and the prohibitive cost of retrofitting incinerators to meet pollution standards. This incinerator-related "backlash" has already occurred in the United States, causing a large number of incinerators to be decommissioned and shut down. Remediation Earth Inc.'s technology is designed to process "red bag" and sharps waste.
2024-06-03T01:27:17.568517
https://example.com/article/9582
Here’s a good indication of the strong economic boom going on in Seattle, and it goes beyond the many construction cranes dotting the downtown landscape. Our news partner KING 5 reports from the Ferrari Concours d’Elegance event in Renton this past weekend and finds that Seattle’s Ferrari club is firing on all cylinders with membership increasing. You can thank some of the Amazon and Microsoft millionaires for growth of the club, which some believe has more than doubled in the past several years. “I’d say it is an indicator of success overall,” judge Will Diefenbach tells KING 5. “We have a lot of members who have worked at Microsoft or Amazon or Boeing.” While there may be more Ferraris buzzing Seattle streets, the success of the region is coming with costs. Today, Seattle Mayor Ed Murray is expected to release a report on affordable housing in the city, and Seattle writer Jeff Reifman recently posted an essay titled “How our success is ruining Seattle.” Furthermore, just last week I moderated a panel at the Museum of History and Industry on the growing pains associated with the current tech boom. (More on that panel later this week). Here’s the full report on the Ferrari club:
2023-10-15T01:27:17.568517
https://example.com/article/6703
Actress Priya Prakash Varrier has revealed she was under house arrest after her 'wink' in the song 'Manikya Malaraya Poovi' went viral. "I wasn't allowed to go out because my parents were tensed. The media people would turn up at my door without even informing," she added. "Everything was new to me as well as my family," Priya further said.
2023-08-27T01:27:17.568517
https://example.com/article/6658
In anticipation of the US sanctions against Iranian oil exports, which were reimposed by the Trump Administration on Monday (along with additional sanctions on everything from Iranian shipping to banking and insurance), oil tankers bearing the Iranian flag have embraced a stealthy approach to keeping the oil flowing: They're 'ghosting' international trackers by turning off their transponders, rendering the ships impossible to track by anything aside from visual cues.
2024-07-17T01:27:17.568517
https://example.com/article/1856
Q: How to pass the current user's email address to Google Script. I have a script behind a Google spreadsheet that sends an email once certain cells of a row are completed. The script works by sending to multiple hard-coded email addresses. How can I add the current user's email address as a 2nd CC? One of the CCs is hard-coded, but the 2nd changes depending on the person updating the spreadsheet. I know how to grab the email address, but how do I pass it as a variable so it actually gets CC-ed?

var currentemail = Session.getActiveUser().getEmail();
var options = {cc: 'Session.getActiveUser().getEmail(), someotheremail@domain.com'};

or

var options = {cc: 'currentemail, someotheremail@domain.com'};
GmailApp.sendEmail(email, subject, body, options);

Obviously these do not work :) Many thanks in advance.

A: This can be done like below:

function sendWithCc(){
  var currentemail = Session.getActiveUser().getEmail();
  var options = {}; // create object
  options['cc'] = currentemail + ',someotheremail@domain.com'; // add key & values (comma separated in a string)
  GmailApp.sendEmail('somemail@domain.com', 'subject', 'body', options); // 'subject' and 'body' stringified to make the test work without defining values for them
}
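A minimal sketch of why the asker's versions fail: putting a variable name inside quotes sends the literal text, while the answer concatenates the variable's *value* into the comma-separated `cc` string. `Session` and `GmailApp` are Apps Script globals; here `Session` is stubbed with a made-up address so the string-building logic can run outside Apps Script:

```javascript
// Stub for the Apps Script `Session` global (address is illustrative).
var Session = {
  getActiveUser: function () {
    return { getEmail: function () { return 'editor@example.com'; } };
  }
};

// Build the GmailApp `options` object: the current user's email value,
// then any extra addresses, comma-separated in a single string.
function buildCcOptions(extraCc) {
  var currentEmail = Session.getActiveUser().getEmail();
  return { cc: currentEmail + ',' + extraCc };
}

var options = buildCcOptions('someotheremail@domain.com');
// options.cc is now 'editor@example.com,someotheremail@domain.com' —
// the value, not the text 'currentemail, someotheremail@domain.com'.
```

In real Apps Script you would then pass this object straight to `GmailApp.sendEmail(recipient, subject, body, options)`.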
2023-08-31T01:27:17.568517
https://example.com/article/9976
This invention relates generally to the field of data communications. More particularly, a system and method are provided for enabling the automatic evolution of remote procedure call structures in a distributed computing environment. In many distributed computing environments, a client process communicates with a server process via a remote procedure call (RPC). As part of a call, a pre-defined RPC interface is invoked to describe a requested operation or set of data. For example, one RPC interface invocation may constitute a request for the creation of a new user session. Another invocation may constitute a request for a set of data. A RPC interface may include any number of input and output arguments. These arguments may be of different data types, including scalars (e.g., numbers, text) and pre-defined RPC record types containing fields of one or more data types. When a client or server is configured with an application that employs RPCs, it is equipped with knowledge of a static set of pre-defined RPC interfaces and record types, such as the current versions at the time of configuration. However, RPC interfaces and record types evolve over time, thereby leaving different entities (e.g., clients, servers) with different sets or versions. Thus, over time, new RPC interfaces and record types are created (e.g., to request different types of data) and existing ones are augmented with additional fields or parameters. For example, an input or output parameter may be added to an RPC interface, or a field may be added to an RPC record type. When a new RPC interface or record type is released, or an existing one is modified, it is impractical to attempt to update all entities with the new or modified entity. An application developer must expend great care and effort to ensure that a client or server endowed with a modified RPC interface or record type is able to communicate with another entity using an older version. 
Applications have typically handled this problem in one of two ways. First, an application may employ a self-describing structure comprising name/value pairs or using a markup language such as XML (Extensible Markup Language) or HTML (Hypertext Markup Language). The self-describing structure might contain information, in addition to its data, that allows an older computing device to pick out the fields known to it and ignore others. However, an application employing a self-describing structure can only handle low-level changes because the framework for exchanging a self-describing structure must remain the same. And, the use and exchange of self-describing structures greatly increases the amount of data/information that must be communicated, thereby decreasing efficiency. Second, the application may include logic designed to deduce a common set of structures understood by two communicating computer systems, and exchange those structures. However, the use of specialized logic merely transfers complexity from the RPC infrastructure to the application. Thus, evolving an RPC interface or record type while maintaining the ability for devices having different versions of the interface or record type to use one in an RPC call has become very difficult for developers.
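The first approach above — a self-describing name/value structure from which an older reader picks out the fields it knows and silently ignores the rest — can be sketched as follows. The field names and record shape are illustrative, not taken from any particular RPC framework:

```javascript
// Fields this (older) client version knows about.
var KNOWN_FIELDS = ['userId', 'sessionId'];

// Read only known fields from a self-describing record,
// ignoring anything a newer peer may have added.
function readKnownFields(record) {
  var result = {};
  KNOWN_FIELDS.forEach(function (name) {
    if (Object.prototype.hasOwnProperty.call(record, name)) {
      result[name] = record[name];
    }
  });
  return result;
}

// A newer server added `authToken`; the older client ignores it.
var wireRecord = { userId: 42, sessionId: 'abc', authToken: 'xyz' };
var parsed = readKnownFields(wireRecord);
// parsed contains only userId and sessionId.
```

This illustrates the trade-off the text describes: the reader tolerates low-level additions, but every record must carry its field names on the wire, and the overall exchange framework still cannot change.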
2024-07-23T01:27:17.568517
https://example.com/article/3655
But the 5-to-4 ruling by the Supreme Court indicated that its new conservative majority is far less likely to agree to last-minute stay requests from those facing execution. It also emphasized the stark divide between conservative and liberal justices on capital punishment and the most humane way to carry it out. “What is at stake in this case is the right of a condemned inmate not to be subjected to cruel and unusual punishment in violation of the Eighth Amendment,” wrote Justice Stephen G. Breyer, objecting to the majority’s decision. He added, “To proceed in this matter in the middle of the night without giving all members of the court the opportunity for discussion tomorrow morning is, I believe, unfortunate.” He was joined by his fellow liberal colleagues Ruth Bader Ginsburg, Sonia Sotomayor and Elena Kagan. It was unclear Friday what would happen next in Price’s case. A spokesman for the office of Alabama’s attorney general said it was still reviewing the Supreme Court’s decision and “cannot yet comment on our next steps.” Price, sentenced to death for his role in murdering an Alabama minister in 1991 with a sword and a dagger, was asking to be executed by inhaling nitrogen gas, a process called nitrogen hypoxia, rather than risk a “botched” execution by injection. Alabama allows nitrogen hypoxia but has never used it in an execution. But the Supreme Court majority said Price had missed his chance to elect that manner of death. In a brief, unsigned order, the court’s conservatives said that, in June 2018, death-row inmates in Alabama were given 30 days to elect nitrogen hypoxia. While 48 inmates did so, Price did not. “He then waited until February 2019 to file this action and submitted additional evidence today, a few hours before his scheduled execution time,” said the order from Chief Justice John G. Roberts Jr. and Justices Clarence Thomas, Samuel A. Alito Jr., Neil M. Gorsuch and Brett M. Kavanaugh.
That majority earlier this year allowed the execution of a Muslim inmate in Alabama who had complained that he was not allowed an imam by his side at his death, while Christian inmates could have a chaplain with them. The five justices suggested that the legal action had come too late. The conservatives also recently rejected an appeal from a Missouri inmate who said that lethal injection in his case could cause excruciating pain because of a rare medical condition that could cause him to choke on his own blood during the process. The court ruled, 5 to 4, that Russell Bucklew had not proved that lethal injection would choke him or that another manner of execution would alleviate the problem. The number of executions nationwide has dropped significantly in recent years. There were 25 death sentences carried out last year, down from 98 in 1999. So far this year, three executions have been carried out — two in Texas and one in Alabama — down from seven at the same point in 2018. Breyer’s dissent revealed the behind-the-scenes maneuvering that accompanies execution-stay requests. “Should anyone doubt that death sentences in the United States can be carried out in an arbitrary way, let that person review the following circumstances as they have been presented to our court this evening,” Breyer wrote. After Price obtained stays from a district judge and the U.S. Court of Appeals for the 11th Circuit, the state of Alabama asked the Supreme Court to intervene after 9 p.m. Thursday. Breyer wrote that he requested the court take no action until Friday, when the justices were scheduled to meet in private conference to discuss other matters. “I recognized that my request would delay resolution of the application and that the state would have to obtain a new execution warrant, thus delaying the execution by 30 days,” Breyer wrote. 
“But in my judgment, that delay was warranted, at least on the facts as we have them now.” But he said the majority would not agree to that, “thus preventing full discussion among the court’s members. In doing so, it overrides the discretionary judgment of not one, but two lower courts. Why?” The court’s ruling was emailed to reporters at 2:51 a.m. Friday. While the deliberations proceeded in Washington, Alabama officials decided to halt Price’s execution just before the death warrant expired at midnight. That left state officials angry as well. “This evening, the state of Alabama witnessed a miscarriage of justice,” Gov. Kay Ivey (R) said in a statement. The state’s attorney general, Steve Marshall, said the execution’s delay meant that relatives of Bill Lynn, the minister Price was convicted of killing in 1991, were “deprived of justice.” “They were, in effect, re-victimized by a killer trying to evade his just punishment,” Marshall said in a statement. “This 11th-hour stay for death row inmate Christopher Price will do nothing to serve the ends of justice. Indeed, it has inflicted the opposite — injustice, in the form of justice delayed.” He vowed that “justice will be had” for Lynn.
2024-03-12T01:27:17.568517
https://example.com/article/7953
Children's use of majority information is influenced by pragmatic inferences and task domain. Do children always conform to a majority's testimony, or do the pragmatics of that testimony matter? We investigated the influence of pragmatics on conforming to a majority across 2 domains: when learning about object labels and when learning about causal relationships. Four- and 5-year-olds (N = 250) were given a choice between an object endorsed by a 3-person majority, or one endorsed by a single minority informant. Within each domain, there were 4 pragmatic conditions, each with modified testimony so that the majority either explicitly provided an opinion about or pragmatically implied their opinion about the alternative object chosen by the minority. In the unendorsed condition, informants explicitly unendorsed the unchosen object. In the implied condition, informants said nothing about the unchosen object. In the ignorance condition, informants explicitly expressed ignorance about the unchosen object, and in the hidden condition, the chosen object was the only one present at the time of the endorsement. We found that children were most likely to endorse the majority object in the unendorsed condition, in which the majority's opinion was explicitly stated, and least likely in the hidden condition, in which only one object at a time was present, with the other 2 conditions intermediate. Children's preference for majority testimony also depended on the task domain, with a stronger preference for the majority in the language task than causal task. Children might not simply have a majority bias; rather, they use majority information differently depending on the pragmatics and task demands. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
2023-10-22T01:27:17.568517
https://example.com/article/6727
Q: How to store and retrieve a float from NSUserDefaults
I'm having trouble storing and retrieving a float in NSUserDefaults. I store the value, but when I retrieve it, it returns 0. Here's what I tried that didn't work:

[pref setFloat:3.0f forKey:@"key"];
float value = [pref floatForKey:@"key"]; // value = 0

[pref setFloat:3 forKey:@"key"];
float value = [pref floatForKey:@"key"]; // value = 0

[pref setObject:[NSNumber numberWithFloat:3] forKey:@"key"];
float value = [[pref objectForKey:@"key"] floatValue]; // value = 0

[pref setObject:[NSNumber numberWithFloat:3.0f] forKey:@"key"];
float value = [[pref objectForKey:@"key"] floatValue]; // value = 0

What am I doing wrong here? I've tried these four pieces of code, but they all return zero when retrieving the float from NSUserDefaults. Any help is appreciated. Thanks!

A: Save:

-(void)saveFloatToUserDefaults:(float)x forKey:(NSString *)key {
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    [userDefaults setFloat:x forKey:key];
    [userDefaults synchronize];
}

Load:

-(float)loadFloatFromUserDefaultsForKey:(NSString *)key {
    NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
    return [userDefaults floatForKey:key];
}

How-To:

[self saveFloatToUserDefaults:5.241 forKey:@"myFloat"];
float x = [self loadFloatFromUserDefaultsForKey:@"myFloat"];

A: The first example in your code is fine, assuming that this line:

NSUserDefaults *pref = [NSUserDefaults standardUserDefaults];

appears before it. As I suggested in my comment, the behaviour you are seeing suggests that pref is nil (sending a message to nil in Objective-C silently returns 0).
2023-11-07T01:27:17.568517
https://example.com/article/5737
Maple Raspberry “Ice Cream” This dish features juicy raspberries folded into creamy soft serve with a swirl of buttery maple syrup. It is super quick to make, nutritious, and tastes amazing. With a little prep ahead of time, it serves as a quick snack to throw together when you are short on time. It is completely natural and is made simply from fruit–nature’s most perfect food. Sweet, ripe bananas are transformed in this recipe and topped with juicy raspberries for a pop of ruby-red sweetness.
2023-12-21T01:27:17.568517
https://example.com/article/2546
// 0x0D0001A0
const GeoLayout snufit_geo[] = {
   GEO_SHADOW(SHADOW_CIRCLE_4_VERTS, 0x96, 100),
   GEO_OPEN_NODE(),
      GEO_SCALE(0x00, 16384),
      GEO_OPEN_NODE(),
         GEO_ASM(0, geo_snufit_move_mask),
         GEO_TRANSLATE_NODE(0x00, 0, 0, 0),
         GEO_OPEN_NODE(),
            GEO_DISPLAY_LIST(LAYER_OPAQUE, snufit_seg6_dl_06009748),
         GEO_CLOSE_NODE(),
         GEO_DISPLAY_LIST(LAYER_OPAQUE, snufit_seg6_dl_06009498),
         GEO_DISPLAY_LIST(LAYER_OPAQUE, snufit_seg6_dl_06009938),
         GEO_DISPLAY_LIST(LAYER_OPAQUE, snufit_seg6_dl_06009B68),
         GEO_BILLBOARD(),
         GEO_OPEN_NODE(),
            GEO_ASM(0, geo_snufit_scale_body),
            GEO_SCALE(0x00, 0),
            GEO_OPEN_NODE(),
               GEO_DISPLAY_LIST(LAYER_ALPHA, snufit_seg6_dl_06009A10),
            GEO_CLOSE_NODE(),
         GEO_CLOSE_NODE(),
      GEO_CLOSE_NODE(),
   GEO_CLOSE_NODE(),
   GEO_CLOSE_NODE(), //! more close than open nodes
   GEO_END(),
};
2024-02-20T01:27:17.568517
https://example.com/article/2800
Hepatotoxicity due to mitochondrial dysfunction. Mitochondria are involved in fatty acid beta-oxidation, the tricarboxylic acid cycle, and oxidative phosphorylation, which provide most of the cell energy. Mitochondria are also the main source of reactive oxygen species in the cell and are involved in cell demise through opening of the mitochondrial permeability transition pore. It was therefore to be expected that mitochondrial dysfunction could be a major mechanism of drug-induced liver disease. Microvesicular steatosis (which may cause liver failure, coma, and death) is the consequence of severe impairment of mitochondrial beta-oxidation. Endogenous compounds (such as cytokines or female sex hormones) or xenobiotics (including toxins such as ethanol and drugs such as aspirin, valproic acid, ibuprofen, or zidovudine) can inhibit beta-oxidation directly or through a primary effect on the mitochondrial genome or the respiratory chain itself. In some patients, infections and cytokines, or inborn errors of beta-oxidation enzymes or the mitochondrial genome, may favor the appearance of drug-induced microvesicular steatosis. Nonalcoholic steatohepatitis may develop under conditions causing prolonged, microvesicular, and/or macrovacuolar steatosis. In this condition, chronic impairment of mitochondrial beta-oxidation (causing steatosis) and the respiratory chain (increasing the production of ROS) lead to lipid peroxidation, which, in turn, may cause the diverse lesions of steatohepatitis, namely, necrosis, inflammation, Mallory's bodies, and fibrosis. Finally, mitochondria are involved in several forms of drug-induced cytolytic hepatitis, through inhibition or uncoupling of respiration or through a drug-induced or reactive metabolite-induced mitochondrial permeability transition. The latter effect commits hepatocytes to either apoptosis or necrosis, depending on the number of organelles that have undergone the permeability transition.
2023-10-17T01:27:17.568517
https://example.com/article/3233
Physiotherapy for ankylosing spondylitis: evidence and application. Ankylosing spondylitis (AS) is a disease that tends to affect younger individuals, many of whom are in the prime of their lives; therefore, incorporating the most up-to-date evidence into physiotherapy practice is critical. The purpose of this review is to update the most recent evidence related to physiotherapy intervention for AS and highlight the application of the findings to current physiotherapy research and clinical practice. The results of this review add to the evidence supporting physiotherapy as an intervention for AS. The emphasis continues to be on exercise as the most studied physiotherapy modality, with very few studies examining other physiotherapy modalities. Results of the studies reviewed support the use of exercise, spa therapy, manual therapy and electrotherapeutic modalities. In addition, the results of this review help to understand who might benefit from certain interventions, as well as barriers to management. A review of recently published articles has resulted in a number of studies that support the body of literature describing physiotherapy as an effective form of intervention for AS. In order to continue to build on the existing research, further examination into physiotherapy modalities, beyond exercise-based intervention, needs to be explored.
2023-10-16T01:27:17.568517
https://example.com/article/6227
Jabhat Tahrir Syria (JTS) and Hay'at Tahrir al-Sham (HTS) have begun implementing the fourth item of the agreement reached under the auspices of Sheikhs Abdullah al-Muhaisini and Abu Mohammed al-Sadiq, after fighting between the parties lasted for more than two months. The JTS official for the negotiating file, Ahmed Shami Abu Mohammed, told Syria Call news network that most of the two parties' prisoners have been released (406 in total), noting that Tahrir al-Sham released 149 prisoners from JTS and 37 prisoners of Soqour al-Sham, while Jabhat Tahrir Syria (JTS) released 185 prisoners and Soqour al-Sham freed 35. HTS and JTS each agreed to find a mechanism to resolve the cases of the remaining detainees, under the supervision of Sheikh Omar Hudhayfah, the general jurist of the Sham Corps, and Sheikhs Abdullah Muhaisini and Abu Mohammed Sadiq. Under the agreement, a leader from Jabhat Tahrir Syria, Abu Azzam Sarakib, was released after eight months in prisons run by Tahrir al-Sham, having been arrested during the first attack launched by HTS on the city of Sarakib. Tahrir al-Sham, Jabhat Tahrir Syria and Soqour al-Sham had reached an agreement a week earlier to end the infighting between the parties, stop the arrests and media harassment, release detainees, form a committee to follow up on the implementation of the agreement, and begin consultations toward a comprehensive solution on military, political, administrative and judicial affairs.
2023-12-30T01:27:17.568517
https://example.com/article/2637
@import (once) "../include/vars";
@import (once) "builder/builder";

@schemeBackground: #ffffff;
@schemeBackgroundSecondary: #F8F8F8;
@schemeTextColor: #000000;
@schemeTextColorSecondary: #3a3a3a;
@schemeControlColor: #AF0015;
@schemeControlTextColor: #ffffff;
@schemeFontFamily: @fontName;
@schemeFontSize: @fontSize;

.scheme-builder(
    @schemeBackground,
    @schemeBackgroundSecondary,
    @schemeTextColor,
    @schemeTextColorSecondary,
    @schemeControlColor,
    @schemeControlTextColor,
    @schemeFontFamily,
    @schemeFontSize
);

.navview {
    .navview-pane {
        .navview-menu {
            li {
                &.active {
                    &::before {
                        background-color: @white;
                    }
                }
            }
        }
    }
}
2024-07-04T01:27:17.568517
https://example.com/article/9909
July 22 1990 horoscope
You tend to intellectualize experience, which may make you slow to react. In a snake year, it will be important to avoid behaviors that are excessive. Cluster of tiny 'bells' on a long stalk. For persons born on the 27th of any month. There are hundreds of people who have no one - thousands, even zillions. Don't try to control me or censor me. Mike at the crystal ball inc. Cancer women ordinarily have none of these faults. Taurean people tend to be slow, methodical, practical and reserved. In analogy with Saturn, her ruler, and the 10th house. The good news is that there is very little malice in their fights. Millions of sins, filthy deeds, acts of violence and physical contagions. You may not enjoy parties very much, or you prefer to be in anonymous settings. Both Leo and Aquarius are quick to realize that this meeting is a rare and valuable gift. It's important to learn the difference. Falconer, Kim: Astrology Aptitude: How to Become What You Are Meant to Be, AFA, 18. If you are not lucky with your present name, better change it. To make the effort to display a spotlight item twice weekly. So who is the best match in the zodiac for this fire sign? A denial of the trinity doctrine and a form of polytheism. Bak et halleluja mp3 remington vs speer bullets. It's calming' 'I like it!'. Reveal much more depth when a birth year, and in the case of the former, a birth. Part of fortune as moon-sun (it is the moon's position when the sun rises). A lover who can keep up with the social life of the Libra will be a good match. Your health is very strong. Each individual is born with specific samskaras or past-life impressions, and it is the job of Uranus to assist us in breaking down these patterns through the process of change, at times radically so. He understands matters faster than those with him. This is what we call identifying the dominant planets. 
That the internet is terrible, it's the worst place to make a friend. Your career and place in the public eye gets a shining light. Ronald David "Ronnie" Wood (born June 1, 1947 in Hillingdon, London) is an English rock guitarist and bassist best known as a member of the Rolling Stones, Faces, and the Jeff Beck Group. They have similar qualities in common, but one should surrender one thing that they love for their relationship to continue, which is very hard and might become a conflict in the near future. You'll find him easy on your pocket-book in many ways. Magazines form an important part of businesses and professionals in today's world.
2024-02-27T01:27:17.568517
https://example.com/article/4620
Accelerated Learning for Busy People
Language Support – EN
Learn the following languages using Languistik: English, Finnish, French, German, Italian, Spanish, Swedish. See 'Shopping' page for a list of available language courses. User interface options: English, French, German, Spanish, Finnish, Swedish, Italian, Estonian. The User Guide is available in English, Finnish, French, German and Spanish.
2023-09-20T01:27:17.568517
https://example.com/article/3256
<?php
namespace Imbo\Http\Response\Formatter;

use Imbo\Model;
use Imbo\Helpers\DateFormatter;
use Imbo\Exception\InvalidArgumentException;

/**
 * Abstract formatter
 */
abstract class Formatter implements FormatterInterface {
    /**
     * Date formatter helper
     *
     * @var DateFormatter
     */
    protected $dateFormatter;

    /**
     * Class constructor
     *
     * @param DateFormatter $formatter An instance of the date formatter helper
     */
    public function __construct(DateFormatter $formatter = null) {
        if ($formatter === null) {
            $formatter = new DateFormatter();
        }

        $this->dateFormatter = $formatter;
    }

    /**
     * {@inheritdoc}
     */
    public function format(Model\ModelInterface $model) {
        if ($model instanceof Model\Error) {
            return $this->formatError($model);
        } else if ($model instanceof Model\Status) {
            return $this->formatStatus($model);
        } else if ($model instanceof Model\User) {
            return $this->formatUser($model);
        } else if ($model instanceof Model\Images) {
            return $this->formatImages($model);
        } else if ($model instanceof Model\Metadata) {
            return $this->formatMetadataModel($model);
        } else if ($model instanceof Model\Groups) {
            return $this->formatGroups($model);
        } else if ($model instanceof Model\Group) {
            return $this->formatGroup($model);
        } else if ($model instanceof Model\AccessRule) {
            return $this->formatAccessRule($model);
        } else if ($model instanceof Model\AccessRules) {
            return $this->formatAccessRules($model);
        } else if ($model instanceof Model\ArrayModel) {
            return $this->formatArrayModel($model);
        } else if ($model instanceof Model\ListModel) {
            return $this->formatListModel($model);
        } else if ($model instanceof Model\Stats) {
            return $this->formatStats($model);
        }

        throw new InvalidArgumentException('Unsupported model type', 500);
    }
}
2024-01-04T01:27:17.568517
https://example.com/article/3150
Yield of endomyocardial biopsy in patients with biventricular failure. Comparison of patients with normal vs reduced left ventricular ejection fraction. Twenty five patients with biventricular failure underwent endomyocardial biopsy procedures. Twelve of these 25 patients had normal left ventricular ejection fraction. Endomyocardial biopsy sampling was useful in eight of 12 patients (67 percent) with biventricular failure and normal left ventricular ejection fraction. Biopsy specimens in five of these 12 patients demonstrated endocardial or infiltrative heart disease and excluded these diseases in three other patients with constrictive pericarditis. This study suggests that the clinical presentation of biventricular failure, combined with the noninvasive determination of a normal left ventricular ejection fraction, is helpful in selecting patients for endomyocardial biopsy study. Patients with biventricular failure and normal left ventricular ejection fractions have a high probability of having pericardial or infiltrative heart disease, conditions that often can be differentiated only by analysis of myocardial tissue. Hemodynamic assessment of patients without infiltrative processes further allows one to eliminate those patients with a high likelihood of having constrictive pericardial disease.
2023-08-12T01:27:17.568517
https://example.com/article/5803
Revenue growth belies claims of economic slow-down: Jaitley
10 January 2017
A double-digit growth in tax revenue collection between April and December 2016, with direct tax collections increasing 12.01 per cent and indirect tax mop-up growing 25 per cent, belies any major impact of demonetisation on the economy, says finance minister Arun Jaitley. In fact, Jaitley said, collection of value added tax had also risen for most states, which renders claims of an economic slow-down baseless. "All stories about job losses or businesses suffering losses are anecdotal. This data is real and not an estimate." Jaitley's assertion comes against the backdrop of the All India Manufacturers Organisation (AIMO) warning that demonetisation could crimp manufacturing activity. Former prime minister Manmohan Singh had predicted a 2 per cent slide in the growth rate, while Pronab Sen, former chairman of the National Statistical Commission, had forecast a 1 per cent dip in growth. The advance estimates of GDP released by the CSO last week, however, projected economic growth of around 7.1 per cent, down from 7.6 per cent last fiscal but above the lows predicted by experts. Direct tax and indirect tax collection figures for April-December 2016 show a positive trend, with direct taxes growing 12.01 per cent and indirect taxes 25 per cent over the corresponding period last year, i.e., April-December 2015. The figures for direct tax collections up to December 2016 show that net collections stood at Rs5,53,000 crore, which is 12.01 per cent more than the net collection for the corresponding period last year. This is 65.3 per cent of the total budget estimates of direct taxes for FY 2016-17. 
Corporate income tax collections grew 10.7 per cent while personal income tax collections rose 21.7 per cent year-on-year in April-December 2016-17. However, after adjusting for refunds, the net growth in corporate tax collections is 4.4 per cent while that of personal income tax is 24.6 per cent. Refunds amounting to Rs1,26,371 crore have been issued during April-December 2016, which is 30.5 per cent higher than the refunds issued during the corresponding period last year. After accounting for the third installment of advance tax received in December 2016, the collections under advance tax stand at Rs2,82,000 crore, which is 14.4 per cent higher than the figures for the corresponding period of last year. CIT advance tax is growing at 10.6 per cent while personal income tax advance collections have recorded a growth of 38.2 per cent. Collection of indirect taxes, including central excise, service tax and customs up to December 2016 show that net revenue collections stood at Rs6,30,000 crore, which is 25 per cent more than the net collections for the corresponding period last year. Till December 2016, about 81 per cent of the budget estimates of indirect taxes for financial year 2016-17 has been achieved. As regards central excise, net tax collections stood at Rs2,79,000 crore during April-December 2016 compared to Rs1,95,000 crore during the corresponding period in the previous financial year - a growth of 43 per cent year-on-year. Net tax collections on account of service tax during April-December 2016 stood at Rs1,83,000 crore compared to Rs1,48,000 crore during the corresponding period of the previous financial year, thereby showing a growth of 23.9 per cent. 
Net tax collections on account of customs during April-December 2016 stood at Rs1,67,000 crore compared to Rs1,60,000 crore during the same period in the previous financial year, thereby recording a growth of 4.1 per cent. During December 2016, net indirect tax collections (with ARM) grew at the rate of 14.2 per cent compared to the corresponding month last year. The growth rate in net collection for customs, central excise and service tax was (-)6.3 per cent, 31.6 per cent and 12.4 per cent, respectively, during December 2016, compared to the corresponding month last year. The de-growth in customs collections appears to be on account of a decline in gold imports of about 46 per cent (in volume terms) in December 2016 over December 2015. Jaitley said data showed that indirect tax collection had moved up significantly during the nine-month period. "Since there has been a considerable debate in the public space as to the impact of the currency squeeze in the months of November and December, the data of these two months become relevant," Jaitley said. Jaitley said VAT collections in most states had shown an increase, and states also received taxes in the old currency in November. "In my opinion, all well administered states have seen a rise in VAT collection even in November." On the divergence in GDP and tax collection figures, Jaitley said: "We will only comment on final figures (of GDP). Today we only have advance estimates presumption. Tax collection data are real, it is not a presumption."
2023-10-08T01:27:17.568517
https://example.com/article/2399
Posts with the tag Fabian Adeoye Lojede. With its endless capital and battalion of stars, Hollywood habitually eclipses everything else and leaves North American cineplexes turgid with stale super-hero flicks, listless comedies, and their sequels.
2024-05-13T01:27:17.568517
https://example.com/article/8440
Tuesday, November 15, 2005
When designing the interface for the BTree module, there are some conflicting priorities. An important goal is to ensure that the interface is reusable and makes as few assumptions about the content of "keys" and "values" as possible. Some of the points to consider are:
a) Should the BTree module be aware of the type system available in the system?
b) Should it be aware of how multiple attribute keys are handled?
I chose to keep the level of abstraction higher. Therefore, in my implementation of the BTree module, the module has no knowledge of what is contained in the key, whether it is made up of single or multiple attributes, etc. The advantage is that a key can be anything that satisfies a relatively simple interface defined by the system. The disadvantages are:
1) The system cannot perform specific performance optimisations that would depend upon knowledge of the attribute structure and type system. For example, the system cannot perform compression on the key structure.
2) The system cannot support search operators at an attribute level. When searching and fetching keys and values, only one composite operator is supported: >=. That is, the fetch algorithm knows how to obtain the next key that is equal to or greater than the specified key.
On the whole, I am happy with the interface as it provides greater reusability. It allows the BTree module to be independently useful. A problem I have been grappling with recently is how to support data-driven construction of keys without breaking the high-level interface defined so far. In a DBMS, the definition of the keys is stored in System Catalogs. There has to be a way by which the key definition can be used to generate keys. In the current design, the BTree module expects to be given a factory class for instantiating keys. I thought of creating a Key factory that would read a key definition and then generate keys based upon the definition. 
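The opaque-key design and the single >= fetch operator described above can be sketched as follows. This is a hypothetical illustration, not SimpleDBM's actual API: the class and method names are mine, and java.util.TreeSet stands in for the real on-disk BTree. The point is that the index needs nothing from a key beyond an ordering.

```java
import java.util.TreeSet;

// Hypothetical sketch: an index over opaque keys. Any Comparable type can
// serve as a key, and the only composite search operator is >=, i.e.
// "fetch the next key equal to or greater than the probe".
class OpaqueKeyIndex<K extends Comparable<K>> {
    private final TreeSet<K> keys = new TreeSet<>();

    void insert(K key) {
        keys.add(key);
    }

    // Returns the smallest stored key >= probe, or null if none exists.
    K fetchGreaterOrEqual(K probe) {
        return keys.ceiling(probe);
    }
}
```

Because the index never looks inside a key, attribute-level operators (prefix match, per-attribute ranges) cannot be expressed here; a scan must start at the >= position and filter above this layer.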
However, the problem is that such a key factory would be capable of generating many different types of keys, whereas the current BTree interface expects a one-to-one relationship between the key factory and the type of key. I have come to the conclusion that I need to enhance the system in two ways. Firstly, I need to support a one-to-many relationship between the key factory and the key. Since the BTree instance is tied to a specific type of key, it therefore needs to be enhanced to supply an "id" that enables the key factory to generate the specific type of key required by the BTree instance. This means that instead of invoking:
key = keyFactory.getNewInstance()
the BTree instance would invoke:
key = keyFactory.getNewInstance(myid)
The key factory would be able to use the "id" to determine the type of key required. The second enhancement I need is to do with performance. At present, the Object Registry always generates new instances of objects, which would be inefficient for a key factory that needs to maintain its own internal registry. I therefore need to enhance the registry to support Singletons - key factories need to be cached in the registry so that the same factory can be used by multiple BTree instances. My final point about the tradeoffs between flexibility and performance is to do with the storage structure of keys and log records. I have tried to make the storage structures self-describing. This means that the keys and values, as well as the log records, must persist sufficient information to be able to reconstruct their state when required. In a multi-attribute key, for example, this means that each attribute must store its type information, data length, etc. along with the data. This naturally increases the storage space requirement of entities. The benefit is that the system does not require external support to determine how to read data back from persistent storage. 
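The one-to-many factory enhancement could look something like this sketch. All names here are illustrative, not SimpleDBM's actual API: one factory serves many key types, and the "id" supplied by each BTree instance selects which type of key to construct.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a key interface and its implementations.
interface BTreeKey {
    String typeName();
}

class StringKey implements BTreeKey {
    public String typeName() { return "string"; }
}

class MultiAttributeKey implements BTreeKey {
    public String typeName() { return "multi-attribute"; }
}

// One factory, many key types: getNewInstance(id) replaces getNewInstance().
class KeyFactory {
    // Maps a key-type id (e.g. derived from a System Catalog definition)
    // to a builder for that type of key.
    private final Map<Integer, Supplier<BTreeKey>> registry = new HashMap<>();

    void register(int id, Supplier<BTreeKey> builder) {
        registry.put(id, builder);
    }

    BTreeKey getNewInstance(int id) {
        Supplier<BTreeKey> builder = registry.get(id);
        if (builder == null) {
            throw new IllegalArgumentException("unknown key id: " + id);
        }
        return builder.get();
    }
}
```

A single KeyFactory instance can then be cached as a Singleton in the Object Registry and shared by every BTree instance, with each BTree passing its own id when it needs a fresh key.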
For example, the BTree module does not require the System Catalog to be available; it has sufficient information to be able to read and write keys to persistent storage. Thursday, November 10, 2005 In a major refactoring exercise I have split the SimpleDBM modules into two higher level packages. The Latch Manager, the Object Registry and the Util packages all go under the package org.simpledbm.common. The rest of the packages go under org.simpledbm.rss. I decided to use the acronym RSS for the low level data management API in honour of System R. System R called its low level API the Research Storage System, or RSS in short. See the paper Morton M. Astrahan, Mike W. Blasgen, Donald D. Chamberlin, Kapali P. Eswaran, Jim Gray, Patricia P. Griffiths, W. Frank King III, Raymond A. Lorie, Paul R. McJones, James W. Mehl, Gianfranco R. Putzolu, Irving L. Traiger, Bradford W. Wade, Vera Watson: System R: Relational Approach to Database Management. ACM Trans. Database Syst. 1(2): 97-137(1976). Of course, all this refactoring has left CVS in a mess, as now there are many empty subdirectories under src/org/simpledbm. I have also been working on a Developer's Reference Manual. This will hopefully contain sufficient information for interested hackers and database technology enthusiasts to play around with various modules. You can access the document here. I would welcome any feedback. Finally, the SimpleDBM project has now graduated out of the incubator at www.java.net. This should lead to greater exposure and interest in the project. Monday, November 07, 2005 BTree scans are now available as well. This means that the BTree implementation is feature complete, although there are still some areas that need more work. For the rest of November, I am going to concentrate on improving the code, refactoring bits that I don't like, updating documentation, and generally cleaning up the code. All this in preparation for a release of the code towards the end of the month. 
From December onwards, work can start on the remaining bits in the Data Manager layer of SimpleDBM, i.e, tables. I have not yet decided whether knowledge about data types should be part of this layer or whether it is best left to a higher-level layer. I am tempted to keep the data layer as low level as possible; this will make it more reusable. Wednesday, November 02, 2005 Both insert key and delete key operations are now available. More test cases are being written; in order to test lock conflicts when the same key is concurrently inserted/deleted by different transactions, the new test cases have to use multiple threads. It is harder to debug such test cases, but Eclipse makes it easy. I simply run the JUnit tests in debug mode and set break points at appropriate places. To ensure that the BTree implementation is tested thoroughly, I have started to use the code coverage tool Clover. The vendor has graciously provided me a free license to use this tool. Using this tool I am able to determine which code paths have not been tested, and then write test cases to exercise those.
2023-11-11T01:27:17.568517
https://example.com/article/5375
The Confederate Flag Doesn’t Belong in a Museum

Finding the proper home for this symbol of oppression isn’t as simple as politicians keep saying it is.

The U.S. flag and South Carolina state flag fly at half-staff as the Confederate battle flag also flies on the South Carolina State House grounds in Columbia, South Carolina, on June 20, 2015. Photo by Jason Miczek/Reuters

In the aftermath of the shooting in Charleston, South Carolina, that claimed the lives of nine parishioners at the Emanuel African Methodist Episcopal Church, politicians from both sides of the aisle have called for the Confederate flag that flies on the grounds of the South Carolina state Capitol to come down. A common theme of these calls to lower the flag has been the suggestion that this symbol of America’s racist past belongs not on a flagpole on public land but in a museum. Rand Paul and Bernie Sanders, Jeb Bush and President Obama all have called for the flag to be retired to a museum.

At first blush, the suggestion makes sense. Museums preserve and exhibit material culture. When Jeb Bush ordered the Confederate flag that had once flown over Florida’s capitol to be taken down, he relocated it to the Museum of Florida History. But the solution of what to do with South Carolina’s Confederate flag is not so simple. Displaying the flag in, say, the South Carolina State Museum would provide little improvement over flying it at the Capitol, unless the museum made the effort to provide context that would explain the flag’s place in the state’s, and the nation’s, history. What might such an exhibit look like? It would need to tell the history behind the flag. It is a symbol of white supremacy, and museums should acknowledge it as such.
The designer for the second national flag of the Confederacy described it as a representation of the fight to “maintain the Heaven-ordained supremacy of the white man over the inferior or colored race.” The exhibit should also acknowledge the role the flag played in South Carolina’s past. The flag that’s captured national attention this week came to Columbia in 1962, as a reaction to black people fighting for and winning rights during the civil rights era. Effective museum interpretation would not stop there. It would address the recurring questions surrounding this symbol. Why do people find the flag offensive? Why are other people so attached to the flag? Why do some people who embrace the fullness of Southern pride, including the Confederate flag, not see themselves as racists? Furthermore, a complete interpretation of the Confederate flag would need to make clear that black people have always resisted white supremacy and fought for the demise of institutional racism. The late historian Vincent Harding put forth this idea, characterizing black people as committed to their freedom and unwilling to accept oppression. There has always been a cadre of black people willing to die for their freedom in America, and this too is germane to museum interpretation of the Confederate flag. In addition to being a sacred space, the AME church in Charleston was also home to the storied congregation to which the revolutionary Denmark Vesey had belonged. His church was burned after Vesey was accused of plotting an uprising in which enslaved people would revolt against slave masters. It’s certainly possible that a museum could create such an exhibit, though the terms in which South Carolina Gov. Nikki Haley discussed the flag in her remarks on Monday suggest that curators would feel political pressure to also describe the flag in its defenders’ terms.
Before acknowledging that the flag, to some, “is a deeply offensive symbol of a brutally oppressive past,” she noted that others revere the flag, and to them it is “a symbol of respect, integrity, and duty. They also see it as a memorial, a way to honor ancestors who came to the service of their state during time of conflict. That is not hate, nor is it racism.” Such efforts to appease the flag’s defenders might help to perpetuate a misunderstanding of its initial purpose and lasting power as a symbol of oppression. It’s not just external pressure that would be the problem, however. Museums don’t have a great track record on issues of race. Race has emerged as the most popular topic in the museum community that we never actually address directly. (We even tend to avoid using the word race, preferring vague terms like “social justice” and “diversity and inclusion.”) Too many museums are blind to how race lives in their collections, exhibition spaces, and public interactions. The prevailing approach to race in museums has been to ignore it. The American Alliance of Museums’ first statement on diversity and inclusion was only issued in 2014. The statement reflects the hazy vision of the field. It makes clear that AAM “values and celebrates the unique attributes, characteristics, and perspectives that make each person who they are,” but does not include any specific language prohibiting hate or discrimination of any kind. Embracing diversity is not the same as making a commitment to dismantling systems of racism, sexism, homophobia, and other pervasive forms of oppression. Too often, museums have simply chosen to neglect racially charged holdings in their collections rather than confront and interpret them. But censoring objects that are symbols of oppression does nothing to make a museum a socially conscious forum—something museums claim they’re striving for. 
That museum professionals still suffer a fraught relationship with race was made plain in the wake of the Ferguson, Missouri, protests. On Twitter, museum professionals expressed the difficulty of addressing race, especially when they don’t have support from their institutions. Many museum workers claimed leaders at their institutions specifically told them to avoid race and other incendiary topics. Others complained that they did not have training, knowledge, or adequate resources to facilitate a productive conversation on race in a museum space. Their frustrations often resulted in deciding to just go along and leave the subject untouched. Thankfully, there is a rapidly growing body of scholarship that provides museums with the tools for thoughtful interpretation on race relations. Most recently, Keisha Blain, assistant professor of history at the University of Iowa, led a crowdsourced project to form the #CharlestonSyllabus. Housed on the blog of the African American Intellectual History Society, the list contains both primary and secondary resources, historically grounding the shooting in Charleston. For museum professionals who have never had to discuss or interpret race beyond the old Black History Month tropes, #CharlestonSyllabus equips them with the materials to address the news in a constructive way. More traditional resources, like the book Representations of Slavery: Race and Ideology in Southern Plantation Museums, offer useful analysis of how museums deal with race and their impact on their communities. While museums, in general, are sadly not prepared to accession the Confederate flag, the events of this week should be seen as a call to action. There are now ample resources available for museums to learn about race and use that knowledge to undergo strategic change. It is time for museums to stop the twisted tango talk that peripherally deals with race or evades it altogether.
This approach is not productive and devalues the lived experience of black Americans, as well as any representations of blackness in museums. It stunts the country’s growth in understanding our past and in working toward racial healing. American museums should seize this moment to look through their collections for objects that can contextualize the flag and its relation to the shooting in Charleston. Take, for instance, the quilt Southern Shame, Southern Horrors by Gwendolyn Magee, which was made in response to Mississippi’s failure in 2001 to adopt a state flag without the Confederate battle emblem and is part of the collection at the Michigan State University Museum. The quilt presents three layers of visuals: the Confederate flag, black lynched bodies, and a Ku Klux Klan hood. Exhibiting it would provide visitors with the opportunity to interpret history; the quilt depicts horrors that visitors may not have been able to conceptualize on their own. As of now, the quilt is displayed on the Quilt Index, a research database the museum runs, but not in the museum itself. Until museums move forward, a better plan might involve the flag traveling to various community centers, libraries, and museums throughout the country for a year, accompanied by trained facilitators and documentarians. Traveling beyond state borders would reinforce that the issues tied up in the flag are hardly limited to South Carolina. Each stop could provide a forum for communities to dissect what the flag means to them and for people to listen to others who may not share their point of view. After the flag’s one-year tour, I do not think it should go to a museum. Unfortunately, I’ve yet to visit a mainstream museum I believe would make it an institutional priority to do the intense and in-depth public engagement this object deserves. That doesn’t mean there aren’t museums that handle race sensitively.
The Levine Museum of the New South in Charlotte, North Carolina, specifically collects material culture dealing with the post-Reconstruction South, and its exhibit schedule challenges visitors to think critically about race. And there are many black museums around the country that do so as well. But it’s too often the case that black institutions are expected to use their resources to interpret images of white supremacy. The problem of interpreting this symbol shouldn’t be their burden. It is long past time for the museum world to get its house in order. If museums genuinely want to be socially responsible, they will have to commit to learning about and addressing race. In the meantime, I believe the Confederate flag should go to the Emanuel AME Church in Charleston. There it could serve as an emblem of free Africans in America working to survive while trying to dismantle white supremacy. That legacy belongs to them.
2024-07-03T01:27:17.568517
https://example.com/article/8774
GoCoin Ideally, when a customer asks if you accept a certain form of payment, the answer should always be yes. GoCoin is the first checkout solution designed to accommodate bitcoin and other popular altcoins like litecoin and dogecoin. Give your customers the convenience of choice, and in turn, you can choose to keep the coins or instantly exchange them to your preferred currency.
2024-03-05T01:27:17.568517
https://example.com/article/9345
Ever since Red Hat released Red Hat Enterprise Linux (RHEL) 8 in May, CentOS users have been waiting impatiently for CentOS 8 to arrive. Now, their wait is over: CentOS 8 is here and ready for download.

This is great news for the many hosting companies, data centers, and businesses with in-house Linux experts that rely on CentOS every day for their work. By Datanyze's count of web servers, CentOS, with 15.65% of the market, is second only to Ubuntu, with its 26.7% share. It's popular because CentOS is a RHEL clone with most of RHEL's top-tier business server Linux benefits but without RHEL's costs. That's great if you know Linux like the back of your hand and you're willing to take responsibility if something goes wrong. If you'd rather have the comfort of knowing you have support if things go awry, RHEL is a better choice.

What do you get with CentOS 8?

For starters, it's built on the 4.18 Linux kernel. Yes, that's far from the newest Linux kernel, but CentOS, like RHEL, is all about stability for production systems. If you want bright, shiny new kernels, look to Linux distros such as Fedora. Other major changes include a rework of the foundations of the Yum package manager, which is now based on DNF (a.k.a. Dandified Yum). While DNF maintains the same command-line interface and a stable API for sysadmin and DevOps integration, it should be faster than its predecessor.

For developers, besides Git 2.18, CentOS offers the version control systems Mercurial 4.8 and Subversion 1.10. Python 3.6 is now CentOS's default Python implementation, but Python is not installed automatically. Limited support for Python 2.7 -- very limited, from what my friends tell me -- is also available. Other languages offered in the new CentOS mix include Node.js 10.1, PHP 7.2, Ruby 2.5, Perl 5.26, and SWIG 3.0. The CentOS GCC compiler is based on version 8.2.
It includes support for more recent C++ language standard versions, better optimizations, new code-hardening techniques, improved warnings, and new hardware support. But, as neat as all that is, if you really want to use CentOS as a cutting-edge developer platform, you'll want to check out the new rolling-release version of CentOS: CentOS Stream. This version, which will be released in early October, will have the latest and greatest of everything, and it will be updated several times a day. Needless to say, you should not use CentOS Stream for production server systems.

CentOS also includes such server basics as the popular database servers MariaDB 10.3, MySQL 8.0, PostgreSQL 10, PostgreSQL 9.6, and Redis 5. It includes the Apache HTTP Server 2.4 and nginx 1.14, too. One important program neither it nor RHEL 8 has is Docker. Don't think Red Hat is dismissing the importance of containers. It's not. Indeed, Red Hat OpenShift is all about containers, and it's one of Red Hat's most important platforms. Instead, Red Hat has largely replaced Docker with its own container tools: buildah and podman. These are compatible with existing Docker images.

For those of you who use CentOS as a desktop, the default GNOME Shell interface has been updated to version 3.28. Underneath it, the default display server is Wayland. If you insist, you can still use the historic X.Org server for your display server.

For server admins, the biggest change is that the nftables framework has replaced iptables, and the firewalld daemon now uses nftables as its default backend. In short, while there shouldn't be any major changes in your firewall settings as you move up from CentOS 7.x, you'd be wise to check them carefully. For example, while nftables has an iptables command compatibility layer, its default syntax is different from that of iptables. That means you must look closely at any scripts that call on firewall functionality.
Upgrading to CentOS 8

If you want to work from the source code up, you'll find it at git.centos.org. Source-code RPMs will also be published. If you're already running CentOS, you can grab the source code with the command:

yumdownloader --source <packagename>

If you want to upgrade from CentOS 7.x to 8, you should know that you'll be on your own. As far as I know, there are no instructions out yet on how to do an in-place upgrade. (On RHEL, in-place upgrades are supported.) Your best move will be to back up your data, take an applications inventory, do a fresh install of CentOS 8, and then port your data and applications over. I also have a colleague who's still running CentOS 4. He's far from the only one; it was a very popular release. Do not even try to upgrade straight from CentOS 6 or earlier to CentOS 8. Bad things will happen.

For most companies, though, it's time to start evaluating CentOS 8. You may not be migrating to it immediately, but down the road, you'll want to make the upgrade.
2024-04-17T01:27:17.568517
https://example.com/article/3063
We Don't Need an Operating System Anymore
devopsdays Oslo 2018, October 29-30, 2018

For the last 40 years we've been using traditional operating systems to run our applications. Our operating systems are designed as generic underpinnings for our applications, but they come at the cost of lots of complexity.

It doesn't have to be this way. If we're willing to embrace immutable infrastructure we can compile the OS functionality into the application itself. Or, in the case of high level languages, into the language runtime.

The results are small, fast and secure virtual machines that can do a lot of what our beloved Linux systems always have done.

Speaker: Per Buer. Per Buer is the CEO and cofounder of IncludeOS. Founder of Varnish Software. Before that he worked as a programmer and later sysadmin for Linpro. Born and raised with open source software. Enjoys and makes his own sour beer.
2024-04-03T01:27:17.568517
https://example.com/article/3494
Q: Reordering items in a dynamic collection / list stored in Firebase

I'm currently making an interface to construct a questionnaire. Each question within the questionnaire is stored in Firebase with the following structure:

    questions
     |- {key}
         |- lastModified  // <-- When the question was created / last updated
         |- position      // <-- The position in which the question appears on the frontend
         |- question      // <-- Question text content
         |- uid           // <-- The unique key with which it is saved to Firebase
     /* ... repeat for n questions ... */

The admin can add, remove, update and reorder questions. When an admin removes a question, I have to decrement the position value of all questions underneath the removed question. The way I am approaching this is to modify my locally stored copy of the list (in my case, cloning the array stored in Redux state), perform the necessary adjustments, and then push it to Firebase, overwriting the existing 'questions' data set. Here is my code:

    // Remove-question action creator
    export function removeQuestion(key, questions) {
      return (dispatch) => {
        dispatch({ type: REMOVE_QUESTION });
        const updatedQuestions = questions.filter((question) => {
          return !(question.uid === key); // I remove the target item here.
        }).map((question, index) => {
          return {
            lastModified: question.lastModified,
            position: index, // I update the position of all other items here.
            question: question.question,
            stageId: question.stageId,
            uid: question.uid
          };
        });
        questionsRef.set(updatedQuestions) // And then I save the entire updated data set here.
          .then(() => dispatch(removeQuestionSuccess(key)))
          .catch(error => dispatch(removeQuestionError(key, error)));
      };
    }

But, is there a better way of doing this?

A: This is the solution I've opted for. What I've done is:

- No longer store the list as an array locally (in my case, in Redux state); store it as an object, as pulled from Firebase, and convert it to an array whenever it needs to be mutated.
- Working with the array, remove the specified item (I used Array.prototype.filter() for this).
- Map over the array, updating each item with its new position (using the map index).
- Parse the array back into an object before pushing it back to Firebase.

This is a generalised version of the code I've written, so it might be applied to any list of items:

    export function removeItem(key, items) {
      return (dispatch) => {
        dispatch({ type: REMOVE_ITEM });
        // Convert the items object (from state) into an array for easier manipulation
        const itemsArr = items ? Object.keys(items).map(k => items[k]) : [];
        const updatedItems = itemsArr.filter((item) => {
          // Filter out the item that needs to be removed
          return !(item.uid === key);
        }).map((item, index) => {
          // Update the new position for the remaining items
          return {
            lastModified: item.lastModified,
            position: index,
            content: item.content,
            uid: item.uid
          };
        });
        // Parse the array back into an object before pushing to Firebase
        let updatedItemsObj = {};
        updatedItems.forEach(item => {
          updatedItemsObj[item.uid] = item;
        });
        itemsRef.set(updatedItemsObj)
          .then(() => dispatch(nextAction(...)))
          .catch(error => dispatch(errorAction(key, error)));
      };
    }
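The remove-and-reindex transformation in the answer above can be isolated from Firebase and Redux entirely, which makes it easy to test. This is a minimal sketch (the function name and item shape are illustrative, not from the original code), assuming items are stored as an object keyed by uid with a numeric position field:

```javascript
// Remove one item from a uid-keyed object and reassign contiguous positions.
function removeAndReindex(items, keyToRemove) {
  // Convert the keyed object into an array for manipulation
  const arr = items ? Object.keys(items).map(k => items[k]) : [];
  const updated = arr
    .filter(item => item.uid !== keyToRemove)               // drop the target item
    .sort((a, b) => a.position - b.position)                // preserve existing order
    .map((item, index) => ({ ...item, position: index }));  // reassign positions 0..n-1
  // Convert back into an object keyed by uid, as stored in Firebase
  const obj = {};
  updated.forEach(item => { obj[item.uid] = item; });
  return obj;
}
```

Because the function is pure, the action creator only needs to pass its result to `set()`, and the same logic can be unit-tested without any network or store mocking.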
WHO guidelines for drinking water quality

HEALTH RISKS FROM DRINKING DEMINERALISED WATER

Frantisek Kozisek
National Institute of Public Health, Czech Republic

I. INTRODUCTION

The composition of water varies widely with local geological conditions. Neither groundwater nor surface water has ever been chemically pure H2O, since water contains small amounts of gases, minerals and organic matter of natural origin. The total concentrations of substances dissolved in fresh water considered to be of good quality can be hundreds of mg/L. Thanks to epidemiology and advances in microbiology and chemistry since the 19th century, numerous waterborne disease causative agents have been identified. The knowledge that water may contain some constituents that are undesirable is the point of departure for establishing guidelines and regulations for drinking water quality. Maximum acceptable concentrations of inorganic and organic substances and microorganisms have been established internationally and in many countries to assure the safety of drinking water. The potential effects of totally unmineralised water had not generally been considered, since this water is not found in nature except possibly for rainwater and naturally formed ice. Although rainwater and ice are not used as community drinking water sources in industrialized countries where drinking water regulations were developed, they are used by individuals in some locations. In addition, many natural waters are low in many minerals or soft (low in divalent ions), and hard waters are often artificially softened. Awareness of the importance of minerals and other beneficial constituents in drinking water has existed for thousands of years, being mentioned in the Vedas of ancient India.
In the book Rig Veda, the properties of good drinking water were described as follows: Sheetham (cold to touch), Sushihi (clean), Sivam (should have nutritive value, requisite minerals and trace elements), Istham (transparent), Vimalam lahu Shadgunam (its acid base balance should be within normal limits) (1). That water may contain desirable substances has received less attention in guidelines and regulations, but an increased awareness of the biological value of water has occurred in the past several decades. Artificially-produced demineralised waters, first distilled water and later also deionized or reverse osmosis-treated water, had been used mainly for industrial, technical and laboratory purposes. These technologies became more extensively applied in drinking water treatment in the 1960s as limited drinking water sources in some coastal and inland arid areas could not meet the increasing water demands resulting from increasing populations, higher living standards, development of industry, and mass tourism. Demineralisation of water was needed where the primary or the only abundant water source available was highly mineralized brackish water or sea water. Drinking water supply was also of concern to ocean-going ships, and spaceships as well. Initially, these water treatment methods were not used elsewhere since they were technically exacting and costly.

In this chapter, demineralised water is defined as water almost or completely free of dissolved minerals as a result of distillation, deionization, membrane filtration (reverse osmosis or nanofiltration), electrodialysis or other technology. The total dissolved solids (TDS) in such water can vary, but TDS could be as low as 1 mg/L. The electrical conductivity is generally less than 2 mS/m and may even be lower (< 0.1 mS/m). Although the technology had its beginnings in the 1960s, demineralization was not widely used at that time.
However, some countries focused on public health research in this field, mainly the former USSR, where desalination was introduced to produce drinking water in some Central Asian cities. It was clear from the very beginning that desalinated or demineralised water without further enrichment with some minerals might not be fully appropriate for consumption. There were three reasons for this: 1.) Demineralised water is highly aggressive and, if untreated, its distribution through pipes and storage tanks would not be possible. The aggressive water attacks the water distribution piping and leaches metals and other materials from the pipes and associated plumbing materials. 2.) Distilled water has poor taste characteristics. 3.) Preliminary evidence was available that some substances present in water could have beneficial effects on human health as well as adverse effects. For example, experience with artificially fluoridated water showed a decrease in the incidence of tooth caries, and some epidemiological studies in the 1960s reported lower morbidity and mortality from some cardiovascular diseases in areas with hard water. Therefore, researchers focused on two issues: 1.) what are the possible adverse health effects of demineralised water, and 2.) what are the minimum and the desirable or optimum contents of the relevant substances (e.g., minerals) in drinking water needed to meet both technical and health considerations. The traditional regulatory approach, which was previously based on limiting the health risks from excessive concentrations of toxic substances in water, now took into account possible adverse effects due to the deficiency of certain constituents.
At one of the working meetings for preparation of guidelines for drinking water quality, the World Health Organization (WHO) considered the issue of the desired or optimum mineral composition of desalinated drinking water by focusing on the possible adverse health effects of removing some substances that are naturally present in drinking water (2). In the late 1970s, the WHO also commissioned a study to provide background information for issuing guidelines for desalinated water. That study was conducted by a team of researchers of the A.N. Sysin Institute of General and Public Hygiene and USSR Academy of Medical Sciences under the direction of Professor Sidorenko and Dr. Rakhmanin. The final report, published in 1980 as an internal working document (3), concluded that “not only does completely demineralised water (distillate) have unsatisfactory organoleptic properties, but it also has a definite adverse influence on the animal and human organism”. After evaluating the available health, organoleptic, and other information, the team recommended that demineralised water contain 1.) a minimum level for dissolved salts (100 mg/L), bicarbonate ion (30 mg/L), and calcium (30 mg/L); 2.) an optimum level for total dissolved salts (250-500 mg/L for chloride-sulfate water and 250-500 mg/L for bicarbonate water); 3.) a maximum level for alkalinity (6.5 meq/l), sodium (200 mg/L), boron (0.5 mg/L), and bromine (0.01 mg/L). Some of these recommendations are discussed in greater detail in this chapter.

During the last three decades, desalination has become a widely practiced technique in providing new fresh water supplies. There are more than 11 thousand desalination plants all over the world with an overall production of more than 6 billion gallons of desalinated water per day (Cotruvo, in this book). In some regions such as the Middle East and Western Asia more than half of the drinking water is produced in this way.
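The minimum levels recommended in the 1980 report described above (dissolved salts 100 mg/L, bicarbonate 30 mg/L, calcium 30 mg/L) can be expressed as a simple screening check. This is only an illustrative sketch; the function name and sample shape are hypothetical and not part of the WHO report:

```javascript
// Hypothetical helper: does a water analysis meet the 1980 report's
// recommended minimums? (TDS >= 100 mg/L, bicarbonate >= 30 mg/L,
// calcium >= 30 mg/L; all values in mg/L.)
function meets1980Minimums(sample) {
  return sample.tds >= 100 &&
         sample.bicarbonate >= 30 &&
         sample.calcium >= 30;
}
```

For example, a typical remineralised desalinated water with TDS 250 mg/L, bicarbonate 60 mg/L and calcium 40 mg/L would pass, while a reverse osmosis permeate with TDS of 10 mg/L would fail on all three counts.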
Desalinated waters are commonly further treated by adding chemical constituents such as calcium carbonate or limestone, or blended with small volumes of more mineral-rich waters to improve their taste and reduce their aggressiveness to the distribution network as well as plumbing materials. However, desalinated waters may vary widely in composition, especially in terms of the minimum TDS content. Numerous facilities were developed without compliance with any uniform guidelines regarding minimum mineral content for final product quality. The potential for adverse health effects from long term consumption of demineralised water is of interest not only in countries lacking adequate fresh water, but also in countries where some types of home water treatment systems are widely used or where some types of bottled water are consumed. Some natural mineral waters, in particular glacial mineral waters, are low in TDS (less than 50 mg/l) and in some countries, even distilled bottled water has been supplied for drinking purposes. Otherbrands of bottled water are produced by demineralising fresh water and then adding minerals for desirable taste. Persons consuming certain types of water may not be receiving the additional minerals that would be present in more highly mineralized waters. Consequently, the exposures and risks should be considered not only at the community level, but also at the individual or family level. II. HEALTH RISKS FROM CONSUMPTION OF DEMINERALISED OR LOW-MINERAL WATER Knowledge of some effects of consumption of demineralised water is based on experimental and observational data. Experiments have been conducted in laboratory animals and human volunteers, and observational data have been obtained from populations supplied with desalinated water, individuals drinking reverse osmosis-treated demineralised water, and infants given beverages prepared with distilled water. 
Because limited information is available from these studies, we should also consider the results of epidemiological studies where health effects were compared for populations using low-mineral (soft) water and more mineral-rich waters. Demineralised water that has not been remineralised is considered an extreme case of low-mineral or soft water because it contains only small amounts of dissolved minerals such as calcium and magnesium that are the major contributors to hardness. The possible adverse consequences of low mineral content water consumption are discussed in the following categories: 1.) direct effects on the intestinal mucous membrane, metabolism and mineral homeostasis or other body functions; 2.) little or no intake of calcium and magnesium from low-mineral water; 3.) low intake of other essential elements and microelements; 4.) loss of calcium, magnesium and other essential elements in prepared food; 5.) possible increased dietary intake of toxic metals.

1. Direct effects of low mineral content water on the intestinal mucous membrane, metabolism and mineral homeostasis or other body functions

Distilled and low mineral content water (TDS < 50 mg/L) can have negative taste characteristics to which the consumer may adapt with time. This water is also reported to be less thirst quenching (3). Although these are not considered to be health effects, they should be taken into account when considering the suitability of low mineral content water for human consumption. Poor organoleptic and thirst-quenching characteristics may affect the amount of water consumed or cause persons to seek other, possibly less satisfactory water sources. Williams (4) reported that distilled water introduced into the intestine caused abnormal changes in epithelial cells of rats, possibly due to osmotic shock. However, the same conclusions were not reached by Schumann et al. (5) in a more recent study based on 14-day experiments in rats.
Histology did not reveal any signs of erosion, ulceration or inflammation in the oesophagus, stomach and jejunum. Altered secretory function in animals (i.e., increased secretion and acidity of gastric juice) and altered stomach muscle tone were reported in studies for WHO (3), but currently available data have not unambiguously demonstrated a direct negative effect of low mineral content water on the gastrointestinal mucous membrane. It has been adequately demonstrated that consuming water of low mineral content has a negative effect on homeostasis mechanisms, compromising the mineral and water metabolism in the body. An increase in urine output (i.e., increased diuresis) is associated with an increase in excretion of major intra- and extracellular ions from the body fluids, their negative balance, and changes in body water levels and functional activity of some body water management-dependent hormones. Experiments in animals, primarily rats, for up to one-year periods have repeatedly shown that the intake of distilled water or water with TDS ≤ 75 mg/L leads to: 1.) increased water intake, diuresis, extracellular fluid volume, and serum concentrations of sodium (Na) and chloride (Cl) ions and their increased elimination from the body, resulting in an overall negative balance, and 2.) lower volumes of red cells and some other hematocrit changes (3). Although Rakhmanin et al. (6) did not find mutagenic or gonadotoxic effects of distilled water, they did report decreased secretion of tri-iodothyronine and aldosterone, increased secretion of cortisol, morphological changes in the kidneys including a more pronounced atrophy of glomeruli, and swollen vascular endothelium limiting the blood flow. Reduced skeletal ossification was also found in rat foetuses whose dams were given distilled water in a one-year study.
Apparently the reduced mineral intake from water was not compensated by their diets, even if the animals were kept on standardized diet that was physiologically adequate in caloric value, nutrients and salt composition. Results of experiments in human volunteers evaluated by researchers for the WHO report (3) are in agreement with those in animal experiments and suggest the basic mechanism of the effects of water low in TDS (e.g. < 100 mg/L) on water and mineral homeostasis. Low-mineral water markedly: 1.) increased diuresis (almost by 20%, on average), body water volume, and serum sodium concentrations, 2.) decreased serum potassium concentration, and 3.) increased the elimination of sodium, potassium, chloride, calcium and magnesium ions from the body. It was thought that low-mineral water acts on osmoreceptors of the gastrointestinal tract, causing an increased flow of sodium ions into the intestinal lumen and slight reduction in osmotic pressure in the portal venous system with subsequent enhanced release of sodium into the blood as an adaptation response. This osmotic change in the blood plasma results in the redistribution of body water; that is, there is an increase in the total extracellular fluid volume and the transfer of water from erythrocytes and interstitial fluid into the plasma and between intracellular and interstitial fluids. In response to the changed plasma volume, baroreceptors and volume receptors in the bloodstream are activated, inducing a decrease in aldosterone release and thus an increase in sodium elimination. Reactivity of the volume receptors in the vessels may result in a decrease in ADH release and an enhanced diuresis. The German Society for Nutrition reached similar conclusions about the effects of distilled water and warned the public against drinking it (7). 
The warning was published in response to the German edition of The Shocking Truth About Water (8), whose authors recommended drinking distilled water instead of “ordinary” drinking water. The Society in its position paper (7) explains that water in the human body always contains electrolytes (e.g. potassium and sodium) at certain concentrations controlled by the body. Water resorption by the intestinal epithelium is also enabled by sodium transport. If distilled water is ingested, the intestine has to add electrolytes to this water first, taking them from the body reserves. Since the body never eliminates fluid in the form of “pure” water but always together with salts, adequate intake of electrolytes must be ensured. Ingestion of distilled water leads to the dilution of the electrolytes dissolved in the body water. Inadequate body water redistribution between compartments may compromise the function of vital organs. Symptoms at the very beginning of this condition include tiredness, weakness and headache; more severe symptoms are muscular cramps and impaired heart rate. Additional evidence comes from animal experiments and clinical observations in several countries. Animals given zinc or magnesium dosed in their drinking water had a significantly higher concentration of these elements in the serum than animals given the same elements in much higher amounts with food and provided with low-mineral water to drink. Based on the results of experiments and clinical observations of mineral deficiency in patients whose intestinal absorption did not need to be taken into account and who received balanced intravenous nutrition diluted with distilled water, Robbins and Sly (9) presumed that intake of low-mineral water was responsible for an increased elimination of minerals from the body.
Regular intake of low-mineral content water could be associated with the progressive evolution of the changes discussed above, possibly without manifestation of symptoms or causal symptoms over the years. Nevertheless, severe acute damage, such as hyponatremic shock or delirium, may occur following intense physical efforts and ingestion of several litres of low-mineral water (10). The so-called “water intoxication” (hyponatremic shock) may also occur with rapid ingestion of excessive amounts not only of low-mineral water but also tap water. The “intoxication” risk increases with decreasing levels of TDS. In the past, acute health problems were reported in mountain climbers who had prepared their beverages with melted snow that was not supplemented with necessary ions. A more severe course of such a condition coupled with brain oedema, convulsions and metabolic acidosis was reported in infants whose drinks had been prepared with distilled or low-mineral bottled water (11).

2. Little or no intake of calcium and magnesium from low-mineral water

Calcium and magnesium are both essential elements. Calcium is a substantial component of bones and teeth. In addition, it plays a role in neuromuscular excitability (i.e., decreases it), the proper function of the conducting myocardial system, heart and muscle contractility, intracellular information transmission and the coagulability of blood. Magnesium plays an important role as a cofactor and activator of more than 300 enzymatic reactions including glycolysis, ATP metabolism, transport of elements such as sodium, potassium, and calcium through membranes, synthesis of proteins and nucleic acids, neuromuscular excitability and muscle contraction. Although drinking water is not the major source of our calcium and magnesium intake, the health significance of supplemental intake of these elements from drinking water may outweigh its nutritional contribution expressed as the proportion of the total daily intake of these elements.
Even in industrialized countries, diets deficient in the quantity of calcium and magnesium may not be able to fully compensate for the absence of calcium and, in particular, magnesium in drinking water. For about 50 years, epidemiological studies in many countries all over the world have reported that soft water (i.e., water low in calcium and magnesium) and water low in magnesium is associated with increased morbidity and mortality from cardiovascular disease (CVD) compared to hard water and water high in magnesium. An overview of epidemiological evidence is provided by recent review articles (12-15) and summarized in other chapters of this monograph (Calderon and Craun, Monarca et al.). Recent studies also suggest that the intake of soft water, i.e. water low in calcium, may be associated with higher risk of fracture in children (16), certain neurodegenerative diseases (17), pre-term birth and low weight at birth (18) and some types of cancer (19, 20). In addition to an increased risk of sudden death (21-23), the intake of water low in magnesium seems to be associated with a higher risk of motor neuronal disease (24), pregnancy disorders (so-called preeclampsia) (25), and some cancers (26-29). Specific knowledge about changes in calcium metabolism in a population supplied with desalinated water (i.e., distilled water filtered through limestone) low in TDS and calcium was obtained from studies carried out in the Soviet city of Shevchenko (3, 30, 31). The local population showed decreased activity of alkaline phosphatase, reduced plasma concentrations of calcium and phosphorus and enhanced decalcification of bone tissue. The changes were most marked in women, especially pregnant women, and were dependent on the duration of residence in Shevchenko.
The importance of water calcium was also confirmed in a one-year study of rats on a fully adequate diet in terms of nutrients and salts and given desalinated water with added dissolved solids of 400 mg/L and either 5 mg/L, 25 mg/L, or 50 mg/L of calcium (3, 32). The animals given water dosed with 5 mg/L of calcium exhibited a reduction in thyroidal and other associated functions compared to the animals given the two higher doses of calcium. While the effects of most chemicals commonly found in drinking water manifest themselves after long exposure, the effects of calcium and, in particular, those of magnesium on the cardiovascular system are believed to reflect recent exposures. Only a few months of exposure may be sufficient consumption time for effects from water that is low in magnesium and/or calcium (33). Illustrative of such short-term exposures are cases in the Czech and Slovak populations who began using reverse osmosis-based systems for final treatment of drinking water at their home taps in 2000-2002. Within several weeks or months various complaints suggestive of acute magnesium (and possibly calcium) deficiency were reported (34). The complaints included cardiovascular disorders, tiredness, weakness or muscular cramps and were essentially the same symptoms listed in the warning of the German Society for Nutrition (7).

3. Low intake of some essential elements and microelements from low-mineral water

Although drinking water, with some rare exceptions, is not the major source of essential elements for humans, its contribution may be important for several reasons. The modern diet of many people may not be an adequate source of minerals and microelements. In the case of borderline deficiency of a given element, even the relatively low intake of the element with drinking water may play a relevant protective role.
This is because the elements are usually present in water as free ions and therefore, are more readily absorbed from water compared to food where they are mostly bound to other substances. Animal studies are also illustrative of the significance of microquantities of some elements present in water. For instance, Kondratyuk (35) reported that a variation in the intake of microelements was associated with up to six-fold differences in their content in muscular tissue. These results were found in a 6-month experiment in which rats were randomized into 4 groups and given: a.) tap water, b.) low-mineral water, c.) low-mineral water supplemented with iodide, cobalt, copper, manganese, molybdenum, zinc and fluoride in tap water, d.) low-mineral water supplemented with the same elements but at ten times higher concentrations. Furthermore, a negative effect on the blood formation process was found to be associated with non-supplemented demineralised water. The mean hemoglobin content of red blood cells was as much as 19% lower in the animals that received non-supplemented demineralised water compared to that in animals given tap water. The haemoglobin differences were even greater when compared with the animals given the mineral supplemented waters. Recent epidemiological studies of an ecologic design among Russian populations supplied with water varying in TDS suggest that low-mineral drinking water may be a risk factor for hypertension and coronary heart disease, gastric and duodenal ulcers, chronic gastritis, goitre, pregnancy complications and several complications in newborns and infants, including jaundice, anemia, fractures and growth disorders (36). However, it is not clear whether the effects observed in these studies are due to the low content of calcium and magnesium or other essential elements, or due to other factors. Lutai (37) conducted a large cohort epidemiological study in the Ust-Ilim region of Russia. 
The study focused on morbidity and physical development in 7658 adults, 562 children and 1582 pregnant women and their newborns in two areas supplied with water differing in TDS. One of these areas was supplied with water lower in minerals (mean values: TDS 134 mg/L, calcium 18.7 mg/L, magnesium 4.9 mg/L, bicarbonates 86.4 mg/L) and the other was supplied with water higher in minerals (mean values: TDS 385 mg/L, calcium 29.5 mg/L, magnesium 8.3 mg/L, bicarbonates 243.7 mg/L). Water levels of sulfate, chloride, sodium, potassium, copper, zinc, manganese and molybdenum were also determined. The populations of the two areas did not differ from each other in eating habits, air quality, social conditions and time of residence in the respective areas. The population of the area supplied with water lower in minerals showed higher incidence rates of goiter, hypertension, ischemic heart disease, gastric and duodenal ulcers, chronic gastritis, cholecystitis and nephritis. Children living in this area exhibited slower physical development and more growth abnormalities; pregnant women suffered more frequently from edema and anemia. Newborns in this area showed higher morbidity. The lowest morbidity was associated with water having calcium levels of 30-90 mg/L, magnesium levels of 17-35 mg/L, and TDS of about 400 mg/L (for bicarbonate containing waters). The author concluded that such water could be considered as physiologically optimum.

4. High loss of calcium, magnesium and other essential elements in food prepared in low-mineral water

When used for cooking, soft water was found to cause substantial losses of all essential elements from food (vegetables, meat, cereals). Such losses may reach up to 60 % for magnesium and calcium or even more for some other microelements (e.g., copper 66 %, manganese 70 %, cobalt 86 %).
In contrast, when hard water is used for cooking, the loss of these elements is much lower, and in some cases, an even higher calcium content was reported in food as a result of cooking (38-41). Since most nutrients are ingested with food, the use of low-mineral water for cooking and processing food may cause a marked deficiency in the total intake of some essential elements, much higher than would be expected from the use of such water for drinking only. The current diet of many persons usually does not provide all necessary elements in sufficient quantities, and therefore, any factor that results in the loss of essential elements and nutrients during the processing and preparation of food could be detrimental for them.

5. Possible increased dietary intake of toxic metals

Increased risk from toxic metals may be posed by low-mineral water in two ways: 1.) higher leaching of metals from materials in contact with water, resulting in an increased metal content in drinking water, and 2.) lower protective (antitoxic) capacity of water low in calcium and magnesium. Low-mineralized water is unstable and therefore highly aggressive to materials with which it comes into contact. Such water more readily dissolves metals and some organic substances from pipes, coatings, storage tanks and containers, hose lines and fittings, and it is incapable of forming poorly absorbable complexes with some toxic substances that would otherwise reduce their negative effects. Among eight outbreaks of chemical poisoning from drinking water reported in the USA in 1993-1994, there were three cases of lead poisoning in infants who had blood-lead levels of 15 μg/dL, 37 μg/dL, and 42 μg/dL. The level of concern is 10 μg/dL. For all three cases, lead had leached from brass fittings and lead-soldered seams in drinking water storage tanks. The three water systems used low mineral drinking water that had intensified the leaching process (42).
First-draw water samples at the kitchen tap had lead levels of 495 to 1050 μg/L for the two infants with the highest blood lead; 66 μg/L was found in water samples collected at the kitchen tap of the third infant (43). Calcium and, to a lesser extent, magnesium in water and food are known to have antitoxic activity. They can help prevent the absorption of some toxic elements such as lead and cadmium from the intestine into the blood, either via direct reaction leading to formation of an unabsorbable compound or via competition for binding sites (44-50). Although this protective effect is limited, it should not be dismissed. Populations supplied with low-mineral water may be at a higher risk in terms of adverse effects from exposure to toxic substances compared to populations supplied with water of average mineralization and hardness.

6. Possible bacterial contamination of low-mineral water

All water is prone to bacterial contamination in the absence of a disinfectant residual, either at the source or as a result of microbial re-growth in the pipe system after treatment. Re-growth may also occur in desalinated water. Bacterial re-growth within the pipe system is encouraged by higher initial temperatures, higher temperatures of water in the distribution system due to hot climates, lack of a residual disinfectant, and possibly greater availability of some nutrients due to the aggressive nature of the water to materials in contact with it. Although an intact desalination membrane should remove all bacteria, it may not be 100 % effective (perhaps due to leaks), as can be documented by an outbreak of typhoid fever caused by reverse osmosis-treated water in Saudi Arabia in 1992 (51). Thus, virtually all waters, including desalinated waters, are disinfected after treatment. Non-pathogenic bacterial re-growth in water treated with different types of home water treatment devices was reported by Geldreich et al. (52), Payment et al. (53, 54) and many others.
The Czech National Institute of Public Health (34) in Prague has tested products intended for contact with drinking water and found, for example, that the pressure tanks of reverse osmosis units are prone to bacterial re-growth, primarily due to the removal of residual disinfectant by the treatment. They also contain a rubber bag whose surface appears to be favourable for bacterial growth.

III. DESIRABLE MINERAL CONTENT OF DEMINERALISED DRINKING WATER

The corrosive nature of demineralised water and the potential health risks related to the distribution and consumption of low-TDS water have led to recommendations for the minimum and optimum mineral content of drinking water and then, in some countries, to the establishment of obligatory values in the respective legislative or technical regulations for drinking water quality. Organoleptic characteristics and thirst-quenching capacity were also considered in the recommendations. For example, human volunteer studies (3) showed that water temperatures of 15-35 °C best satisfied physiological needs. Water temperatures above 35 °C or below 15 °C resulted in a reduction in water consumption. Water with a TDS of 25-50 mg/L was described as tasteless (3).

1. The 1980 WHO report

Salts are leached from the body under the influence of drinking water with a low TDS. Because adverse effects such as altered water-salt balance were observed not only with completely desalinated water but also with water with a TDS between 50 and 75 mg/L, the team that prepared the 1980 WHO report (3) recommended that the minimum TDS in drinking water should be 100 mg/L. The team also recommended that the optimum TDS should be about 200-400 mg/L for chloride-sulphate waters and 250-500 mg/L for bicarbonate waters (WHO 1980). The recommendations were based on extensive experimental studies conducted in rats, dogs, and human volunteers.
Water exposures included Moscow tap water, desalinated water of approximately 10 mg/L TDS, and laboratory-prepared water of 50, 100, 250, 300, 500, 750, 1000, and 1500 mg/L TDS using the following constituents and proportions: Cl⁻ (40%), HCO₃⁻ (32%), and SO₄²⁻ (28%) for anions; Na⁺ (50%), Ca²⁺ (38%), and Mg²⁺ (12%) for cations. A number of health outcomes were investigated, including dynamics of body weight, basal and nitrogen metabolism, enzyme activity, water-salt homeostasis and its regulatory system, mineral content of body tissues and fluids, hematocrit, and activity of antidiuretic hormone (ADH). The optimal TDS was that associated with the lowest incidence of adverse effects in humans, dogs, and rats, good organoleptic characteristics and thirst-quenching properties, and reduced corrosivity of the water. In addition to the TDS levels, the report (3) recommended that the minimum calcium content of desalinated drinking water should be 30 mg/L. This level was based on health concerns, the most critical effects being hormonal changes in calcium and phosphorus metabolism and reduced mineral saturation of bone tissue. Also, when calcium is increased to 30 mg/L, the corrosive activity of desalinated water is appreciably reduced and the water is more stable (3). The report (3) also recommended a bicarbonate ion content of 30 mg/L as the minimum essential level needed to achieve acceptable organoleptic characteristics, reduced corrosivity, and an equilibrium concentration for the recommended minimum level of calcium.

2. Recent recommendations

More recent studies have provided additional information about the minimum and optimum levels of minerals that should be present in demineralised water. For example, the effect of drinking water of different hardness on the health status of women aged 20 to 49 years was the subject of two cohort epidemiological studies (460 and 511 women) in four South Siberian cities (55, 56).
The water in city A had the lowest levels of calcium and magnesium (3.0 mg/L calcium and 2.4 mg/L magnesium). The water in city B had slightly higher levels (18.0 mg/L calcium and 5.0 mg/L magnesium). The highest levels were in city C (22.0 mg/L calcium and 11.3 mg/L magnesium) and city D (45.0 mg/L calcium and 26.2 mg/L magnesium). Women living in cities A and B more frequently showed cardiovascular changes (as measured by ECG), higher blood pressure, somatoform autonomic dysfunctions, headache, dizziness, and osteoporosis (as measured by X-ray absorptiometry) compared to those of cities C and D. These results suggest that the minimum magnesium content of drinking water should be 10 mg/L and the minimum calcium content should be 20 mg/L rather than 30 mg/L as recommended in the 1980 WHO report (3). Based on the currently available data, various researchers have recommended the following levels of calcium, magnesium, and water hardness in drinking water: for magnesium, a minimum of 10 mg/L (33, 56) and an optimum of about 20-30 mg/L (49, 57); for calcium, a minimum of 20 mg/L (56) and an optimum of about 50 (40-80) mg/L (57, 58); and for total water hardness, a sum of calcium and magnesium of 2 to 4 mmol/L (37, 50, 59, 60). At these concentrations, minimal or no adverse health effects were observed. The maximum protective or beneficial health effects of drinking water appeared to occur at the estimated desirable or optimum concentrations. The recommended magnesium levels were based on cardiovascular system effects, while changes in calcium metabolism and ossification were the basis for the recommended calcium levels. The upper limit of the optimal hardness range was derived from data showing a higher risk of gall stones, kidney stones, urinary stones, arthrosis, and arthropathies in populations supplied with water of hardness higher than 5 mmol/L.
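The arithmetic behind the total-hardness figure (the sum of calcium and magnesium in mmol/L) can be sketched as follows. This is a minimal illustration, not part of the cited studies; it uses the standard molar masses of calcium (40.08 g/mol) and magnesium (24.31 g/mol) to convert the mg/L concentrations reported for the four Siberian cities into mmol/L and to check them against the recommended 2-4 mmol/L range.

```python
def total_hardness_mmol_per_l(ca_mg_per_l, mg_mg_per_l):
    """Return Ca + Mg in mmol/L from concentrations given in mg/L,
    using standard molar masses (Ca: 40.08 g/mol, Mg: 24.31 g/mol)."""
    return ca_mg_per_l / 40.08 + mg_mg_per_l / 24.31

# Calcium and magnesium levels (mg/L) of the four cities described above:
cities = {"A": (3.0, 2.4), "B": (18.0, 5.0), "C": (22.0, 11.3), "D": (45.0, 26.2)}

for name, (ca, mg) in cities.items():
    h = total_hardness_mmol_per_l(ca, mg)
    print(f"city {name}: {h:.2f} mmol/L, within 2-4 mmol/L: {2.0 <= h <= 4.0}")
```

On these figures only city D (about 2.2 mmol/L) falls inside the recommended hardness range, which is consistent with the better health outcomes reported for the harder-water cities.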
Long-term intake of drinking water was taken into account in estimating these concentrations. For short-term therapeutic uses of some waters, higher concentrations of these elements may be considered.

IV. GUIDELINES AND DIRECTIVES FOR CALCIUM, MAGNESIUM, AND HARDNESS LEVELS IN DRINKING WATER

The WHO, in the 2nd edition of the Guidelines for Drinking-water Quality (61), evaluated calcium and magnesium in terms of water hardness but did not recommend either minimum levels or maximum limits for calcium, magnesium, or hardness. The first European Directive (62) established a requirement for minimum hardness for softened or desalinated water (≥ 60 mg/L as calcium or equivalent cations). This requirement was binding in the national legislation of all EEC members, but the Directive expired in December 2003 when a new Directive (63) became effective. The new Directive does not contain a requirement for calcium, magnesium, or water hardness levels. On the other hand, it does not prevent member states from implementing such a requirement in their national legislation. Only a few EU Member States (e.g. the Netherlands) have included calcium, magnesium, or water hardness in their national regulations as a binding requirement. Some EU Member States (e.g. Austria, Germany) have included these parameters in non-binding regulations such as technical standards (e.g., different measures for reduction of water corrosivity). All four Central European countries that became part of the EU in May 2004 have included such requirements in their respective regulations, although they vary in binding power. The Russian technical standard "Astronaut environment in piloted spaceships – general medical and technical requirements" (64) defines qualitative requirements for recycled water intended for drinking in spaceships.
Among other requirements, the TDS should range between 100 and 1000 mg/L, with minimum levels of fluoride, calcium, and magnesium being specified by a special commission separately for each space flight. The focus is on how to supplement recycled water with a mineral concentrate to make it "physiologically valuable" (65).

V. CONCLUSIONS

Drinking water should contain minimum levels of certain essential minerals (and other components such as carbonates). Unfortunately, over the past two decades, little research attention has been given to the beneficial or protective effects of substances in drinking water; the main focus has been on the toxicological properties of contaminants. Nevertheless, some studies have attempted to define the minimum content of essential elements or TDS in drinking water, and some countries have included requirements or guidelines for selected substances in their drinking water regulations. The issue is relevant not only where drinking water is obtained by desalination (if not adequately re-mineralised) but also where home treatment or central water treatment reduces the content of important minerals, and where low-mineral bottled water is consumed. Drinking water manufactured by desalination is stabilized with some minerals, but this is usually not the case for water demineralised as a result of household treatment. Even when stabilized, the final composition of some waters may not be adequate in terms of providing health benefits. Although desalinated waters are supplemented mainly with calcium (lime) or other carbonates, they may be deficient in magnesium and other microelements such as fluoride and potassium. Furthermore, the quantity of calcium that is supplemented is based on technical considerations (i.e., reducing the aggressiveness of the water) rather than on health concerns. Possibly none of the commonly used methods of re-mineralization can be considered optimum, since the resulting water does not contain all of the beneficial components.
Current methods of stabilization are primarily intended to decrease the corrosive effects of demineralised water. Demineralised water that has not been remineralized, or low-mineral-content water, is not considered ideal drinking water in light of its absence or substantial lack of essential minerals, and therefore its regular consumption may not provide adequate levels of some beneficial nutrients. This chapter provides the rationale for this conclusion. The evidence on experimental effects and findings in human volunteers related to highly demineralised water comes mostly from older studies, some of which may not meet current methodological criteria. However, these findings and conclusions should not be dismissed. Some of these studies were unique, and comparable intervention studies would hardly be scientifically, financially, or ethically feasible to the same extent today. The methods, however, are not so questionable as to necessarily invalidate the results. The older animal and clinical studies on health risks from drinking demineralised or low-mineral water yielded results consistent with one another, and recent research has tended to be supportive. Sufficient evidence is now available to confirm the health consequences of drinking water deficient in calcium or magnesium. Many studies show that higher water magnesium is related to decreased risks of cardiovascular disease (CVD), and especially of sudden death from CVD. This relationship has been independently described in epidemiological studies with different study designs, performed in different areas, in different populations, and at different times. The consistent epidemiological observations are supported by data from autopsy, clinical, and animal studies. Biological plausibility for a protective effect of magnesium is substantial, but the specificity is less evident due to the multifactorial aetiology of CVD.
In addition to an increased risk of sudden death, it has been suggested that intake of water low in magnesium may be associated with a higher risk of motor neuron disease, pregnancy disorders (preeclampsia), sudden death in infants, and some types of cancer. Recent studies also suggest that the intake of soft water, i.e. water low in calcium, is associated with a higher risk of fracture in children, certain neurodegenerative diseases, pre-term birth and low birth weight, and some types of cancer. Furthermore, a possible role of water calcium in the development of CVD cannot be excluded. International and national authorities responsible for drinking water quality should consider guidelines for desalination water treatment, specifying the minimum content of the relevant elements such as calcium and magnesium, as well as TDS. If additional research is required to establish guidelines, authorities should promote targeted research in this field to clarify the health benefits. If guidelines are established for substances that should be present in demineralised water, authorities should ensure that the guidelines also apply to the use of certain home treatment devices and to bottled waters.

References

1. Sadgir P, Vamanrao A. Water in Vedic Literature. In: Abstract Proceedings of the 3rd International Water History Association Conference (http://www.iwha.net/a_abstract.htm), Alexandria: 2003.
2. Working group report (Brussels, 20-23 March 1978). Health effects of the removal of substances occurring naturally in drinking water, with special reference to demineralized and desalinated water. EURO Reports and Studies 16. Copenhagen: World Health Organization, 1979.
3. Guidelines on health aspects of water desalination. ETS/80.4. Geneva: World Health Organization, 1980.
4. Melles Z, Kiss SA.
Influence of the magnesium content of drinking water and of magnesium therapy on the occurrence of
2024-07-21T01:27:17.568517
https://example.com/article/2762
Biba Apparels Offers (8 Offers) Updated On: 10 December 2016. BIBA Apparels Private Limited ("BIBA") emerged as a complete brand in 1986 and has since been the brand of choice for women and girls across India. What initially started with salwar, kameez, and dupattas has gradually emerged as a big fashion brand for women. Biba was launched as a wholesale brand to traditional retailers, with its first exclusive store launched in Mumbai. Today Biba is present across 76 Indian cities with 192 exclusive outlets and over 250 multi-brand stores. Festive Collection! Check out the latest autumn-winter collection online and avail up to 50% discount. Choose from the wide variety of suits, kurtas, anarkali sets, sarees and many more. Don't miss this amazing discount. Get up to 50% off on girls' tops. Crafted from pure cotton and printed fabrics, women's tops at BIBA are detailed with elements like band collars, round necklines and more. Shop now, get free delivery and pay COD. Shop for beautifully designed short kurtis for women online with up to 50% discount. Grab the huge collection of BIBA short kurtis for women with free shipping and Cash on Delivery options. Get up to 50% off on all products. Shop a wide range of traditional clothes for women and girls such as kurtis, tops, anarkali suits, fusion dresses, palazzo pants and more. Get an extra 10% off on a minimum purchase of ₹3,499. Free shipping, Cash on Delivery. Shop Now! Visit BIBA's official store and buy stylish salwar suits for women. These designer salwar kameez sets are available in a wide variety of fits, hues, fabric designs, and sizes. Each outfit has been ingeniously imagined and meticulously designed for women who believe in shattering stereotypes without forgoing tradition. Shop Now! Now shop for ravishing and glamorous Anarkali suit sets, listed on a special landing page, and grab them at a flat 50% off on your cart value.
Click here to grab this offer before it expires and visit the special landing page. Browse the wide range of kids' lehenga sets, lehenga cholis and lehenga collections that will make your little one look like a star, and save up to 50%. Buy dresses for girls, team up multi-coloured cholis with off-white or lime green lehenga sets, or select a flared or tiered lehenga for an amalgam of modern as well as ethnic looks. Women's ethnic wear across a range of categories: Biba offers ethnic fashion wear in different styles. Its various products include suits (A-line, kalidars, Anarkalis), bottoms (palazzos, churidars, salwars, skirts), kurtis (long, short, tops, crop tops) and accessories (potlis, Indian batwa, i.e. pouches) for Indian fashion enthusiasts. The entire range of classy suits, formal outfits, casual kurtis, beautiful dupattas and colourful lehenga-tops is available in all ranges and sizes, for women as well as young girls. Biba launched a kids' line called Biba Girls, targeted at girls between the ages of 2 and 12 years. The company has also tied up with designers like Rohit Bal and Anju Modi and showcases their premier collections in its stores. The store also offers a mix-n-match range of suit lengths and clothes for kurtas and bottoms. Discounts and vouchers make shopping all the more fun for fashion lovers: Biba periodically offers discounts on its stunning outfits for women and girls. The company keeps advertising different types of promo codes and gift vouchers for its customers. At Vouchercodes.in, you will find the best coupon codes and deals from Biba on a single page.
The coupons we advertise are the most recent ones and 100% redeemable. So what are you waiting for? Come buy your favourite fashion wear at prices you could never have imagined! Biba.in is an Indian portal delivering quality fashion apparel and accessories across India. It came into being in 1986 under BIBA Apparels Private Limited, with products linked to women's fashion such as Anarkalis, wedding collections, churidars, salwar kameez, kurti collections, kurtas for women, lehengas for girls, tops for women, designer suits, tunics, palazzo pants, kids' dresses and many other offerings. The word "Biba" means "young or pretty girl" in the Punjabi language; it was adopted by Mrs. Meena Bindra in 1986, when she first launched her salwar, kameez, and dupattas, and has since emerged as a brand name symbolizing the portal. Under this name, Biba has also pioneered the supply of costumes to more than 10 Bollywood movies, such as "Na Tum Jano Na Hum", "Devdas", "Hulchul" and "Baghban". They aim to create a hassle-free shopping environment by aiding customers with a handy website and easy payment options, such as all major credit and debit cards (VISA and MasterCard), net banking through all major Indian banks, and Cash on Delivery. They also provide a team of professionals to serve you, just a call away at +91 9266604621 from Monday to Friday between 9:30 am and 6:30 pm, or by leaving a mail at customercare@bibaindia.com. Further, to treat customers with care, www.vouchercodes.in aids you with special offers and deals through our exclusive and complimentary Biba coupon codes, rendering further rebates and money-off discounts for a more wonderful shopping experience through our shopping gateways.
Moreover, you can subscribe to our newsletter to get the latest news delivered to you on a regular basis. 1. Biba has an amazing collection of Indian wear, with a wide range of top-quality accessories and clothing for all occasions. 2. Their delivery is hassle-free, with an easy-to-operate website and a variety of payment methods. 3. The company has awesome customer service; they are always there to assist. 4. Avail amazing discounts on every purchase. 5. The quality of the products is something that you can vouch for.
2023-08-16T01:27:17.568517
https://example.com/article/1845
Reapportionment Act of 1929

The Reapportionment Act of 1929 (ch. 28) was a combined census and apportionment bill passed by the United States Congress on June 18, 1929, that established a permanent method for apportioning a constant 435 seats in the U.S. House of Representatives according to each census. However, like earlier apportionment acts, the 1929 Act neither repealed nor restated the requirements of the previous apportionment acts that congressional districts be contiguous, compact, and equally populated. It was not clear whether these requirements were still in effect until 1932, when the Supreme Court of the United States ruled in Wood v. Broom that the provisions of each apportionment act affected only the apportionment for which they were written. Thus the size and population requirements, last stated in the Apportionment Act of 1911, expired with the enactment of the 1929 Act. The 1929 Act gave little direction concerning congressional redistricting. It merely established a system in which House seats would be reallocated to states that have shifts in population. The lack of recommendations concerning districts had several significant effects. The Reapportionment Act of 1929 allowed states to draw districts of varying size and shape. It also allowed states to abandon districts altogether and elect at least some representatives at large, which several states chose to do, including New York, Illinois, Washington, Hawaii, and New Mexico. For example, in the 88th Congress (in the early 1960s) 22 of the 435 representatives were elected at large.

Historical context

Article One, Section 2, Clause 3 of the United States Constitution requires that seats in the United States House of Representatives be apportioned among the various states according to the population disclosed by the most recent decennial census, counting only three-fifths of the slave population until the Fourteenth Amendment in 1868.
The first federal law governing the size of the House and the method of allotting representatives, the Apportionment Act of 1792, was signed into law by George Washington in April 1792. It set the number of members of the House at 105 (effective March 4, 1793, with the 3rd Congress). With but one exception, the Apportionment Act of 1842, Congress enlarged the House of Representatives by various degrees following each subsequent census until 1913, by which time the membership had grown to 435. From the 1790s through the early 19th century, the seats were apportioned among the states using Jefferson's method. In 1842, the House was reduced from 242 to 223 members by the incoming Whig Party, which had ousted the Jacksonian Democrats. The Act of 1842 also contained an amendment requiring single-member district elections rather than at-large elections within a state, prompting backlash against this increase in Congressional power. In 1842 the debate on apportionment in the House began in the customary way, with jockeying over the choice of a divisor using Jefferson's method. On one day alone, 59 different motions to fix a divisor were made in a House containing but 242 members. The values ranged from 30,000 to 140,000, with more than half between 50,159 and 62,172. But the Senate had tired of this approach and proposed instead an apportionment of 223 members using Webster's method. In the House, John Quincy Adams urged acceptance of the method but argued vehemently for enlarging the number of members, as New England's portion was steadily dwindling. From 1842 through the 1860s, the House increased minimally at each census and as new states were admitted to the Union. But the Fourteenth Amendment dramatically increased the apportionment population of the Southern states, because the black population was counted fully instead of being reduced to three-fifths of its numbers.
As a result, a major increase in seats was needed to keep about the same number of seats in the northern states, and the House was enlarged by 50 seats (21%) following the 1870 census. The reapportionment of 1872 created a House size of 292. No particular apportionment method was used during the period 1850 to 1890, but from 1890 through 1910 the increasing membership of the House was calculated in such a way as to ensure that no state lost a seat due to shifts in apportionment population. In 1881, a provision for equally populated, contiguous, and compact single-member districts was added to the reapportionment law, and this was echoed in all decennial reapportionment acts through 1911. Then, in 1920, the Republicans removed the Democrats from power, as the Whigs had done in the 1840s, taking the presidency and both houses of Congress. Due to increased immigration and a large rural-to-urban shift in population from 1910 to 1920, the new Republican Congress refused to reapportion the House of Representatives with the traditional contiguous, single-member district stipulations, because such a reapportionment would have redistricted many House members out of their districts. A reapportionment in 1921 in the traditional fashion would have increased the size of the House to 483 seats, but many members would have lost their seats due to the population shifts, and the House chamber did not have adequate seats for 483 members. By 1929, no reapportionment had been made since 1911, and there was vast representational inequity, as measured by average district size; by 1929 some states had districts twice as large as others due to population growth and demographic shift.

Impact

The Reapportionment Act of 1929 capped the number of representatives at 435 (the size previously established by the Apportionment Act of 1911), where it has remained except for a temporary increase to 437 members upon the 1959 admission of Alaska and Hawaii into the Union.
As a result, the average size of a congressional district has more than tripled, from 210,328 inhabitants based on the 1910 Census to 710,767 according to the 2010 Census. Additionally, due to the unchanging size of the House, combined with the requirement that districts not cross state lines and the population distribution among states in the 2010 Census, there is a wide size disparity among congressional districts: Montana has the largest average district size, with 994,416 people, and Rhode Island has the smallest, with 527,624 people. Since 1941, seats in the House have been apportioned among the states according to the method of equal proportions. Implementation of this method has eliminated debates about the proper divisor for district size; any divisor that gives 435 members produces the same apportionment. It created other problems, however: given the fixed-size House, each state's congressional delegation changes as a result of population shifts, with various states either gaining or losing seats based on census results. Each state is then responsible for designing the shape of its districts.

Redistricting

The Act also did away with any mention of districts at all. This allowed political parties in control of a state legislature to draw district boundaries at will and to elect some or all representatives at large.

See also

United States congressional apportionment
Redistricting

External links

Wood v. Broom
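The method of equal proportions (the Huntington-Hill method) mentioned above can be sketched in a few lines: every state first receives one seat, and each remaining seat goes to the state with the highest priority value P / sqrt(n(n+1)), where P is the state's population and n its current seat count. The three-state populations below are made up for illustration; they are not census figures.

```python
import heapq
import math

def equal_proportions(populations, seats):
    """Huntington-Hill apportionment: one seat per state up front, then
    remaining seats assigned one at a time by highest priority value
    P / sqrt(n * (n + 1)), with n the state's current seat count."""
    alloc = {state: 1 for state in populations}
    # Max-heap via negated priority values.
    heap = [(-p / math.sqrt(1 * 2), state) for state, p in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        n = alloc[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))
    return alloc

# Hypothetical three-state example, 10 seats total:
pops = {"X": 6_000_000, "Y": 3_000_000, "Z": 1_000_000}
print(equal_proportions(pops, 10))  # {'X': 6, 'Y': 3, 'Z': 1}
```

Note how the divisor debates of the 19th century disappear: the priority-value ordering fixes the apportionment for a 435-seat (here, 10-seat) House without any divisor having to be chosen.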
2024-04-28T01:27:17.568517
https://example.com/article/9643
Q: Controlling the lifetime of keys unlocked in a GnuPG agent

For (slightly) increased security, I would like to have better control over the lifetime of any unlocked keys, depending on the task being performed. Ideally, I would start an interactive sub-shell, do any tasks involving secrets, then have all unlocked keys be cleared automatically when the sub-shell exits. I know that one can manually clear cached passphrases using gpg-connect-agent, but AFAIK that requires each key to be specified explicitly. Another option would be to set a short cache expiry time using the --default-cache-ttl or --max-cache-ttl options for gpg-agent; but generally that means either setting a long TTL or being asked for the same passphrase more than once. I seem to remember that a long time ago it was possible to specify an alternative gpg-agent socket path and basically start an independent session, but that does not seem to be possible any more; newer versions seem to use a fixed path that cannot be changed. So, what am I missing? Is there a way to achieve what I want?

A: While I do not have a full solution, I did find a workaround: by using an alternate home directory for GnuPG via the --homedir parameter or the GNUPGHOME environment variable, one can force GnuPG to use a different set of key storage files and associated agent socket paths. With that in mind, I can start a shell inside a new gpg-agent session:

gpg-agent \
    --homedir /my/other/keys \
    --default-cache-ttl 86400 \
    --max-cache-ttl 86400 \
    --daemon \
    /bin/bash

Any entered passphrases will either expire when the specified TTL passes (one day in this example), or will be "forgotten" when the new shell exits, as that will cause the parent gpg-agent instance to self-terminate. The reason I do not consider this a full solution is that it forces the use of a separate keyring. However, that works perfectly for my specific use case and, therefore, I did not investigate further.
It may be possible to achieve the full effect of independent sessions for the same keyring by having symlinks to the default GnuPG keyring, provided enough care is taken w.r.t. maintaining any locking between different gpg-agent instances. I'll leave that as an exercise for the reader...
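For the per-key approach mentioned in the question, a single cached entry can be flushed by sending the agent's CLEAR_PASSPHRASE command through gpg-connect-agent. A minimal sketch follows; the keygrip below is a made-up placeholder (list your real ones with gpg --list-secret-keys --with-keygrip), and the actual agent call is left commented out so the snippet is safe to run anywhere:

```shell
# Hypothetical keygrip -- replace with one printed by:
#   gpg --list-secret-keys --with-keygrip
KEYGRIP="0123456789ABCDEF0123456789ABCDEF01234567"

# gpg-agent caches passphrases under the keygrip; the Assuan command
# CLEAR_PASSPHRASE drops exactly one cache entry.
CMD="clear_passphrase ${KEYGRIP}"
echo "${CMD}"

# Uncomment to send the command to the running agent:
# gpg-connect-agent "${CMD}" /bye
```

Looping this over every keygrip of a secret key approximates "clear everything", but as the question notes, each key must still be named explicitly.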
2024-07-19T01:27:17.568517
https://example.com/article/6294
This place was absolutely beautiful. My hometown is not far from here and there is a swimming hole there that the locals have been trashing lately. This place had minimal trash. Honestly, about 3-4 pieces my trail mates and I picked up. The trail is nonexistent, but once you get to the location where the cars are parked, the way to the creek is simple. Then follow it down to see the falls. It's a scramble down the mountain/make-your-own-path type of thing. Great on a hot summer day! Absolutely beautiful hike!!! I would rate this moderate instead of easy. The "easy" rating is misleading. There was also no sight of a waterfall when we went, and the trail does not cross over the river like stated. Nonetheless, it was well worth the hike! We hiked in about 4.2 miles and had a picnic lunch at what seemed to be the end of the trail. We then took some time to swim in the river (be careful). We only saw one other group hiking around the same time, and one couple that seemed to have camped out. I would definitely go back and maybe make a mini backpacking trip out of it. Trail? What trail? I asked in the camp and was told to follow the creek. That we did, through several beat-down bandit camps with large fire rings and some trash left by those without wilderness ethics. Plenty of shade. We stayed on the south side of the creek and had no trouble locating the cascading pools above the main waterfall. Despite the lack of a defined trail, I still recommend this forest stroll for the beauty of the cool (cold!) sparkling and refreshing water. Be sure to supervise young children and wilderness novices around the slippery granite of the cascades. My dog even slid around and looked surprised when he needed help to get back on his feet. Didn't see any snakes, just birds and small trout. The hike took us about 6 hours from the trailhead to Jennie Lake. The hike was very strenuous; there are some shaded parts and some sunny parts.
We did the loop from Jennie Lakes, but we couldn't find the trail that loops around to Weaver Lake, so we had to boulder-hop down the mountain; luckily I was looking at the map on my GPS to find my way to Weaver Lake. Just wanted to add that Kern River Conservancy and Keepers of the Kern are putting more effort into keeping that part of the river clean. It's disturbing to me that people go to a beautiful place like the Forks and have to leave their trash. We have gotten the river from Kernville to Johnsondale Bridge free of trash. We started from the Big Meadow parking lot (adds a little more than a mile to your hike; try to find the Fox Meadow trailhead, which spits you out near the trailhead for Jennie Lakes) and backpacked in to Jennie Lake. The hike there was mainly uphill and took us 4 hours. We reached the lake at 3:30pm to find three groups already setting up camp, which worried us, but luckily there were plenty of camp spots dispersed around the lake we could camp at. The lake was gorgeous and there were tons of trout that we saw! As it got later we made a fire and cooked dinner. It was a perfect one-nighter. We left at 8:30am on Sunday and got back to the parking lot at 10:30. The hike back down was extremely easy since it was mostly downhill. Great hike, and next time we will do the whole loop! Great trail to get to some wild trout. Not many people fish this far up the Kern because of its isolated location. Great trout and great backpacking. The hike out is tough. Leave early in the morning or later in the evening. Pack as light as possible. Beautiful hike. We slept Friday night at Big Meadows campground and then left for the trailhead in the morning. The hike only took about 3 hours, and Poop Out Pass was not as difficult (in my opinion) as some described. The lake was gorgeous but there were at least six other camps while we were up there. The fishing is great! Caught rainbow trout using Power Bait. This place was absolutely stunning.
When we were here the falls were perfect. The water wasn't too strong, but the rocks are slippery at points. Some people like to remove clothing, so keep your eyes out if you have kids! The top of the falls is beautiful and there are several pools to swim around in, some deeper than others. You can see occasional trout in the streams and deeper pools. We just got home and we already want to go back. Did a quick overnight at Jennie Lake 13 July. All trails well marked, no snow to speak of. Lake was clear and mosquito-free. Decent crowd for a Thursday. Ran out of time to do the loop via JO Pass, but ranger-posted info said there was still big snow and fast water up that way. Would recommend Jennie Lake for an overnight backpack; great workout, especially going up Poop Out Pass. Did this with my 8-year-old on his first multi-night backpacking trip. He carried his own pack, weighing in at about 10 lbs. This was the first time to Jennie Lake for both of us. Started out of Fox Meadow. Trail was great until the top of Poop Out Pass; then we lost the trail in the snow. Thank goodness for the GPS locator on the AllTrails app. It kept us in the ballpark of the trail as we climbed over snow banks. With all the snow this year the lake was still mostly iced over and every little stream was now a river, but we were able to find a nice campsite away from the lake (all lake sites were still under snow) that was only a 3-minute walk to the lake, where we caught two small rainbows. Had the whole place to ourselves!!!! It was an amazing trip. I may be a little biased because I patrol this area, but this hike is beautiful, can be challenging, and is sure to give some amazing views. Right now there is tons of snow and the creeks are nearly raging rivers. However, this trail (versus the whole loop) is the best access point to Jennie Lake! AMAZING! We hiked with two kids, 8 and 10 years old. Keep going past the 1 mile!!! You'll be glad you did. Due to the massive amount of rain/snow this year, the water level is very high.
Extremely beautiful hike. We hiked 5 miles in and there was still more to the trail. Much to do and see. Our favorite parts were the little primitive campsites along the way that the kids stopped and played in. I've hiked to Jennie Lake, but entering at Lodgepole on the Twin Lakes trail... it's a tough trail, approximately 8 miles... if you feel you won't make it over the J.O. Pass you can camp at Clover Creek about 6 miles in and tackle the pass to Jennie Lake the next day... I made it in one day before but was exhausted... this lake is beautiful and I only rate it a 4 because I love solitude and this lake is usually pretty busy in the summer... We pick wild blackberries along Bear Creek Road leading to Balch Park nearly every summer. We hadn't been for about 6 years, so we thought we would drive through on our way to Ponderosa to see if there were any berries. I wasn't very optimistic about finding any, given the recent prolonged drought conditions and the late date, but I was pleasantly surprised to see the blackberry bushes were plentiful. Unfortunately most berries were dried up or overripe, but persistence paid off with just enough berries in our buckets to make one batch of wild blackberry jam! I recommend driving up no later than the middle of August to harvest a good amount of blackberries at their peak of ripeness! Make sure to wear long-sleeve shirts, long pants, closed-toe shoes and maybe even some latex/rubber gloves as protection against the vicious thorns in the bramble. You don't want thick or leather gloves because the berries are delicate and you will just squish them if you try to pick them wearing leather gloves. I have a friend who told me once about her Balch Park family camping tradition. They would drive up early enough to stop and pick a bag of blackberries on their way up. She would use them to make wild blackberry syrup and serve it over their pancakes for breakfast on the first morning. YUM! What a tradition and what great memories!
Something about the labor and pain needed to gather those blackberries just makes the reward so much sweeter! Fishing at Balch Park and picking wild blackberries has definitely been a family favorite! Going in is a breeze, but the hike out is a challenge. There are great fishing and swimming holes about 3.6-3.7 miles in along the Kern River. At the Forks there are many signs of litter and junk left over by recent visitors. We arrived on a Wednesday and hardly saw a soul. Left on Saturday morning and saw heaps of groups: fishers, campers, scouts, etc. Definitely will come back.
2023-12-31T01:27:17.568517
https://example.com/article/9843
Effects of Enteral Immunonutrition in Patients Undergoing Pancreaticoduodenectomy: A Meta-Analysis of Randomized Controlled Trials. The effect of enteral immunonutrition (EIN) in patients undergoing pancreaticoduodenectomy (PD) is still uncertain. This meta-analysis aimed to assess the impact of EIN on postoperative clinical outcomes for patients undergoing PD. A literature search was carried out to identify all of the randomized controlled trials (RCTs) concerning the use of EIN for PD. Data collection ended on April 1, 2018. Pooled risk ratios (RRs) and mean differences (MDs) with 95% CIs were calculated using fixed-effects or random-effects models. The analyses were performed with RevMan 5.3.5. Four RCTs with a total of 299 patients were included. Immunonutrition reduced the incidence of postoperative infectious complications (RR 0.58, 95% CI 0.37-0.92; p = 0.02) and shortened the length of hospital stay (MD -1.79, 95% CI -3.40 to -0.18; p = 0.03). Conversely, there were no significant differences in the incidence of overall postoperative complications (RR 0.81, 95% CI 0.62-1.05; p = 0.11), non-infectious complications (RR 0.94, 95% CI 0.69-1.28; p = 0.70) or postoperative mortality (RR 2.43, 95% CI 0.37-16.10; p = 0.36). EIN reduced postoperative infectious complications and shortened the length of the hospital stay; immunonutrition should be encouraged in patients undergoing PD.
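As context for how figures like RR 0.58 (95% CI 0.37-0.92) are produced, here is a minimal sketch of inverse-variance fixed-effect pooling of risk ratios in Python. The input numbers below are invented for illustration only; they are not the trial data from this meta-analysis.

```python
import math

def pooled_rr_fixed(effects):
    """Inverse-variance fixed-effect pooling of risk ratios.

    `effects` is a list of (rr, se_log_rr) pairs: each trial's risk ratio
    and the standard error of its log. Inputs here are illustrative only."""
    num = den = 0.0
    for rr, se in effects:
        w = 1.0 / se ** 2          # weight = inverse variance of log(RR)
        num += w * math.log(rr)
        den += w
    log_pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    lo = math.exp(log_pooled - 1.96 * se_pooled)  # 95% CI bounds
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), (lo, hi)

# Two hypothetical trials; the pooled RR lands between the trial RRs,
# closer to the more precise (smaller-SE) trial.
rr, (lo, hi) = pooled_rr_fixed([(0.5, 0.40), (0.7, 0.35)])
```

Random-effects pooling differs only in that each weight is inflated by a between-trial variance estimate; RevMan implements both.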
2024-02-26T01:27:17.568517
https://example.com/article/4793
Because, obviously… Chris Pine Usually, it’s at Comic-Con or some other time when large promotions are rolled out, but right now I have a few new trailers that are making me excited for upcoming programming/movies. For starters: Jack Ryan is being made into a TV show on Amazon Prime. We don’t have a date for it yet, but we … It’s ten years later, and as promised the Junior Counselors are back for a reunion at 9:30AM sharp. Everyone is pretty much the same, except for Ben, who’s had a nose job that has changed his appearance from Bradley Cooper to Adam Scott. Camp Firewood hasn’t changed much either over the years, as there are still stories …
2023-09-06T01:27:17.568517
https://example.com/article/3517
Favorite Food: Like many schnauzers he has a sensitive stomach so he only eats Flint River Ranch Fish & Chips, which is the only food that doesn't bother his tummy and has cleared up all of his problems. He's very healthy and happy.
2024-01-26T01:27:17.568517
https://example.com/article/3976
The delivery of radio frequency (RF) energy to target regions within solid tissue is known for a variety of purposes of particular interest to the present inventions. In one particular application, RF energy may be delivered to diseased regions (e.g., tumors) in targeted tissue for the purpose of tissue necrosis. RF ablation of tumors is currently performed within one of two core technologies. The first technology uses a single needle electrode, which when attached to a RF generator, emits RF energy from the exposed, uninsulated portion of the electrode. This energy translates into ion agitation, which is converted into heat and induces cellular death via coagulation necrosis. The second technology utilizes multiple needle electrodes, which have been designed for the treatment and necrosis of tumors in the liver and other solid tissues. PCT application WO 96/29946 and U.S. Pat. No. 6,379,353 disclose such probes. In U.S. Pat. No. 6,379,353, a probe system comprises a cannula having a needle electrode array reciprocatably mounted therein. The individual electrodes within the array have spring memory, so that they assume a radially outward, arcuate configuration as they are advanced distally from the cannula. In theory, RF ablation can be used to sculpt precisely the volume of necrosis to match the extent of the tumor. By varying the power output and the type of electrical waveform, it is possible to control the extent of heating, and thus, the resulting ablation. However, the size of tissue coagulation created from a single electrode, and to a lesser extent a multiple electrode array, has been limited by heat dispersion. As a consequence, when ablating lesions that are larger than the capability of the above-mentioned devices, the common practice is to stack ablations (i.e., perform multiple ablations) within a given area. 
This requires multiple electrode placements and ablations facilitated by the use of ultrasound imaging to visualize the electrode in relation to the targeted tissue. Because of the echogenic cloud created by the ablated tissue, however, this process often becomes difficult to perform accurately. This process considerably increases treatment duration and patient discomfort and requires significant skill for meticulous precision of probe placement. In response to this, the marketplace has attempted to create larger lesions with a single probe insertion. Increasing generator output, however, has been generally unsuccessful for increasing lesion diameter, because an increased wattage is associated with a local increase of temperature to more than 100° C., which induces tissue vaporization and charring. This then increases local tissue impedance, limiting RF deposition, and therefore heat diffusion and associated coagulation necrosis. In addition, patient tolerance appears to be at the maximum using currently available 200 W generators. It has been shown that the introduction of conductive fluid, such as saline, into the extra-cellular spaces of the targeted tissue increases the tissue conductivity, thereby creating a larger lesion size. However, because electrically conductive fluid may preferentially travel into fissures or spaces inside, and even outside, of the targeted tissue, application of ablation energy to the targeted tissue may result in irregular ablation shapes that may include healthy tissue. For this reason, it would be desirable to provide improved electrosurgical methods and systems for more efficiently and effectively ablating tumors in the liver and other body organs that are larger than the single ablation capability of the electrode or electrode array on the electrosurgical device being used.
2024-03-18T01:27:17.568517
https://example.com/article/1244
George Child Villiers, 8th Earl of Jersey George Henry Robert Child Villiers, 8th Earl of Jersey DL (2 June 1873 – 31 December 1923), was a British peer and Conservative politician from the Villiers family. Villiers was the son of Victor Child Villiers, 7th Earl of Jersey, and the Honourable Margaret Elizabeth, daughter of William Henry Leigh, 2nd Baron Leigh. He succeeded his father in the earldom in 1915 and served briefly as a Lord-in-waiting under David Lloyd George between January and August 1919. Lord Jersey sold the Child & Co bank, part of the family's inheritance since the 5th Earl married into the Child family, to Glyn, Mills & Co. Bank in 1923. Family Lord Jersey married Lady Cynthia Almina Constance Mary Needham, daughter of Francis Needham, 3rd Earl of Kilmorey, and Ellen Constance Baldock, on 8 October 1908. They had four children: George Child Villiers, 9th Earl of Jersey (1910–1998). Lady Joan Child Villiers (1911–2010), married David Colville (who died 1986), grandson of Charles Colville, 1st Viscount Colville of Culross. Hon. (Edward) Mansel Child Villiers (3 May 1913 – 9 March 1980), married twice (firstly 1934, diss. 1940, to Barbara Mary Frampton, and secondly 1946, diss. 1971, to Princess Maria Gloria Pignatelli Aragona Cortez). Lady Ann Child Villiers (1916–2006), married (1937) Major Alexander Henry Elliot (d. 1986).
2023-10-28T01:27:17.568517
https://example.com/article/6089
Q: Loading NSTableView Lazily On iOS, a UITableView is loaded lazily, in the sense that the table only displays the cells that are needed for the current view and no more. As the user scrolls up or down, more information is loaded from the data source. Is there something similar for OSX? I have an NSTableView with 1000+ records to load (+ multiple columns) which is resulting in laggy loading as well as lag when scrolling up or down. Would this 'lazy' loading be a good solution (if possible)? Or maybe something along the same lines? Thanks. P.S. Regarding the loading process: I use the two usual methods, -(id)tableView:(NSTableView *)aTableView objectValueForTableColumn:(NSTableColumn *)aTableColumn row:(NSInteger)rowIndex and -(NSInteger)numberOfRowsInTableView:(NSTableView *)aTableView, to load the data. Within them there are if statements to check some variables, since I have 2 combo boxes that, when both contain "ALL", load these 1000+ records. If the combo boxes are not "ALL" and "ALL", then only a couple of records are loaded, which load just fine. I think the problem, given that you have explained that lazy loading is done automatically, is that I am loading the data from XML files, so maybe the parsing is taking up a large chunk of processing time. I thought of loading the data from the XML file into an NSDictionary at runtime and keeping it in memory, available for use when needed, so that I can avoid the loading time when the information is actually needed to display. What do you think? Thanks! A: You definitely want to cache your XML file into some sort of data structure. The tableView:objectValueForTableColumn:row: method will be called once for every visible cell. If you have a lot of columns, it's easy to have 200 cells visible. Page down, and those cells all change, your XML file loads 200 times, and you get laggy scrolling.
Even if the XML file is only loaded and parsed for every cell in a specific column, it will be slow and should be fixed. You can verify that the caching is necessary by making tableView:objectValueForTableColumn:row: return nil where it would otherwise parse the XML file. Once you make this change, does it scroll smoothly, or do you have other problems? It would obviously make the application unusable, but you should always check if an optimization is necessary and sufficient before implementing it.
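The caching strategy the answer recommends is independent of Cocoa. Here is a minimal, hypothetical sketch in Python (the XML schema and field names are invented for illustration) of parsing once up front and serving every subsequent cell lookup from memory:

```python
import xml.etree.ElementTree as ET

class CachedDataSource:
    """Parse the XML once, then answer every cell lookup from memory
    instead of re-parsing on each table-view callback.
    The XML schema and field names here are invented for illustration."""

    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        # Cache each <record> element's attributes as a plain dict.
        self.rows = [dict(rec.attrib) for rec in root.iter("record")]

    def number_of_rows(self):
        # Analogue of numberOfRowsInTableView:
        return len(self.rows)

    def value_for(self, column, row_index):
        # Analogue of tableView:objectValueForTableColumn:row: -- an O(1) lookup.
        return self.rows[row_index].get(column)

xml = '<data><record name="a" qty="1"/><record name="b" qty="2"/></data>'
source = CachedDataSource(xml)
```

In the Objective-C version, the same idea would mean parsing into an NSArray of NSDictionary objects once, so that tableView:objectValueForTableColumn:row: does nothing more than a dictionary lookup.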
2024-07-13T01:27:17.568517
https://example.com/article/9700
Q: How can I convert a canvas to a bitmap to save to the sd card? I use SurfaceView to move two bitmap pictures over the screen. I tried this:

...
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    ...
    canvas.drawBitmap(bitmap, matrix, paint);
    canvas.drawBitmap(bitmap2, matrix, paint);
}
...

How is it possible to save the canvas to the sd card, like this?

public void saveCanvasIntoSdCard(Canvas canvas) {
}

A: Solution I found:

public Bitmap getBitmap() {
    Bitmap bmOverlay = Bitmap.createBitmap(bitmap2.getWidth(), bitmap2.getHeight(), bitmap2.getConfig());
    Canvas canvas = new Canvas(bmOverlay);
    canvas.drawBitmap(bitmap, matrix, null);
    canvas.drawBitmap(bitmap2, 0, 0, null);
    return bmOverlay;
}

public void save(View view) {
    String root = Environment.getExternalStorageDirectory().toString();
    File myDir = new File(root + "/dress");
    myDir.mkdirs();
    String fname = "save.jpg";
    File file = new File(myDir, fname);
    if (file.exists())
        file.delete();
    try {
        FileOutputStream out = new FileOutputStream(file);
        getBitmap().compress(Bitmap.CompressFormat.JPEG, 100, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
2023-08-26T01:27:17.568517
https://example.com/article/6737
The Asian Pacific Society of Periodontology (APSP) was founded in 1993, with representatives from 14 countries and the following objectives: "To serve as a non-profit medium for the exchange, advancement, and dissemination of scientific knowledge related to periodontal research and education in the Asian Pacific Regions." The 13th APSP meeting, held in Kuala Lumpur on September 28-29, 2019, provided an excellent platform for academic exchange and future collaboration in the Asia-Pacific region. The Asia-Pacific region is the most populous region of the globe, accounting for about 60% of the world's population. The level of oral health care, and especially periodontal health, in this region therefore largely determines the overall level worldwide. However, the reality is that a wide gap exists in oral health care levels between regions, and some areas even have limited access to basic periodontal treatment. Furthermore, recent research has confirmed that periodontal disease and systemic diseases are closely related, and periodontal disease should therefore be managed as a non-communicable disease. Additionally, the demand for periodontal treatment is increasing significantly due to the rapid growth of the elderly population. Consequently, this is a crucial time for Asia-Pacific periodontists to share their knowledge and enhance the balanced improvement of oral health in the Asia-Pacific region. Young Ku is Immediate Past President of the Asian Pacific Society of Periodontology.
2023-12-04T01:27:17.568517
https://example.com/article/4035
Loop diuretic resistance in heart failure: resistance etiology-based strategies to restoring diuretic efficacy. Loop diuretics are a cornerstone of symptom management for nearly all patients with heart failure. Diuretic resistance is a clinical problem with similar presentation despite diverse and multiple etiologies. Although the exact incidence is not known, diuretic resistance occurs frequently and can increase the length of hospitalization. Despite the prevalence of loop diuretic prescription in heart failure and frequency of diuretic resistance, current heart failure guidelines provide nonspecific guidance on strategies to restore diuretic efficacy. Providers are left with many questions regarding the optimum diuretic titration strategy in the setting of diuretic resistance. In light of these highly prevalent uncertainties, we present a case vignette-structured literature review of the mechanisms of diuretic resistance and recommend therapeutic strategies based on the resistance etiology to improve diuretic response in acute decompensated heart failure.
2024-06-26T01:27:17.568517
https://example.com/article/6828
Q: Split Excel worksheet into multiple worksheets based on a column with VBA Question is simple and may be repetitive. I have an Excel workbook which contains around 50 columns. I have a criteria column for splitting this workbook into multiple workbooks. The approach is as shown below:

Name  SportGoods     quantity
ABC   CRICKETBAT     10
DEF   BaseballBat    20
GHI   football       30
MNO   gloves         10
PQR   shoes          10
ABCD  CRICKET SHOES  10
DEFG  BaseballBat    20
GHIL  football       30
MNOP  gloves         10
PQRS  shoes          10

I am looking for a macro which enables me to create multiple Excel workbooks based on the column SportGoods, like: an Excel/CSV file for all cricket items (CRICKETBAT, CRICKET SHOES, gloves) and an Excel/CSV file for all football items (football and shoes). As input parameters I would provide the distinct cricket items and the distinct football items. The source would be a large Excel data sheet which contains ~5000 records. Can someone help me with a macro which would help in generating multiple workbooks based on the above details? A: Summary This is a short but smart macro. It splits and saves the data on the active sheet into different CSV files. The newly created files are stored in a new folder called CSV output at the same location as your Excel file.
VBA macro

Sub GenerateCSV()
    Application.ScreenUpdating = False
    Application.DisplayAlerts = False
    iCol = 2                        '### Define your criteria column
    strOutputFolder = "CSV output"  '### Define your path of output folder
    Set ws = ThisWorkbook.ActiveSheet
    '### Don't edit below this line
    Set rngLast = Columns(iCol).Find("*", Cells(1, iCol), , , xlByColumns, xlPrevious)
    ws.Columns(iCol).AdvancedFilter Action:=xlFilterInPlace, Unique:=True
    Set rngUnique = Range(Cells(2, iCol), rngLast).SpecialCells(xlCellTypeVisible)
    If Dir(strOutputFolder, vbDirectory) = vbNullString Then MkDir strOutputFolder
    For Each strItem In rngUnique
        If strItem <> "" Then
            ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value
            Workbooks.Add
            ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=[A1]
            strFilename = strOutputFolder & "\" & strItem
            ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlCSV
            ActiveWorkbook.Close savechanges:=False
        End If
    Next
    ws.ShowAllData
    Application.ScreenUpdating = True
    Application.DisplayAlerts = True
End Sub

Save it in a new VBA module.

Understanding the code

iCol = 2
strOutputFolder = "CSV output"

The first line is your criteria column. A 1 would stand for column A, 2 for column B, and so on. Second, we define a folder name where all our CSV files should be saved. You can also set a fully qualified path like C:\some\folder. Otherwise, Excel will create the folder at your Excel file's location.

Set ws = ThisWorkbook.ActiveSheet

Here we save our current worksheet in a variable. It's not necessary to do this, but since we are dealing with multiple workbooks (newly created ones) I recommend it.

Set rngLast = Columns(iCol).Find("*", Cells(1, iCol), , , xlByColumns, xlPrevious)
ws.Columns(iCol).AdvancedFilter Action:=xlFilterInPlace, Unique:=True
Set rngUnique = Range(Cells(2, iCol), rngLast).SpecialCells(xlCellTypeVisible)

OK, what does this part do? First, we search for the last cell, looking only in the criteria column.
This must be done before our filtering and is needed later. Then, we use the famous advanced filter method to filter out in place all duplicate values from our criteria column. At last, we save all visible cells in a variable called rngUnique.

If Dir(strOutputFolder, vbDirectory) = vbNullString Then MkDir strOutputFolder

Let's see if a folder called CSV output already exists. If not, create one.

For Each strItem In rngUnique
    If strItem <> "" Then
        [...]
    End If
Next

Now, we start to loop through all unique values in our variable rngUnique. Empty values are skipped.

ws.UsedRange.AutoFilter Field:=iCol, Criteria1:=strItem.Value

An important line. We use the autofilter method and show only the rows which match our current unique value. The old advanced filter gets canceled automatically.

Workbooks.Add
ws.UsedRange.SpecialCells(xlCellTypeVisible).Copy Destination:=[A1]

These two lines create a new empty workbook and copy over only the visible cells from our input workbook.

strFilename = strOutputFolder & "\" & strItem

Here we put together the CSV path. We take the current unique value as the file name. The extension .csv is appended automatically since we have chosen xlCSV as the output format. Make sure your unique values do not contain invalid filename characters like < > | / * \ ? " or the corresponding CSV file won't be created.

ActiveWorkbook.SaveAs Filename:=strFilename, FileFormat:=xlCSV
ActiveWorkbook.Close savechanges:=False

The last step is to save the current workbook as a CSV, taking the variable strFilename as the filename. The CSV delimiter depends on your regional settings. It's possible to change the file format, e.g. to tab-delimited CSV or an Excel 2003 workbook.

Application.ScreenUpdating = False
Application.DisplayAlerts = False

The first line speeds up our macro a bit, since Excel doesn't need to show every single step of the filtering. The second line suppresses annoying "File already exists" prompts. Later we enable those functions again.
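The macro's core idea (group rows by one column's value, then write each group to its own file) can be sketched outside Excel as well. Here is a minimal Python version using only the standard library, with the column names taken from the question:

```python
import csv
import io
from collections import defaultdict

def split_by_column(rows, key):
    """Group records by the value of one column -- the same idea as the
    macro's filter-and-save loop, minus the Excel plumbing."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    return groups

# Sample data using the column names from the question.
data = io.StringIO(
    "Name,SportGoods,quantity\n"
    "ABC,CRICKETBAT,10\n"
    "GHI,football,30\n"
    "DEFG,BaseballBat,20\n"
)
groups = split_by_column(csv.DictReader(data), "SportGoods")
# Each group could then be written to its own file with csv.DictWriter,
# one output file per distinct SportGoods value.
```

Like the macro, this produces one bucket per distinct value of the criteria column; only the output step (CSV files versus workbooks) differs.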
2024-01-05T01:27:17.568517
https://example.com/article/7830
A Freshman’s Fresh Take on the Updated Alcohol Policy While outrage towards the recent update on the alcohol policy ensues, an insidious counterculture of hard pre-gaming — bred by our community’s incessant demand for alcohol on Thursday and Saturday nights — surfaces and raises one question for many incoming freshmen like me: is the student community genuinely interested in creating a safe environment for all? Upon learning about the new alcohol policy, I understood the change as a boon. Like many incoming Swatties, my first priority was the need to feel safe in a completely new environment. Thus, I couldn’t understand the pushback against the revised policies — hypothetically, students would be safer when attending parties and less inclined to consume alcohol. Plus, there is the entangling snare of legality: by allowing alcohol for all, Swarthmore abets in promoting underage drinking. Most upperclassmen would say that my parochial and naïve opinion amounts to a sacrilegious attack against deeply-entrenched Swarthmore traditions. To be honest, I don’t care. After enduring a strenuously long week, I anticipated my first Pub Nite would serve as a respite from the hectic lifestyle to which I will soon acclimate. I say yes to the awkward dancing, juice cartons, and Doritos — a party can most definitely be simultaneously dry and fun. Approximately an hour before Pub Nite, however, I started receiving texts regarding places where pre-gaming was occurring. Willets 3rd, Alice Paul lounge, Parrish 4th, individual dorm rooms, etc. An update would come roughly every 15 minutes and not stop until midnight. Hence, when entering Paces at around 11 PM, I was not surprised to see that many plastered students were already swaying tipsily on the Paces dancefloor. Pub Nite was BYOB, which ushered in an influx of freshmen desperately pining for sips of alcohol from upperclassmen. The whole scene was uncomfortable and pathetic, and I just wanted to leave.
So did the policy fail to create a safe and inclusive space for all students, wet and dry? As a neophyte strategist myself, I established a few guiding principles that will prove to be instrumental in explaining this phenomenon. 1) Swatties will drink. There is a constant demand for alcohol. If it is not met at Paces, students will find other ways to get alcohol. Alcohol demand will not decrease because of public unavailability. 2) Swatties will purchase alcohol. Rum and vodka will flow like milk and honey here, and they definitely have. If demand is not met in Paces or funded by the school, the alcohol supplies in dorms and cars will certainly match the quantity demanded. 3) Swatties need to relax. For the average Swattie, the spate of homework and extra-curricular activities is overwhelming, and many — including Plato and researchers from the Harvard School of Public Health — have noted that temperate alcohol consumption is an excellent way of relaxing and creating happiness. Consuming alcohol is prevalent in Swarthmore culture. What we have here is a classic example of the counterculture created when authority figures attempt to “ban” something. This counterculture, however, can be just as dangerous as the previous paradigm. Non-freshman students who pre-game today, fully cognizant of the disappearance of alcohol from public events, will consume the same amount of alcohol that they would have if they had pre-gamed and drunk at Paces. Underclassmen follow suit, but since we are not familiar with the rules of the game, it can be dangerous for the new and inexperienced. Being alone with cheap tequila, subpar beer (can freshmen really afford anything nice?) and your posse of friends who know nothing about dealing with alcohol poisoning is starkly different from being at Paces and having a sober supervisor manning the beer station. It is far too easy to go overboard when left unattended.
This behavior is also replicated at parties that do supply alcohol, such as Disorientation, as students feel the need to arrive at parties already in a drunken stupor. My gut feeling was betrayed by reality. It’s not parties that are unsafe; it is the drinking culture created by banning drinking at parties that is not sustainable. Aside from their concern for safety, freshmen also look for fun and adventure; at Swarthmore, this commonly translates to drinking and partying. For freshmen without alcohol experience, the peer pressure to drink is greater than ever when it is your hallmate pulling out the vodka two doors down from you. And here is where I must stress that our community’s tendency to imbibe should be questioned, as it is the driving force that produced this counterculture. In Pennsylvania, as is the case with most states, minors are prohibited from purchasing, possessing, and consuming alcohol. To vehemently support the distribution of alcohol at Paces by citing tradition as a reason to continue the practice is to support lawlessness. Why are students upset that an illegal practice has now been stopped? While I both acknowledge the benefits of social drinking and approve of former President Chopp’s participation in the Amethyst Initiative, the plain truth is that minors’ consumption of alcohol is illegal. There are many alternative de-stressors that are both dry and fun — students may feel that the only way to have fun on a Saturday night is to get shitfaced in Willets because their hallmates are doing it. When we refuse to comply with the law because it differs from our norms, we inadvertently create a dangerous drinking environment. This also shines a light on our petulance: we will consciously choose to blur the lines of the law if the individual utility gained from participating in illicit activities outweighs that gained from altruistically creating a comfortable environment for others.
At the same time, the administration should not be complacently satisfied with policies riddled with holes; the ramifications of policy can only be understood and felt by students, and thus we should be integral members in the decision-making process. Having been a Swattie for nearly a month now, I see that we are collectively intelligent. We know what creates a safe drinking environment. We are mostly aware of what our limits are, and we notice and speak up when our peers over-consume. Also, we will help those in need. I firmly believe that the administration must pass the reins to a responsible, elected student government that understands the real effects of policy. Banning alcohol to prevent liability issues is definitely a step in the right direction, but banning drinking games for students of legal age, funnels, and the donations to Senior Week is excessive. We should acknowledge both the benefits and flaws presented through the recent policy changes. And though no grand strategy can predict every looming consequence, I contend that students should be given more license to create the safe, legal, and fun environment that we want through policy. The momentum the administration started by introducing the policy may have led us slightly awry, but our community must answer a few key questions going forward as we seek a genuinely safe social environment. What does it mean for Swarthmore to be a community that condones underage drinking? Is it possible for drinking and non-drinking students to interact with each other at parties without a pressure to drink? Are the wishes of non-drinking students truly respected when considering the nature of the debate surrounding alcohol policy? Hello, did you like this article? Write for The Gazette! Open staff meetings are every Wednesday at 7:00 p.m. in The Daily Gazette office on Parrish 4th. Info about our editors can be found here; you can also email us at editors@daily.swarthmore.edu.
6 comments You have clearly never visited another college campus. Swarthmore was accommodating to an absurd degree. For an institution to shelter students above the age of 18 who can vote and serve in the military from alcohol is as realistic as abstinence-only education. This isn’t Sodom or Gomorrah, or anywhere even close to having parties like U of A or PSU. This is a school of 1,500 in suburban Pennsylvania. Seriously? Because he pointed out that it’s probably not a good idea for the College to explicitly purchase alcohol with tuition dollars and then serve that alcohol to minors? He even gave a whole bunch of concessions that drinking is really fun and that the policy could create a toxic culture. One minor point – Pub Nite was frequently “uncomfortable and pathetic” for me as a freshman before the policy changes, too. In fact, “uncomfortable and pathetic” is a pretty good description for about half of all Paces parties.
2024-01-25T01:27:17.568517
https://example.com/article/7239
Q: Easiest way to select distinct with least number of null

I want to create a view over a table that has 500k rows and 10 columns. In that table there are duplicate ids, but with different amounts of information, because some of the columns are NULL. My objective is to keep one row per id in case of duplicates, keeping the one with the fewest NULL values. Let me explain with a quick example. I am working with a query similar to this:

CREATE TABLE test (ID INT, b CHAR(1), c CHAR(1), d CHAR(1));

INSERT INTO test (ID, b, c, d) VALUES
    (1, NULL, NULL, NULL),
    (1, 'B', NULL, NULL),
    (1, 'B', 'C', NULL),
    (1, 'B', 'C', 'D'),
    (2, 'E', 'F', NULL),
    (2, 'E', NULL, NULL),
    (3, NULL, NULL, NULL),
    (3, 'G', NULL, NULL);

SELECT DISTINCT ID, b, c, d FROM test;

DROP TABLE test;

The result is

ID   b     c     d
--------------------
1    NULL  NULL  NULL
1    B     NULL  NULL
1    B     C     NULL
1    B     C     D
2    E     F     NULL
2    E     NULL  NULL
3    NULL  NULL  NULL
3    G     NULL  NULL

However, the output I want to see is

ID   b     c     d
--------------------
1    B     C     D
2    E     F     NULL
3    G     NULL  NULL

So, based on the id, if there are duplicates I want to keep the row with the fewest NULLs. How is this possible? Thank you very much.

A: If you want the single row with the fewest NULLs, then you would basically count them and order by that count (ascending, so the row with the fewest NULLs comes first):

SELECT t.*
FROM test t
ORDER BY ((CASE WHEN b IS NULL THEN 1 ELSE 0 END) +
          (CASE WHEN c IS NULL THEN 1 ELSE 0 END) +
          (CASE WHEN d IS NULL THEN 1 ELSE 0 END)) ASC
FETCH FIRST 1 ROW ONLY;

However, if you want one row per id with a non-NULL value in each column (if available), then @maSTAShuFu's answer is appropriate.

EDIT: If you want one row per client, then simply use ROW_NUMBER():

SELECT t.*
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY client_id
                                ORDER BY ((CASE WHEN b IS NULL THEN 1 ELSE 0 END) +
                                          (CASE WHEN c IS NULL THEN 1 ELSE 0 END) +
                                          (CASE WHEN d IS NULL THEN 1 ELSE 0 END)) ASC
                               ) AS seqnum
      FROM t
     ) t
WHERE seqnum = 1;
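To experiment with the one-row-per-id approach locally, here is a small sketch using Python's built-in sqlite3 module (window functions require SQLite 3.25 or newer, which ships with recent Python releases). The table and data mirror the question; in SQLite an expression like (b IS NULL) evaluates to 0 or 1, which shortens the NULL count compared to the CASE expressions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, b TEXT, c TEXT, d TEXT)")
conn.executemany(
    "INSERT INTO test (id, b, c, d) VALUES (?, ?, ?, ?)",
    [(1, None, None, None), (1, "B", None, None), (1, "B", "C", None),
     (1, "B", "C", "D"), (2, "E", "F", None), (2, "E", None, None),
     (3, None, None, None), (3, "G", None, None)],
)

# One row per id: rank each id's rows by their NULL count
# (fewest NULLs first, the default ASC order) and keep rank 1.
rows = conn.execute("""
    SELECT id, b, c, d
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (
                     PARTITION BY id
                     ORDER BY (b IS NULL) + (c IS NULL) + (d IS NULL)
                 ) AS seqnum
          FROM test t)
    WHERE seqnum = 1
    ORDER BY id
""").fetchall()

print(rows)  # [(1, 'B', 'C', 'D'), (2, 'E', 'F', None), (3, 'G', None, None)]
```

Note that on a tie (two rows with the same NULL count for one id) ROW_NUMBER picks one of them arbitrarily unless a tie-breaker column is added to the ORDER BY.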
2024-05-16T01:27:17.568517
https://example.com/article/7130
Right ventricular dysfunction and B-type natriuretic peptide in asymptomatic patients after repair for tetralogy of Fallot. Early detection of right ventricular (RV) dysfunction is essential in the assessment of patients with repaired tetralogy of Fallot (TOF). This study aimed to assess latent RV dysfunction in asymptomatic patients with TOF and to determine the predictive value of B-type natriuretic peptide (BNP). Pressure-volume loops were recorded for 16 young patients (New York Heart Association class 1 or Ross class 0; median age, 14.2 years) using the conductance catheter technique. All the patients had RV dilation secondary to pulmonary regurgitation after surgical repair of TOF. Indexes of RV function were derived at baseline level and during dobutamine infusion. Contractility was calculated by the slope of the end-systolic pressure-volume relation (ESPVR). An increase in ESPVR during dobutamine infusion was considered to indicate contractile reserve as a marker for latent RV dysfunction. The median ESPVR significantly increased from 0.32 mmHg/ml (0.13-0.72 mmHg/ml) at baseline to 0.57 mmHg/ml (0.24-1.55 mmHg/ml) during dobutamine infusion (p = 0.005). However, for five patients, no relevant increase in contractility was found, indicating impaired RV contractile reserve. There was only a weak inverse correlation between impaired contractile reserve and BNP (r = -0.28). Even asymptomatic patients with only a mildly enlarged right ventricle can have impaired RV function. Early RV dysfunction cannot be predicted accurately with BNP.
2024-06-09T01:27:17.568517
https://example.com/article/2280
Skills That Matter in a World Awash in Data There is a paradox put forth by the French philosopher Jean Buridan which is commonly referred to as Buridan’s Ass. One interpretation goes something like this: Take a donkey and stake it equidistant between two identical piles of hay. Since the donkey is incapable of rational choice and the piles of hay are indistinguishable, the donkey will die of hunger. Of course, in the real world, we all presume the donkey would somehow “pick” a pile. We accept these situations all around us: fish seem to “choose” a direction to swim, birds of the same species seem to “decide” whether or not to migrate, and data seems to “suggest” things that we wish to prove. Which of these is not like the others? The answer is the data. Data has no ability to “act” on its own. We can use it or not, and it simply doesn’t care. The choice is entirely ours. The challenge is how we decide rationally what data to use and how to use it, when we have enough data, and when we have the “right” data. Making the wrong choice has serious consequences. Making the right choice can lead to enormous advantage. Let’s look at the facts. We know that we are living in a world awash in data. Every day, we produce more data than the previous day, and at a rate which is arguably impossible to measure or model because we have lost the ability to see the boundaries. Data is not only created in places we can easily “see” such as the Internet, or on corporate servers. It is created in devices, it is created in the cloud, it is created in streams that may or may not be captured and stored, and it is created in places intentionally engineered to be difficult or impossible to perceive without special tools or privileges. Things are now talking to other things and producing data that only those things can see or use. There is no defendable premise that we can simply scale our approach to data from ten years ago to address the dynamic nature of data today. 
This deluge of data is resulting in three inconvenient truths: Organizations are struggling to make use of the data already in hand, even as the amount of “discoverable” data increases at unprecedented rates. The data which can be brought to bear on a business problem is effectively unbounded, yet the requirements of governance and regulatory compliance make it increasingly difficult to experiment with new types of data. The skills we need to understand new data never before seen are extremely nuanced, and very different than those which have led to success so far. Data already in hand – think airplanes and peanut butter. Recently, I was on a flight which was delayed due to a mechanical issue. In such situations, the airline faces a complex problem, trying to estimate the delay and balance regulations, passenger connections, equipment availability, and many other factors. There is also a human element as people try to fix the problem. All I really wanted to know was how long I had in terms of delay. Did I have time to leave the gate and do something else? Did I have time to find a quiet place to work? In this situation, the answer was yes. The flight was delayed 2 hours. I wandered very far from the gate (bad idea). All of a sudden, I got a text message that as of 3:50PM, my flight was delayed to 3:48PM. I didn’t have time to wonder about time travel… I sprinted back to the gate, only to find a whole lot of nothing going on. It seemed that the airline systems that talk to each other to send out messaging were not communicating correctly with the ones that ingested data from the rest of the system. Stand down from red alert… No plane yet. False alarm. While the situation is funny in retrospect, it wasn’t at the time. How many times do we do something like this to customers or colleagues? How many times do the complex systems we have built speak to one another in ways that were not intended and reach the wrong conclusions or send the wrong signals? 
I am increasingly finding senior executives who struggle to make sense out of the data already on-hand within their organization. In some cases, they are simply not interested in more data because they are overwhelmed with the data on hand. This position is a very dangerous one to take. We can’t just “pick a pile of hay.” There is no logical reason to presume that the data in hand is sufficient to make any particular decision without some sort of analysis comparing three universes: data in hand, data that could be brought to bear on the problem, and data that we know exists but which is not accessible (e.g. covert, confidential, not disclosed). Only by assessing the relative size and importance of these three distinct sets of data in some meaningful way can we rationally make a determination that we are using sufficient data to make a data-based decision. There is a phenomenon in computer science known as the “dispositive threshold.” This is the point at which sufficient information exists to make a decision. It does not, however, determine that there is sufficient information to make a repeatable decision, or an effective decision. Imagine that I asked you if you liked peanut butter and you had never tasted it. You don’t have enough information. After confirming that you know you don’t have a peanut allergy, I give you a spoon of peanut butter. You either “like” it or you don’t. You may feel you have enough information (dispositive threshold) until you learn that there is creamy and chunky peanut butter and you have only tasted one type, so you ask for a spoon of the other type. Now you learn that some peanut butter is salted and some isn’t. At some point, you step back and realize that all of these variations are not changing the essence of what peanut butter is. You can make a reasonable decision about how you feel about peanut butter without tasting all potential variations of peanut butter. 
You can answer the question “do you like peanut butter” but not the question “do you like all types of peanut butter.” The moral here, without getting into lots of math or philosophy, is this: It is possible to make decisions with data if we are mindful about what data we have available. However, we must at least have some idea of the data we are not using in the decision-making process and a clear understanding of the constraints on the types of decisions we can make and defend. Governance and regulatory compliance – bad guys and salad bars. Governance essentially boils down to the three time-worn pieces of advice: “say what you’re going to do, do it, say you did it.” Of course, in the case of data-based decision making, there are many nuances in terms of deciding what you are going to do. Even before we consider rules and regulations, we can look at best practice and reasonableness. We must decide what information we will allow in the enterprise, how we will ingest it, evaluate it, store it, and use it. These become the rules of the road, and governance is the process of making sure we follow those rules. So far, this advice seems pretty straightforward, but consider what happens when the governance system gets washed over by a huge amount of data that has never been seen before. Some advocates of “big data” would suggest ingesting the data and using techniques such as unsupervised learning to tell us what the data means. This is a dangerous strategy akin to trying to eat everything on the salad bar. There is a very real risk that some data should never enter the enterprise. I would suggest that we need to take a few steps first to make sure we are “doing what we said we will do.” For example, have we looked at the way in which the data was created, at what it is intended to contain, and at a small sample of the data in a controlled environment to make sure it lives up to the promised content? 
Small steps before ingesting big data can avoid big, possibly unrecoverable mistakes. Of course, even if we follow the rules very carefully, the system changes. In the case of governance, we must also consider the changing regulatory environment. For example, the first laws concerning expectations of privacy in electronic communication were in place before the Internet changed the way we communicate with one another. Many times, laws lag quite significantly behind technology, or lawmakers are influenced by changes in policy, so we must be careful to continuously re-evaluate what we are doing from a governance perspective to comply not only with internal policy, but also with evolving regulation. Sometimes, this process can get very tricky. Consider the situation of looking for bad behavior. Bad guys are tricky. They continue to change their behavior, even as systems and processes evolve to detect bad behavior. In science, these types of problems are called “quantum observation” effects, where the thing being observed changes by virtue of being observed. Even the definition of “bad” changes over time or from the perspective of different geographies and use cases. When we create processes for governance, we look at the data we may permissibly ingest. When we create processes for detecting (or predicting) bad behavior, the dichotomy is that we must use data in permissible ways to detect malfeasant acts that are unconstrained by those same rules. So in effect, we have to use good data in good ways to detect bad actors operating in bad ways. The key take-away here is a tricky one: We must be overt and observant about how we discover, curate and synthesize data to discover actions and insights that often shape or redefine the rules. The skills we need – on change and wet babies. There is an old saying that only wet babies like change all the time. 
The reality is that all of the massive amounts of data facing an enterprise are forcing leaders to look very carefully at the skills they are hiring into the organization. It is not enough to find people who will help “drive change” in the organization – we have to ensure we are driving the right change because the cost of being wrong is quite significant when the pace of change is so fast. I was once in a meeting where a leader was concerned about having to provide a type of training to a large group because their skill level would increase. “They are unskilled workers. What happens if we train them, and they leave?” he shouted. The smartest consultant I ever worked with crystallized the situation with the perfect reply, “What happens if you don’t train them and they stay!” Competitors and malefactors will certainly gain ground if we spend time chasing the wrong paths of inquiry, yet we can just as easily become paralyzed with analysis and do nothing, which is itself a decision that has cost (the cost of doing nothing is often the most damaging). The key to driving change in the data arena is to balance the needs of the organization in the near term with the enabling capabilities that will be required in the future. Some skills, like the ability to deal with unstructured data, non-regressive methods (such as recursion and heuristic evaluation), and adjudication of veracity will require time to refine. We must be careful to spend some time building out the right longer-term capabilities so that they are ready when we need them. At the same time, we must not ignore skills that may be needed to augment our capability in the short term. Examples might include better visualization, problem formulation, empirical (repeatable) methodology, and computational linguistics. 
Ultimately, I recommend three categories to consider from the perspective of skills in the data-based organization: what you believe, how you need to behave, and how you will measure and sustain progress. The skills that matter are those that will drive value to the organization and to the customers served. As leaders in a world awash in data, we must be better than Buridan’s Ass. We must look beyond the hay. We live in an age where we will learn to do amazing things with data or become outpaced by those who gain better skills and capability. The opportunity goes to those who make a conscious decision to look at data in a new way, unconstrained and full of opportunity if we learn how to use it.
2024-06-01T01:27:17.568517
https://example.com/article/5506
The present invention relates to apparatus for either on-line or off-line testing of an electrical circuit. More particularly, the invention relates to circuit testing apparatus employing a linear feedback shift register ("LFSR") for signature analysis. This application includes corresponding and additional subject matter to that of application Ser. No. 571,256 filed Jan. 16, 1984 in the name of Michael Whelan and commonly assigned with the present application. Methods and apparatus for testing electrical circuits by "signature analysis" are well known. Commercial devices such as the Hewlett Packard HP 5004A signature analyzer have been available for a number of years. Publications describing such devices and methods of testing include "Hexadecimal Signatures Identify Trouble Spots in Microprocessor Systems" by G. Gordon et al., Electronics Magazine, Mar. 3, 1977, pp. 89-96; "Die Signatur-Analyse" by K. Heine, Elektronik Magazine (1979), Volume 1, pp. 48-51; "Logic-state and Signature Analysis Combine for Fast, Easy Testing" by I. Spector, Electronics Magazine, June 8, 1978, pp. 140-145. With the signature analysis method of testing, an electrical circuit having an input and an output is connected at its output to a linear feedback shift register (LFSR) and connected at its input to an automatic test pattern generator (ATG). The ATG applies test patterns sequentially to the input of the circuit under test (CUT) for a prescribed number of test cycles. The output signals produced during this test sequence cause the LFSR to pass through a succession of specific states so that, after completion of the test, the LFSR contains a prescribed, known "signature" (bit pattern) if the ATG, the CUT and the LFSR are working properly. 
Upon the completion of the test, the bit pattern in the LFSR is compared to the known bit pattern of the correct signature (i.e., the signature of a properly-working CUT) to determine whether any of the elements of the test system (ATG, CUT and LFSR) is faulty. During the test, the contents of the LFSR change from one "signature" to the next as the LFSR moves through its successive states. For a 16-bit LFSR, for example, there are 2^16 different possible signatures. Normally, however, the LFSR passes through only a small percentage of this maximum number of signatures during a given test. Even assuming that the ATG and the LFSR hardware itself are functioning properly, the signature analysis method of circuit testing has two basic problems: (1) It is difficult, if not impossible, to determine when (during which test cycle) the error appeared in the CUT from the final LFSR signature at the completion of the test sequence (assuming that the final signature is incorrect). (2) It is possible for the LFSR to land in the correct state at the conclusion of a test sequence, indicating the correct signature, even though the CUT is faulty and produces multiple errors. To understand the latter problem, it is necessary to analyze the error detection capabilities of the LFSR using probabilistic mathematics. It will be assumed, throughout this analysis, that all output symbols produced by the CUT are equally likely and there is no correlation between them. It will be assumed also (1) that the LFSR is designed to produce a maximal length random number sequence if continually presented with null symbols; (2) that the LFSR cannot get into a particular state from two different states upon receipt of the same input symbol and, similarly, (3) that the LFSR can only make a transition from one specific state to another upon receipt of one specific symbol. Consider a test system with an ATG, CUT and LFSR connected together, which passes through N test cycles. 
The CUT therefore produces a string of N output symbols during the testing procedure which could be a correct sequence a or, in the alternative, an i-symbol error sequence, b. b is thus a sequence of N symbols which differs from the sequence a in exactly i positions. It is assumed, in this regard, that all i-symbol errors (for any particular value of i) are equally likely. During the test sequence the LFSR starts in some initial state, S_0, and compresses the string a or b of N symbols. In compressing these symbols, the LFSR will pass through N+1 states in total (including the initial and final states). However, since it is possible that states may be revisited, the number of distinct states may be less than this. We let A_i denote the average number of distinct states visited when a string of i symbols is compressed; we let M denote the total number of LFSR states (e.g., 2^16 for a 16-bit LFSR). Now, therefore:

A_0 = 1 (i.e., the initial state)

A_{i+1} = A_i + P,

where P is the probability that the (i+1)-st symbol causes a transition to a state which has not previously been visited. Since all symbols are equally likely, it follows that all next states are equally likely, so that P = (M - A_i)/M and

A_{i+1} = 1 + α·A_i, where α = (M - 1)/M.

This geometric series can be summed to yield:

A_i = (1 - α^(i+1))/(1 - α)

It follows that the fraction of states visited (on the average) during the compression of i symbols can be written as A_i/M, a probability which we define as β. FIG. 1 shows the results of this analysis for the case of an 8-bit LFSR as compared to actual observation. This graph sets forth the average number of different states visited by an LFSR (β) in dependence upon the length of the LFSR sequence. For an 8-bit LFSR with 256 different states, there is a remote possibility that the LFSR will visit each state once during a 256-state sequence. 
However, on average, the LFSR will visit only about 60 percent of these possible states during a 256-state sequence. FIG. 1 thus shows the upper bound 2 for the number of different states which may be visited by the LFSR. The calculated average number of different states is indicated by the dash line 4, whereas actual observations are indicated by the solid lines 6 and 8. It can be seen that there is very good agreement between analysis and observation. In considering the error coverage of the LFSR, it is necessary to define some additional notation:

a_i = the i-th symbol in a; and, likewise,

b_i = the i-th symbol in b;

S_i(a) = the LFSR state after processing the first i symbols of a;

S_F(a) = the LFSR state after a has been completely compressed;

P_i = the probability that the LFSR detects an i-symbol error (i.e., an erroneous sequence yields an incorrect signature); and

R_i = the probability that the LFSR does not detect an i-symbol error (i.e., an erroneous sequence yields the correct signature).

By definition, P_i = 1 - R_i. Furthermore, we wish to determine P_i and R_i for i ≥ 1.
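Both points can be made concrete with a short Python sketch (an illustration only, not the patent's circuit). A toy serial-input signature register shows that two different symbol sequences can compress to the same signature, which is the aliasing problem (2) above, and the closed form for A_i can be checked against the recurrence, reproducing the roughly 60 percent state coverage quoted for an 8-bit LFSR over 256 symbols. The register width, feedback polynomial, and sequence length are arbitrary choices for the demonstration.

```python
from itertools import product

def compress(bits, width=4, poly=0b0011):
    """Toy serial-input signature register: shift each input bit in,
    XOR-ing in the feedback polynomial when the MSB falls out."""
    state, mask = 0, (1 << width) - 1
    for b in bits:
        msb = (state >> (width - 1)) & 1
        state = ((state << 1) & mask) | b
        if msb:
            state ^= poly
    return state

# Aliasing: there are 2**5 = 32 possible 5-bit sequences but only
# 2**4 = 16 signatures, so by pigeonhole two sequences must collide.
seen, collision = {}, None
for seq in product([0, 1], repeat=5):
    sig = compress(seq)
    if sig in seen:
        collision = (seen[sig], seq)
        break
    seen[sig] = seq
print("colliding sequences:", collision)

# State coverage: A_0 = 1, A_{i+1} = A_i + (M - A_i)/M, with closed
# form A_i = (1 - alpha**(i+1)) / (1 - alpha), alpha = (M - 1)/M.
M = 256                   # 8-bit LFSR
alpha = (M - 1) / M
A = 1.0                   # A_0
for _ in range(255):      # iterate the recurrence up to A_255
    A += (M - A) / M
closed = (1 - alpha ** 256) / (1 - alpha)
beta = closed / M
print(f"A_255 = {closed:.1f}, beta = {beta:.3f}")  # beta is about 0.63
```

The computed fraction beta ≈ 0.63 agrees with the "about 60 percent" figure cited for FIG. 1.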
2024-07-20T01:27:17.568517
https://example.com/article/9354
RTI activist asks for rainfall data, Telangana govt demands Rs 20 lakh to give info The fee of Rs 20 lakh demanded by the state government includes a whopping Rs 3 lakh as GST. 27-year-old Rajesh Serupally, an independent journalist and RTI activist based in Nizamabad district, Telangana, received a jolt after the Telangana State Development Planning Society (TSDPS) asked him to pay a whopping fee of Rs 20,30,960, including Goods and Services Tax (GST) of Rs 3,09,960, for disclosing rainfall data under the Right to Information Act. Nizamabad has a total of 41 Automatic Weather Stations (AWS). An amount of Rs 3,500 was charged for data from each AWS, totalling Rs 17,22,000, plus an additional Rs 3,09,960 towards GST. Lately, Rajesh has been working on a story relating to deficit rainfall in Nizamabad and its impact on agriculture and farmers in the region. To collect the rainfall data of the last one year (from June 2018 to May 2019), he initially approached the Nizamabad Chief Planning Officer (CPO). However, as the CPO failed to provide the data, Rajesh was forced to file an RTI query. The query was then directed to the Telangana State Development Planning Society, which monitors the weather and provides all the rainfall data. However, on July 30, Rajesh got a reply in which the department asked him to shell out Rs 20.30 lakh to avail the information. This response left Rajesh flummoxed. “I was completely shocked!” Rajesh told TNM. He added, “What was more bizarre was that the department even included a GST charge of Rs 3 lakh. I honestly didn’t know how to react to it.” “I have filed many RTIs earlier, and nobody has ever charged GST for an RTI query,” Rajesh said. Prominent city-based RTI activist Vijay Gopal said, “The government never charges GST on any RTI query. The amount is nominal. I think that the TSDPS department is not well informed about RTI. 
It is ridiculous.” Rajesh added, “I don’t think anyone has filed an RTI query with the TSDPS department.” Agreeing with this, TSDPS director in-charge Sudershan Reddy said, “Nobody had earlier asked us for rainfall data from Automatic Weather Stations through RTI.” Telangana has 1,044 AWS. “Since the past one year, private insurance companies have been asking us for rainfall data, and we decided to monetise it. We made a policy of charging Rs 3,500 for data from each weather station. Based on that, he (Rajesh) was also asked to pay Rs 20 lakh.” According to Sudershan Reddy, private companies have been asking for rainfall data to cross-check the claims of farmers who are beneficiaries of the Pradhan Mantri Fasal Bima Yojana. “How would we know that he is not an individual associated with the insurance companies, who is seeking this data for just Rs 10 on their behalf?” Sudershan Reddy asked. Rajesh, however, opines that this data should be made public for the benefit of farmers and agriculture activists. “It is government data. They shouldn’t monetise it. Rainfall data is a great source for activists to find out if a district is suffering from drought. It would be useful for farmers to claim weather-based crop insurance,” he said. Interestingly, the ruling TRS party had recently supported the BJP in making an amendment to the RTI Act, 2005, in the Rajya Sabha. Activists claim the RTI Amendment Bill has diluted the Act itself, and also jeopardises the sovereignty of the states.
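As a quick arithmetic check of the figures quoted above (treating the per-month application of the Rs 3,500 charge as an assumption, since the article only says the requested data covered June 2018 to May 2019):

```python
stations = 41           # AWS in Nizamabad
fee_per_station = 3500  # Rs per station (assumed: charged per month)
months = 12             # June 2018 through May 2019

base = stations * fee_per_station * months
gst = int(base * 0.18)  # GST at the standard 18% rate
total = base + gst

print(base)   # 1722000 -> Rs 17,22,000, matching the article
print(gst)    # 309960  -> Rs 3,09,960, matching the article
print(total)  # 2031960 -> Rs 20,31,960
```

The Rs 17,22,000 base and Rs 3,09,960 GST figures are mutually consistent at an 18% GST rate, and the base only works out if the charge is applied per station per month; note that the exact sum, Rs 20,31,960, differs by Rs 1,000 from the Rs 20,30,960 total quoted in the article, presumably a rounding difference or typo in the report.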
2024-01-06T01:27:17.568517
https://example.com/article/6342
Q: Issue with interface java/spring

I wrote this code for a web application:

@Controller
public class StudentsController {
    private ArrayList<student> students = new ArrayList<>();
    private IStudentsRepository studentsRepository;

    @GetMapping("/")
    public String index(Model model) {
        model.addAttribute("students", students);
        return "index";
    }

    @GetMapping("/create")
    public String create(@ModelAttribute student student) {
        studentsRepository.create(student);
        return "create";
    }
    .....
}

and I get this error:

java.lang.NullPointerException: null
    at com.example.demo.Controller.StudentsController.create(StudentsController.java:27) ~[classes/:na]

When I try to access the create page, it doesn't work and just gives me that error. What is the problem? Also, when I declare the IStudentsRepository interface field, IntelliJ tells me that studentsRepository is never assigned, even though I use it in the create method.

A: You are not injecting IStudentsRepository. Use @Autowired. You can use field injection like this:

@Autowired
private IStudentsRepository studentsRepository;

Or, the best way would be to use constructor injection:

@Controller
public class StudentsController {
    private ArrayList<student> students = new ArrayList<>();
    private final IStudentsRepository studentsRepository;

    @Autowired
    public StudentsController(final IStudentsRepository studentsRepository) {
        this.studentsRepository = studentsRepository;
    }
}
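The constructor-injection idea can be sketched in a language-agnostic way (Python here, with hypothetical class names; in Spring the container performs the wiring automatically, but the principle is the same): a dependency supplied through the constructor can never be an unassigned null when a handler method runs.

```python
class StudentsRepository:
    """Stand-in for the Spring-managed IStudentsRepository bean."""
    def __init__(self):
        self.saved = []

    def create(self, student):
        self.saved.append(student)

class StudentsController:
    # Constructor injection: the repository must be supplied when the
    # controller is built, so create() cannot hit a missing dependency.
    def __init__(self, students_repository):
        self.students_repository = students_repository

    def create(self, student):
        self.students_repository.create(student)
        return "create"

# The "container" wires the dependency in before any request arrives.
repo = StudentsRepository()
controller = StudentsController(repo)
view = controller.create("Ada")
print(view, repo.saved)  # create ['Ada']
```

With field injection the object can exist in a half-initialized state; with constructor injection, constructing the controller without its repository is impossible, which is why it is generally preferred.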
2024-02-05T01:27:17.568517
https://example.com/article/4977
Opening Statement"My hats are off to these guys, I know that's kind of a corny cliché but they have worked so hard and they have done such an excellent job in school and to have everything going on in their life and the wins and the preparation, and to turn right back around and prepare like they did in the short period of time and to be as locked in as they were and to take a team where they really, really wanted to understand what they wanted to do, what they did, their calls, things of that nature. Our guys were completely locked in; they played with a relentless spirit, their energy was tremendous. They were active. The communication was excellent. We wanted to play a 40-minute game, especially on the defensive end, and I think these guys did it. They came back in the second half and although the team scored more points obviously than they did in the first, our guys continued to play at a really high level. So, proud of how they're playing and proud of how they're working, they're maturing. I thought our coaching staff, especially Tim Buckley who was the lead on this, did a fantastic job in a short period of time of getting them ready, making sure that we didn't inundate them with information but at the same time that they absorbed enough to know what they had to do and we felt like we were playing a very athletic team, a team that was excellent on the glass. They're trying to kind of find their way a little bit. I've known Kevin (Nickelberry) for a long time, I think he's an excellent coach. As I said to him after the game, our guys were on it. They were ready to play, excited to play and wanted to join some elite company. There's no way around that when there's only a few teams in this illustrious history of this program that have started 11-0, they wanted to join that and I'm proud of them." On working on things with a big lead"Second half we did different things. 
It's a long game, we weren't just going to go back and sit at half court, man-to-man, or sit in zone because that's when you get complacent so we wanted them to be aggressive. We wanted to be in different spots, we did a lot of different things in the second half but they were all things that we do." On maintaining focus "I was far more concerned about this game than I was Notre Dame after Monday in the sense of being locked in. When you're concerned about a team that is coming off of a great win like Kentucky and has finals after Monday's practice I had no doubt that they would be ready to play Notre Dame but this was the one that was concerning maybe but they locked in. They erased those coaching fears that you have and it's only natural, but they erased those right away and our coaching staff and players combined did a great job of being ready to attack today." Howard University Head Coach Kevin Nickelberry It was 12-8, then 20-10; what happened in the first half? "Tom must have gave his team one heck of a speech. They were coming off a good Kentucky game, and a good Notre Dame game. We were hoping that they would be underestimating us. They came out as hungry as I have ever seen them. They were getting after us and spreading us out. Victor (Oladipo) was running the lanes so hard, we had to worry about him in transition. When we ran back to cover him in transition, they spread us out and were nailing threes. It was a barrage of threes. If I wasn't coaching against them, I would have been impressed. This team has improved tremendously since the last time we played them. It was unbelievable to see how hungry this team came out from the beginning. This is a Tom Crean team. They are gritty, they run, and they spread you out." What did you tell your team in the locker room? "I told them this was a very good team. We have played against some good teams this year. We've played Georgetown, Oregon State; I mean we played some good teams. 
But this was probably the best team we have played. This team runs and shares the ball very, very well. This is something we can learn from. I told my team we can learn a lot from a team that, two years ago, many people weren't thinking much of. This is the same team that my juniors and seniors played two years ago. Obviously they have added a few players, and (Cody) Zeller has been a great addition. Hopefully in a couple years, we can say that we have improved as much as the Indiana basketball team has."
T.J. Miller Arrested for Alleged Battery of Cab Driver The comedian was taken into custody by LAPD early Friday morning and released without having to post bail. Silicon Valley and Office Christmas Party star T.J. Miller was arrested early Friday morning in Los Angeles for battery on a cab driver, police said. The arrest took place at 1 a.m. Friday after an altercation with a cab service driver resulted in Miller allegedly assaulting the employee over a political argument involving Donald Trump, LAPD officer Jenny Houser told The Hollywood Reporter. The incident was recorded as a private person arrest, commonly known as a citizen's arrest, the official said. Miller was taken into custody by LAPD officers and released without having to post bail later that same morning, officer Houser said. LAPD did not confirm which car share company the driver was employed by. A request for comment from Miller's legal representatives was not immediately answered. Miller is set to keep his hosting gig at Sunday's Critics' Choice Awards, according to the Los Angeles Times. On Sunday, Miller tweeted that he might address the incident during the awards show airing tonight on A&E.
In recent years, driven by consumer markets including smartphones, smart TVs and tablets, flash memories have been developing rapidly. Nevertheless, due to complex mask patterns, exorbitant manufacturing costs, increasingly large word line leakage and crosstalk between cells, and the increasingly small number of electrons in floating gates, the size reduction capacity of flash memories is greatly limited. It is estimated that further size reduction will be difficult to continue once the size reaches 1z nm. Thus, emerging non-volatile memories such as CBRAM, MRAM, PRAM and RRAM are gaining increasing attention, wherein resistive random access memory (RRAM), by virtue of its high speed, large capacity, low power consumption, low cost and high reliability, is regarded as the most promising candidate to replace flash memories. Nevertheless, due to the effects of process, voltage and temperature (PVT), as shown in FIG. 1, there is a serious consistency problem with the resistance of the RRAM resistive units; that is, there are deviations in resistance between wafers, between chips on the same wafer, and between different regions on the same chip. In addition, the resistance in both the high resistance state and the low resistance state presents a normal distribution over a certain range. Therefore, it is difficult to provide a current-mode read circuit with a relatively ideal reference current. In addition, it is not feasible to use a fixed reference current, because it cannot track the deviations brought about by region and temperature in the high resistance state and low resistance state of the resistive units. At present, it is common to use a shared reference cell to provide a reference current, as shown in FIG. 2. This allows tracking of the change in resistance as the region and temperature change.
Nevertheless, there is also a consistency problem with the resistance of the reference unit itself, and the reference currents generated by the reference unit also present a normal distribution. Therefore, it is necessary to find a suitable reference array structure to narrow the reference current distribution and improve the read margin, thereby increasing the read speed and read success rate.
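The benefit of a reference array over a single reference cell can be illustrated statistically: averaging the currents of N independent reference cells narrows the spread of the combined reference current by roughly a factor of √N. A minimal sketch of this effect (the numbers below are illustrative, not measured device data):

```python
import random
import statistics

def reference_current(n_cells, mean_ua=10.0, sigma_ua=2.0, rng=None):
    """Model each reference cell's current as normally distributed
    (PVT variation) and average n_cells of them in parallel."""
    rng = rng or random.Random(0)
    return sum(rng.gauss(mean_ua, sigma_ua) for _ in range(n_cells)) / n_cells

rng = random.Random(42)
single = [reference_current(1, rng=rng) for _ in range(2000)]
averaged = [reference_current(16, rng=rng) for _ in range(2000)]
# averaging 16 cells should shrink the spread by about 4x,
# widening the read margin between high and low resistance states
print(round(statistics.stdev(single), 2), round(statistics.stdev(averaged), 2))
```

The narrower distribution of the averaged reference is exactly what a reference array structure exploits to improve the read margin.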
Arnaud Djoum Arnaud Sutchuin-Djoum (born 2 May 1989), also known as Arnaud Djoum or Arnaud Sutchuin, is a Cameroonian footballer who plays for Al-Raed. He has previously played for Belgian clubs Brussels and Anderlecht, in the Netherlands for Roda JC, in Turkey for Akhisar Belediyespor, in Poland with Lech Poznań and in Scotland at Heart of Midlothian. Career Club Djoum started his career at Brussels in the 2006–07 season. He played 12 matches and scored one goal for the Belgian club. He moved to Anderlecht, but failed to make a breakthrough. In January 2009, he moved to Roda JC Kerkrade in the Netherlands, appearing 119 times over a span of 5 years. After a spell in Turkey with Akhisar Belediyespor, Djoum signed for Polish club Lech Poznań in early 2015. Djoum then joined Heart of Midlothian in September 2015. He scored his first goal for the club in a narrow 2–1 loss to rivals Celtic in October 2015. After settling in, he quickly became a star man in Robbie Neilson's team. He left Hearts to join Saudi club Al-Ra'ed in July 2019, upon the expiry of his contract. International Djoum represented various Belgium youth teams, before making his senior debut for Cameroon in September 2016, in a 2–0 win over The Gambia in an Africa Cup of Nations qualifier. He played the whole 90 minutes as Cameroon defeated Egypt 2–1 on 5 February 2017, to win the 2017 Africa Cup of Nations. Career statistics Honours Lech Poznań Ekstraklasa: 2014–15 Cameroon Africa Cup of Nations: 2017 References External links Voetbal International profile Profile at elfvoetbal Category:1989 births Category:Living people Category:Sportspeople from Yaoundé Category:Cameroonian footballers Category:Cameroon international footballers Category:Belgian footballers Category:Belgium youth international footballers Category:Belgian people of Cameroonian descent Category:RWDM Brussels FC players Category:R.S.C. 
Anderlecht players Category:Roda JC Kerkrade players Category:Lech Poznań players Category:Al-Raed FC players Category:Belgian Second Division/Belgian First Division B players Category:Eredivisie players Category:Ekstraklasa players Category:Süper Lig players Category:Saudi Professional League players Category:Belgian expatriate footballers Category:Expatriate footballers in the Netherlands Category:Expatriate footballers in Poland Category:Expatriate footballers in Saudi Arabia Category:Belgian expatriate sportspeople in the Netherlands Category:Belgian expatriate sportspeople in Saudi Arabia Category:Cameroonian expatriate sportspeople in Saudi Arabia Category:Heart of Midlothian F.C. players Category:Scottish Professional Football League players Category:Association football midfielders Category:2017 Africa Cup of Nations players Category:2017 FIFA Confederations Cup players Category:2019 Africa Cup of Nations players Category:Black Belgian sportspeople
HEADWATERS INCORPORATED INVESTMENT APPEALS Headwaters improves lives through innovative advancements in construction materials. We are leading participants in the Building Products and Construction Materials business segments. Our building products include: architectural stone, resin-based exterior siding accessories, specialty roofing products, trimboard, windows, concrete block and brick, as well as other building products. We’re also the largest manager and marketer of coal combustion products in the construction materials industry, with a primary focus on recycling high quality fly ash as a replacement for portland cement. Premier Leadership Positions in Niche Markets It is estimated that Headwaters meets 75% of national demand for decorative shutters, gable units, and mounting blocks. The Company’s Stone Products segment controls approximately 30% of its niche market. Headwaters’ Concrete Block business is the largest manufacturer of concrete block products in the Texas block market today. Headwaters Resources / Construction Materials segment controls almost half of all fly ash sales in the U.S. today. Quality Products and Strong Brands Headwaters is the vendor of choice for both wholesalers and the leading U.S. home improvement retailers, including Lowe’s and Home Depot. Some of our brands include: Atlantic Premium Shutters™, Mid-America Siding Components™, Inspire Roofing Products™, Tapco Integrated Tool Systems™, Eldorado Stone®, Kleer Lumber®, and many more. Well Positioned in Construction Markets The Company remains committed to its mission of providing innovative building materials to its customers and to carefully managing its business and capital structure to deliver profitability to its shareholders. Headwaters continues to see improvements in its end markets, and is well-positioned to benefit as it continues to improve and grow its ongoing operations. 
National Distribution Network Headwaters’ Building Products segment has more than 1,000 wholesale distributors across the country. Our Construction Materials segment is the only fly ash company in the U.S. with a nationwide infrastructure that includes 25 terminals, roughly 825 railcars, and approximately 100 trucks. Improved Balance Sheet Headwaters has maintained exceptional operating leverage with contribution margins that are some of the highest in the industry. The Company has also continued to successfully de-leverage its debt. At the end of the June 2011 quarter, the Company's Net Debt to Adjusted EBITDA ratio was at an all-time high of 6.7x. At the end of Q3 FY 2016, the Net Debt to Adjusted EBITDA ratio stood at 2.6x. Long-Term Relationships and Exclusive Contracts Headwaters’ Construction Materials business has over 100 long-term, exclusive contracts with coal-fired utilities in the U.S. today.
28 May 2012 I just finished a bit of gardening and pruning, and had some really good post ideas. But they have flown away--and I must trust the Lord to remind me of them if they are truly important. --- Our morning sermons at Brainerd Hills Presbyterian Church are getting better and better. We're in Galatians, in the Fruit of the Spirit passage, and the sermons focus on a fruit, a weed, and an artificial fruit. Fruit: Love Weed: Hate Artificial Fruit: Tolerance Fruit: Patience Weed: Impatience ...and that is as far as has been preached. I can't say how much this sermon series has encouraged me. The slow pace through the list of fruit has given me a chance to spend a week thinking over, meditating about, praying for just one character trait. It's been very calming and very growthful. --- I got to see The Hunger Games on Saturday. I enjoyed the movie so much but may have to expound in a whole nother post. --- My reading has become more purposeful. I recently finished Unorthodox: The Rejection of My Hasidic Roots and am working on Stroke of Genius and Freakonomics. I'm struggling a little in Stroke of Genius because of the author's materialistic (believing in only the material world) conclusions. And I'm struggling with Freakonomics because of a correlation the authors made between the fall in crime in the late 1990s with the beginning of legal abortions in the mid-1970s. Humph. They may indeed be correlated but it still doesn't justify the holocaust of 4 million kids since 1973. --- Memorial Day lunch is awaiting preparation. Thus, this post is ending. Good day. 25 May 2012 1. Write for 5 minutes flat – no editing, no over thinking, no backtracking 2. Link back here and invite others to join in. 3. And then absolutely, no ifs, ands or buts about it, you need to visit the person who linked up before you & encourage them in their comments. Seriously. That is, like, the rule. And the fun. And the heart of this community. 
--- Our dear pastor, preaching through Galatians, is in the midst of the verse listing the Fruit of the Spirit. He teaches us about one fruit each Sunday, so it's taking a while, but it is a gift to slow down and really contemplate just one fruit, instead of racing through such a familiar passage. Last Sunday was patience day. The week before was peace day. It turns out that the week following the sermon is packed with chances--opportunities--to learn to exercise that gift. I want to be more patient, but it's hard to be presented with 'opportunity' after 'opportunity' to be patient. God keeps presenting me with circumstances that try my patience. Son J1 is developing his own preferences and desires. He doesn't always feel compliant with naptime, or snacktime, or travel time. So he pitches a fit, or whines, or fake-cries. God stirs in my heart to be patient. Do I take the opportunity? Or do I rely on myself, pass patience by, and make friends with anger, frustration, and yelling? 22 May 2012 I'm still surprised at how much I enjoy being with J1. He's playful, clever, and loving. He initiates games--sometimes tag, sometimes hide and seek, and sometimes a variation that I know because we spend so much time together. He likes to cuddle and sit close to me--but not if I make him! He likes to play games--but usually of his own choice. And when he's hurt or sad, he comes to me. He sits on my lap to cry. For me, it begs the question. When I'm hungry, tired, bored, hurting, thirsty--where do I go? Do I invent something to satisfy myself or do I run to my Savior? He is the Living Water! He is the vine, the door, the way. Unless I run to him, I will never be satisfied. "He only is my rock and my salvation, my fortress; I shall not be shaken." --Psalm 62:6 21 May 2012 I just found the Blogger app, which means I can hop on my iPod and type up a quick post. Yay! Hopefully now I can post more often; it's hard to post from the ordinary computer when I might only have a few minutes. 
Thank you, app people-- this will hopefully bring back some writing creativity. About Me I am a thirty-something daughter, sister, wife, Covenant College graduate, and friend...and now a mom(!). I like Calvinism, chocolate, carbohydrates, and moonlit walks on the beach. I am bilingual (es runaju Latviski). I'm an ENFP married to an ISXJ. Oh, and I am a transplanted Coloradan living in the South.
Introducing Dexter, the Automatic Indexer for Postgres Your database knows which queries are running. It also has a pretty good idea of which indexes are best for a given query. And since indexes don’t change the results of a query, they’re really just a performance optimization. So why do we always need a human to choose them? Introducing Dexter. Dexter indexes your database for you. You can still do it yourself, but Dexter will do a pretty good job. Dexter works in two phases: Collect queries Generate indexes We’ll walk through each of them. Phase 1: Collect Dexter reads queries from the Postgres log and parses out the query text and duration. It uses fingerprinting to group queries. Queries with the same parse tree but different values are grouped together; for instance, SELECT * FROM ratings WHERE user_id = 3 and SELECT * FROM ratings WHERE user_id = 10 have the same fingerprint. The data is aggregated to get the total execution time by fingerprint. You can get similar information from the pg_stat_statements view, except queries in the view are normalized. This means you get: SELECT * FROM ratings WHERE user_id = ?; instead of SELECT * FROM ratings WHERE user_id = 3; However, we need the actual values to determine costs in the next step. To prevent over-indexing, you can set a threshold for the total execution time before a query is considered for indexing. Phase 2: Generate To generate indexes, Dexter creates hypothetical indexes to try to speed up the slow queries we’ve just collected. Hypothetical indexes show how a query’s execution plan would change if an actual index existed. They take virtually no time to create, don’t require any disk space, and are only visible to the current session. You can read more about hypothetical indexes here. 
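The collect phase can be sketched in Python. The regex fingerprint below is a crude stand-in for pg_query's parse-tree fingerprinting, and all names here are illustrative rather than Dexter's actual internals:

```python
import re
from collections import defaultdict

def fingerprint(query):
    """Crude stand-in for pg_query fingerprinting: strip literal
    values so queries differing only in constants group together."""
    q = re.sub(r"'[^']*'", "?", query)   # string literals
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()

def collect(log_entries, min_total_ms=0.0):
    """Aggregate total execution time by fingerprint; keep one sample
    query with its real values for the costing step in phase 2."""
    groups = defaultdict(lambda: {"total_ms": 0.0, "sample": None})
    for query, duration_ms in log_entries:
        group = groups[fingerprint(query)]
        group["total_ms"] += duration_ms
        group["sample"] = query  # real values, not normalized
    # drop fingerprints below the threshold to avoid over-indexing
    return {fp: g for fp, g in groups.items() if g["total_ms"] >= min_total_ms}
```

With this sketch, the two ratings queries above collapse into one group whose total time is the sum of both durations, while a sample query with real literal values is retained for cost estimation.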
The main steps Dexter takes are: Filter out queries on system tables and other databases Analyze tables for up-to-date planner statistics if they haven’t been analyzed recently Get the initial cost of queries Create hypothetical indexes on columns that aren’t already indexed Get costs again and see if any hypothetical indexes were used While fairly straightforward, this approach is extremely powerful, as it uses the Postgres query planner to figure out the best index(es) for a query. Hypothetical indexes that were used AND significantly reduced cost are selected to become real indexes. To be safe, indexes are only logged by default. This allows you to use Dexter for index suggestions if you want to manually verify them first. When you let Dexter create indexes, they’re created concurrently to limit the impact on database performance. Trade-offs and Limitations The big advantage of indexes is faster data retrieval. On the flip side, indexes add overhead to write operations, like INSERT, UPDATE, and DELETE, as indexes must be updated as well. Indexes also take up disk space. Because of this, you may not want to index write-heavy tables. Dexter does not currently try to identify these tables automatically, but you can pass them in by hand. As for other limitations, Dexter does not try to create multicolumn indexes (edit: this is no longer the case). Dexter also assumes the search_path for queries is the same as that of the user running Dexter. You’ll still need to create unique constraints on your own. Dexter also requires the HypoPG extension, which isn’t available on some hosted providers like Heroku and Amazon RDS. Thanks This software wouldn’t be possible without HypoPG, which allows you to create hypothetical indexes, and pg_query, which allows you to parse and fingerprint queries. A big thanks to Dalibo and Lukas Fittl respectively.
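The selection rule at the end of the pipeline — keep a hypothetical index only if the planner actually used it and the estimated cost dropped substantially — can be sketched like this (the 50% threshold and all names are assumed placeholders, not Dexter's exact heuristic):

```python
def choose_indexes(initial_cost, candidates, max_cost_ratio=0.5):
    """candidates maps an index name to (was_used, new_cost), as read
    from planner output with the hypothetical index in place."""
    chosen = []
    for name, (was_used, new_cost) in candidates.items():
        # require both conditions: the planner chose the index AND
        # the estimated cost fell below the ratio threshold
        if was_used and new_cost <= initial_cost * max_cost_ratio:
            chosen.append(name)
    return chosen
```

An index the planner ignores, or one that only shaves a few percent off the cost, is filtered out; by default the survivors would then merely be logged rather than created.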
Semiconductor devices are required to have higher performance and lower power consumption and to be produced at lower cost. To meet these requirements, many techniques such as System in Package (SiP) have been developed (refer to, for example, “Nikkei Electronics” pp. 81-92 in Oct. 10, 2005 issue, published by Nikkei Business Publications, Inc.) In this non-patent document, techniques for interconnecting plural chips which are sealed in a single package are described. One example disclosed in this document is such that, on one surface of an interposer for, for example, a memory card, four chips (NAND type flash memories) are stacked and electrical connections between each chip and the interposer are provided by wires (wire bonding). Another example disclosed in the same document is such that, on one surface of an interposer for a memory card, eight chips (NAND type flash memories) are stacked and electrical connections between chips, and between a chip and the interposer, are provided by Si through-hole electrodes. In the same document, further, a method for manufacturing an image pickup element is disclosed. Disclosed in this document are section views by process step to illustrate a method for manufacturing a conventional image pickup element by utilizing wire bonding and section views by process step to illustrate a method for manufacturing an image pickup element by utilizing Si through-hole electrodes. In the former section views by process step, the section views are described in which, after chips with their sensor surfaces facing up are joined to the top surface of a supporting substrate, the chips and the supporting substrate are connected by wires and then a protective glass is joined to the sensor surfaces. In the latter method for manufacturing an image pickup element, the following process steps are disclosed. First, a supporting material for protecting sensors is attached to a wafer. Then, the wafer is thinned. 
Next, Si through-hole electrodes are formed and a rewiring/backside protection layer is formed on the exposed surface of the wafer. Then, the supporting material is removed and a protective glass is attached to the sensor surface of the wafer. Next, external terminals are installed. Finally, the wafer with the protective glass is cut into individual chips. In this way, the image pickup element is manufactured.
Pulse pressure in normotensives: a marker of cardiovascular disease. The purpose of the present study was to evaluate the relation of the systemic arterial pulse pressure and other parameters derived from the 24-h arterial blood pressure (BP) monitoring to the severity of coronary artery disease, carotid lesions, and left ventricular (LV) mass index in patients without arterial hypertension. One hundred ten patients with known coronary artery disease underwent coronary arteriography, 24-h arterial BP monitoring, and ultrasound imaging of the carotid arteries and the myocardium. Measurements of 24-h arterial BP monitoring (systolic, diastolic, and average BP, pulse pressure, abnormal values of systolic and diastolic BP, and heart rate), the severity of coronary heart disease (Gensini score), intima-media thickness (IMT) of the common carotid artery and LV mass index were determined in all patients. By univariate analysis, only 24-h pulse pressure was significantly related to the severity of coronary artery disease (P < .01), carotid IMT (P < .01), and LV mass index (P < .01). In a multivariate analysis, 24-h pulse pressure was also the best predictor of the severity of coronary lesions (P = .009), carotid IMT (P = .003), and LV mass index (P = .009). Gensini score was related (P < .01) to LV mass index and not to carotid IMT. In conclusion, systemic arterial pulse pressure derived from 24-h arterial BP monitoring is related to coronary artery disease, carotid IMT, and LV mass index independently of age or any other derivative of 24-h arterial BP monitoring, indicating that this parameter could be a marker of global cardiovascular risk.
Q: How to call angular js function controller from jquery in page How can I call an AngularJS function or controller from jQuery? I am new to AngularJS. Here is my function; I have to pass var1 and var2 from jQuery. I have searched many articles but I can't understand them. Please help me <script type="text/javascript"> function customersController($scope, $http) { $http.get("https://tic.com/testingservice/HttpService.svc/operator/var1/var2") .success(function (response) { $scope.names = response; }); } A: Generally speaking, there can be cases where jQuery is needed in an AngularJS app, e.g. when a jQuery plugin is used that isn't available in an Angular version. Every controller has $scope/this, and you can access the scope of an angular element if you have an ID tag attached to the same DOM element as the ng-controller: the DOM: <div id="myctrl" ng-controller="MyCtrl"></div> your controller: function MyCtrl($scope) { $scope.anyFunc = function(var1, var2) { // var1, var2 passed from jquery call will be available here //do some stuff here } } in jQuery you do this and it will access that controller and call that function : angular.element('#myctrl').scope().anyFunc('value1','value2'); Running Sample angular.module('app', []) .controller('MyCtrl', function ($scope) { $scope.anyFunc = function (var1, var2) { alert("I am var1 = " + var1); alert("I am var2 = " + var2); }; }); $(function(){ $("#hit").on("click",function(){ angular.element($("#myctrl")).scope().anyFunc('VAR1 VALUE', 'VAR2 VALUE'); }); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div id="myctrl" ng-app='app' ng-controller="MyCtrl"> Call via jQuery <button id="hit">Call</button> </div> Happy Helping! 
A: Both the above answers helped me a lot; finally I got the answer. $(function(){ $("#hit").on("click",function(){ angular.element($("#myctrl")).scope().anyFunc('VAR1 VALUE', 'VAR2 VALUE'); }); }); function customersController($scope, $http) { $scope.anyFunc = function (var1, var2) { alert("I am var1 = " + var1); alert("I am var2 = " + var2); $http.get("https://tic.com/testingservice/HttpService.svc/operator/var1/var2") .success(function (response) { $scope.names = response; }); }; } <div id="myctrl" ng-app='' ng-controller="customersController"> Call via jQuery <button id="hit">Call</button> </div> A: Try like this angular.module('app', []) .controller('jqueryCtrl', function ($scope) { $scope.callFromJquery = function (data) { alert(data); }; }); $(function () { $("#press").on("click", function () { angular.element($("#jquery")).scope().callFromJquery('I Called You ! Angular !'); }); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div id="jquery" ng-app='app' ng-controller="jqueryCtrl"> Just Press ME <button id="press">Press ME</button> </div>
Our Furry Family Danny Danny was born on the 14th April 1995; he is brother to Jack and the Top Cat in the house. Jack and Danny were the first cats that I adopted; they came from a good family who had to give them up because they were emigrating and did not want them to have to cope with quarantine. Without doubt Danny is the leader of the pack; he tolerates the others in his space and will gently (or not so gently) remind them who the boss is! He can be chatty when demanding attention, and head-butts your hand to remind it that it should be stroking him.
Q: How can I make Matlab's editor smart indent always on while writing code? While working with Matlab I'm using very often the combination of Ctrl+A (select all) followed by Ctrl+I (smart indent). Can I make Matlab's editor smart indent option working automatically, while I type code? A: File > Preferences > Editor/Debugger > Language > Apply smart indenting while typing
Q: How to open ASP page in a new tab/window? I know that the HTML anchor tag has an attribute called target, which will open a new page in a new window/tab if we set target='_blank'. It should work well with static HTML pages. However, when submitting a form by clicking the submit button on an ASP page, I find that the result page normally will be loaded into the same tab/window. So my question is: are we able to load the page into a new tab/window when the user clicks the submit button? Thanks. EDIT: Looks like <form target='_blank'> is the answer, although it's said to be deprecated according to the w3schools website. :) A: Just like a link: <form target='_blank'> No need to do anything on the ASP side of things. Of course, as with all pop-ups, this is subject to browser settings. A: Form's target should work. <form target="_blank" ...></form> From here (have you searched?)
As the story has it, one day I headed to the opposite side of the globe – the Flipside. I arrived in Asia February 16th, 2005 and thought I’d do a year, then leave. Years later and I’m still here. I went from being some random foreign girl to taking on labels I never imagined – university professor, film extra, professional boxer, reality TV star, CEO of my own girls-only fitness company (Flipside Fitness), CEO of my own boxing club (Hulk's Club, formerly known as Hulk's Boxing), and now I'm launching my 3rd business -- Empowered Clubhouse. After 11.5yrs in Korea, I then picked up one day and left. I returned to Toronto, Canada with Flipside Fitness on my brain, Hulk's in my heart, boxing in my bag, and my four-legged friend Balboa Button by my side. But then I left again. This time it was for the Philippines. That's where I'm at now, living in the land of the happy people. The struggles are real and the struggles are many but I'm living life on my terms, I'm calling the shots, and I'm doing what I love. Life is an amazing adventure and this is my story of yesterday. Sunday, May 28, 2017 Canada-Bound When I return to the Philippines I'll be living in Makati so Balboa and I posed for one last shot here in Eastwood. I know Balboa will be in good hands when I'm gone, very good hands, but it was kind of sad to pose for this picture with him. I'll miss him so much. Boarded my plane and for the next double digits it was just me and my bag. Worked a lot on my business proposal while on the plane and planning what I need to do in Canada. The plane rides, both of them, were better than the airport experiences. I had a rude encounter with the airport jerk -- a Korean dude who ignored those around him so I resorted to telling him off in Korean. He still didn't listen, hence the nickname I gave him. In Taiwan even my overly expensive fancy headphones couldn't block out the screaming child. Security ended up dealing with that kid and the mom. 
Up in the sky things were a different story. I've always been so fascinated with the lift-off and landing of a flight -- the scenery. It felt like a whole other world up in the sky. So beautiful and so peaceful.
TIMKEN 461/453X Bearing Leave a message TIMKEN 461/453X Bearing Description DPC Bearing Co.Ltd specializes in selling TIMKEN 461/453X bearings. TIMKEN 461/453X bearing related information: Model: 461/453X Bearing; Brand: TIMKEN. If you want to know more about the TIMKEN 461/453X Bearing, you can do online consultation or call the hotline and email us [email protected]; we will serve you in detail. TIMKEN bearings 461 453X Suppliers and Distributors.. Sun Rises Group Limited now brings quality TIMKEN bearings 461 453X at a competitive price and also offers quick delivery to make you 100% satisfied. With rich experience in this field, we have been well-known for quality, value and good service. Expecting to become your long-term. 27881/27820 TIMKEN bearings - FAG bearings|INA bearings. In 1899, TIMKEN group founder Mr. Henry Timken invented a tapered roller bearing applied to the axle field, such as the 27881/27820 TIMKEN, and the company was then established. It has 52 factories and 81 sales offices, 9 technology and engineering centers and 14 distribution and service centers, spread all over the world in 29 countries and regions. Our Deep groove ball bearings Timken 82BIH390 bearings and other bearings are 100% compliant with the international quality standard ISO 9001 and are highly appreciated in various markets around the world. As an experienced bearing manufacturer, you will be able to buy Needle roller bearings Timken 82BIH390 at our lowest price, comprehensive. Original | Timken - 463/453X Bearing's Importers,Inside. TIMKEN 461/453X bearings Company enjoys a high reputation in the bearing industry for the quality of the TIMKEN 461/453X . TIMKEN 462/453X bearings Quality brand ball bearings . TIMKEN 462/453X bearings are available in different varieties, shapes and sizes at unbeatable prices. 
Rolls Bearing Limited accepts bulk orders of bearings and provides Timken W203PPG Bearing - bunnytraining.nl Our Deep groove ball bearings Timken W203PPG bearings and other bearings are 100% compliant with the international quality standard ISO 9001 and are highly appreciated in various markets around the world. TIMKEN HH437549/HH437510 Tapered roller bearings TIMKEN company is the world's leading tapered roller bearings manufacturer. The tapered roller bearings made by TIMKEN always have the highest standards of quality and performance. The TIMKEN HH437549/HH437510 tapered roller bearings consist of a bearings cone and bearings cup which belong in the single row tapered roller bearings class. Timken Part Number 461 - 453-B VIEW CAD DRAWING DOWNLOADS NEW PRODUCT OFFERING Like the TS bearing design, the TSF design consists of two main separable. NTN Bearing - 6936AZZ Best - stockholmcyclechic.com The bearing 6936AZZ manufactured by NTN can be ordered immediately from the bearings distributor TUN Bearing Co.,Ltd. The bearings delivery is done directly from stock, and in the event the bearings are not in stock, we shall inform you immediately regarding the delivery term. For extra details on the 6936AZZ manufactured by NTN you can email us [email protected] FAG | NUP2328E.M1 | Bearings | bearing for sale The axial load capacity of FAG NUP2328E.M1 bearings is limited by their internal design and depends on how they are locked to the shaft. The standard size of the Cylindrical Roller Bearings FAG bearing is 140 - 300 - 102mm. Our FAG NUP2328E.M1 bearing can withstand a large axial load of more than 20% of the basic dynamic load rating. TIMKEN 461/453X Tapered Roller Bearing 42.85x104.775x30. TIMKEN 461/453X Tapered Roller Bearings are two of our main products. If you want to know their price, size and other information, you can contact us.
2024-03-12T01:27:17.568517
https://example.com/article/7468
Thebes tablets The Thebes tablets, with inscriptions in Mycenaean Greek using Linear B, were discovered in Thebes, Greece. They belong to the Late Helladic IIIB context, contemporary with finds at Pylos. A first group of 21 fragments was found in the 1963–64 campaign; a further 19 tablets were found in 1970 and 1972. Using Near Eastern cylinder seals associated with the finds, the editors of the published corpus of the whole archive now date the destruction of the Kadmeion, the Mycenaean palace complex at Thebes, and thus the writing of the tablets, some of which were still damp when they were unintentionally fired, to shortly after 1225 BC. Chadwick identified three recognizable Hellenic divinities, Hera, Hermes and Potnia "Mistress", among the recipients of wool. He made out a case for ko-ma-we-te-ja, also attested at Pylos, as the name of a goddess. Quite early, before the more recent discoveries, Frederick Ahl made the provocative suggestion concerning the "Phoenician" or "palm-leaf" (phoinix) letters: "Cadmus did bring writing to Thebes, but this writing was not the Phoenician alphabet, but Linear B". Discovery A substantial additional portion, some 250 tablets, amounting roughly to 5% of the entire Mycenaean corpus from all sites, was discovered in Pelopidou Street and the "Arsenal" by Vassilis L. Aravantinos, the current archaeological superintendent of Thebes, from 1993 to 1995, in a rescue excavation. In 1996, a few more tablets were identified in a museum among finds from the 1963–1964 dig in Thebes. The number of "tablets" given by the editors is actually misleading. For example, as Tom Palaima and Sarah James have independently demonstrated, the "123 tablets" of the Fq series are actually many fragments of texts that originally made up between 15 and 18 tablets. Publication The French and Italian linguists Louis Godart and Anna Sacconi were charged with the publication of these tablets. 
During the following years, their preliminary glimpses of the contents, which suggested that the new tablets would reveal a dramatically new view of Mycenaean religion, gave rise to accusations that the project was moving too slowly; when the tablets were finally published in 2001, the effect of their overall content was perceived by some reviewers to be rather less than expected. Findings Many of the Thebes tablets can be read as containing information on divinities and religious rites; others mention quantities of various commodities. By the sites mentioned, the boundaries of the region controlled by the Theban palace can be estimated: the Theban palace controlled the island of Euboea and had a harbour in Aulis. The tablets contain a number of important terms previously unattested in Linear B, such as ra-ke-da-mi-ni-jo /Lakedaimnijos/ "a man from Lacedaemonia (Sparta)", or ma-ka /Mā Gā/ "Mother Gaia" (a goddess still revered in Thebes in the 5th century BC, as reported, for example, in Aeschylus' Seven Against Thebes). Also ku-na-ki-si /gunaiksi/ "for women" exhibits the peculiar oblique stem of "woman". Vassilis L. Aravantinos, Louis Godart and Anna Sacconi read the tablets to indicate cult activity dedicated to Demeter, Zeus protector of crops, and to Kore, and they speculate that the roots of the Eleusinian Mysteries can be traced back to Mycenaean Thebes. Thomas G. Palaima, however, has criticized their suggestions as "subject to very dubious interpretations" and "highly suspect on linguistic and exegetical grounds". Other arguments against the identification of cult activity in the texts have been advanced by Sarah James and Yves Duhoux. Palaima attaches importance to one tablet (Uo 121) as evidence of linking sacrificial animals with foodstuffs at the end of LHIIIB. The same phenomenon, part of ritual Mycenaean feasting, occurs in the contemporary Pylos tablets. 
Vienna symposium The results of a specialist symposium held in Vienna on December 5–6, 2002 have been published. The papers further dismantle the "house of cards" constructed by the editors of the tablets on the religious references in the Thebes tablets. Günter Neumann (pp. 125–138) states that the animals in the Thebes tablets are not in any way sacred or "divine" but animals that would naturally be part of everyday life for both Mycenaean and later Greeks. He gathers explicit historical evidence, including references to the animals being fed grains. Michael Meier-Brügger (pp. 111–118) demonstrates that de-qo-no as "master of banqueting" is linguistically impossible: it must be deipnon "main dinner", as in Homer. He also shows that di-wi-ja-me-ro cannot equal "the part for the goddess Diwia" but has to be "two-day period" (as also argued earlier by Melena and in this volume by Killen), and that si-to is not an otherwise unattested god Sito (Grain) but plain siton "grain". José Luis García Ramón (pp. 37–69) demonstrates that, linguistically, a-ko-ro-da-mo cannot be agorodamos "mystic assembler of the people"; he proposes simply the Greek male name Akrodamos. He also considers o-po-re-i a personal name, parallel to another in the Thebes tablets, me-to-re-i. They mean respectively "on the mountain" and "beyond the mountain", so o-po-re-i does not mean "Zeus of the Fall Harvest", which is impossible according to Mycenaean usage for gods' names and epithets. John T. Killen (pp. 79–110) specifically concludes (p. 103) that "the fact that ma-ka, o-po-re-i, and ko-wa never all occur together, and that it requires a special hypothesis to explain this fact, combined with what I believe are the continuing difficulties with explaining o-po-re-i as a theonym /Opo:rehi/, make me reluctant for the present to accept the ma-ka = Ma:i Ga:i [i.e., Mother Earth] equation". References Sources V. L. Aravantinos, L. Godart, A. 
Sacconi, Thèbes: Fouilles de la Cadmée I: Les tablettes en linéaire B de la Odos Pelopidou: Édition et commentaire. Pisa and Rome: Istituti editoriali e poligrafici internazionali, 2001. V. L. Aravantinos, L. Godart, A. Sacconi, Thèbes: Fouilles de la Cadmée III: Corpus des documents d'archives en linéaire B de Thèbes (1-433). Pisa and Rome: Istituti editoriali e poligrafici internazionali, 2002. V. L. Aravantinos, M. del Freo, L. Godart, Thèbes: Fouilles de la Cadmée IV: Les textes de Thèbes (1-433): Translittération et tableaux des scribes. Pisa and Rome: Istituti editoriali e poligrafici internazionali, 2005. Bryn Mawr Classical Review 20: review of all three volumes, 2005. Y. Duhoux, "Dieux ou humains? Qui sont ma-ka, o-po-re-i et ko-wa dans les tablettes linéaire B de Thèbes," Minos 37-38 (2002-2003 [2006]), pp. 173-254. S. A. James, "The Thebes Tablets and the Fq Series: A Contextual Analysis," Minos 37-38 (2002-2003 [2006]), pp. 397-418. S. Deger-Jalkotzy and O. Panagl, Die neuen Linear B-Texte aus Theben (Vienna: Verlag der Österreichischen Akademie der Wissenschaften, 2006). Category:Mycenaean Greek inscriptions Category:Archaeological artifacts Category:Ancient Thebes (Boeotia)
2023-10-24T01:27:17.568517
https://example.com/article/1785
Milton Keynes Call for council to ban takeaways from entire estate where most residents are overweight A councillor is calling for a ban on new takeaway businesses opening near a specific estate due to the poor health of its residents. Milton Keynes Councillor Shammi Akter is asking the authority not to grant any more licences for fast food businesses in the area of Netherfield. Netherfield has been […] Workers at a food factory in Milton Keynes allege they have been banned from taking water breaks during their shifts, leaving them thirsty for up to six hours at a time while doing manual labour. Employees at the Cranswick Food factory in Milton Keynes are not able to take bottles of water onto the factory floor […] Woman with memory loss creates VR wedding company so couples never forget special day A woman has launched a business that will allow people to relive their wedding day whenever they like, through virtual reality technology. Emma Beckett was inspired to start the ‘virtual wedding’ company after a brain injury left her with memory loss, highlighting to her the importance of preserving precious moments. The single mum from Milton […] Man poured petrol over neighbour, set her alight and smoked cigarette as she screamed in violent attack A man who poured petrol over his neighbour and smoked a cigarette as she burned has been jailed for 19 years. Mother Kirsten Ashby, from Milton Keynes, was attacked by neighbour Raymond James Bowen in November last year after she attempted to help his partner who was having a seizure. Ms Ashby, 27, almost died […] School apologises for asking children to pull eyes to 'look Chinese' A school has been forced to apologise following an outcry from parents over a class celebration of “Chinese Day”, where children were encouraged to pull the skin up at the sides of their eyes to “look Chinese”. 
Parents of the class of six and seven-year-olds at the New Bradwell School in Milton Keynes discovered what […] Milton Keynes population 'will double to half a million' The population of Milton Keynes is expected to double, growing to half a million people, within 30 years. Locally known as MK, it was first created in 1967, making it both the last and the largest of the new towns built after the Second World War, to create more housing for those squeezed out of […] Spontaneous frisbee and football break out amid all-day M1 traffic Frustrated drivers used six hours stuck on a closed motorway this morning to get some playtime in, with football, frisbee and touch rugby taking place on the lanes of the M1. The motorway was closed near Milton Keynes after what appeared to be a “ripped black binliner with a liquid oozing out […] Corbyn promises proper regulation of privately rented homes Jeremy Corbyn attacked the Government’s “misguided” housing policy during a rally in Milton Keynes on Monday evening. Hundreds watched the rally by the Labour leader, who is campaigning to unseat Tory MPs with slim majorities. Corbyn addresses the crowds Addressing the crowds, Mr Corbyn said the election campaign had been “a dry run, a dress rehearsal” and […] Milton Keynes is the culmination of humankind’s search for Utopia To the sneerers, Milton Keynes is a soulless stretch of concrete cows and roundabouts. But the notorious new town is actually the culmination of humankind’s search for Utopia, a new BBC season will claim. “As a young person, I did find Milton Keynes a boring, soulless place to grow up. It was just famous for […] Leeds fires gun on battle for European Capital of Culture 2023 Will it be Leeds, Dundee, Milton Keynes or Truro? Britain might be leaving the EU but that hasn’t stopped a host of cities vying to win the title European Capital of Culture 2023. 
Glasgow in 1990 and Liverpool in 2008 have previously reaped a reward in tourism, investment and jobs from hosting the event. The […]
2023-10-22T01:27:17.568517
https://example.com/article/2686
This losing streak is starting to feel like the last days of the WCE... Gillis was directly responsible for the outcome of tonight's loss. Ditto for Booth. God forbid, where this franchise would be right now if it wasn't for Dave Nonis and his genius moves a few years back. Are you drunk? Only drunk with love my Hindu friend........only drunk with love. In all seriousness though, the Canucks should be trying to model themselves after the Florida Panthers. They've completely screwed us on every single deal. They are where they are today........largely because of us!
2023-10-28T01:27:17.568517
https://example.com/article/2309
ASI unearths ‘first-ever’ physical evidence of chariots in Copper-Bronze age The excavation, which began in March, has also unearthed eight burial sites and several artefacts. The three chariots found in burial pits indicate the possibility of “royal burials”, while other findings confirm the presence of a warrior class here, officials said. | TNN | Updated: Jun 6, 2018, 13:34 IST Highlights The excavation, which began in March, has also unearthed eight burial sites and several artefacts, including three coffins, antenna swords, daggers, combs, and ornaments The discovery of a chariot puts us on a par with other ancient civilizations, like Mesopotamia, Greece, etc: ASI Remains of one of the chariots found in the excavation. MEERUT: In a first, the Archaeological Survey of India (ASI) has stumbled upon the remains of a chariot dating back to the “Bronze Age” (2000-1800 BC) at Sinauli village of Baghpat district in Uttar Pradesh. Decorated with copper motifs, the findings of the Copper-Bronze age have opened up further research opportunities into the area’s civilisation and culture. The ASI team, which has been excavating the archaeologically rich site for the past three months, unveiled the new finding on Monday. The swords and daggers confirm the existence of a warrior population. The excavation, which began in March, has also unearthed eight burial sites and several artefacts, including three coffins, antenna swords, daggers, combs, and ornaments, among others. The three chariots found in burial pits indicate the possibility of “royal burials”, while other findings confirm the presence of a warrior class here, officials said. 
“The discovery of a chariot puts us on a par with other ancient civilizations, like Mesopotamia, Greece, etc., where chariots were extensively used. It seems a warrior class thrived in this region in the past,” said S. K. Manjul, co-director of excavations at ASI’s Institute of Archaeology in Delhi. A coffin found at one of the ‘royal burial’ sites Manjul termed the digging drive a “path-breaking” one, also because of the copper-plated anthropomorphic figures – having horns and peepal-leafed crowns – found on the coffins, which indicated the possibility of “royal burials”. “For the first time in the entire sub-continent, we have found this kind of a coffin. The cover is highly decorated with eight anthropomorphic figures. The sides of the coffins are also decorated with floral motifs,” Manjul said. Coffins have been discovered during past excavations at Harappa, Mohenjo-daro and Dholavira (Gujarat), but never with copper decorations, he added. The findings also shed light on the noteworthy progress the Indian civilisation had made at the time, putting it on a par with 2000 BC Mesopotamia. “We are now certain that when, in 2000 BC, the Mesopotamians were using chariots, swords, and helmets in wars, we also had similar things.” The ASI team has been excavating the archaeologically rich site for the past three months and unveiled the new finding on Monday. The swords, daggers, shields and a helmet confirmed the existence of a warrior population, and the discovery of earthen and copper pots, semi-precious and steatite beads, combs, and a copper mirror from the burial pits points towards a “sophisticated” craftsmanship and lifestyle. “It is confirmed that they were a warrior class. The swords have copper-covered hilts and a medial ridge, making them strong enough for warfare. 
We have also found shields, a torch and daggers,” the archaeologist said. The current site lies 120 metres from an earlier one in the village, excavated in 2005, where 116 burials were found along with antenna swords and pottery. Experts termed the digging drive a “path-breaking” one. While it was difficult to ascertain the exact race of the latest buried remains, Manjul asserted that the chariots and coffins did not belong to the Harappan civilisation. “The findings of the 2005 excavation – pottery, beads and other cultural material – were similar to those of the Harappan civilisation.” Manjul said the similarities could have been an outcome of the migration of the Harappans to the Yamuna and the upper plains during the late mature Harappan era.
2024-01-12T01:27:17.568517
https://example.com/article/3438
Q: Google Maps API Android NullPointerException Good afternoon. The problem is that when I run the application in which I have a MapFragment, everything works perfectly if I write no code, but if I write the code to get a reference to a GoogleMap, I get a "NullPointerException" error. I don't know what it could be; any ideas? Here is the abridged code with the important parts. The failure occurs on the getMapAsync(this); line:
public class EventCardViewActivity extends AppCompatActivity implements OnMapReadyCallback {
    private GoogleMap googleMap;
    MapFragment mapFragment = (MapFragment) getFragmentManager()
            .findFragmentById(R.id.mapFragment);
    mapFragment.getMapAsync(this);
    @Override
    public void onMapReady(GoogleMap map) {
        googleMap = map;
    }
Here is the exact error:
08-09 14:51:22.754 8627-8627/eventoslpa.com.eventoslpa_final E/AndroidRuntime: FATAL EXCEPTION: main
Process: eventoslpa.com.eventoslpa_final, PID: 8627
java.lang.RuntimeException: Unable to start activity ComponentInfo{eventoslpa.com.eventoslpa_final/eventoslpa.com.eventoslpa_final.activity.EventCardViewActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'void com.google.android.gms.maps.MapFragment.getMapAsync(com.google.android.gms.maps.OnMapReadyCallback)' on a null object reference
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2750)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2811)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1528)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6316)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:872)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:762)
Caused by: java.lang.NullPointerException: Attempt to 
invoke virtual method 'void com.google.android.gms.maps.MapFragment.getMapAsync(com.google.android.gms.maps.OnMapReadyCallback)' on a null object reference
at eventoslpa.com.eventoslpa_final.activity.EventCardViewActivity.onCreate(EventCardViewActivity.java:85)
at android.app.Activity.performCreate(Activity.java:6757)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1119)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2703)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2811)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1528)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6316)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:872)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:762)
A: I can't post comments yet ;( You have to keep in mind that MapFragment mapFragment = (MapFragment) getFragmentManager().findFragmentById(R.id.mapFragment); must be obtained after setContentView in onCreate, or, if it is from a fragment, after the view has been inflated. Check in the layout that the id of the MapFragment really is mapFragment, and also verify that a MapFragment is declared there, not a SupportMapFragment.
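The accepted answer is about ordering: findFragmentById can only succeed after the layout exists. That timing can be mimicked outside Android with toy stand-ins (ToyActivity, the "mapFragment" id, and every name below are hypothetical illustrations, not the real Google Maps or Android API), just to show why a lookup that runs before setContentView sees null:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-ins for the Android pieces involved (hypothetical, not the real API):
// lookups by id only succeed after setContentView() has "inflated" the layout.
class ToyActivity {
    private final Map<String, Object> viewsById = new HashMap<>();

    // Mimics findFragmentById: returns null until the layout is registered.
    Object findFragmentById(String id) {
        return viewsById.get(id);
    }

    // Mimics setContentView: only now does the id become resolvable.
    void setContentView() {
        viewsById.put("mapFragment", new Object());
    }
}

class OrderingDemo {
    public static void main(String[] args) {
        ToyActivity activity = new ToyActivity();
        // A field initializer would run *before* setContentView, so the
        // reference is null -- the cause of the NullPointerException above.
        System.out.println(activity.findFragmentById("mapFragment")); // null
        activity.setContentView();
        System.out.println(activity.findFragmentById("mapFragment") != null); // true
    }
}
```

The same ordering applies in the real Activity: move the findFragmentById call into onCreate, after setContentView.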
2024-02-07T01:27:17.568517
https://example.com/article/9009
3/4" Refinery Steam Hose — Bulk/Uncoupled Part# 11013449 This oil resistant steam hose is designed for saturated steam service to 406°F and super-heated service to 450°F. The cover has medium oil resistance and can be used in applications encountering petroleum products. This hose has an EPDM cover and is reinforced with two braids of carbon steel wire, providing flexibility and abrasion resistance.
2024-04-12T01:27:17.568517
https://example.com/article/1376
Q: How to save all query results into an array I'm trying to store the query results into an array. I have a table with 50 rows and 6 columns; how do I save these 50 rows into an array? My problem is saving into the array like: Value1 - red - metal - 100 - in stock - price So each cell of the array must have this organization. A: You can use a 2-dimensional array in this manner:
$sql = "select * from table";
$result = mysql_query($sql);
$myArray = array();
while ($row = mysql_fetch_array($result)) {
    $myArray[] = array(
        'id' => $row['id'],
        'title' => $row['title']
    );
}
2024-03-22T01:27:17.568517
https://example.com/article/6395
The following article originally ran on LDS Living in September 2015. It was the week before I met my fiancée—but for all I knew, I was going to be waiting another 10 years. I felt discouraged, disheartened, and just plain tired. “The day will come when you feel you have met your eternal mate.” It was right there in my patriarchal blessing. I read those four words over and over and over, “The day will come.” I knew that my patriarchal blessing said that I would be married. I felt that the blessings which I had received were real when I was promised a spouse. I believed that if I was obedient, then everything would work out—at least in some kind of a the-Lord-knows-better-than-me type way. But knowing about those blessings didn’t change the fact that loneliness had become my constant companion. I distracted myself with dates, and instead of isolation, I chose busyness as my solace. I attempted to weary the Lord. I tried to play the dating game exactly right. I prayed; I fasted; I went to the temple. But despite going on over 1,000 dates, my attempts felt completely fruitless. I felt like Sheri Dew when she said, “Believe me, if fasting and prayer and temple attendance automatically resulted in a [spouse], I’d have one” (“You Were Born to Lead, You Were Born for Glory,” BYU Speeches, Dec. 9, 2003). I tried to think about the day when my promised blessings would come. As soon as I tried to imagine it, a quote flooded my mind and extinguished the fire of doubt, only to drown my hope of timing: “Some blessings come soon, some come late, and some don’t come until heaven” (Jeffrey R. Holland, “An High Priest of Good Things to Come,” general conference, Oct. 1999). I know I was supposed to feel comforted, but the thought that I might never get married at first was shocking and then turned into something awkwardly settling, before shifting into a deep understanding of my purpose here in life. 
But the thoughts remained—maybe my wife died with the pioneers. Maybe I’m just not meant to marry in this life. I had tried, and I was tired . . . just tired. I became content knowing that my future was in God’s hands while my present was in mine. I chose to do all that I could do, find a positive attitude, and move forward. I decided to move on with my life. I wanted to quit (whatever that meant), but instead, I gained hope—from heroes like Sheri Dew—that there was more to life than just being married. It was then that I wished I had heard what Dallin H. Oaks said in an area conference to 225 stakes in the southeast U.S. “We should not feel discouraged if things that we pray for have not yet happened . . . it shouldn’t matter if we are single or not . . . if we just do our best and trust in the Lord and His timing” (Personal notes, Dec. 2015). But as many people say, finding a spouse happens in a way that is totally unexpected. I mean, I met a girl on a hike and proposed two months later. I can’t say that it was the way it was supposed to happen or that destiny brought us together—but it just happened. The point isn’t that you should be content with never getting married—the point is that no matter what your stage is in life, make sure you live it. There is so much beauty in life and if we are only looking to fulfill one desire, we may miss out on all the richness that awaits us. Just like the story of the man on the cruise ship who didn’t participate in any of the activities because he was too focused on other things and didn’t take time to find out what possibilities were available to him. Don’t let being single hold you back; don’t let not being married be the reason you live beneath your potential. “The day will come” shouldn’t be a discouraging phrase but an emboldening rally to soak up every day that we can. 
We might not know when that day will come or what that day will hold, but we will find it by moving forward. What in the world are you waiting for? The world is already waiting for you!
2023-11-30T01:27:17.568517
https://example.com/article/5454
Quality Assurance: Environmental Protection
What is RoHS? The RoHS Directive stands for "the restriction of the use of certain hazardous substances in electrical and electronic equipment". This Directive bans the placing on the EU market of new electrical and electronic equipment containing more than agreed levels of lead, cadmium, mercury, hexavalent chromium, polybrominated biphenyl (PBB) and polybrominated diphenyl ether (PBDE) flame retardants. (See Directive 2002/95/EC of the European Union on RoHS.)
When did RoHS come into force? The RoHS Directive came into force on 1 July 2006. More detail can be found at: http://www.rohs.gov.uk
Are Foctek products RoHS compliant? Our optical products on this site are deemed to be compliant in accordance with the maximum allowable concentration values issued by the EU Technical Adaptation Committee or by relevant exemptions. Some of the RoHS testing reports of Foctek's products or materials used in Foctek's products
2023-10-20T01:27:17.568517
https://example.com/article/7520
Q: java - establishing new variables based on input So, the program calls for a bowling game. Once you ask how many players there are, every player gets 2 throws to knock down 10 pins (simulated by random numbers). If you knock them all down with the first throw you get 20 points; if you knock them all down by the second throw you get 15 points; otherwise score = throw 1 + throw 2. The tricky part for me has been getting each player to alternate frames (2 throws = 1 frame, up to ten frames per player) while the program keeps track of everyone's score! I thought I had it with this, but the randoms just add up. I'll post the output at the bottom.
import java.util.Scanner;

public class Bowling {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        Game aNew = new Game();
        int player;
        int i;
        int j;
        int nPlay;
        System.out.print("How many players are there?: ");
        nPlay = input.nextInt();
        for (j = 1; j <= 10; j++) {
            for (i = 1; i <= nPlay; i++) {
                player = i; // I tried "Player i = new Player()" but get error "value already used in scope"
                aNew.getScore(player);
            }
        }
    }
}
Games class:
public class Game {
    int score = 0;
    int player;
    int ran1;
    int ran2;

    public Game() {
    }

    public int getScore(int player) {
        ran1 = (int) (9 * Math.random());
        ran2 = (int) ((10 - ran1) * Math.random());
        if (ran1 == 10) {
            score += 20;
        } else if (ran1 + ran2 == 10) {
            score += 15;
        } else {
            score += ran1 + ran2;
        }
        System.out.println("Player " + player + " score is: " + score + "\n");
        System.out.println("ran1: " + ran1 + " ran2: " + ran2);
        return score;
    }
}
Player class:
public class Player {
    /** This class is not doing anything; however, I would like an object to store a score for every player, if that is possible. */
    int score;

    public Player() {
    }
}
Output: (suspicious how every time the last player had the highest, or I would have never noticed it didn't work!) 
How many players are there?: 2 Player 1 score is: 2 ran1: 1 ran2: 1 Player 2 score is: 11 ran1: 8 ran2: 1 Player 1 score is: 16 ran1: 5 ran2: 0 Player 2 score is: 21 ran1: 4 ran2: 1 Player 1 score is: 29 ran1: 8 ran2: 0 Player 2 score is: 35 ran1: 5 ran2: 1 Player 1 score is: 40 ran1: 3 ran2: 2 Player 2 score is: 47 ran1: 6 ran2: 1 Player 1 score is: 56 ran1: 8 ran2: 1 Player 2 score is: 61 ran1: 5 ran2: 0 Player 1 score is: 68 ran1: 7 ran2: 0 Player 2 score is: 77 ran1: 8 ran2: 1 Player 1 score is: 85 ran1: 4 ran2: 4 Player 2 score is: 90 ran1: 4 ran2: 1 Player 1 score is: 97 ran1: 7 ran2: 0 Player 2 score is: 103 ran1: 3 ran2: 3 Player 1 score is: 107 ran1: 4 ran2: 0 Player 2 score is: 114 ran1: 3 ran2: 4 Player 1 score is: 121 ran1: 7 ran2: 0 Player 2 score is: 125 ran1: 0 ran2: 4
A: Your Player class is currently doing nothing. Fix this by first making accessors and modifiers for your Player.score variable, like this. Maybe even give each player a name:
public class Player {
    int score;
    String name;

    public Player(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }

    public void setScore(int newScore) {
        score = newScore;
    }

    public int getScore() {
        return score;
    }
}
Now your Game class should hold a collection of players, like:
import java.util.ArrayList;
import java.util.List;

public class Game {
    List<Player> players = new ArrayList<Player>();

    public Game() {
    }

    public void addPlayer(Player p) {
        players.add(p);
    }

    public Player getPlayer(int index) {
        return players.get(index);
    }

    public void playerBowl(Player p) {
        int ran1 = (int) (9 * Math.random());
        int ran2 = (int) ((10 - ran1) * Math.random());
        if (ran1 == 10) {
            p.setScore(p.getScore() + 20);
        } else if (ran1 + ran2 == 10) {
            p.setScore(p.getScore() + 15);
        } else {
            p.setScore(p.getScore() + ran1 + ran2);
        }
        System.out.println("Player " + p.getName() + " score is: " + p.getScore() + "\n");
        System.out.println("ran1: " + ran1 + " ran2: " + ran2);
    }
}
I'll leave the main for you to figure out :D. Sorry if the syntax is a little off, I wrote it in the edit window :/
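For completeness, here is one way the whole design could be wired together as a minimal self-contained sketch. The names (playFrame, addScore) and the seeded java.util.Random are my own choices, not the answerer's. Note also that the question's (int) (9 * Math.random()) can only ever produce 0-8 pins, so the strike branch is unreachable; this sketch draws 0-10 with nextInt(11) instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// One Player object per participant, each holding its own running score.
class Player {
    private final String name;
    private int score;
    Player(String name) { this.name = name; }
    String getName() { return name; }
    int getScore() { return score; }
    void addScore(int points) { score += points; }
}

class Game {
    private final List<Player> players = new ArrayList<>();
    private final Random rng;
    Game(Random rng) { this.rng = rng; }
    void addPlayer(Player p) { players.add(p); }
    List<Player> getPlayers() { return players; }

    // One frame for one player: two throws at 10 pins, scored with the
    // bonus rules from the question (strike = 20, spare = 15, else pinfall).
    void playFrame(Player p) {
        int throw1 = rng.nextInt(11);                           // 0..10 pins
        int throw2 = (throw1 == 10) ? 0 : rng.nextInt(11 - throw1);
        if (throw1 == 10) {
            p.addScore(20);
        } else if (throw1 + throw2 == 10) {
            p.addScore(15);
        } else {
            p.addScore(throw1 + throw2);
        }
    }
}

class BowlingSketch {
    public static void main(String[] args) {
        Game game = new Game(new Random(42));   // seeded so runs are reproducible
        game.addPlayer(new Player("Player 1"));
        game.addPlayer(new Player("Player 2"));
        for (int frame = 1; frame <= 10; frame++) {
            for (Player p : game.getPlayers()) {
                game.playFrame(p);              // players alternate within a frame
            }
        }
        for (Player p : game.getPlayers()) {
            System.out.println(p.getName() + " final score: " + p.getScore());
        }
    }
}
```

Because each Player owns its own score, alternating players inside the frame loop no longer mixes their totals the way the single Game.score field did.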
2024-04-04T01:27:17.568517
https://example.com/article/4021
A destructive menace is heading directly towards Earth, and the Enterprise is the only starship in interception range. Admiral James T. Kirk must once again assume command to take on this threat and solve the mystery of "Veejur" before it destroys all life on Earth. The film that launched the Star Trek movie series received mixed reviews, but the novelization of that movie offers us a glimpse into the mind of the creator of Star Trek! In this episode of Literary Treks, hosts Bruce Gibson and Dan Gunther are joined by "Dr. Trek" himself, Larry Nemecek, to discuss the novelization of Star Trek: The Motion Picture by Gene Roddenberry. We talk about differences between the film and the novel, the persistent "Alan Dean Foster myth," Kirk's subtextual relationship with Spock, the story from Veejur's perspective, Decker's ultimate fate, and wrap up with our final thoughts and ratings. In the news segment, we report on the announcement of Star Trek: The Motion Picture: Inside the Art & Visual Effects by Jeff Bond & Gene Kozicki, coming in March of 2020. We also respond to listener feedback from the Babel Conference for Literary Treks 283: Everyone's Tilting at Windmills.
News
ST:TMP Inside The Art & Visual Effects (00:03:40)
Listener Feedback (00:04:40)
Feature: The Motion Picture: 40th Anniversary Edition
The Motion Picture (00:10:39)
Different Iterations (00:20:53)
The "Alan Dean Foster" Myth (00:24:24)
T'hy'la and New Humans (00:31:32)
Mining TMP for Future Star Trek (00:46:48)
Canon Has Evolved (00:55:09)
"In Thy Image" (01:01:55)
V'Ger's Perspective (01:08:26)
Additional Scenes (01:12:42)
Epsilon Nine or IX (01:20:46)
Marriage Contracts (01:24:06)
Ratings (01:28:17)
Final Thoughts (01:37:58)
2023-10-14T01:27:17.568517
https://example.com/article/6320
In order to operate automotive vehicles within mandated exhaust emission limits, it is necessary to control the engine to a selected air/fuel ratio, and that requires a determination of the mass air flow so that the fuel injection rate can be correctly adjusted. In addition to emission concerns, it is desired to obtain fuel consumption efficiency and good engine performance. All of these objectives tax the capabilities of fuel control systems. Rapid time response as well as precision of mass air flow measurement have become high-priority goals. It has been a common practice to measure mass air flow at a fixed time interval. In the event of transients in air flow due to changes in engine operating parameters, it is necessary to employ special transient calculations to approximate the real instantaneous air flow. The calculations are time consuming and lacking in accuracy. It has been proposed to incrementally sample air flow based on engine rotation during an intake valve event. The sampling rate varies with engine speed and thus requires engine speed information, and further cannot be optimally matched to the response time of the air flow meter.
2023-12-05T01:27:17.568517
https://example.com/article/7210
Kelly Urich
A Guy Stops On His Way To Disneyland To Pull A Driver From Fiery Crash
by Kelly Urich, posted Oct 9 2013 7:46AM
Bob Michels was driving through Southern California on his way to Disneyland with his family when he saw a tractor-trailer lose control and veer off the highway. The tractor-trailer crashed into a bunch of cars at a car dealership, and burst into flames. The truck happened to be hauling paper towels, so the fire got out of hand almost instantly. Now, at this point Bob could have kept driving, and concentrated on making good time... which every dad knows is the MOST important thing on a road trip. But that's not the type of guy Bob is. Instead, he immediately pulled over, got out of his car, jumped a guardrail, hopped a fence, and pulled the female driver out of her truck just as the flames were about to reach her. Firefighters eventually got there and put out the fire, but not before it destroyed the entire truck, ten cars at the dealership, and damaged six more. Bob DID eventually make it to Disneyland, and in addition to winning the respect of his family, who watched the whole thing unfold, Disney officials also rewarded him with the meaningless title of "Honorary Citizen for the Day."
2024-02-14T01:27:17.568517
https://example.com/article/1940
package com.mashibing.service.base; import com.mashibing.bean.TblSendLog; import com.baomidou.mybatisplus.extension.service.IService; /** * <p> * 发送日志表 服务类 * </p> * * @author lian * @since 2020-04-18 */ public interface TblSendLogService extends IService<TblSendLog> { }
2023-12-05T01:27:17.568517
https://example.com/article/3897
Malposition of a transvenous cardiac electrode associated with amaurosis fugax. A 55-year-old male received a transvenous ventricular pacemaker for sick sinus syndrome. After pacemaker insertion, he developed recurrent episodes of amaurosis fugax. The source of thromboemboli was the pacemaker electrode malpositioned in the left ventricle. Position of the electrode was diagnosed correctly by two-dimensional echocardiography. The malpositioned electrode was successfully removed with cessation of recurrent thromboembolic phenomenon.
2024-06-15T01:27:17.568517
https://example.com/article/4352
1. Field of the Invention
The present invention relates to an electronically controlled fuel injection apparatus for an internal combustion engine, and more particularly to an improvement of an electronically controlled fuel injection apparatus for an internal combustion engine suitable for use in an internal combustion engine of an automobile having an air conditioning device, which apparatus determines the amount of fuel injection in accordance with engine operating conditions such as an engine rotation speed and the amount of intake air and, in a deceleration operation, cuts the fuel injection until the engine rotation speed falls below a predetermined fuel injection reinitiation rotation speed if the current engine rotation speed is higher than a predetermined fuel injection cut rotation speed.
2. Description of the Prior Art
As apparatus for supplying air-fuel mixture having an optimum air-fuel ratio in accordance with operating conditions of an internal combustion engine to a combustion chamber of the engine, apparatus which uses a carburetor and apparatus which uses a mechanically or electronically controlled fuel injection apparatus have been known. In recent years, the electronically controlled fuel injection apparatus has been widely used because it can supply air-fuel mixture having a correct air-fuel ratio to an engine of an automobile which requires purification measures for exhaust gas. In a prior art internal combustion engine having such an electronically controlled fuel injection apparatus, the fuel injection is cut in a deceleration operation until an engine rotation speed falls below a predetermined fuel injection reinitiation rotation speed if the current engine rotation speed is higher than a predetermined fuel injection cut rotation speed in order to prevent after-fire, save fuel and prevent exhaust of incomplete combustion gas.
The fuel injection reinitiation rotation speed is set in addition to the fuel injection cut rotation speed in order to provide a hysteresis function to the stop of the fuel injection, to prevent undesired conditions such as hunting from occurring. In the prior art electronically controlled fuel injection apparatus, however, the fuel injection is stopped even when a compressor of a car air conditioner is driven by an engine output shaft into an active state and the amount of intake air when a throttle valve is fully closed is increased by an air bypass valve of a throttle body in order to maintain the output necessary for the compressor of the air conditioner. Accordingly, in the deceleration operation when the air conditioner compressor is active, a torque generated by the engine immediately before the stop of the fuel injection is large and a friction torque immediately after the stop of the fuel injection is large. Since the difference between the positive and negative torques is large, the drivability is very poor and a shock occurs upon the stop of the fuel injection. In the prior art electronically controlled fuel injection apparatus, the fuel injection is stopped until the engine rotation speed falls below the predetermined fuel injection reinitiation rotation speed if the current engine rotation speed is higher than the predetermined fuel injection cut rotation speed in order to prevent the hunting, as described above. If the engine rotation speed is higher than the predetermined fuel injection cut rotation speed, the operation mode moves to the deceleration mode. If the deceleration mode resumes after having been interrupted while the engine rotation speed is between the fuel injection cut rotation speed and the fuel injection reinitiation rotation speed, the fuel injection is stopped until the engine rotation speed falls below the fuel injection reinitiation rotation speed.
Accordingly, the prior art apparatus has problems in the drivability of the car when it is accelerated or decelerated during the deceleration mode.
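The cut/reinitiation hysteresis described above can be sketched as a tiny state machine. The class and method names and the RPM thresholds below are illustrative, not from the patent:

```java
// Illustrative sketch of the fuel-cut hysteresis described in the text.
// Threshold values and names are made-up examples, not from the patent.
class FuelCutController {
    private final int cutRpm;          // fuel injection cut rotation speed
    private final int reinitiationRpm; // fuel injection reinitiation rotation speed
    private boolean fuelCut = false;

    FuelCutController(int cutRpm, int reinitiationRpm) {
        this.cutRpm = cutRpm;
        this.reinitiationRpm = reinitiationRpm;
    }

    /** Returns true if fuel should be injected this cycle. */
    boolean shouldInjectFuel(int engineRpm, boolean decelerating) {
        if (decelerating && engineRpm > cutRpm) {
            fuelCut = true;                // enter fuel-cut mode
        } else if (engineRpm < reinitiationRpm || !decelerating) {
            fuelCut = false;               // resume injection
        }
        // Between the two thresholds during deceleration the previous
        // state is kept; this hysteresis is what prevents hunting.
        return !fuelCut;
    }
}
```

The gap between cutRpm and reinitiationRpm is exactly the hysteresis band the patent describes: inside it, the controller neither cuts nor resumes, it just holds its last decision.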
2024-04-02T01:27:17.568517
https://example.com/article/7267
Arsenal’s 3 most utterly useless players at Qarabag; they should all go now
By Tony Attwood
OK, let’s set the context. Arsenal have just won nine in a row. If you don’t want to count the FL Trophy, eight in a row. If you don’t want to count the League Cup or the Trophy, seven in a row. If you don’t want… well, that’s getting ridiculous. Seven in a row is the minimum I will allow.

Date         Game                         Res  Score  Competition
25 Aug 2018  Arsenal v West Ham United     W   3-1   Premier League
02 Sep 2018  Cardiff City v Arsenal        W   2-3   Premier League
12 Sep 2018  Coventry City v Arsenal       W   0-3   Football League Trophy
15 Sep 2018  Newcastle United v Arsenal    W   1-2   Premier League
20 Sep 2018  Arsenal v Vorskla Poltava     W   4-2   UEFA Europa League
23 Sep 2018  Arsenal v Everton             W   2-0   Premier League
26 Sep 2018  Arsenal v Brentford           W   3-1   League Cup
29 Sep 2018  Arsenal v Watford             W   2-0   Premier League
04 Oct 2018  FK Qarabağ Agdam v Arsenal    W   0-3   UEFA Europa League

FK Qarabağ Agdam to give them their proper name were obviously not world class, not even European class, but they still had to be beaten, it was a fearsome atmosphere, and a bloody long journey. No, not the toughest of course, but still it was there and it had to be won, and the right team had to be picked. Avoid too heavy a reliance on some who might play on Sunday, give some of the players who have not been getting games a chance to get moving. And we had just won either six, seven or eight in a row depending on how you take it. So what headlines do we get the next morning? ‘Our most useless player of all time’: Some Arsenal fans are annihilating £7m Wenger signing And then just when I thought maybe that was one turnip in a field of cornflakes we get Arsenal news: ‘He really was a weakness’ – Martin Keown criticises star vs Qarabag This time it was Kolasinac, playing his first game back after a long injury layoff. Players need games to get back into the swing of things.
And even if the manager feels he is not up to it, the player has to be given a chance. Not to do so is a total waste of a resource. ‘Needs to go’, ‘Doesn’t wanna play’, Selfish- Some of Twitter crucifies Arsenal player after win So put it together and those three players should all be out the door now. But you know what… in the next game it will be another three and another three, because all these blogettas ever do is hit out at Arsenal players and they will never stop. Follow their advice and in a matter of a month or so we will have no one. And let us not forget that players can change. Look at this headline… Arsenal fans can’t get enough of Alex Iwobi’s dazzling skill v two Qarabag defenders And think back to last season and the season before when Iwobi was hammered from every corner. Remember the headlines like “Alex Iwobi not good enough, again” – Pain in the Arsenal 2 January 2018. ‘He’s just not good enough’, ‘Somehow he got worse’ Football Fan Cast 4 April 2018 ‘There is a list of them not good enough’ Daily Mirror 22 February 2018 ‘Arsenal Fans Blast Iwobi – You Are Not Good Enough To Play For Arsenal.’ OG Nigeria 16 Feb 2017 I could fill page after page with these. The point is clear – many young Arsenal players burst through and become great players despite playing for Arsenal. “Despite” because they have to live with a daily barrage of mindless and pointless abuse whose only reason for being there is to bring Arsenal down. What we actually saw last night was the first goal for Arsenal from a player born in the 21st century. OK not as important as winning the game, but still something to remember and celebrate instead of finding more and more players to destroy. Of course the media pick up on it too… Late night radio in England had a fulsome summary of Chelsea’s “fine” win and some mumbled thing about their “in a row” record that I couldn’t relate to reality.
The report concluded that “Arsenal also won in Europe” and that was that. Mind you even the Telegraph could not bring itself to report the wild excessive abuse that poured forth after last night, restricting itself to, “As with each of their previous seven victories though, this was not an entirely convincing display” At least that was mild compared with the gibberish from some quarters. I am sometimes criticised for going on and on about these negative morons and their rants, but outing their gross stupidity is the only way I can think of exposing their crassness and raging anti-Arsenal attitudes. This is the Anti-Arsenal-Arsenal – back in full force. They got rid of Wenger, now they are gathering their strength again.
2023-12-15T01:27:17.568517
https://example.com/article/4261
package haxepunk.graphics.hardware.opengl; typedef GLTexture = lime.graphics.opengl.GLTexture;
2023-08-03T01:27:17.568517
https://example.com/article/7513
ippiResizeCenter equivalent
I am trying to achieve zoom effect using the resize mechanism. It works, but the image is also shifted. There is no equivalent to the function ippiResizeCenter anymore as it is deprecated. How can I achieve zoom effect?
2024-07-06T01:27:17.568517
https://example.com/article/2373
Ocular findings in patients with alopecia areata. This study investigated ocular findings in patients with alopecia. A total of 42 patients with alopecia (31 male, 11 female; 84 eyes) and 45 healthy individuals (28 male, 17 female; 90 eyes) were enrolled in the study. Of the patients with alopecia, 34 had alopecia areata, seven had alopecia universalis, and one had ophiasis alopecia. Seven patients had eyebrow involvement and seven had eyelash involvement. Autorefractometry, keratometry, visual acuity, central corneal thickness and intraocular pressure (IOP) measurements, bilateral anterior and posterior segment examinations, Schirmer's tests, and visual field examinations were performed in both groups. The mean ± standard deviation age of the subjects was 25.21 ± 10.88 years in the alopecia group and 28.24 ± 9.31 years in the control group. Lens abnormalities were observed in 35 eyes in the alopecia group and in 11 eyes in the control group (P < 0.05). Posterior segment abnormalities were seen in 29 eyes in the alopecia group and four eyes in the control group (P < 0.05). There were no statistically significant differences in age, sex, visual acuity, refractive error, keratometric findings, IOP, central corneal thickness, perimetry, or Schirmer's test results between the alopecia and control groups (P > 0.05). Patients with alopecia may have more lenticular and retinal findings than normal individuals, but those findings do not interfere with visual acuity. Close surveillance for the early onset of cataract formation is important in patients with alopecia.
2024-07-01T01:27:17.568517
https://example.com/article/2279
Q: Issue writing to an array of strings

Having had issues with a slightly more complicated section of code, I've stripped away at it, but still the error remains. Could you cast a cursory eye over this and point out my errors?

//before this, nbnoeud is defined and graphe is a stream that reads from a .txt file
double* xtab = (double *) calloc(nbnoeud, sizeof(double));
double* ytab = (double *) calloc(nbnoeud, sizeof(double));
char** nomtab = (char **) calloc(nbnoeud, 100*sizeof(char));
double x, y;
char nom[100];
int numero=0, scancheck;
int a;
for(a=0; a<nbnoeud; a++){
    scancheck = fscanf(graphe, "%d %lf %lf %[^\n]", &numero, &x, &y, nom);
    if(scancheck = 4)
        printf("Read item %d; \t Scancheck: %d; \t %s - (%lf, %lf). \n", numero, scancheck, nom, x, y);
    xtab[a] = x;
    ytab[a] = y;
    nomtab[a] = nom; //I imagine it's something to do with compatibility of this but to my eyes it all checks out

    /*int b; //this section just illustrates that only the previously accessed elements are being overwritten. See code2.png
    for(b=0; b<=nbnoeud; b++){
        printf("%s \n", nomtab[b]);
    }*/
}
for(a=0; a<nbnoeud; a++){
    printf("Read item %d; \t \t \t %s - (%lf, %lf). \n", a, nomtab[a], xtab[a], ytab[a]);
}
exit(1);

The issue arises when I come to print nomtab[0] through [7] (nbnoeud = 8, in this case), as all values (nomtab[0], nomtab[1], etc.) are equal to the final value that was written. Bizarrely, having checked it, only the already accessed elements of nomtab are overwritten. For example, after the first loop, nomtab[0] = Aaa and the rest equal null; after the second loop, nomtab[0] & nomtab[1] = Baa and the rest equal null (see second image). I imagine there is a moronically simple resolution for this but that makes the not-knowing all the more intolerable. I've also tried using strcpy and it didn't like that. Any ideas?
Output: Output with check of array contents after each loop

A: The problem lies within your line

nomtab[a] = nom;

This sets the pointer at nomtab[a] to point to the local array nom. But with each loop iteration you are overwriting nom when reading the file input with fscanf. If you want to store all the strings, you should make a copy of nom and store that. You can use strdup(nom) to make the copy.

BTW, you are calling fscanf without limiting the number of characters to write into nom. This is a bad idea. If there is a line in the file that is too long, then you will have a buffer overflow in nom, overwriting your stack.

Edit: This line looks suspicious:

char** nomtab = (char **) calloc(nbnoeud, 100*sizeof(char));

I guess what you want is an array with nbnoeud strings of 100 chars length. You should change that to either

char* nomtab = (char *) calloc(nbnoeud, 100*sizeof(char));
strcpy(&nomtab[100*a], nom);

or

char** nomtab = (char **) calloc(nbnoeud, sizeof(char*));
nomtab[a] = strdup(nom);

In the first case nomtab is one big buffer containing all the strings (characters); in the second case nomtab is an array of pointers to strings, each one allocated separately. In the second case you need to take care to free() the allocated string buffers.
2023-08-16T01:27:17.568517
https://example.com/article/6742
219 F.2d 64 Willard L. GLEESON and Mary F. Gleeson, Appellants,v.Fred E. CARR, Trustee in Bankruptcy of the Estate ofBroadcasting Corporation of America, debtor, Appellee. No. 14203. United States Court of Appeals, Ninth Circuit. Jan. 15, 1955.Rehearing Denied March 22, 1955. 1 Morris Lavine, Los Angeles, Cal., Shaw & Kegley, Riverside, Cal., for appellants. 2 Craig, Weller & Laugharn, Frank C. Weller, Hubert F. Laugharn, Thomas S. Tobin, Los Angeles, Cal., for appellee. 3 Before STEPHENS and CHAMBERS, Circuit Judges, and McLAUGHLIN, District Judge. 4 McLAUGHLIN, District Judge. 5 This is an appeal taken from orders made in a reorganization proceeding under Chapter X of the Bankruptcy Act. 6 The Broadcasting Corporation of America, debtor, hereinafter referred to as BCA, is a California corporation which operated five radio stations in California for some time prior to 1946. The outstanding stock of BCA totals 250 shares owned as follows: 127 shares-- Willard Gleeson 106 ' -- E. W. Laisne 10 ' -- E. D. Laisne (son of E. W. Laisne) 7 ' -- held individually by seven persons 7 Appellants Willard and Mary Gleeson are husband and wife. In addition to being the sole majority stockholder, Mr. Gleeson is a director, president, and manager of BCA; Mrs. Gleeson is a director and secretary of BCA. E. W. Laisne is the third director. 8 In December, 1946, BCA was assigned television Channel 1 by the Federal Communications Commission and began construction. In May, 1947, work was suspended by virtue of an international dispute over Channel 1 and subsequently the Commission withdrew the Channel. In November, 1947, Channel 6 was tentatively assigned to BCA; however, the Commission suspended construction on all new television stations in the United States in March, 1948, and since then BCA has not resumed construction. The significance of attempting to incorporate television within its operations is the cost incurred which materially impaired the financial condition of BCA. 
9 In September, 1949, a meeting of the board of directors of BCA was held. Only the Gleesons attended this meeting; Laisne had notice but did not attend. At the meeting, Mr. Gleeson made a motion to adopt a resolution authorizing and directing him as president and Mrs. Gleeson as secretary to execute and deliver a promissory note in the sum of $90,762.60 to themselves, secured by a second trust deed upon the real property of BCA. This motion was seconded by Mrs. Gleeson, adopted by both, and the resolution was executed two days thereafter. The note is based upon the following pre-existing indebtedness: 10 $25,966.45-- Advances to BCA prior to December 31, 1944 11 30,600.00-- Cash advances, January 1, 1945 to September 21, 1949 12 24,500.00-- Accumulated salary from 1938 to October 1, 1949 9,696.15-- Miscellaneous 13 At the time the note and trust deed were given, BCA was heavily indebted to many creditors, secured as well as unsecured. These corporate debts were incurred through Gleeson as president and manager of BCA. No offer of similar security was made to other unsecured creditors. 14 On November 29, 1951, BCA filed an amended petition under Chapter X of the Bankruptcy Act, 11 U.S.C.A. 501 et seq., seeking reorganization. The petition represented that the corporate assets were valued at $595,566.89 and its liability totaled $195,392.60. The petition was approved; William Ross appointed trustee in possession and Nelson Peters appointed special master to hear and report on all matters arising in the course of the reorganization proceeding. Dates for the various hearings were set, including the hearing pursuant to Chapter X, § 236 of the Act, 11 U.S.C.A. 636, in the event no plan was approved. The trustee was directed to give notice of the hearings to all parties in interest. Printed notices were sent out on or about July 28, 1952, to two hundred and forty-three creditors and parties in interest.
The appellants and the debtor corporation attended the hearings; appellants filed secured claims and the debtor corporation, objecting to the plan proposed by the trustee, proposed a plan of its own. Both plans were eventually rejected by the creditors. BCA then filed a petition on April 1, 1953, under Chapter X, § 147 of the Act, 11 U.S.C.A. 547, to convert to a Chapter XI proceeding based on a bill pending in Congress to appropriate approximately $300,000.00 to compensate BCA for the loss attributable to the governmental action incurred in its television venture. This petition was denied. 15 The special master found the total assets of BCA to be $211,167.89 while its liabilities amounted to $431,138.76. The master recommended that the debtor be adjudged a bankrupt and that the secured claims of appellants and the Citizens National Trust and Savings Bank of Riverside be allowed. 16 The final hearing on the reorganization matter was held on March 2, 1953. In the order dated May 22, 1953, the district judge made the following determinations: the assets and liabilities of BCA on October 29, 1951 (the date of the original petition) were $211,167.89 and $431,138.76 respectively; BCA was adjudged a bankrupt and the matter referred to David Head as referee in bankruptcy; the secured claims of appellants and the Riverside Bank to the extent that its claim was based on personal notes of appellants or guaranteed by them were disallowed. 17 The Gleesons, as individuals, in an amended notice, appeal from the above orders and specify the following errors: 18 1. The district court erred in entering the order adjudging the debtor a bankrupt as there was no hearing upon notice as required by § 236 of the Act. 19 2. The district court erred in referring the proceedings to David B. Head of Los Angeles instead of N. C. Peters of Riverside. 20 3. 
The district court erred in denying the debtor's petition under § 147 of the Bankruptcy Act to convert the proceedings to one under Chapter XI. 21 4. The district court erred in disallowing the secured claims of Willard and Mary Gleeson and that portion of the secured claim of the Riverside Bank representing a direct and individual debt of Willard Gleeson. 22 The trustee in bankruptcy filed a motion to dismiss this appeal in so far as it attacks the order adjudicating BCA a bankrupt. Motion to Dismiss 23 The trustee in bankruptcy's motion must be denied. 24 The motion's merit is to be measured by those portions of the Act specifically governing appeals in reorganization proceedings. These provisions control here as the adjudication of the corporation as a bankrupt was made as an order terminating the reorganization proceedings.1 25 As creditors, appellants' standing to appeal is derived from § 206 of the Act, 11 U.S.C.A. § 606, which provides that: 26 'The debtor, the indenture trustees, and any creditor or stockholder of the debtor shall have the right to be heard on all matters arising in a proceeding under this chapter.' 27 The right to be heard under this section includes the right to appeal. Rogers v. Consolidated Rock Products Co., 9 Cir., 1940, 114 F.2d 108; In re Keystone Realty Holding Co., 3 Cir., 1941, 117 F.2d 1003; Dana v. Securities & Exchange Commission, 2 Cir., 1942, 125 F.2d 542; In re South State Street Bldg. Corp., 7 Cir., 1943, 140 F.2d 363; Horowitz v. Kaplan, 1 Cir., 1951, 193 F.2d 64. 28 Since the order adjudicating BCA a bankrupt was a matter arising in a proceeding under Chapter X, § 236 of the Act, it falls within the purview of § 206 granting appellants the right to be heard. By virtue of this right to be heard, appellants also have the right to appeal. For these reasons the motion must be denied. Notice and Hearing under § 236 29 We come now to the merits of this appeal.
30 Appellants claim that the order adjudging BCA a bankrupt was entered without a hearing upon notice as required by Chapter X, § 236 of the Act. The plain meaning of § 236 coincides with the fundamental essentials of due process that unless notice is given and a hearing held and an opportunity to be heard is accorded the parties affected, an adjudication of bankruptcy must be reversed. Sylvan Beach v. Koch, 8 Cir., 1944, 140 F.2d 852; In re American Bantam Car Co., 3 Cir., 1952, 193 F.2d 616. However, this notice requirement does not place a greater obligation upon the judge 'than to see that notice reasonably designed to bring the matter in ordinary course to the creditor's attention is given.' North American Car Corp. v. Peerless W. & V. Mach. Corp., 2 Cir., 1944, 143 F.2d 938, 941. Here the record clearly shows that the district court issued an order dated January 14, 1952 in which a hearing pursuant to § 236 was set for May 12, 1952 at a specific time and place, and the trustee in possession was directed to give notice.2 In his report dated September 5, 1952, the trustee in possession in the reorganization proceeding informed the court that pursuant to the amended orders of March 19 and July 10, 1952, the notice of hearing set for October 6, 1952 was sent out by mail to all known creditors, stockholders, and parties in interest, 243 in number, on or about July 28, 1952.3 The hearing was continued from time to time to March 2, 1953 when BCA was adjudged a bankrupt.4 Appellants attended the interim hearings and were or should have been well advised in the matter. Under these circumstances we can only come to the conclusion that the order of adjudication was entered after a hearing held upon adequate notice. The fact that the notice was coupled with other notices regarding the reorganization proceeding presents no difficulties in view of § 120 of the Act, 11 U.S.C.A. 520.
Referee in Bankruptcy 31 Appellants complain that it was error for the district judge to refer the proceedings to David B. Head of Los Angeles as referee in bankruptcy instead of Referee N. C. Peters of Riverside. This is a matter in ordinary bankruptcy and not in reorganization, Sylvan Beach v. Koch, supra, and is therefore governed by § 22, sub. b of the Act, 11 U.S.C.A. § 45, sub. b. Under that section, the judge, within his discretion, may at any time for convenience of the parties or for cause transfer a case from one referee to another. Benioff v. Wyman, 9 Cir., 1952, 195 F.2d 440; G. Levor & Co. Inc. v. Hunt, 6 Cir., 1940, 111 F.2d 450. Here there is no evidence that the judge abused his discretion, nor is there any claim or evidence that the appellants were prejudiced by the reference. Petition under § 147 32 Appellants next complain of the order denying the petition of BCA under § 147 of the Act to convert the Chapter X reorganization proceeding into a Chapter XI arrangement matter. Appellants claim that 'The right to amend is given to the debtor under Section 147. The debtor in this case did amend, * * * when he does so, it shall be treated thereafter for all purposes as a Chapter XI proceedings.'5 Appellants misconstrue § 147 to mean that a petition to amend by the debtor automatically converts the proceeding from reorganization to arrangement. On the contrary, the grant or denial of a request to amend rests within the court's discretion. John Hancock Mut. Life Ins. Co. v. Casey, 1 Cir., 1944, 141 F.2d 104. The petition to amend is chiefly based on the possibility that the debtor may realize compensation from Congress for losses incurred in its television venture due to adverse governmental action. 33 The court below cannot be held to have abused its discretion in ruling that the movant's optimism should be based on facts, not hope. When, as and if the facts warrant, Chapter XI may be invoked. 
Disallowance of Secured Claims 34 Finally appellants claim that the district court erred in disallowing the secured claims of appellants and the Riverside Bank contrary to the recommendations of the special master and without an independent hearing. 35 There can be no question of the power of the court to allow or disallow claims, secured and unsecured, in a reorganization proceeding. Pepper v. Litton, 1939, 308 U.S. 295, 60 S.Ct. 238, 84 L.Ed. 281; In re Commonwealth Light & Power Co., 7 Cir., 1944, 141 F.2d 734. 36 Nor can it be doubted that a master's report is merely advisory. It is not binding upon the court, and it may be adopted or rejected in whole or in part after due notice and opportunity to be heard has been accorded the interested parties. In re Mifflin Chemical Corp., 3 Cir., 1941, 123 F.2d 311; § 117 of the Bankruptcy Act, 11 U.S.C.A. 517. 37 Here a hearing upon notice was held upon the subject of the master's report on January 26, 1953. The court found that it could not accept the master's finding and conclusion that appellants' claims were entitled to reorganization recognition as secured claims, and so set the same aside in this regard, finding and concluding for itself to the contrary.6 38 As appellants had notice and opportunity to be heard, they cannot now be heard to complain simply because the court disagreed with the master. No contention is made that the court was wrong in substance. Even if it were, no prejudicial error resulted here, for the reorganization plans failed, and during the bankruptcy proceedings under the very order of the court complained of appellants will have a chance to prove anew if they can their contention that they should be recognized as secured creditors of the bankrupt corporation. 39 The orders of May 22, 1953, appealed from are affirmed. 1 Transcript of Record, pp. 211, 235 2 Transcript of Record, pp. 47, 48 3 Transcript of Record, pp. 123, 124, 295 4 Transcript of Record, pp.
244, 245 5 Opening Brief of Appellant, p. 26 6 Transcript of Record, pp. 211, 244, 245
2024-03-03T01:27:17.568517
https://example.com/article/4205