De Poezenkrant

De Poezenkrant was an irregularly published Dutch magazine, appearing between 1974 and 2023, with short reports on and many illustrations of cats. The 'Letters' section was infamous, not least because of the answers and threats from the editors. The magazine was written, designed, and published by graphic designer Piet Schreuders. Writer Willem Frederik Hermans and photographer Ed van der Elsken also contributed frequently. The design was experimental and eclectic in format, layout and style; the later issues increasingly appeared as pastiches of well-known publications, ranging from National Geographic magazine to the gossip magazine Privé. Because it was published irregularly, subscriptions were, until 2019, taken out per issue rather than per quarter. After that, the magazine could only be purchased individually, at widely varying prices per edition. The magazine also featured submissions from readers, including the poet Jean Pierre Rawie. In reality, De Poezenkrant was not a magazine focused on cats at all, but a playful vehicle for the graphic design and typography of Piet Schreuders; only later did the paper evolve into what the newspaper Het Parool described as a "cultural-historical phenomenon".

History

De Poezenkrant was first published on 7 February 1974 and initially came out about once a month, although two issues appeared on 10 July 1974. From the fifteenth issue onwards, the paper (sometimes in the form of small booklets) was published more irregularly. Later the intervals between editions increased to several years. The combined number 50-51 was published in October 2004, number 52 in 2007 and number 53 in spring 2009. Number 57, from 2013, was the booklet Poes in oppression and resistance 1940-1945 by Paul Arnoldussen. Number 67, from 2021, was the aforementioned pastiche of the magazine Privé. In issue 70, dated 7 February 2024, a footnote reads "Last issue".
In 2005, the Amsterdam Public Library devoted an exhibition to De Poezenkrant.
WIKI
User:MonsterSRM/Network discovery

Network discovery is the process by which the elements of a computer network are modeled by software. Network discovery is sometimes referred to as network autodiscovery or network auto-discovery.

Purposes

Network discovery has applications in two major areas:
* Inventory, auditing, and documentation
* Network monitoring and management

In both cases, for small and static networks it can be practical to forgo network discovery software and manually determine the contents and configuration of a computer network. However, as the size of a network and the frequency of changes to it increase, this becomes impractical.

Aspects of network discovery

Many different approaches to network discovery exist, but they all gather information about the network and then process this information in order to create a software model of the network. Different types of solutions vary in the following ways:
* Information gathering
* Scope of discovery
* Algorithms for building the model
* Presentation of the model

Information gathering for network discovery

Most approaches to network discovery use active probing of network devices, though some also use passive listening from one or multiple points on the network. Active probing solutions depend on the existence of some software running on the devices being probed. Most such solutions rely only on software that is already expected to be running on the network devices as a standard software component or supported protocol; two primary examples are SNMP and CIM, which can be used to gather information about a device. Such solutions are said to support agent-less discovery not because they rely on no software agents, but because they do not rely on any additional custom agents being installed on the network devices. The type and nature of the information gathered depend primarily on the scope of the network discovery, but also on the needs of the algorithms for building the model.
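The gather-then-model pipeline described above can be sketched in a few lines. The following is a hypothetical illustration, not taken from any actual discovery product: the device names are invented, and the per-device neighbor lists stand in for data that would really come from SNMP, CIM, or a link-layer protocol such as LLDP.

```python
from collections import defaultdict

def build_model(probe_results):
    """Turn per-device probe results into an adjacency model of the network.

    probe_results: list of dicts, each with a 'device' name and the list of
    'neighbors' that the device reported when probed.
    """
    model = defaultdict(set)
    for result in probe_results:
        for neighbor in result["neighbors"]:
            # record the link in both directions so the model is symmetric,
            # even if only one side of the link was successfully probed
            model[result["device"]].add(neighbor)
            model[neighbor].add(result["device"])
    return {device: sorted(links) for device, links in model.items()}

# Example: probe results describing a small switched network
probes = [
    {"device": "switch1", "neighbors": ["router1", "host-a"]},
    {"device": "switch2", "neighbors": ["router1", "host-b"]},
    {"device": "router1", "neighbors": ["switch1", "switch2"]},
]
model = build_model(probes)
print(model["router1"])  # → ['switch1', 'switch2']
```

Note that `host-a` ends up in the model even though it was never probed directly; real discovery tools similarly infer the existence of devices from their neighbors' reports.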
Scope of network discovery

Agent-less network discovery refers to solutions that do not require the installation of special-purpose software agents on the network devices to be probed. The name is somewhat of a misnomer: these solutions do in fact depend on the existence of a software agent on the probed devices, but such agents are standard software components of the network devices rather than custom additions. Solutions also vary in:
* Types of networks supported
* Types of devices supported
* OSI layers supported

Solutions vary in terms of what is modeled:
- Which OSI layers are discovered
- Vendor-specific or vendor-neutral
- Enterprise vs. carrier

Algorithms

Gathering of information about the network:
- Active: probing devices
- Passive: listening to network traffic
Algorithms for creating models based on the information gathered.

My Snippets

Network discovery is performed for inventories, audits, and as a basis for software that monitors network availability, health, and performance. The alternative to discovering a computer network automatically using network discovery software is to manually maintain an inventory of network elements. A virtual model of a computer network is constructed in software; this is most useful for large networks, where the cost and difficulty of manual documentation are prohibitive.

Copy Paste from Another Wiki Page

A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communications among users and allow users to share resources. Networks may be classified according to a wide variety of characteristics. A computer network allows sharing of resources and information among interconnected devices. In the 1960s, the Advanced Research Projects Agency (ARPA) started funding the design of the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense.
It was the world's first operational packet-switching network. Development of the network began in 1969, based on designs developed during the 1960s.

Purpose

Computer networks can be used for several purposes:
* Facilitating communications. Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.
* Sharing hardware. In a networked environment, each computer on a network may access and use hardware resources on the network, such as printing a document on a shared network printer.
* Sharing files, data, and information. In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.
* Sharing software. Users connected to a network may run application programs on remote computers.
* Information preservation.
* Security.
* Speed-up.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN, HomePNA, power line communication or G.hn. Ethernet, as defined by IEEE 802, utilizes various standards and media that enable communication between devices. Frequently deployed devices include hubs, switches, bridges, or
WIKI
Wikiquote:Votes for deletion/Nadesico

This really isn't controversial, so I decided to close it. This discussion has never had a two-thirds consensus for deletion, so all we've accomplished is moving the bar between "keep" and "no consensus/default keep". At this point, the 6 votes are split 50/50 (including Poetlister's formal vote, which was a bit late but arguably just a clarification of her nomination), and the rest of the community has shown no interest in chiming in even after 17 days and 2 extensions. Questions of copyvio (Poetlister, UDScott, Cato, InvisibleSun) have been fixed. The issue of identifying the subject (Poetlister, UDScott, Aphaia, LrdChaos, Jeff Q) has been clarified (if somewhat arbitrarily). Cleanup and structure (UDScott, Jeff Q) have been implemented. All that's left is the obvious need for quotability, which will have to be addressed as a content issue. If we don't get any sourced expansions or other improvements within a few months (which unfortunately seems likely), we may want to revisit this with another nomination. ~ Jeff Q (talk) 16:03, 4 October 2007 (UTC)

Nadesico

No intro, so we don't even know if it's a film, a video game or what. Long list of non-notable quotes; may be sufficiently extensive to be copyvio. — Poetlister 14:51, 17 September 2007 (UTC) * Vote closes: 15:00, 24 September 2007 (UTC) * No consensus yet; extend for 7 days to 15:00, 1 October 2007 (UTC) —The preceding unsigned comment was added by Cato (talk • contribs) 21:57, 24 September 2007 (UTC) * Vote extended once more by 3 days to 15:00, 4 October 2007 (UTC), as consensus is ambiguous without a clear position from the nominator, and at least one vote is conditional upon changes in the article which haven't been made. I ask participants to make clear statements (where they haven't already) for each of two situations: (A) the article left as-is, and (B) the article significantly changed to provide subject identification and remove copyvio potential.
This way, actions taken on the article (or not) by the new close time should provide a consensus. ~ Jeff Q (talk) 22:43, 1 October 2007 (UTC) * Keep. While the creator did not make it easy, I believe this refers to a manga or anime (or both). As long as the source is verified and the page is cleaned up, I don't see any reason for the page not to stay (provided of course there is no copyvio, as Poetlister suggests may exist). Of course, should no one show an interest in improving the page, and it remains as it currently exists, I could certainly see it being deleted (and would probably change my vote accordingly at the end of the voting period if it stays as is). ~ UDScott 15:09, 17 September 2007 (UTC) * My Keep vote is for situation B outlined above by Jeff. ~ UDScott 15:23, 2 October 2007 (UTC) * Keep Needs lots of trimming, but there are a few salvageable quotes.--Cato 19:26, 17 September 2007 (UTC) * Comment Such as?--Poetlister 16:04, 19 September 2007 (UTC) * Comment As for notability ... the original work might be anime, and manga and novels may have been spin-offs. It could be known in the English speaking world (I don't know personally though). At least notable in Japan. If they are pithy enough to collect on this project ...... I confess I am not a big fan of this kind of works. --Aphaia 16:19, 19 September 2007 (UTC) * Delete. This page has not been worked on since VfD nomination, leaving questions of source, copyvio and quality. - InvisibleSun 02:33, 24 September 2007 (UTC) * Delete. I think that the sheer amount of quoting probably approaches copyvio status, and it certainly doesn't help that we still don't have any clarification on just what the subject is. —LrdChaos (talk) 17:44, 26 September 2007 (UTC) * Comment: Here are some details to help the discussion. The TV version is titled Kidô senkan Nadeshiko (Martian Successor Nadesico). 
The IMDb records on this show are sketchy and somewhat self-contradicting (giving both 1996 and 2002 as release years), but it appears that it ran for 26 episodes. Our current article has about 135 dialog-segment quotes (although it's hard to tell given the total lack of readable formatting), which, if the show is the sole source of quotes, would be an average of just over 5 quotes per episode. This probably means many episodes go well over this, but just winnowing out the inane, unoriginal, too-aural/visual, or mere plot-point quotes would probably take care of substantial-copyvio concerns — again, if this is the TV show. ~ Jeff Q (talk) 20:23, 26 September 2007 (UTC) * Delete. If we can't even get someone to work on this who knows which quotes belong to which of the works with this name, we can't really identify the intended subject. Leaving the current article in place would therefore be propagating confusion. The best alternative to deletion I can think of at the moment would be to replace the entire content with the only quote IMDb has that is at least arguably memorable and non-trivial: * Gai: Jiro Yamada is the boring name my mother gave me. Gai Daigoji is the name of my soul! * in a section titled "Unidentified episode", rename it to match the TV-show title, and add an intro drawn from the WP article. ~ Jeff Q (talk) 04:08, 28 September 2007 (UTC) * Keep now that I have essentially replaced the existing article with a basic intro, infrastructure, and the single quote I mentioned above, in a desperate attempt to make something from this article that the community can make a clear decision on. I don't really care if we keep this version or alter it to match the manga or combine the several variations on this work (although I don't recommend the last, given that the anime and manga appear to be rather different). But we need a clear scope for the article, and a reasonable set of pithy quotes specifically from the identified subject.
If we find such a limited version acceptable, we can treat the situation as a content dispute, possibly using the history to extract some non-trivial quotes to restore. If we keep this as a TV-show article, we should also move it to Martian Successor Nadesico to match the WP article, and add a Wikiquote link there to encourage contributions here. Unless we use this or some other, similar form of this article that clearly identifies the subject and contains a potentially sourceable selection of pithy quotes, I would go back to my "delete" vote. ~ Jeff Q (talk) 16:17, 3 October 2007 (UTC) * Delete Kudos to Jeff for his hard work, but if that's the best quote available, then there's no point having an article. I'd delete it myself, but I don't want to close a VfD that I opened if it's controversial. Poetlister 15:27, 4 October 2007 (UTC)
WIKI
Talk:COVID-19 pandemic in Finland

Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment, between 2 August 2021 and 4 September 2021. Further details are available on the course page. Student editor(s): Madisonochoa. Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 18:27, 17 January 2022 (UTC)

Request to update the map

Please update the map if needed after the confirmation of 23 cases as of 08.03.2020 — Preceding unsigned comment added by Don Colorodo (talk • contribs) 14:37, 8 March 2020 (UTC) recovered plus 6 the earlier case recovered. SINCE then https://yle.fi/uutiset/osasto/news/130_in_helsinki_face_quarantine_following_exposure_to_coronavirus/11235775 suggests 6 more cases - though the English is not clear. DOES this test include the January case or not? — Preceding unsigned comment added by <IP_ADDRESS> (talk) 18:02, 1 March 2020 (UTC)

WikiProject COVID-19

I've created WikiProject COVID-19 as a temporary or permanent WikiProject and invite editors to use this space for discussing ways to improve coverage of the ongoing 2019–20 coronavirus pandemic. Please bring your ideas to the project/talk page. Stay safe, -- Another Believer ( Talk ) 17:38, 15 March 2020 (UTC)

Number of dead (as at 27 March)

The THL website yesterday (26/03) reported 5 dead; later this was rolled back to 4, with no explanation (AFAIK) of why. I'll try to keep an eye on that... DoubleGrazing (talk) 08:43, 27 March 2020 (UTC)

Beginning of April: HS -> THL Data Publishing Transition

There's now a bit of confusion in the daily case data from the end of March through the beginning of April. Helsingin Sanomat apparently stopped updating its own public database, and the Finnish national health institute began making its national disease register-derived data publicly available in machine-readable format.
I'd clean up the disease progression chart if I could, but I simply cannot find any reliable source of information online about the daily case increases in that period. Everywhere seems to have gotten thrown off by the transition. For now, the information on this wikipedia page for those dates cannot be taken as accurate. When it DOES get cleaned up (as much as it can be), some explanation ought to be given as footnotes to the chart. — Preceding unsigned comment added by 2001:14BA:8054:8900:492E:C17D:731:2CE4 (talk) 09:20, 3 April 2020 (UTC) * To add fuel to the fire, an anonymous user, probably meaning well, decided to just add 300 cases from something they heard from YLE perhaps. This round number step jump in overall cases is completely implausible and discrepant with the official data source. This spreadsheet is the current data source for overall cases from THL: * I'd trust THL's fact-checking much more than the popular press. What I see is that THL's dataset changes regularly as could be due to test results taking a while to arrive. The data concerns the day on which the tests are taken. * Positivity1 (talk) —Preceding undated comment added 00:21, 5 April 2020 (UTC) The 300 was an error in the number of recoveries; I've fixed that. Finnish health authorities simply gave up on both recovery and origin tracking early on, which is why the 10 number has frozen. The addition of 247 cases on 4.4 I feel is correct and should stand. THL itself has backdated most of these new cases to the preceding days, saying they want to record them on the date tests were given, even if results are found several days later. I feel this is not comparable to other countries' health authorities and will be confusing, if we are always supposed to ignore the past few days while pending tests results roll in. 
I recommend that we simply update the chart each day with the [daily count published by THL](https://thl.fi/fi/web/infektiotaudit-ja-rokotukset/ajankohtaista/ajankohtaista-koronaviruksesta-covid-19/tilannekatsaus-koronaviruksesta) to reflect simply the number of cases known at that date. There is also some discussion of THL's backdating approach in the Talk page for the case timeline chart itself. ABR (talk) 06:28, 5 April 2020 (UTC) * The current chart conveys a misleading impression of the dynamics of the growth of overall cases in Finland. That [daily count published by THL](https://thl.fi/fi/web/infektiotaudit-ja-rokotukset/ajankohtaista/ajankohtaista-koronaviruksesta-covid-19/tilannekatsaus-koronaviruksesta) understandably gets changed retroactively by THL. Day-by-day logging [daily count published by THL](https://thl.fi/fi/web/infektiotaudit-ja-rokotukset/ajankohtaista/ajankohtaista-koronaviruksesta-covid-19/tilannekatsaus-koronaviruksesta), there tends to be an implausible drop in the number of new cases, which is at least in part due to a lag from test to result, as you can see today . The count after a few days you can take more seriously. * If you are modelling the growth of cases, this wikipedia page is certainly the wrong place to come at present for a fact-checked consensus. People will do so because they do not trust the popular press and understand how the only numbers politicians understand are money and votes... The wrong data can leave modellers crying wolf in the community. At the moment the more general reader may also come away with the false impression that there is a sudden increase and nobody recovers. * A suggestion is that, each time the chart is updated, to update all the data from THL "Kaikki tapaukset jaoteltuna päivämäärän mukaan" up until n days ago and not include the last n-1 days in the chart. n is about 4 at the moment, and will depend on if there becomes a backlog in determining whether a patient is a case or not. 
What I like about the chart is it gives a cumulative value. You can get the excel csv of new cases by substituting in csv for json in the url at "Kaikki tapaukset jaoteltuna päivämäärän mukaan". Some kind of comment next to the chart about stopping tracking recovery would inform the readership well. Perhaps the term "active cases" needs revising. * Positivity1 (talk) —Preceding undated comment added 10:01, 5 April 2020 (UTC) * This proposal (update only up until n days ago) seems reasonable, as does updating each day once per the THL daily cumulative total on that day. The big jump on 4.4 might be a one-time event due in some way to the HS-THL changeover, or it might be something that will happen from time to time regardless due to fluctuations in Finland's laboratory processing and case registration. ABR (talk) 11:01, 5 April 2020 (UTC) * HS although quite a reputable source is neither completely politically unbiased nor statistically talented. Indeed HS have a long history of balking up the stats. The proposed five suggestions for revision are to have: * - THL's cumulative total * - separately the cumulative chart up until n days ago. (I will give it a few days still to be sure what that n is currently) * - editors replacing all the numbers in the chart with those derived accurately from * - instead of the label "active cases" in the chart, "surviving cases" * - a separate note of the current estimated number recovered, with a reference about how tracking recovery stopped. * Ladies and Gentlemen, please do we all agree on these suggestions? * Positivity1 (talk) —Preceding undated comment added 13:11, 6 April 2020 (UTC) * If You want to use the data from THL daily update, data is as follows. The Cases and add is from the source mentioned, and Test is the subtraction of the numbers from the source above, so that 2x sources match: * Date;Cases;Add;Test * 7.4.2020;2308;132;? 
https://www.is.fi/kotimaa/art-2000006466770.html * 6.4.2020;2176;249;2176 https://www.laakarilehti.fi/ajassa/ajankohtaista/suomessa-on-todettu-2-176-koronavirustartuntaa/ * 5.4.2020;1927;45;1927 https://ls24.fi/lannen-media/suomessa-on-nyt-1-927-varmennettua-koronatartuntaa-yli-neljannes-tartunnoista-on-todettu-helsingissa * 4.4.2020;1882;267;1882 https://www.uusisuomi.fi/uutiset/thl-juuri-nyt-suomessa-1882-koronatartuntaa-lahes-200-uutta-raportoitua-tapausta/fa3c52d4-6c66-474d-b2eb-adadb5adb252 * 3.4.2020;1615;97;1615 https://www.laakarilehti.fi/ajassa/ajankohtaista/varmistettuja-koronatapauksia-nyt-1-615/ * 2.4.2020;1518;72;1518 https://www.is.fi/kotimaa/art-2000006461321.html * 1.4.2020;1446;62;1446 https://www.uusisuomi.fi/uutiset/tama-on-suomen-koronatilanne-thl-laajat-vasta-ainetutkimukset-eivat-nyt-kannata/d373a8db-3b7d-4b03-a1c7-bfd9a152c25f * 31.3.2020;1384;71;1384 https://www.is.fi/kotimaa/art-2000006459270.html * 30.3.2020;1313;95;1313 https://www.is.fi/kotimaa/art-2000006457692.html * 29.3.2020;1218;55;1218 https://www.satakunnankansa.fi/a/e91dcf44-150c-4adc-8048-0506f33b90cf * 28.3.2020;1163;138;1163 https://www.satakunnankansa.fi/a/ade00d2e-2e91-4939-a9d8-3cd49ea7d828 * 27.3.2020;1025;;1025 https://www.iltalehti.fi/koronavirus/a/24a5ac9f-7daa-4078-9ea7-9be3828d7f9b * No double checking below * 26.3.2020;958;; https://www.karjalainen.fi/uutiset/uutis-alueet/kotimaa/item/246315 * 25.3.2020;880;; https://www.is.fi/kotimaa/art-2000006451970.html * Something to think about. Cases released on 4th April (267 new) were allocated to (were the test results of) almost 20 different days. So you want to update the chart by test date, there will be a huge delay in getting the figures to settle. — Preceding unsigned comment added by Niemri (talk • contribs) 03:38, 7 April 2020 (UTC) * Thank you very much for these helpful references of which I was previously unaware, although I had noticed the change in the data. 
How about including something below the chart citing these articles next to THL's cumulative count, cautioning that there can be a delay in getting the figures to settle by test date? * Positivity1 (talk) —Preceding undated comment added 08:10, 7 April 2020 (UTC) * Here's my take on some of these proposals: * Update the chart by test date, not by result date. I agree with Positivity1, in that it's fine if the figures don't immediately settle, as long as we mention it below the chart. * Include a separate note explaining that recoveries are not actively monitored. This footnote is a good example on how to word it. * Do not change the term "active cases". This is the term used by default on the other country templates, and the term "surviving cases" isn't any less confusing. As long as the lack of recorded recoveries is explained, "active cases" works. * Improve the readability of this talk page. We should indent our replies (as per WP:THREAD) and sign our posts (WP:SIGNATURE) appropriately. As it stands, this talk page is very confusing to read. * —Rutlandbaconsouthamptonshakespeare (talk) 10:00, 7 April 2020 (UTC) * Thank you all very much for your thoughts. Looking here, please understand, I foresee a risk that making these improvements would not be a constructive use of my time. When what one writes gets tarred as opinion, the effort can prove fruitless, and regrettably I have my own highly subjective life to lead. Given what I know, in my opinion, editing wikipedia can become more about tenacity than genuine consensus. I thank ABR for referring me to that discussion. As for n, what I see today, April 8th, in my mathematical opinion from looking at which numbers changed from April 6th to 7th, is that THL's daily new case data from March 1-30 remains stable, with updates occurring in the intervening days. One of the issues in the change of figures that Niemri mentions is some laboratories getting synced up with THL.
I wish you well in providing the international readership with an accurate description of the facts. * Positivity1 (talk) —Preceding undated comment added 22:10, 7 April 2020 (UTC) * Positivity1 I apologize if my statement about "opinion" came across as negative. It seemed like something was being stated as fact and I wanted to point out (probably unnecessarily) otherwise. Of course all we have with this messy real-world situation is opinions and I don't have a strong one here, nor should it matter. Unless there is some automation, though, editing numbers for multiple days each update might be laborious and end up getting done less. * ABR (talk) 13:35, 8 April 2020 (UTC) I don't have a clear opinion on how we should present the data. However, I believe we should pick one way or the other, not mix them. I agree with Positivity1 that updating the cases by the sample date requires quite a lot of effort, and I probably won't be updating the figures if we go that way. If I checked it correctly, on 8th Apr THL allocated 179 new cases to the following sample dates: 26 Mar: 1, 2 Apr: 2, 3 Apr: 1, 4 Apr: 4, 5 Apr: 12, 6 Apr: 73, 7 Apr: 86. The chart can be corrected either way. THL keeps a record of cases by the sample date, and various news sources have reported the daily figures by the result date. Niemri (talk) 04:02, 9 April 2020 (UTC) The total cases data seemed to be culled from YLE ( https://yle.fi/uutiset/3-11300232 ) which does not exactly match the data from the THL karttasovellus (thl.fi/koronakartta). These sources disagree slightly and the YLE data is a bit harder to extract, as one must either scroll over each date or parse the source, read the bar height and divide by 1.74 (this might also be system/browser dependent). Additionally, it is not possible in the YLE data to properly align the x-axis, as the labels are placed only approximately. I presume the THL data is more reliable, but it only goes as far back as 9.4.
I therefore used the THL data where available and to align the x-axis, but continued using the YLE data for the historical data preceding 9.4 (Levi Keller) <IP_ADDRESS> (talk) 18:35, 24 May 2020 (UTC) The data from THL for the full range can be extracted easily from the source for the cumulative view. Data now reflects sample date. <IP_ADDRESS> (talk) 10:33, 10 June 2020 (UTC) * The difference between the THL koronakartta and the THL daily bulletin (=numbers in YLE) is that in koronakartta new cases are split across the days when the tests were taken. Numbers in the koronakartta can also change retroactively when THL fixes incorrect data. I.e. the koronakartta data is better quality, but it is not stable, especially for the last 5 days. In YLE the number is the cumulative total of cases, which increases every day. Here are visualisations of the delay. --Zache (talk) 11:02, 10 June 2020 (UTC)

Lethality by age?

Anyone willing to add a table like Template:2019–20 coronavirus pandemic data/Italy medical cases based on data at https://thl.fi/en/web/infectious-diseases/what-s-new/coronavirus-covid-19-latest-updates/situation-update-on-coronavirus and https://experience.arcgis.com/experience/92e9bb33fac744c9a084381fc35aa3c7 (copied to https://www.statista.com/statistics/1103926/number-of-coronavirus-cases-in-finland-by-age-group)? Ain92 (talk) 22:27, 5 May 2020 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion: * Toilet paper shortage.jpg. Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 17:07, 26 June 2020 (UTC)

National Emergency Supply scandal

This section is currently unreadable. Could someone who is knowledgeable about this do an edit? From gibberish to English?
— Preceding unsigned comment added by <IP_ADDRESS> (talk) 21:54, 16 October 2020 (UTC)

Main article missing covid spike cases from Russia

I think it would be beneficial if the main article had some information about the recent spike in covid cases after football fans arrived from Russia. https://yle.fi/uutiset/osasto/news/dozens_of_coronavirus_cases_among_football_fans_returning_from_russia/11996510 — Preceding unsigned comment added by Don Colorodo (talk • contribs) 08:45, 23 July 2021 (UTC)
WIKI
What are palpitations?

Steady electrical impulses make the heart beat with such regularity that you don't even notice it. But if the system develops a glitch, you may experience palpitations – a fluttering or pounding sensation in your chest – as your heart beats too fast or 'skips a beat'.

What causes palpitations?

Palpitations occasionally indicate a serious heart problem, but most cases are caused by fatigue, worry, illness or stress. Although they can make you feel anxious, they don't usually need medical treatment and can usually be relieved quite easily.

Recurring palpitations

Unless you have a history of heart disease, there's generally no reason to see your doctor about palpitations unless they occur more than once a week, become more frequent, or are accompanied by a feeling of light-headedness or dizziness. You should talk to your doctor if you get palpitations often and have other signs of thyroid over-activity such as weight loss, fatigue or insomnia. If you faint or experience tightness in your chest accompanied by nausea and sweating, call 999 immediately; you might be having a heart attack.
ESSENTIALAI-STEM
##
#
# This script opens one TextGrid with segments in one Tier, computes the durations of every segment
# that has a label and writes the result in milliseconds to a file whose name starts with the filename and
# ends with 'durations.txt'. This file is deleted without warning prior to writing anything to it.
#
# Version 1.0, Henning Reetz, 25-jun-2007
# Version 1.1, Henning Reetz, 13-jun-2009; inserted 'tier' as a variable
# Version 2.0, Henning Reetz, 09-dec-2014; new Praat script syntax; correct interval counter
# Version 2.1, Henning Reetz, 16-dec-2014; added 'removeObject:'
#
# Tested with Praat 5.4.0
#
##

# clear information window
clearinfo

# set one file name
base_name$ = "g071a000"

# set the tier to be analysed (very helpful to declare it here once; 'tier' is used at many places in the script)
tier = 1

# construct a name for the result file by adding '_durations.txt' to the base name
result_file$ = base_name$ + "_durations.txt"

# Read and select the .TextGrid file (note that we do not need the sound file itself!)
Read from file: "'base_name$'.TextGrid"
# selecting is not really necessary here; just to be sure
selectObject: "TextGrid 'base_name$'"

# check whether tier 'tier' is an interval tier; get this information first
tier_one_is_interval = Is interval tier: 'tier'
if tier_one_is_interval <> 1
    # in case it is not an interval tier, inform the user and stop the program
    printline 'newline$' Tier 'tier' of 'base_name$'.TextGrid is not an interval tier. 'newline$' Job aborted.
    exit
endif

# otherwise (i.e. tier 'tier' is an interval tier) assume that there are some labeled intervals
# in this tier and start working; get the number of intervals of tier 'tier'
number_of_intervals = Get number of intervals: 'tier'

# Delete any pre-existing variant of the output file and write a header to it.
# The header line is first stored in a variable 'header_row$' which is then written to the file.
filedelete 'result_file$'
header_row$ = "File" + tab$ + "Label" + tab$ + "Duration (ms)" + newline$
header_row$ > 'result_file$'

# preset a counter for labelled intervals to inform the user at the end of the script
nr_labelled_intervals = 0

# go through all intervals
for i to number_of_intervals
    # get the label of the 'i'th interval
    interval_label$ = Get label of interval: 'tier', 'i'
    # check whether the label is not empty, i.e. a name is given
    if interval_label$ <> ""
        # this segment has a label, so get the time of the beginning of the segment
        begin_interval = Get starting point: 'tier', 'i'
        # get the end of the segment
        end_interval = Get end point: 'tier', 'i'
        # the duration in ms is the distance between beginning and end, times 1000
        duration = (end_interval - begin_interval) * 1000
        # write the data, separated by tabulators, to the result file.
        # Note that there is a space after "result_file$" to separate the channel the data
        # is written to from the actual stuff that is sent to this channel.
        fileappend 'result_file$' 'base_name$''tab$''interval_label$''tab$''duration:2''newline$'
        # increase the labelled-intervals counter
        nr_labelled_intervals += 1
    # in case there was not a labeled segment, we join here again
    endif
# the variable 'i' is increased by one
endfor

# we come here when the variable 'i' is larger than 'number_of_intervals';
# tell the user that we're done
printline 'nr_labelled_intervals' durations of labels written to 'result_file$'.
# cleanup: remove the TextGrid object
removeObject: "TextGrid 'base_name$'"
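The core computation of the script above (skip unlabeled intervals, report each labeled interval's span in milliseconds with two decimals) can be mirrored outside Praat as a cross-check. A minimal Java sketch, assuming the tier has already been parsed into (start, end, label) triples; the `Interval` record and the sample values are hypothetical stand-ins, not Praat output:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class LabelDurations {

    // One interval of a TextGrid tier: start/end in seconds, plus label text.
    record Interval(double start, double end, String label) {}

    // Mirror of the Praat loop: skip empty labels and report
    // (end - start) * 1000 with two decimals, tab-separated.
    static List<String> durations(List<Interval> tier) {
        List<String> rows = new ArrayList<>();
        for (Interval iv : tier) {
            if (!iv.label().isEmpty()) {
                double ms = (iv.end() - iv.start()) * 1000.0;
                rows.add(iv.label() + "\t" + String.format(Locale.ROOT, "%.2f", ms));
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Interval> tier = List.of(
                new Interval(0.00, 0.25, ""),    // unlabeled: skipped
                new Interval(0.25, 0.40, "a"),
                new Interval(0.40, 0.72, "t"));
        durations(tier).forEach(System.out::println);
    }
}
```

Reading a real .TextGrid would replace the hard-coded list with a parser for Praat's text format.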
ESSENTIALAI-STEM
Page:Malabari, Behramji M. - Gujarat and the Gujaratis (1882).djvu/43

Is it strange, then, that the collectors should try to give Sayed Edroos another lease of life as councillor to Sir James Fergusson? But this sort of happy-family arrangement will not do in these days—at least, I trust, not in the days of Sir James. The Sayed Sáheb is an exploded myth. He has ceased to believe in himself, and even his existence has become a matter of doubt. The Gujarát people, therefore, will have none of him. The Presidency protests against the contemplated jobbery of a second term for the Sayed. They say: Let us have anybody else,—Mr. Cumu Sulliman, or even Ismal Khán, butler to the Collector of Cobblington; but no Sayed Edroos—they have had enough of his name in official reports of "members present." Let him live, retired—

As worthy a man as the Sayed is a certain Mohlá of this province. The Mohlaji is at present in hot water. He is the head of a
WIKI
U.S. agents fire tear gas as some migrants try to breach fence TIJUANA, Mexico — U.S. border agents fired tear gas on hundreds of migrants protesting near the border with Mexico on Sunday after some of them attempted to get through the fencing and wire separating the two countries, and American authorities shut down the nation’s busiest border crossing from the city where thousands are waiting to apply for asylum. The situation devolved after the group began a peaceful march to appeal for the U.S. to speed processing of asylum claims for Central American migrants marooned in Tijuana. Mexican police had kept them from walking over a bridge leading to the Mexican port of entry, but the migrants pushed past officers to walk across the Tijuana River below the bridge. More police carrying plastic riot shields were on the other side, but migrants walked along the river to an area where only an earthen levee and concertina wire separated them from U.S. Border Patrol agents. Some saw an opportunity to breach the crossing. An Associated Press reporter saw U.S. agents shoot several rounds of tear gas after some migrants attempted to penetrate several points along the border. Mexico’s Milenio TV showed images of migrants climbing over fences and peeling back metal sheeting to enter. Honduran Ana Zuniga, 23, also said she saw migrants opening a small hole in concertina wire at a gap on the Mexican side of a levee, at which point U.S. agents fired tear gas at them. Children screamed and coughed. Fumes were carried by the wind toward people who were hundreds of feet away. “We ran, but when you run the gas asphyxiates you more,” Zuniga told the AP while cradling her 3-year-old daughter Valery in her arms. Mexico’s Interior Ministry said around 500 migrants tried to “violently” enter the U.S. The ministry said in a statement it would immediately deport those people and would reinforce security. As the chaos unfolded, shoppers just yards away on the U.S. 
side streamed in and out of an outlet mall, which eventually closed. Throughout the day, U.S. Customs and Border Protection helicopters flew overhead, while U.S. agents held vigil on foot beyond the wire fence in California. The Border Patrol office in San Diego said via Twitter that pedestrian crossings were suspended at the San Ysidro port of entry at both the East and West facilities. All northbound and southbound traffic was halted for several hours. Every day more than 100,000 people enter the U.S. there. Homeland Security Secretary Kirstjen Nielsen said in a statement that U.S. authorities will continue to have a “robust” presence along the Southwest border and that they will prosecute anyone who damages federal property or violates U.S. sovereignty. “DHS will not tolerate this type of lawlessness and will not hesitate to shut down ports of entry for security and public safety reasons,” she said. More than 5,000 migrants have been camped in and around a sports complex in Tijuana after making their way through Mexico in recent weeks via caravan. Many hope to apply for asylum in the U.S., but agents at the San Ysidro entry point are processing fewer than 100 asylum petitions a day. Irineo Mujica, who has accompanied the migrants for weeks as part of the aid group Pueblo Sin Fronteras, said the aim of Sunday’s march toward the U.S. border was to make the migrants’ plight more visible to the governments of Mexico and the U.S. “We can’t have all these people here,” Mujica told The Associated Press. Tijuana Mayor Juan Manuel Gastelum on Friday declared a humanitarian crisis in his border city of 1.6 million, which he says is struggling to accommodate the crush of migrants. President Donald Trump took to Twitter on Sunday to express his displeasure with the caravans in Mexico. 
“Would be very SMART if Mexico would stop the Caravans long before they get to our Southern Border, or if originating countries would not let them form (it is a way they get certain people out of their country and dump in U.S. No longer),” he wrote. Mexico’s Interior Ministry said Sunday the country has sent 11,000 Central Americans back to their countries of origin since Oct. 19, when the first caravan entered the country. It said that 1,906 of those who have returned were members of the recent caravans. Mexico is on track to send a total of around 100,000 Central Americans back home by the end of this year.
NEWS-MULTISOURCE
Talk:Hugh de Neville Milhist Any objection to a Milhist tag? I don't plan to tag every medieval article, but it looks like there's a connection here. - Dank (push to talk) 17:23, 13 May 2012 (UTC) * No objection. Although there is one for the crusades task force as you can see above... Ealdgyth - Talk 17:25, 13 May 2012 (UTC) * Oops. - Dank (push to talk) 18:15, 13 May 2012 (UTC) bad data "Neville stated in 1298 that over the previous six and a half years the amount raised by the various revenues of the forests had been £15,000;[18] in 1212 it had been £4,486.[17] " Since de Neville died in 1234, the dates in this snippet must be wrong. Go back to your sources and make sure you copy correctly this time. <IP_ADDRESS> (talk) 12:55, 3 August 2015 (UTC) Separate photo in infobox I vote remove the photo of the Church from the infobox and place it somewhere in the rest of the content. --Tannermessage me 13:30, 14 March 2017 (UTC) * Generally, we don't vote on wikipedia. We instead discuss and arrive at consensus. To do that, we'll need to understand why you feel the need to move the image. Ealdgyth - Talk 13:32, 14 March 2017 (UTC)
WIKI
Traffic (miniseries) Traffic is a three-part miniseries that aired on the United States cable channel USA Network in 2004, featuring an ensemble cast portraying the complex world of drugs, their distribution, the associated violence, and the wide variety of people whose lives are touched by it all. Production The miniseries was inspired by the 1989 Channel 4 television miniseries Traffik and the 2000 motion picture Traffic directed by Steven Soderbergh. Reception The American version was nominated for three Emmy Awards.
WIKI
sex-linked

Adjective
1. Of a mutation or other genetic feature, carried on a chromosome shared by both the male and female of the species (e.g. the X-chromosome in mammals or the Z-chromosome in birds).
WIKI
Dance in NYC This Week
Our guide to dance performances.

CAMILLE A. BROWN & DANCERS at the Alexander Kasser Theater at Montclair State University (Feb. 1-2, 7:30 p.m.; Feb. 3, 8 p.m.; Feb. 4, 3 p.m.). Peak Performances hosts Camille A. Brown’s “ink,” the final dance in a trilogy of works investigating culture, race and identity. After looking at the stereotypes that black men face in “Mr. TOL E. RAncE” and childhood games in “BLACK GIRL: Linguistic Play,” Ms. Brown turns her choreographic lens to rituals, gestural vocabulary and traditions of the African diaspora with an aim of reclaiming “the narratives of African-Americans through self-empowerment, black love, brotherhood, exhaustion and resilience, community and fellowship.” 973-655-5112, peakperfs.org

COMPAGNIE HERVÉ KOUBI at the Joyce Theater (Jan. 30-31, 7:30 p.m.; Feb. 1-3, 8 p.m.; Feb. 4, 2 p.m.). The French company unveils the highly physical “What the Day Owes to the Night,” a work for 12 French-Algerian and African dancers. Choreographed by Mr. Koubi, it is inspired by his father’s deathbed revelation that his family hailed not from France, but from Algeria. The resulting production, which features capoeira, martial arts and contemporary dance, is Mr. Koubi’s energetic exploration of his roots. 212-242-0800, joyce.org

JULIUS EASTMAN AND DANCE: MOLISSA FENLEY, ANDY DE GROAT, AND MORE at the Kitchen (Jan. 30, 8 p.m.). In honor of the Minimalist musician and composer Julius Eastman, the Kitchen hosts “Julius Eastman: That Which Is Fundamental,” a series of performances and a two-part exhibition. This evening focuses on his work in dance; the highlight is a reprise of Molissa Fenley’s “Geologic Moments” (1986), a slowly building dance for six that was developed with the composer. The program also depicts Eastman’s work as a choreographer in previously unseen video. An influential composer during the 1970s and 1980s, he died in 1990 at 49. 212-255-5793 ext. 
11; thekitchen.org

LUMBERYARD IN THE CITY WINTER FESTIVAL at New York Live Arts (Through Feb. 10). After Kei Takei/Moving Earth Orient Sphere wraps up performances on Jan. 27, the playwright and performance artist Robbie McCauley presents the New York premiere of “Sugar” (Feb. 1-3) as part of this festival that pays homage to established female dance and performance artists. In “Sugar,” a solo show directed by Maureen Shea, Ms. McCauley performs an autobiographical piece focusing on her battle with diabetes and explores how food relates to issues of race. The festival continues with the choreographer, dancer and visual artist Dana Reitz (Feb. 8-10). 855-459-3849, lumberyard.org

NEW YORK CITY BALLET at the David H. Koch Theater (through March 4). The winter season continues with “dance odyssey,” a new work by the corps de ballet member Peter Walker featuring a principal cast of Ashley Laracey, Tiler Peck, Adrian Danchig-Waring, Zachary Catazaro, Anthony Huxley and Devin Alberda. Presented as part of the company’s New Combinations Evening on Feb. 1, it marks Mr. Walker’s second work for City Ballet and is set to an arrangement of selections from several of the British composer Oliver Davis’s works. That same evening is capped by Alexei Ratmansky’s singular “Russian Seasons” and, in it, debuts for Maria Kowroski, Kristen Segin, Cameron Dieck and Troy Schumacher. 212-496-0600, nycballet.com

MINA NISHIMURA at Danspace Project (Feb. 1-3, 8 p.m.). The Japanese choreographer and dancer presents the premiere of “Bladder Inn (and X, Y, Z, W),” named in honor of Ms. Nishimura’s use of language to conjure internal landscapes. In the piece, she invites spectators and performers to focus attention on peripheral details of the architecture and sound of St. Mark’s Church, which houses Danspace Project. Here, Ms. Nishimura performs alongside seven dancers to music composed by Masahiro Sugaya. 866-811-4111, danspaceproject.org

DAVID THOMSON at Performance Space New York (Jan. 
31-Feb. 2, 7:30 p.m.; Feb. 4, 3 p.m.). In “he his own mythical beast,” the choreographer David Thomson offers what he calls “a meditation on the mythologies and contradictions of identity, race, gender and the black body in postmodern American culture.” His influences for the work range from Alfred Hitchcock’s “Rear Window” and James Baldwin to high school fights and Trisha Brown. The work’s guide? Venus, a character named after the Hottentot Venus, or Sarah Baartman, a black woman who was placed on exhibit in London and Paris in the early 19th century. Peter Born, along with Mr. Thomson, is the work’s director. 212-352-3101, performancespacenewyork.org
NEWS-MULTISOURCE
User:Maggie4461

Last year I moved to Sydney to continue my degree, a Bachelor of Arts majoring in Film Studies and Advanced Studies majoring in Marketing, at the University of Sydney. While it was difficult at first to move away from my family and the beautiful Queensland beaches, I have found Sydney to be a place where I can grow and further explore my passion for filmmaking.
WIKI
Carmakers warn against tough emissions caps ahead of EU vote BRUSSELS (Reuters) - The European carmakers’ lobby on Monday warned that excessively steep cuts in carbon dioxide emissions limits on cars and vans could harm the industry and cost jobs ahead of a vote by the European Parliament on the new targets. EU lawmakers will vote on Wednesday on imposing a stricter CO2 limit of 45 percent by 2030 than the EU executive’s initial proposals of 30 percent - setting the stage for tough talks with national governments this year on the final law. “The more aggressive the CO2 reduction targets are, the more disruptive the socio-economic impacts will be,” Erik Jonnaert, the head of the European Automobile Manufacturers’ Association (ACEA), said in a statement. Weighing in on a clash between concerns about industry competitiveness and climate goals, ACEA said that tougher limits could slow growth in a sector which it said employs over six percent of EU workers. “The stakes of Wednesday’s vote are extremely high for the entire sector,” Jonnaert said. He added that while carmakers are investing in electric vehicles, sales are still low, and governments should invest more in charging infrastructure and buy-side incentives. Environment campaigners, however, say that ambitious targets for the transport sector - the only one where greenhouse gas emissions are still rising - are needed to meet the bloc’s overall climate goal of reducing pollution by 40 percent by 2030 compared to 1990 levels. “It will be a big battle to maintain minus 45 (percent) on the table,” said EU lawmaker Bas Eickhout, a member of the Green group, which has pushed for the higher targets in European Parliament. Reporting by Daphne Psaledakis, Editing by Alissa de Carbonnel, William Maclean
NEWS-MULTISOURCE
Well-Preserved Woolly Mammoths
The remains of woolly mammoths, ancient relatives of modern elephants, have been discovered all over the world, from Alaska to Siberia. These gentle giants had tusks that could reach up to 15 feet in length and could weigh several tons, although they weren’t necessarily the largest species of mammoths. Remains of these creatures have been dated to between 39,000 and 40,000 years ago. In 2013, researchers discovered perhaps the most well-preserved woolly mammoth to date, deep in Siberia in a tomb of ice. They believe that the female mammoth lived nearly 40,000 years ago. The body was still so intact that it even contained some blood.

The Grave of Richard the Third
The grave of Richard the Third was discovered in a parking lot in August of 2012 by archaeologists from the University of Leicester. The English king is thought to have been buried there for over 500 years. Richard the Third is said to have died at the Battle of Bosworth at only 32 years old. What was notable for archaeologists was how the king appeared to have been buried without the traditional respect with which a royal would be laid to rest. By the looks of his remains, archaeologists believe he was buried without ceremony, and not even in a coffin. Historians assume that whoever buried him was not a supporter of the king.

Tutankhamun’s Tomb
Egypt holds many incredible archaeological treasures that have revealed much about the sociological traditions of ancient times. One of the most fascinating discoveries is Tutankhamun’s Tomb, which was found in 1922 by a British archaeologist named Howard Carter. What is remarkable about Tutankhamun’s Tomb is that the walls are overlaid with gold, and his sarcophagus itself is made of solid gold too! Additionally, it is believed there are two hidden rooms in his tomb, but it is not yet clear what may be inside.

The Canadian Ice Man
The body was discovered in Canada in 1999, and scientists dated the corpse to be between 300 and 600 years old. Aside from the well-preserved body, the hunters who discovered him also found his walking stick and a fur coat. What’s interesting about this case is that through DNA testing, researchers were able to find over 15 living relatives of the man in the ice, who had simply been dubbed the “Canadian Ice Man.”

A Mysterious Tibetan Skull
Millions of people are fascinated with the idea of looking for bargains at markets and antique shops. One such success story happened in Vienna back in 2011, when a 300-year-old skull was found in a small antique shop. According to the Austrian store owner, the skull belonged to a man who provided medical assistance to Tibetan monks. The man’s teeth and skull were perfectly preserved, and the skull was carved with various depictions of the macabre. Its true origins remain a mystery, but it is still considered a rare piece of history.
FINEWEB-EDU
Wikipedia:Featured article removal candidates/Ian McKellen Ian McKellen * Article is no longer a featured article. somehow i doubt that this entire article was written from one single interview source. if it *was* then i seriously doubt the reliability of much of the article. an encyclopedia requires distance from the subject, just quoting primary source material only is unacceptable, especially when it comes from an agenda-pushing publication. it certainly fails FAC1, far better biographical articles exist; it fails FAC2 on numerous counts, being non-comprehensive (over-domination of sexuality issues at expense of anything else - did this guy's long life and career only amount to this?), what about critical reception, both positive and negative, for instance? factual accuracy is dubious if all info comes from one LGBT magazine, the lead fails to summarize the article failing FAC3, fairuse images that are tagged with invalid templates, failing FAC4, and even then no rationale is given, finally it is far too short to meet suitable length criteria (failing FAC5) - most of the article is padding with an extensive list. in short, not FA material. Zzzzz 21:44, 13 March 2006 (UTC) * remove per nom Zzzzz 21:46, 13 March 2006 (UTC) * Remove per nom --Subsurd 22:20, 13 March 2006 (UTC) * Remove per nom: too much reliance on one source, too much mention of his sexuality, not enough mention of his work as actor. Why does each section need to mention what relationship he was in at the time? There's a hint of POV in this article. -- 21:54, 17 March 2006 (UTC) * Remove per nom. -Mask [[Image:Flag_of_Alaska.svg|20 px]] Talk 00:36, 18 March 2006 (UTC) * Remove per nomination. ~Linux'''erist L / T 17:05, 18 March 2006 (UTC) * Remove - the nomination sums up my opinion quite well. Rossrs 08:50, 27 March 2006 (UTC)
WIKI
Why Do I Keep Getting Ingrown Toenails?

If you’ve ever experienced the pain and swelling of an ingrown toenail, you already know how much it hurts. Having an ingrown toenail can interfere with your life far more than you might expect. The bad news is that if you’ve gone through this once, you’re likely to go through it again. Certain health conditions, such as diabetes, make it more likely that ingrown toenails will recur, and other conditions can also make them a recurring problem. Ingrown toenails are caused by the edge of your nail growing into the surrounding tissue. The providers at Great Lakes Foot and Ankle Institute explain why you’re likely to keep getting ingrown toenails, and how you can stop the cycle.

Why you need treatment
You might think that because an ingrown toenail seems pretty minor, you can just ignore it and hope that it will go away on its own. But this is the wrong approach. An untreated ingrown toenail can become infected, and the infection can even spread to your bloodstream and become serious. You can get serious infections like staph or MRSA from ingrown toenails, because the bacteria that cause these diseases live on your skin, and an ingrown toenail gives those bacteria a way into your body. In some cases, ingrown toenails can even progress to a condition called gangrene, which usually requires surgery to remove dead or dying tissue.

Trim your toenails the right way
Many cases of ingrown toenails are caused by improper trimming technique. You may feel tempted to trim your toenails in a rounded shape, but this is a common cause of ingrown toenails. Trim your toenails straight across. If you have your toenails trimmed at a nail salon, make sure the technician trims them straight. If you have diabetes or can’t reach your toes, please call our office to have your toenails trimmed for you.
Wear the right-fitting shoes Wearing shoes that don’t fit well is another common cause of ingrown toenails, especially recurrent ingrown nails. Your shoes and socks shouldn’t fit tightly anywhere. This is one reason teenagers are more likely to get ingrown toenails: their feet often grow quickly, and they may not replace their shoes frequently enough. Trim your nails to the right length You may feel tempted to trim your nails just a bit shorter so you don’t have to do it as often. While the intentions behind this are good, it also makes it more likely that you’ll develop ingrown toenails.  Your toenails should be trimmed to be about the same length as the end of your toes. Trimming them shorter makes it more likely that pressure from your shoes will cause the edge of your toenails to grow into the surrounding tissue. Inspect your feet more often if you have diabetes If you have diabetes, you’re more likely to develop ingrown toenails, in addition to other foot conditions. This is because diabetes causes poor blood flow to your feet. You should inspect your feet on a regular basis, especially when you have diabetes. Make regular appointments with your podiatrist as well — a podiatrist is an essential member of your health care team if you have diabetes. Whether you think that you may have an ingrown toenail or you’re simply due to have your feet inspected, our providers at Great Lakes Foot and Ankle Institute are ready to help. Contact us by calling the Illinois or Michigan office most convenient to you, or get in touch online today.
ESSENTIALAI-STEM
Talk:Stan Rofe King or Prince of Moomba? This article claims that Stan Rofe was "crowned King of Moomba in 1968". Melbourne City Council commissioned a history of Moomba which was written by Craig Bellamy, Gordon Chisholm and Hilary Erikson, (2006), Moomba: A festival for the people and can be read on-line at: http://www.melbourne.vic.gov.au/rsrc/PDFs/Moomba/History%20of%20Moomba.pdf On p 26 we find that the second King of Moomba was appointed in 1968 and was British Actor, Alfred Marks. Furthermore, Stan Rofe's brother Roy Rofe together with Ian B Allen and Bob Hayden state: "[Stan]... was the Prince of Moomba in 1968..." in their retrospective Legendary Australian R & R Disc Jockey passes away found at: http://www.rockabillyhall.com/ThatsNewToMe11.html I contend that Stan Rofe was a Prince of Moomba not a King of Moomba. This is despite the attribution of King of Moomba to him by the Age newspaper in their obituary written by Patrick Donovan (May 17 2003): http://www.theage.com.au/articles/2003/05/16/1052885400356.html If there are no reasonable objections I intend editing this article to reflect the contention that Stan Rofe was a Prince of Moomba; after seven days from this note. Shaidar cuebiyar 15:10, 10 August 2007 (UTC) External links modified (January 2018) Hello fellow Wikipedians, I have just modified 3 external links on Stan Rofe. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. 
I made the following changes: * Corrected formatting/usage for http://www.melbourne.vic.gov.au/rsrc/PDFs/Moomba/History%20of%20Moomba.pdf * Corrected formatting/usage for http://www.abc.net.au/longway/episode_1/ * Added archive https://www.webcitation.org/6YiGE4tE6?url=http://erl.canberra.edu.au/uploads/approved/adt-AUC20050509.095456/public/02whole.pdf to http://erl.canberra.edu.au/uploads/approved/adt-AUC20050509.095456/public/02whole.pdf Cheers.— InternetArchiveBot (Report bug) 07:38, 20 January 2018 (UTC)
WIKI
Talk:ἀθανασία Conjugation The module is having problems generating a paradigm here. — Eru·tuon 01:38, 26 October 2016 (UTC) Aha! There was an extra accent mark on it that I did not notice. The module error was not very informative, so I did not know to look for that... — Eru·tuon 05:51, 30 October 2016 (UTC)
WIKI
@article {650501, title = {Neutrophil Elastase Inhibitors Suppress Oxidative Stress in Lung during Liver Transplantation}, journal = {Oxid Med Cell Longev}, volume = {2019}, year = {2019}, month = {2019}, pages = {7323986}, abstract = {Background: Neutrophil infiltration plays a critical role in the pathogenesis of acute lung injury following liver transplantation (LT). Neutrophil elastase is released from neutrophils during pulmonary polymorphonuclear neutrophil activation and sequestration. The aim of the study was to investigate whether the inhibition of neutrophil elastase could lead to the restoration of pulmonary function following LT. Methods: In experiments, lung tissue and bronchoalveolar lavage fluid (BALF) were collected at 2, 4, 8, and 24 h after rats were subjected to orthotopic autologous LT (OALT), and neutrophil infiltration was detected. Next, neutrophil elastase inhibitors, sivelestat sodium hydrate (exogenous) and serpin family B member 1 (SERPINB1) (endogenous), were administered to rats before OALT, and neutrophil infiltration, pulmonary oxidative stress, and barrier function were measured at 8 h after OALT. Results: Obvious neutrophil infiltration occurred from 2 h and peaked at 8 h in the lungs of rats after they were subjected to OALT, as evidenced by an increase in naphthol-positive cells, BALF neutrophil elastase activity, and lung myeloperoxidase activity. Treatment with neutrophil elastase inhibitors, either sivelestat sodium hydrate or SERPINB1, effectively reduced lung naphthol-positive cells and BALF inflammatory cell content, increased expression of lung HO-1 and tight junction proteins ZO-1 and occludin, and increased the activity of superoxide dismutase. 
Conclusion: Neutrophil elastase inhibitors, sivelestat sodium hydrate and SERPINB1, both reduced lung neutrophil infiltration and pulmonary oxidative stress and finally restored pulmonary barrier function.}, issn = {1942-0994}, doi = {10.1155/2019/7323986}, author = {Yao, Weifeng and Han, Xue and Guan, Yu and Guan, Jianqiang and Wu, Shan and Chen, Chaojin and Li, Haobo and Hei, Ziqing} }
ESSENTIALAI-STEM
Australia's Flight Centre FY profit falls 5.6 pct SYDNEY, Aug 24 (Reuters) - Australia’s biggest listed travel agent, Flight Centre Travel Group Ltd, said on Thursday its full-year profit fell 5.6 percent as an airfare price war cut its margins. Net profit for the 12 months to June 30 fell to A$230.8 million from A$244.6 million a year ago. That beat an average forecast of A$224.1 million from 9 analysts polled by Thomson Reuters I/B/E/S. The company declared a final dividend of 94 Australian cents per share, up from 92 cents a year ago. (Reporting by Alison Bevege; Editing by Stephen Coates)
NEWS-MULTISOURCE
Home > Spring Core

JdbcTemplate.batchUpdate in Spring
By Arvind Rai, November 07, 2013

Batch updates in Spring can be performed with JdbcTemplate. JdbcTemplate provides a batchUpdate method that takes an SQL query and an instance of BatchPreparedStatementSetter. BatchPreparedStatementSetter requires two methods to be overridden: setValues and getBatchSize. In our example we have a farmar table and we will do a batch update.

FarmarDao.java

package com.concretepage.dao;

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import com.concretepage.bean.Farmar;

@Repository
public class FarmarDao {
    private JdbcTemplate jdbcTemplate;

    @Autowired
    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public int[] farmarBatchUpdate(final List<Farmar> farmars) {
        int[] updateCnt = jdbcTemplate.batchUpdate(
            "update farmar set status = ? where age = ?",
            new BatchPreparedStatementSetter() {
                public void setValues(PreparedStatement ps, int i) throws SQLException {
                    ps.setString(1, farmars.get(i).getStatus());
                    ps.setInt(2, farmars.get(i).getAge());
                }
                public int getBatchSize() {
                    return farmars.size();
                }
            });
        return updateCnt;
    }
}

SpringTest.java

package com.concretepage;

import java.util.ArrayList;
import java.util.List;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import com.concretepage.bean.Farmar;
import com.concretepage.dao.FarmarDao;

public class SpringTest {
    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("spring.xml");
        FarmarDao farmarDao = (FarmarDao) context.getBean("farmarDao");
        Farmar f1 = new Farmar("Ram", 20, "active");
        Farmar f2 = new Farmar("Shyam", 25, "active");
        List<Farmar> list = new ArrayList<Farmar>();
        list.add(f1);
        list.add(f2);
        int[] cnt = farmarDao.farmarBatchUpdate(list);
        System.out.println("Batch size: " + cnt.length);
    }
}

Farmar.java

package com.concretepage.bean;

public class Farmar {
    private String name;
    private int age;
    private String status;

    public Farmar(String name, int age, String status) {
        this.name = name;
        this.age = age;
        this.status = status;
    }
    public String getName() {
        return name;
    }
    public int getAge() {
        return age;
    }
    public String getStatus() {
        return status;
    }
}

Download Source Code for Complete Example: jdbctemplate-batchupdate-in-spring.zip

©2019 concretepage.com
adding data to the database with the use of jtable

adding data to the database with the use of jtable - Java Beginners
how can i add data to the database with the use of jtable, and also be able to view the records from the database in the table. tnx :G

Related Tutorials/Questions & Answers:
- how update JTable after adding a row into database
- jtable-adding a row dynamically
- Java: Adding Row in JTable
- Select Employee and display data from access database in a jtable
- Adding JTable into existing Jframe
- How to refresh a jTable On adding or deleting record
- JAVA DATABASE CONNECTION WITH JTABLE
- JTable Display Data From MySQL Database
- Connecting JTable to database
- How To Display both image and data into Swing JTable which is retrieved from ms access database
- Display both image and data into Swing JTable
- jTable data problem
- CONVERT JTable DATA TO PDF FILE
- Show multiple identical rows into JTable from database
- how to use bean to retrieve data from database
- how to write a query for adding records in database
- display dinamic data in JTable
- How to update, Delete database values from jtable cells
- How to use JTable with MS-Access
- view data from jTextArea to jtable
- use data from database as hyperlink and pass the data in the hyperlink
- Java insert file data to JTable
- how to make JTable to add delete and update sql database table
- adding DSN in Data Sources
- jsp pages for dispatchaction for adding user to database
- Extract File data into JTable
- ABOUT Jtable
- How to add a columns with a button set to a Jtable built with database result set
- Java convert jtable data to pdf file
- how to show data in database ?
- Adding a New Column Name in Database Table
- adding multiples markers to google map from a mysql database
- database data in xml format
- Adding image to database through jsp or HTML page
- JTable (various short questions: displaying database data in cells, deleting records via a DELETE menu item, adding checkbox columns, selecting cells by content, displaying cell values in a JTextField, and adding values to a JTable with four columns, two of them comboboxes)
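A minimal sketch of the pattern the original question asks about: back the JTable with a DefaultTableModel, and mirror every added row into persistent storage. To keep the sketch self-contained, the "database" here is an in-memory list standing in for JDBC calls; in a real program addRow and loadAll would use a PreparedStatement ("INSERT INTO ..." / "SELECT * FROM ..."). The class, table, and column names are made up for the example.

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class JTableDbDemo {
    // Stand-in for a database table; in a real program this would be
    // a JDBC PreparedStatement executing an INSERT. (Hypothetical schema.)
    static final List<Object[]> FAKE_DB = new ArrayList<Object[]>();

    // Build the model that the JTable will display.
    static DefaultTableModel buildModel() {
        return new DefaultTableModel(new Object[] {"Roll No", "Name"}, 0);
    }

    // Add a row to both the visible table model and the backing store.
    static void addRow(DefaultTableModel model, Object[] row) {
        model.addRow(row);   // updates the JTable on screen
        FAKE_DB.add(row);    // stands in for the SQL INSERT
    }

    // Reload the model from the backing store (stands in for SELECT *).
    static void loadAll(DefaultTableModel model) {
        model.setRowCount(0);
        for (Object[] row : FAKE_DB) {
            model.addRow(row);
        }
    }

    public static void main(String[] args) {
        DefaultTableModel model = buildModel();
        JTable table = new JTable(model); // would normally go in a JScrollPane
        addRow(model, new Object[] {1, "Ram"});
        addRow(model, new Object[] {2, "Shyam"});
        loadAll(model);
        System.out.println("rows=" + table.getRowCount());
    }
}
```

Running main prints rows=2: both rows survive the round trip through the backing store, which is exactly the add-then-view behavior the question asks for.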
Important: Chrome will be removing support for Chrome Apps on all platforms. Chrome browser and the Chrome Web Store will continue to support extensions. Read the announcement and learn more about migrating your app.

Network Communications

Chrome Apps can act as a network client for TCP and UDP connections. This doc shows you how to use TCP and UDP to send and receive data over the network. For more information, see the Sockets UDP, Sockets TCP and Sockets TCP Server APIs.

Note: The previous version of the networking APIs (socket) has been deprecated.

API Samples: Want to play with the code? Check out the telnet and udp samples.

Manifest requirements

For Chrome Apps that use TCP or UDP, add the sockets entry to the manifest and specify the IP end point permission rules. For example:

"sockets": {
    "udp": {
      "send": ["host-pattern1", ...],
      "bind": ["host-pattern2", ...],
      ...
    },
    "tcp" : {
      "connect": ["host-pattern1", ...],
      ...
    },
    "tcpServer" : {
      "listen": ["host-pattern1", ...],
      ...
    }
}

The syntax of socket "host-pattern" entries follows these rules:

<host-pattern> := <host> | ':' <port> | <host> ':' <port>
<host> := '*' | '*.' <anychar except '/' and '*'>+
<port> := '*' | <port number between 1 and 65535>

See Sockets Manifest Key for a detailed description of the syntax.

Examples of socket manifest entries:

• { "tcp": { "connect" : "*:23" } } – connecting on port 23 of any host
• { "tcp": { "connect" : ["*:23", "*:80"] } } – connecting on port 23 or 80 of any host
• { "tcp": { "connect" : "www.example.com:23" } } – connecting to port 23 of www.example.com
• { "tcp": { "connect" : "" } } – connecting to any port of any host
• { "udp": { "send" : ":99" } } – sending UDP packets to port 99 of any host
• { "udp": { "bind" : ":8899" } } – binding local port 8899 to receive UDP packets
• { "tcpServer": { "listen" : ":8080" } } – TCP listening on local port 8080

Using TCP

Chrome Apps can make connections to any service that supports TCP.
Connecting to a socket

Here's a sample showing how to connect (sockets.tcp.connect) to a socket:

chrome.sockets.tcp.create({}, function(createInfo) {
  chrome.sockets.tcp.connect(createInfo.socketId, IP, PORT, onConnectedCallback);
});

Keep a handle to the socketId so that you can later receive and send data (sockets.tcp.send) on this socket.

Receiving from and sending to a socket

Receiving from (sockets.tcp.onReceive) and sending to a socket uses ArrayBuffer objects. To learn about ArrayBuffers, check out the overview, JavaScript typed arrays, and the tutorial, How to convert ArrayBuffer to and from String.

chrome.sockets.tcp.send(socketId, arrayBuffer, onSentCallback);

chrome.sockets.tcp.onReceive.addListener(function(info) {
  if (info.socketId != socketId)
    return;
  // info.data is an arrayBuffer.
});

Disconnecting from a socket

Here's how to disconnect (sockets.tcp.disconnect):

chrome.sockets.tcp.disconnect(socketId);

Using UDP

Chrome Apps can make connections to any service that supports UDP.

Sending data

Here's a sample showing how to send data (sockets.udp.send) over the network using UDP:

// Create the Socket
chrome.sockets.udp.create({}, function(socketInfo) {
  // The socket is created, now we can send some data
  var socketId = socketInfo.socketId;
  chrome.sockets.udp.send(socketId, arrayBuffer, '127.0.0.1', 1337,
    function(sendInfo) {
      console.log("sent " + sendInfo.bytesSent);
  });
});

Receiving data

This example is very similar to the 'Sending data' example, except we set up an event handler for receiving data.

var socketId;

// Handle the "onReceive" event.
var onReceive = function(info) {
  if (info.socketId !== socketId)
    return;
  console.log(info.data);
};

// Create the Socket
chrome.sockets.udp.create({}, function(socketInfo) {
  socketId = socketInfo.socketId;
  // Setup event handler and bind socket.
  chrome.sockets.udp.onReceive.addListener(onReceive);
  chrome.sockets.udp.bind(socketId, "0.0.0.0", 0, function(result) {
    if (result < 0) {
      console.log("Error binding socket.");
      return;
    }
    chrome.sockets.udp.send(socketId, arrayBuffer, '127.0.0.1', 1337,
      function(sendInfo) {
        console.log("sent " + sendInfo.bytesSent);
    });
  });
});

Using TCP Server

Chrome Apps can act as TCP servers using the sockets.tcpServer API.

Creating a TCP server socket

Create a TCP server socket with sockets.tcpServer.create.

chrome.sockets.tcpServer.create({}, function(createInfo) {
  listenAndAccept(createInfo.socketId);
});

Accepting client connections

Here's a sample showing how to accept connections (sockets.tcpServer.listen) on a TCP server socket:

function listenAndAccept(socketId) {
  chrome.sockets.tcpServer.listen(socketId, IP, PORT, function(resultCode) {
    onListenCallback(socketId, resultCode)
  });
}

Keep a handle to the socketId so that you can later accept new connections (sockets.tcpServer.onAccept).

var serverSocketId;
function onListenCallback(socketId, resultCode) {
  if (resultCode < 0) {
    console.log("Error listening:" + chrome.runtime.lastError.message);
    return;
  }
  serverSocketId = socketId;
  chrome.sockets.tcpServer.onAccept.addListener(onAccept)
}

When a new connection is established, onAccept is invoked with the clientSocketId of the new TCP connection. The client socket ID must be used with the sockets.tcp API. The socket of the new connection is paused by default. Un-pause it with sockets.tcp.setPaused to start receiving data.

function onAccept(info) {
  if (info.socketId != serverSocketId)
    return;

  // A new TCP connection has been established.
  chrome.sockets.tcp.send(info.clientSocketId, data, function(resultCode) {
    console.log("Data sent to new TCP client connection.")
  });

  // Start receiving data.
  chrome.sockets.tcp.onReceive.addListener(function(recvInfo) {
    if (recvInfo.socketId != info.clientSocketId)
      return;
    // recvInfo.data is an arrayBuffer.
  });
  chrome.sockets.tcp.setPaused(info.clientSocketId, false);
}

Stop accepting client connections

Call sockets.tcpServer.disconnect on the server socket ID to stop accepting new connections.

chrome.sockets.tcpServer.onAccept.removeListener(onAccept);
chrome.sockets.tcpServer.disconnect(serverSocketId);
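The TCP and UDP samples above all send and receive ArrayBuffer objects, and the doc points at a tutorial for converting them to and from strings. A minimal helper pair, assuming the standard TextEncoder/TextDecoder APIs are available (they are in Chrome), might look like this:

```javascript
// Encode a string into an ArrayBuffer suitable for chrome.sockets.*.send().
function stringToArrayBuffer(str) {
  return new TextEncoder().encode(str).buffer;
}

// Decode the ArrayBuffer delivered in an onReceive event back into a string.
function arrayBufferToString(buf) {
  return new TextDecoder().decode(buf);
}
```

For example, chrome.sockets.tcp.send(socketId, stringToArrayBuffer("hello"), onSentCallback); and, in an onReceive listener, arrayBufferToString(info.data).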
Splunk Search

How to edit my search to use a macro to return an integer being fed a single argument being supplied as an eval variable?

New Member

Tried doing this via the Splunk docs and the macro is not being processed. My example ... My macro is named wordweight02 and takes a single argument which I identify as named "words" in the macro definition. I expect it to return an integer value. Source for macro follows ...

if(like($words$, "% dog %"), 10, 0) +
if(like($words$, "% cat %"), 10, 0) +
if(like($words$, "% snake %"), 15, 0) +
if(like($words$, "% chicken %"), 20, 0) +
if(like($words$, "% truck %"), 25, 0) +
if(like($words$, "% car %"), 25, 0) +
if(like($words$, "% rocket %"), 25, 0) +
if(like($words$, "% and %"), 1, 0) +
if(like($words$, "% he %"), 5, 0) +
if(like($words$, "% she %"), 5, 0) +
if(like($words$, "% they %"), 5, 0)

So now I want to use my macro to return a word weight for selected words occurring in a sentence. The sentences are being captured in an index in a field called "sentence". In my example, I can have duplicate values in "sentence" so ...

index=myindex
| eval lcsentence=lower(sentence)
| eval wordweight=('wordweight02(words=$lcsentence$)')
| search wordweight>0
| stats count(sentence) as countsentence by wordweight, sentence
| eval sentencewordscore=wordweight*countsentence
| sort -sentencewordscore

The macro never seems to return a value ... Any ideas? Splunk docs are a little light on this stuff.

Reply (Legend):

Try just using the field name when you pass it to the macro. Like this

| eval wordweight=`wordweight02(lcsentence)`

Reply (New Member):

That does work ...
It seems that the macro name delimiter/enclosing character must be the " ` " character (ASCII 96) and not the standard single quote " ' " (ASCII 39). My particular issue is that the browser I am using to get to Splunk Enterprise is Firefox, and for some weird reason it does not show the ASCII 96 character on the screen. Always an adventure ... Thanks ...

Reply (Builder):

You might want to look into the Machine Learning Toolkit and TF-IDF. I am not familiar with this tool yet... but it sounds like where you are headed, based on the docs and a machine learning course I am taking. https://docs.splunk.com/Documentation/MLApp/2.0.0/User/Algorithms
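Putting the accepted answer together with the original search, the working version (reconstructed from the thread, untested) simply swaps the single quotes for backticks around the macro call:

```
index=myindex
| eval lcsentence=lower(sentence)
| eval wordweight=`wordweight02(lcsentence)`
| search wordweight>0
| stats count(sentence) as countsentence by wordweight, sentence
| eval sentencewordscore=wordweight*countsentence
| sort -sentencewordscore
```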
Fires destroy more villages in Myanmar's Rohingya region: sources

YANGON (Reuters) - Several more villages were burned down on Saturday in a part of northwest Myanmar where many Rohingya Muslims had been sheltering from violence sweeping the area, two sources monitoring the situation said.

The fires, which started on Friday when up to eight villages went up in flames in the ethnically mixed Rathedaung region, have increased concerns that more minority Rohingya will flee to neighboring Bangladesh. Blazes started on Saturday engulfed as many as four more settlements in Rathedaung, likely destroying all the Muslim villages in the area, the sources said.

“Slowly, one after another villages are being burnt down - I believe that Rohingyas are already wiped out completely from Rathedaung,” said Chris Lewa of the Rohingya monitoring group, the Arakan Project. “There were 11 Muslim villages (in Rathedaung) and after the past two days all appear to be destroyed.”

It was unclear who set fire to the villages, located in a part of northwest Myanmar far from where Rohingya insurgents attacked 30 police posts and an army base last month, triggering an army counter-offensive in which at least 400 people have been killed.

Independent journalists are not allowed into the area, where Myanmar says its security forces are carrying out clearance operations to defend against “extremist terrorists”. Human rights monitors and fleeing Rohingya say the army and ethnic Rakhine vigilantes have unleashed a campaign of arson aimed at driving out the Muslim population.

Some 290,000 people have fled across the Bangladeshi border in less than two weeks, causing a humanitarian crisis. Rathedaung is the furthest Rohingya-inhabited area from the border with Bangladesh, and aid workers are concerned that a large number of people were trapped there.

The sources said that among the torched villages was the hamlet of Tha Pyay Taw.
They were also concerned about the village of Chin Ywa, where many people sheltering from other burnings in the area had been hiding, and about two other settlements.

On Friday, the villages of Ah Htet Nan Yar and Auk Nan Yar, some 65 km (40 miles) north of Sittwe, capital of Rakhine state, were also burned, along with four to six other settlements.

One source, who has a network of informers in the area, said 300 to 400 Rohingya who had been hiding at Ah Htet Nan Yar were now in the forest or attempting a perilous, days-long journey by foot in the monsoon rain toward the River Naf separating Myanmar and Bangladesh.

Myanmar leader Aung San Suu Kyi said on Thursday her government was doing its best to protect everyone, but she has drawn criticism for failing to speak out about the violence and the Muslim minority, including calls to revoke her 1991 Nobel Peace Prize.

The country's Rohingya Muslims have long complained of persecution and are seen by many in Buddhist-majority Myanmar as illegal migrants from Bangladesh.

Editing by Helen Popper
User:Papita Naranja

Hello! This is Papita Naranja from Mexico City. I'm an Advanced Business English student looking forward to improving my writing. I love my country; I hope you can visit it soon... or again. I love reading as well, and though writing is not a burden, I would definitely rather write in something more like a journal. Still, I would like to write on Business and related topics. Anything on Management I like too. That's my major. All friends are welcome. I like having them all around the world. :-D
I'm trying to create a backup script that saves my thumbdrive as an image. I'm planning on having it automatic. I've always seen the thumbdrive listed as /dev/sdb, and created a script that will save it as a gzipped tarball. While trying to make a copy of it via dd, I noticed this error:

dd: failed to open ‘/dev/sdb’: Permission denied

I wondered if it was just a fluke, so I tried piping a cat command to dd and got this error instead:

cat: /dev/sdb: Permission denied
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000620078 s, 0.0 kB/s

Of course, since I'm a superuser, I can sudo it out -- but that removes an element of the script being automatic. Why does this device need superuser permission if I'm not modifying it? Furthermore, is there a way to bypass this?

Comments:
• Can you just cat a device like that? I'm not sure. I'd mount the device in some location first (which can be automated with fstab (or possibly udev)) and then make the backup. – Bibek_G Mar 2, 2016 at 5:12
• Bibek_G, yes you can use dd to create a backup of a block device and restore it later. – Chad Clark Mar 2, 2016 at 5:31

3 Answers

Unix/Linux systems are multi-user. Anyone who has read access to a raw storage device can read all the files on it, regardless of who owns the files and their permissions (modulo the filesystem being encrypted, of course). So traditionally the raw devices have only allowed access to root and to members of an administrative group such as operator or disk, and you'd put your trusted admin users or backup operators into that group.

On GNU/Linux distributions that run udev, 50-udev-default.rules puts all block and SCSI devices in /dev that are not floppies or tapes in the disk group and gives the group read/write permission, so one possible solution for you is to add yourself to that group using usermod or by editing /etc/group directly. Log out and log in again.
• This can be confirmed for the OP just by doing ls -l /dev/sdb, which will probably show something like brw-rw---T 1 root disk 8, 16 Feb 25 21:13 sdb - so, as you say, only root and the disk group have access. – Mar 2, 2016 at 8:26

Why does reading a device require admin permissions?

Firstly, there are a couple of issues here:
1. mount'ing the physical storage device and the partitions it contains.
2. Accessing and manipulating the files on it.

If the filesystem is permissions based, e.g. ext2, 3 or 4, then permissions are defined on a per-file basis. As for why you would require the user to have special admin privileges to mount a device, there are a few reasons, which are more likely to apply to enterprise situations than to personal computing, although they can still be relevant:

1. It prevents reading in abusive programs. Once a disk is mounted, an entire collection of untrusted programs is potentially available for execution, and these can abuse the operating system. If you were administering that system, you could be more confident that wouldn't happen if casual users couldn't load their own programs off their own disks.

2. Writing / saving sensitive secret data. If you had corporate secrets or users' passwords on a system, and someone could connect an unauthorised storage device, they could make copies to it.

You can get around the sudo issue by running the entire script as sudo and then using sudo inside the script to switch back to an ordinary user for the commands you don't want to run as root (yes, you can do that), e.g.
file: script.sh

#!/bin/sh
# this dd command now works
dd if=<source> of=<target> bs=<byte size>
# normal command you want to run as the "mathmaniac" user
sudo -u "mathmaniac" bash -c "touch foo.bar"

So you then run this script with:

sudo ./script.sh

Another quick and dirty but generally not recommended way is simply to put sudo in front of the command inside the script; when the interpreter encounters the sudo, it will halt script execution and ask you for your sudo password before continuing.

• Out of all the answers I read, this has the most extensive explanation of why I need su perms, and a solution. – Mar 3, 2016 at 0:25

You could edit the fstab to mount the filesystem without being root, but that doesn't give you an image of the block device. You could edit the sudoers file so the backup script can sudo a specific command without entering a password.
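To make that last suggestion concrete, here is one possible sudoers drop-in that lets a single exact dd invocation run without a password prompt. The username "backup", the device path, and the image path are all examples; adjust them to your setup, and always edit with visudo so a syntax error can't lock you out:

```
# /etc/sudoers.d/usb-backup  (create with: sudo visudo -f /etc/sudoers.d/usb-backup)
# Allow user "backup" to run exactly this dd command with no password prompt:
backup ALL=(root) NOPASSWD: /bin/dd if=/dev/sdb of=/home/backup/thumb.img bs=4M
```

The script would then call sudo dd if=/dev/sdb of=/home/backup/thumb.img bs=4M verbatim; sudoers matches the command line exactly, so any change to the arguments triggers a password prompt again.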
The detox: a good or a bad idea?

With the approach of summer, the detox is all the rage. It has an undoubted impact on the way you look, but is it good for your health?

Sometimes, in the natural course of events, we feel a need to eat less or to cut certain products out of our daily routine. It's as though our bodies want to tell us that it's time to take things in hand. "Spa treatments" have been an integral part of many cultures from time immemorial. At certain times of the year (before Christmas, Easter or the summer) we put our body on pause. The idea is to eliminate toxins and start again from a healthy base. Some specialists say fasting from time to time is vital to help the body renew its cells. Some detox treatments are less restrictive than fasting but can be good for your body, your morale and your figure, provided you respect certain rules.

The aim of the detox

It eliminates accumulated toxins, settles your digestive system and helps you lose a few of those extra kilos. It's as simple as that! Many aficionados say it gives them an energy boost. They feel fitter and happier with life. Other noted benefits include improved skin tone and sleep patterns.

The risks

A word of warning! Don't use this technique as a form of repeat dieting. That can be very damaging for your health. Limit detox treatments to two or three a year and never follow them for more than a week at a time. If you do, you risk weakening your digestive system and leaving your body deficient in the nutrients it needs. If that happens, your body is likely to slip back into its old ways and rapidly put back on the kilos it has lost. Once again, it's all a question of balance. A detox does you good provided you respect certain rules.

Detox treatment tips

– Fast for a maximum of three days: if you do this, it's vital to stay hydrated.
You must drink a lot: tea, infusions or soups to ensure your body has the fluid it needs to get you through the fast;
– Only eat unprocessed foods: for a week, eat nothing but fruit, vegetables, meat, eggs, rice, lentils... Stick to fresh foods unaltered by human hand. No packaged products and no prepared food (such as pasta);
– Water + lemon: before each meal drink a large glass of water with the squeezed juice of an organic lemon in it. You will quickly feel the benefits of this approach, which is a natural way to make you eat less and more healthily;
– Green smoothies: replace one meal a day with a smoothie made from green fruits and vegetables. It's full of the vitamins and fibre that you need;
– Cut out meat and dairy products for a week;
– Stay sugar-free: this is the height of fashion at the moment and consists of cutting quick-release and slow-release sugars (carbohydrates) from your diet for a week.
ESSENTIALAI-STEM
Debugging Typescript with Visual Studio Code

The problem involves using Typescript (as opposed to Javascript) as the language for a front-end program, with the goal of debugging it using the Visual Studio Code IDE’s debugger. The solution involves configuring Typescript compilation options, using a server to host the front-end program, using the Debugger for Chrome extension, and configuring Visual Studio Code launch options.

Motivation and context

I was resuming work on my game Kawaii Ketchup after a hiatus. One obstacle that appeared in my way was getting Typescript debugging working in Visual Studio Code, as I had previously been using another IDE, NetBeans. So, with courage and skill, I faced this problem and successfully found a solution. My particular use case involves a browser-based program (a Phaser game, in fact), so the context for this guide is that of getting Visual Studio Code’s debugger to debug Typescript code that is compiled to Javascript and runs in a browser. The Chrome browser will be used for running the program.

Requirements

• Visual Studio Code version 1.23.1
• Microsoft Debugger for Chrome (extension identifier msjsdiag.debugger-for-chrome) version 4.4.3
• Live Server (extension identifier ritwickdey.liveserver) version 4.0.0

Using a server

The Debugger for Chrome extension documentation says that a local web server must be used to serve the front-end program. I used the Visual Studio Code extension called Live Server. You could use the same, or something entirely different, like Python’s SimpleHTTPServer. To use Live Server (particularly for my use case of debugging a Phaser game), once the extension is installed, right-click the index.html file in the IDE’s explorer view pane and select “Open with Live Server” to have the index.html file served and the web-server requirement satisfied.
Typescript configuration

Create a mapping of Typescript source files to the corresponding Javascript files. This lets the Visual Studio Code debugger operate on your Typescript sources. Do this by enabling the sourceMap option in your tsconfig.json file:

{
  "compilerOptions": {
    "sourceMap": true
  }
}

Upon compilation, this tells Typescript to create a .map file in the appropriate directory as configured in the tsconfig.json file. To make Typescript automatically re-compile on file changes, use the following console command (in the appropriate directory):

tsc -w

Visual Studio Code launch options

Add a new debugging configuration in the Visual Studio Code IDE and allow the IDE to attach to Chrome with the “Chrome: Attach” debug configuration option. After selecting “Attach to Chrome” as the debug launch configuration type, make your newly created launch.json file look a little something like this:

{
  "type": "chrome",
  "request": "attach",
  "name": "Attach to Chrome",
  "port": 9222,
  "webRoot": "${workspaceFolder}",
  "sourceMaps": true
}

The line "sourceMaps": true is the one of concern, as it allows the .map files produced by Typescript compilation to be used by the Visual Studio Code IDE, thereby allowing debugging. Depending on your project’s directory structure, additional configuration in the launch.json file may be required.

Begin debugging

The Visual Studio Code IDE needs an instance of the Chrome browser with remote debugging enabled. To accomplish this, run Chrome with the launch option

<PATH TO YOUR CHROME BROWSER'S BINARY FILE>/chrome.exe --remote-debugging-port=9222

as instructed by the Microsoft Debugger for Chrome extension [1]. Note that an instance of Chrome must not already be running. If one is, you’ll have to end that instance before launching Chrome with remote debugging enabled, or else Visual Studio Code will not be able to debug your front-end program.
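As an optional sanity check (not part of the original workflow), you can confirm that Chrome really is listening on the remote-debugging port before attaching: Chrome exposes a DevTools metadata endpoint at /json listing its debuggable tabs. A small Python sketch to query it (the call will raise an error if no Chrome instance is listening on port 9222):

```python
import json
import urllib.request

DEBUG_PORT = 9222  # must match the --remote-debugging-port flag


def targets_url(port=DEBUG_PORT):
    # Chrome serves metadata about its debuggable tabs at this DevTools endpoint
    return f"http://127.0.0.1:{port}/json"


def chrome_debug_targets(port=DEBUG_PORT):
    """Fetch the list of debuggable tabs; fails if Chrome isn't listening."""
    with urllib.request.urlopen(targets_url(port)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Each returned entry carries a "title" and "url" field, and your served index.html tab should appear among them once Chrome has been launched with remote debugging enabled.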
In Visual Studio Code, use the “Debug” menu or press the “F5” key, after which you’ll be prompted with a “Select a tab” drop-down menu that will allow you to select a Chrome tab as the debugging target. For instance, my use case involves me selecting the tab with the URL http://127.0.0.1:5500/public_html/index.html as the target. You should now be able to debug your front-end program’s Typescript code using the Visual Studio Code IDE’s debugger. Footnotes Written on June 8, 2018
ESSENTIALAI-STEM
France to suspend sales of Vitamin D supplement after infant's death PARIS (Reuters) - France has started measures to suspend sales of a Vitamin D supplement marketed as Uvesterol D after the death of a baby who had been given a dose of it, a French medical safety watchdog said on Wednesday. The agency said in a statement that investigations had found a “probable link” between the infant’s death and the way in which the supplement was administered to the child. Health Minister Marisol Touraine said that the oral syringe through which it is administered appeared to be at fault and that the supplement itself was not dangerous. Uvesterol D is developed by the Crinex laboratory and is used against Vitamin D deficiencies in children up to five years old. Officials at Crinex could not immediately be reached for comment. The company told French media on Tuesday, before the ban was announced, that tens of millions of children had been given the supplement since 1990 without any deaths resulting. The European Medicines Agency said that the supplement is not marketed in any other EU country but that it was monitoring the situation. Reporting by Sudip Kar-Gupta and Eric Faye, additional reporting by Michel Rose; writing by Leigh Thomas; Editing by Dominic Evans
NEWS-MULTISOURCE
Why the cosmetics industry is struggling to keep up in age of Glossier The cosmetics industry has been on the decline in recent years, as it fails to keep up with the rise of skin care and the success of buzzy direct-to-consumer brands like Glossier. This week, both Coty and Ulta Beauty missed sales expectations, pointing to ongoing challenges as the industry seeks ways to stay relevant and appeal to younger shoppers. According to Kayla Marci, a market analyst at Edited, traditional companies are struggling to meet consumer demand for clean beauty and more natural, toned-down looks. It's been a difficult few years for cosmetics. Following several quarters of slumping sales, Coty — parent company of mass-market beauty brands including CoverGirl, Rimmel, and Philosophy — reported that comparable sales in the second quarter of 2019 fell by 8 percent from the same period the year prior. Just two days later, Ulta Beauty shares fell a staggering 22% after the company reported lower than expected earnings. The slumps reflect ongoing stagnation within the industry as it struggles to keep up with shifting consumer demand for skincare and clean, eco-friendly beauty. From iconic brands like Lancome to big-box retailers like Sephora, traditional cosmetic companies are failing to contend with the deluge of buzzy direct-to-consumer brands like Glossier, with their large social media followings and arsenals of influencers. "With brands like Glossier and Drunk Elephant seeing success in the market, there has been a noticeable shift in beauty from cosmetics into skincare," said Kayla Marci, market analyst at Edited, a retail data firm.
"Trends like heavy contouring have fallen out of favor as consumers take a more natural approach to their skin, focusing on getting the base clean and healthy." Thus far in 2019, Benefit Cosmetics has been the only company among the top 10 largest US beauty brands to avoid a sales dip, posting modest growth of just 3 percent from the same period in 2018, according to the Business of Fashion. Meanwhile, mainstays like Revlon have sought the help of financial advisers to consider strategies for revitalizing in an attempt to stay relevant. Relevance has remained an industrywide challenge, as trends have shifted away from bold colors and flashy styles and moved toward simpler, natural looks, Marci said. UBS analysts echoed the sentiment, writing in a report on Ulta's earnings on Friday that formerly popular items like color eye shadow and blush palettes have fallen out of favor with shoppers. "We think brands will look towards more innovative products beyond the recent influx of palettes. We expect more focus on emerging trends, such as textures (powder lipsticks and jelly highlighters) as well as CBD-infused products," UBS analysts wrote. According to a recent Edited report, new arrivals of beauty products at major retailers have been on the decline over the last three months, while skincare items have been on the rise. At Neiman Marcus, for example, Edited found new makeup offerings declined by 3.5% compared to three months prior, while skincare items increased by 11%. Likewise, though Target showed a 2.3% growth in makeup, it had a whopping 31.6% uptick in skincare products. For Ulta, attempts at bringing in new makeup products that appeal to younger shoppers seem to be coming up empty-handed. Though it forged a buzzy partnership with Kylie Jenner in November 2018, establishing itself as the exclusive third-party seller of Kylie Cosmetics, the products have been heavily discounted recently.
Further, Kylie Cosmetics sales had independently been on the decline before the start of the collaboration, decreasing by 62% from November 2016 to November 2018, the New York Post reported. "Ulta Beauty continues to drive meaningful market share growth in makeup across mass and prestige," Ulta CEO Mary Dillon said in a call to investors on Thursday. "But it's clear that cosmetics in the overall US market is challenged. After several years of very strong performance, growth in the makeup category has been decelerating over the last two years and recently turned negative." Sucharita Kodali, retail analyst at Forrester, wrote in an email to Business Insider that despite the sales dips, she doesn't believe cosmetics companies like Ulta have imminent reason for concern. "The reaction to the Ulta news seems a bit disproportionate to the news. They are still anticipating positive comps and comps that are better than most of the retail industry," she wrote. "Digitally native brands need store exposure to grow and that suits Ulta and Sephora well. I'm not worried about the company yet." Marci, the Edited market analyst, said traditional cosmetics companies can stay fresh by taking cues from younger eco-conscious brands like Milk Makeup and Āether Beauty, which offer 100% vegan formulas and recyclable packaging. "With new beauty brands launching every other day, makeup needs to take cues from skincare and tick the eco-friendly boxes," Marci said. "Brands that will be able to remain relevant within the shifting landscape are those that can deliver products with sustainable packaging, refillable containers with clean and cruelty-free ingredients."
NEWS-MULTISOURCE
Talk:Donald M. Frame Why is this article tagged as related to tennis? It is unclear why this article is within the scope of WikiProject Tennis. ShaiGoldman18 (talk) 19:25, 19 January 2024 (UTC)
WIKI
User:Zzzzzjessica/Evaluate an Article Which article are you evaluating? Doubanjiang Why you have chosen this article to evaluate? I chose this article because I cook spicy Chinese cuisine and I like to use Doubanjiang to add flavor to my dishes. I found that this article still needs some improvements, so I would like to make an effort to expand it. Evaluate the article There are no detailed content sections in this article, only the Main Type category. Some content needs to be cited; for example, the production process is detailed but not cited. Only Pixian Doubanjiang is described in depth, along with a category named “Others,” but the lead also mentions other types of Doubanjiang, like Guangdong Doubanjiang (the non-spicy version), which is not even mentioned in the “Others” part. There are 4 sources in the reference section, but 3 of the linked websites are dead, and two of the sources are duplicated.
WIKI
Roche Limit – Definition & Detailed Explanation – Astronomical Phenomena Glossary I. What is Roche Limit? The Roche Limit is a concept in astronomy that refers to the minimum distance at which a celestial body, such as a moon or a planet, can approach another celestial body without being torn apart by tidal forces. It is named after the French astronomer Édouard Roche, who first proposed the idea in the 19th century. The Roche Limit is a critical boundary that determines the stability of a celestial body in relation to its parent body. II. How is Roche Limit calculated? The Roche Limit can be calculated using a simple formula that takes into account the radius of the primary body and the densities of the two celestial bodies involved. For a fluid satellite, the approximate formula is: Roche Limit = 2.44 × (radius of the primary body) × (density of the primary body / density of the satellite)^(1/3) This formula gives the distance at which a celestial body would be torn apart by tidal forces if it were to approach the primary body any closer. The Roche Limit is a crucial factor in determining the fate of moons and other objects in orbit around larger bodies. III. What are the effects of Roche Limit on celestial bodies? The Roche Limit has several important effects on celestial bodies. When a moon or other object approaches its parent body within the Roche Limit, tidal forces cause it to deform and eventually break apart. This can lead to the formation of rings around the primary body, as seen in the case of Saturn’s rings. Additionally, the Roche Limit can also influence the formation and evolution of planetary systems. Objects that are too close to their parent body may be unable to form stable orbits and may be ejected from the system altogether. Understanding the Roche Limit is essential for predicting the behavior of celestial bodies in space. IV. Can Roche Limit cause celestial bodies to break apart? Yes, the Roche Limit can cause celestial bodies to break apart if they approach each other too closely.
Tidal forces exerted by the primary body can deform the secondary body, eventually tearing it apart. This process is known as tidal disruption, and it can result in the formation of debris fields or rings around the primary body. One famous example of this phenomenon is Comet Shoemaker-Levy 9, which was torn apart by Jupiter’s immense tidal forces during a close approach, leading to a series of spectacular impacts in the planet’s atmosphere in 1994. V. What are some examples of Roche Limit in the solar system? There are several examples of the Roche Limit in the solar system. One of the most well-known examples is Saturn’s rings, which are believed to have formed from the breakup of a moon that approached within the planet’s Roche Limit. The debris from the shattered moon coalesced into the rings that we see today. Another example is the moon Phobos, which orbits Mars at a distance close to its Roche Limit. Phobos is gradually moving closer to Mars due to tidal forces, and it is predicted that it will eventually be torn apart by these forces, forming a ring around the planet. VI. How does Roche Limit impact the study of astronomy? The Roche Limit plays a crucial role in the study of astronomy by helping scientists understand the dynamics of celestial bodies in space. By calculating the Roche Limit for different objects, astronomers can predict how close moons and other bodies can approach their parent bodies without being torn apart. Understanding the Roche Limit also provides insights into the formation and evolution of planetary systems. By studying the effects of tidal forces on celestial bodies, scientists can gain a better understanding of how planets, moons, and other objects interact in space. In conclusion, the Roche Limit is a fundamental concept in astronomy that helps explain the behavior of celestial bodies in space.
By calculating the Roche Limit for different objects, scientists can better understand the forces at play in the universe and how they shape the formation and evolution of planetary systems.
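As a concrete illustration of the fluid-body formula, the sketch below evaluates the Roche limit for the Earth-Moon system (the radius and density figures are standard approximate reference values, not taken from this glossary):

```python
# Rough fluid-body Roche limit: d = 2.44 * R_primary * (rho_primary / rho_secondary)**(1/3)
def roche_limit(primary_radius_m, primary_density, secondary_density):
    """Distance inside which a fluid satellite would be tidally disrupted (metres)."""
    return 2.44 * primary_radius_m * (primary_density / secondary_density) ** (1.0 / 3.0)

# Approximate reference values: radii in metres, densities in kg/m^3
EARTH_RADIUS = 6.371e6
EARTH_DENSITY = 5514.0
MOON_DENSITY = 3344.0

d = roche_limit(EARTH_RADIUS, EARTH_DENSITY, MOON_DENSITY)
print(f"Fluid Roche limit for the Earth-Moon system: {d / 1000:.0f} km")
```

The result, roughly 18,400 km, lies far inside the Moon's actual orbital distance of about 384,000 km, which is why the Moon is in no danger of tidal disruption.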
ESSENTIALAI-STEM
Bringing the Hearings to Order IT'S been 11 years since the Senate held confirmation hearings for a Supreme Court nominee. With the memory of the proceedings involving Robert H. Bork, Clarence Thomas and Anita Hill still fresh in their minds, the American people are eager for a sense of how the hearings for Judge John G. Roberts will play out. As chairman of the Senate Judiciary Committee, I will try to provide an answer. The nomination of Judge Roberts has extraordinary significance because he will replace Justice Sandra Day O'Connor, who has been the decisive vote in many 5-to-4 decisions on the cutting edge of issues confronting our society. Interest groups at both ends of the political spectrum have long been poised to fight this confirmation battle, which could determine a victor in the so-called cultural war.
NEWS-MULTISOURCE
User:WhisperToMe/sandbox Indonesian novels http://www.thejakartapost.com/life/2016/05/17/12-indonesian-books-you-should-add-to-your-reading-list.html http://www.thejakartapost.com/life/2016/03/14/eka-launches-new-novel-after-entering-man-booker-list.html http://eprints.ums.ac.id/25159/16/02._Article_Publication.pdf The Land of Five Towers (Negeri 5 Menara) is a 2009 novel by Indonesian author Ahmad Fuadi.
WIKI
Talk:Earl Verney [Untitled] The Verney entry is about the family as a whole. The baronets and earls are very distinct from each other in the history, and it would not be relevant to merge the two articles.
WIKI
Things I wish I knew earlier 3. Importing Excel into Outlook appointments

At work I use Outlook as my all-in-one software for emails, appointments, and personal task management. All my meetings, appointments, and time I’ve chosen to specifically allocate to work on certain tasks – go into the Outlook calendar. Except for one very important thing – the desk roster. Until today. Before, I would consult a piece of paper with the roster printed on it, which created discordance between my desk duties and other responsibilities which were managed electronically. This meant that:
• I would occasionally double-book myself into a meeting when I’m meant to be on the desk, as it is harder to detect conflicts.
• My availability in Outlook would not represent reality, as colleagues would see me as free when I actually have a desk shift.
• I would sometimes forget to go out to the desk, being too absorbed in my work, and since no electronic reminder pops up, I’d either be late or the colleague that’s meant to hand over to me would need to remind me. This is pretty rude on my behalf and I’d like to be punctual to my desk shifts.
I hear you asking – why don’t you just enter your desk shifts into Outlook, like everything else? The answer is: the effort outweighed the benefit. It wasn’t a problem often enough to justify entering all my desk shifts into my Calendar, which happen several times a day and are not neatly recurring, so the data entry is a repetitive task without good return on investment. But – yesterday I had the sudden insight that it would be much better to put all my foreseeable desk shifts into a spreadsheet and import them. You can import data via csv into just about any software these days. So I found this how-to and imported a week’s worth of rosters.
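The import file itself is easy to script; below is a minimal Python sketch of generating such a CSV (the column headings follow Outlook's import field names as used in my spreadsheet, and the shift times are invented for illustration):

```python
import csv

# Hypothetical desk shifts: (subject, date, start time, end time)
shifts = [
    ("Desk shift", "2017-10-30", "09:00", "10:00"),
    ("Desk shift", "2017-10-30", "14:00", "15:00"),
]

with open("roster.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Subject", "Start Date", "Start Time", "End Date", "End Time",
                     "Reminder on/off", "Reminder Date", "Reminder Time"])
    for subject, date, start, end in shifts:
        # Derive the reminder time (2 minutes before the start), as in the spreadsheet
        hh, mm = map(int, start.split(":"))
        total = hh * 60 + mm - 2
        reminder = f"{total // 60:02d}:{total % 60:02d}"
        writer.writerow([subject, date, start, date, end, "TRUE", date, reminder])
```

Importing the resulting roster.csv through Outlook's import wizard should create one appointment per row, each with a reminder shortly before the shift.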
Adding several individual appointments every day is too cumbersome, but doing it via Excel is a 5-minute task that I do every few weeks when the roster comes out, and it will help my productivity and organisation immensely. The guide I linked to above gives a quite detailed example, but mine is more simplistic. You need to give your table headings that match the field names used in Outlook:
Subject: name of the “appointment”
Start date: self-explanatory
End date: put in a formula so it just equals whatever is in column B
Start and end time: self-explanatory
Reminder on/off: enter TRUE to enable reminders
Reminder date: the same formula as the end date, so whatever I put in start date automatically populates
Reminder time: in column I, I entered how many minutes’ warning I wanted, usually 2 mins except for Refchatter (where I want a bit more time to get ready before I log in). The reminder time column has a formula which subtracts that time from the start time. This is simpler than working it out for each row.
Save as an Excel file so you can keep the formulas and overall template; next time you only need to adjust the dates and times. Then save it as a .csv and import into Outlook. This approach has less manual data entry and is more customisable. (Outlook only lets you set the reminder time a minimum of 5 minutes prior, while I prefer a very short reminder of just 2 minutes.)

Things I wish I knew earlier 2. The “Sort” function in Word

SINCE WHEN. Here’s the long awaited (ha!) second installment of my Things I Wish I Knew Earlier series. It’s for new things I discover that leave me cursing the fact that I didn’t know about them earlier. So just in case you’re like me and you have somehow missed the following fact most of your life… Microsoft Word allows you to sort stuff in alphabetical, number or date order.
It’s the friendly little A/Z↓ button in the Paragraph section of the Home ribbon. It’s a good function if you’re compiling a reference list and not using any reference management software (RIP you). Or any list that you want sorted in alphabetical order. It doesn’t matter what order you add items to your list – you can sort the text alphabetically at the end of the process. Also works for tables. If you need to re-order your rows, alphabetically, numerically or by date, you can do it. It’s very similar to Excel’s sorting options, but in Word. Who knew? So stop cutting and pasting and turning your document into a big ol’ mess trying to re-order stuff manually. Word knows how to sort! Hooray! #BlogJune 15 – Things I wish I knew earlier 1. Snipping Tool I once helped a student put a couple of different screenshots onto a page by using the Print Screen key and pasting them into a word document. They were amazed. “It’s my final year of uni and I only just learned about this!” They said. “I’d been printing things out and gluing them and photocopying them… argh, what a waste of time!” I’ve started a little series, “Things I wish I knew earlier”, sharing recent discoveries of life hacks, shortcuts, or just normal ways of doing things that I’ve been doing the hard way for no reason… Snipping Tool I only learned about Windows’ Snipping Tool last year. In case you don’t know about it: it’s a really easy way to screen capture. Open Snipping Tool and drag a box over a section of the screen that you want to save. From there you can easily save it as a png, gif or jpg, directly email it to someone, or add highlighting and pen lines before saving. It’s part of Windows Vista and onwards. Macs have Command-Shift-4, which allows you to draw a box and when you release it the png is saved to your desktop. I think they had that functionality before Microsoft. 
The above demo was ironically captured in the most complex way possible (screen capture > video > gif), because the one thing Snipping Tool cannot snip is itself. The long way I used to do it was to use the Print Screen key, open Photoshop or Paint, paste as a new document, crop as required, and possibly draw boxes with a yellow border and transparent background colour around important things to highlight them. Then I’d have to save as both a Photoshop file and a jpg. NOT ANYMORE! I learnt this tip from a colleague and proceeded to pass it on to anyone else who would listen. SO convenient. The only downside is the jpgs are not top quality. If you need high resolution, the Photoshop method may be the best. But for most purposes – capturing an error message, demonstrating procedures, etc – it is totally fine.

Let Excel Do Your Searching For You

I thought I would pass on a neat trick I figured out that can make Excel automatically run searches in a search engine / online catalogue of your choice. You can set up a hyperlink function so that column “B” searches for what is in the corresponding cell in column “A”. So simply clicking on cell B2 opens up the search results page in your default browser. Well played, Old Sport.
Step 1: Find the search link. Let’s say I want my spreadsheet to search WorldCat. I go there and search for bunnies. Look at the URL of the results page: https://www.worldcat.org/search?qt=worldcat_org_all&q=bunnies Copy that link.
Step 2: Make a hyperlink formula. The Excel formula for a hyperlink is: =HYPERLINK(“url”, “display text”) So I type that into cell B2 (going by the template below), copy in the WorldCat URL, and put “Search Worldcat” as my display text.
Step 3: Hack into the Mainframe! Now, instead of searching for “bunnies” I want to search for whatever text is in cell A2.
I go into my formula bar and replace the word “bunnies” with the following: “&A2&” (I got this technique from this page.) I work in a university library, and I developed this kind of spreadsheet for the purposes of checking how many copies of course texts we had in our catalogue. But it could be used for any of your mass-search needs, whatever they may be.
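For the curious, the same substitution can be sketched outside Excel; a small Python equivalent of the =HYPERLINK trick above (using the WorldCat URL pattern from Step 1, with quote_plus handling any spaces in the search term):

```python
from urllib.parse import quote_plus

def worldcat_search_url(term):
    """Build a WorldCat search link for a cell value, like =HYPERLINK with "&A2&"."""
    return "https://www.worldcat.org/search?qt=worldcat_org_all&q=" + quote_plus(term)

print(worldcat_search_url("bunnies"))
```

Calling worldcat_search_url("bunnies") reproduces the results link shown in Step 1; looping it over a column of titles gives the same mass-search capability as the spreadsheet.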
ESSENTIALAI-STEM
C/1996 B2 (Hyakutake) The discovery of Hyakutake’s second comet On January 30, 1996 the Japanese amateur astronomer Yuji Hyakutake left his house in the village of Hayato, about 600 miles south-west of Tokyo, and set off for his normal observing site. After a journey of around 10 miles he stopped and set up his equipment ready for the night’s work. Since the previous July he had been avidly searching for comets using 25×150 binoculars. These had already proven themselves a month earlier when he had discovered the first Comet Hyakutake. That comet wasn’t particularly bright but he was planning to take some photos to record its progress. Frustratingly, the patch of sky where his comet would be was obscured behind some cloud so he started to scan the clear areas using his binoculars. Just before 5am, as he was sweeping through Libra, he noticed the 11th magnitude fuzzy object which was to make him famous throughout the world. The comet was officially named C/1996 B2 (Hyakutake) but at that time there was no indication that this was anything other than a run-of-the-mill, faint comet. CCD astrometry of the comet quickly followed Hyakutake’s discovery and by February 3rd the first parabolic orbital elements were reported. These showed that the comet would reach perihelion on May 2nd at a distance of only 0.22 AU from the sun. A close perihelion distance was remarkable enough but the elements also indicated that the comet would approach to within 16 million km of the Earth in late March. Hyakutake’s comet had grabbed the attention of astronomers around the world. Given the brightness at discovery it was possible that this object could reach first magnitude at close approach. The elements of Hyakutake’s orbit are shown in table 1.1. Shortly after these elements appeared I began to assess how this comet would appear from my location at 52° north. 
For a change those of us in the northern hemisphere would have the best view since the comet’s orbit and the position of the Earth were ideal. When Hyakutake first spotted his comet it was well below the ecliptic plane (fig 10.1). By 1996 March 12th the comet would pass through the ecliptic heading north and it would be 1.3 AU from the sun. Thirteen days later, on March 25, it would make the close Earth fly-by at a distance of only 0.102 AU and it would then appear near the zenith for northern observers. Following this close approach the comet would move further north reaching its maximum distance above the ecliptic plane on April 22nd, all the time being well seen by northern hemisphere observers. As the comet started back towards the ecliptic plane the elongation would rapidly decrease and northern observers would lose it in the evening twilight sometime around the end of April. The comet would then continue on to perihelion on May 1st at a solar distance of 0.23 AU. The southbound ecliptic crossing (descending node) would occur four days later and it would then be up to southern observers to recover the comet as it rose out of the morning twilight in the second week of May. While the orbital circumstances were good there were more serious problems much nearer home. For much of the northern hemisphere the months of February and March are not particularly favourable for comet observers since cloud cover is a regular nuisance. We knew from the orbit that the best of the display would be in the short period between March 22-27 and it was entirely possible that we would be clouded out completely at the critical time. It was time to start making plans. From the viewpoint of mid-northern observers the comet was initially low in the morning sky. Observers further south fared better and many images began to appear on the Internet. By February 8th the 1.5m ESO telescope at La Silla, Chile, had obtained a spectrum of the comet. 
This spectrum was dominated by reflected solar radiation but the expected cometary emission from Cyanogen (CN) and carbon molecules (C2 and C3) was present. Observers at Lowell Observatory reported that the comet’s water production rate was around 70% of comet Halley’s at an equivalent solar distance. This was encouraging news since it implied an active nucleus. By the second week of February 1996 the comet had brightened to around magnitude 9 and the coma was around 6 arc minutes in diameter. The comet was still a southern hemisphere object at declination -24° and so it remained a difficult object for observers in northern Europe. All the same the comet was showing encouraging signs of activity and a tail was first detected in a CCD image obtained on February 16th using the Danish 1.54-m telescope at La Silla (fig 10.2). The brightness of the comet was increasing rapidly and by February 20th it had reached seventh magnitude and the coma diameter was around 22 arcminutes with a 1° tail. For northern hemisphere observers the comet was south of declination -22°. By early March the comet was still well south of the equator but it had become a naked-eye object at fifth magnitude and binocular observers reported a short tail. The rotation of the nucleus was inferred from images obtained at Pic-du-Midi on March 9. These images, taken with a 1.05m telescope, showed at least two curved jets rotating clockwise and changing orientation over periods of a few minutes. The observed jets were around 2,000km long and the observations implied that the nucleus had a rotation period of around 6.6 hours. Radar contact with the comet was achieved on March 24. Powerful radio pulses were directed towards the comet from the Goldstone dish and echoes were received 107 seconds later. The radar results implied a very small nucleus of 1-3km diameter surrounded by a dense cloud of pebble-sized objects.
Shortly afterwards the first ever detection of X-rays from any comet was made using the ROSAT satellite. The brightest parts of the comet in X-rays were diffuse, crescent shaped and offset sunwards by about 6 arcminutes from the nucleus corresponding to a distance of about 30,000km. By March 20, when the comet passed into the northern hemisphere, it had brightened to second magnitude and was a beautiful naked eye sight to those lucky enough to have clear skies. The coma was already larger than 1° and an ion tail of up to 20° had been reported by some observers. The comet was now moving quickly across the sky as it approached the Earth. By this time I had decided that my only chance to see this comet was to flee the UK in search of better weather. In this regard Tenerife is a convenient location for UK observers since flights are cheap and plentiful, the holiday infrastructure is well developed and the weather prospects and latitude were ideal for the comet. A group of us combined our resources and booked the tickets. Having decided to travel to a better observing site I had to decide what equipment to take. At the time of closest approach Hyakutake was going to be huge so simple SLR cameras and a driven mounting, such as those described in chapter 5, were all that was required. We took a combination of cameras, a home-made barndoor mount and a larger Vixen SP mounting for longer focal length shots along with lots of film, both colour and hypered Tech Pan. Airport X-ray machines can be quite a problem for travelling astrophotographers since high speed films can be affected if they are scanned. I would always suggest carrying rolls of film in your pocket so that they don’t pass through the machines. If you don’t have enough pockets you can buy special shielding bags which will protect the film as it is scanned. For observers at mid-northern latitudes the comet was at its best in the early morning hours as it rose towards the zenith. 
From Tenerife, on the early morning of March 23rd the comet had the distinctive “spring-onion” appearance shown in figure 10.3. The star Arcturus was embedded in a tail which extended over at least 20°. In 11×80 binoculars the coma showed a stellar nucleus surrounded by a classic hood. That evening the tail was less well defined but the coma was slightly brighter than the previous day. At its closest on the morning of March 25th the comet approached zero magnitude and the tail stretched from Ursa Major, through Boötes and possibly as far as the bowl of Virgo. Fortuitously, the comet became most active right at the time of closest approach and the tail detail that we could see on the night of March 24/25 was astounding. This time we were at an altitude of 1,600 m on Mount Teide and the sky was perfect from dusk until dawn. The visible tail extended for at least 25° with direct vision and well over 40° with averted vision (see fig 5.8 on page X). In 11×80 binoculars a bright, tailward-pointing spike was very prominent and a major disconnection event was clearly visible to the naked eye. On that night the comet totally dominated the sky and it was easy to understand how ancient people must have been terrified by such objects. The exact time of close approach was 7hr UT on March 25th. After this the comet began to recede from the Earth as it headed in towards the Sun. The viewing geometry meant that the apparent tail-length was expected to grow as the comet moved away from the Earth. In the early morning hours of March 26 we made our final trip to observe the comet from Tenerife. The tail had again changed considerably from the day before and it was longer, possibly up to 50° with averted vision. The comet was now near to Kochab in Ursa Minor. On the night of March 26/27 the head passed within 4° of Polaris and various observers took advantage of this to produce some good fixed-camera photos (fig 5.4 on page X).
By March 29 the moon had started to become a problem for visual observers but the comet was bright enough to allow relatively simple equipment to capture a spectrum. One such spectrum was obtained by Maurice Gavin on April 1 (fig. 10.4). The coma shows reflected sunlight and two prominent Swan band emission lines in the cyan and green parts of the spectrum. Two fainter emission lines are visible in the blue and yellow, and absorption lines due to Earth’s atmosphere are visible in the red. Since the tail was much fainter, only the reflected solar continuum is visible there. By the end of the first week of April 1996, the comet was sinking lower into the north-west on its way to perihelion. A prominent dust tail finally began to appear around April 10, although it never surpassed the ion tail. Meanwhile jet activity in the coma continued. The tail now had a classic appearance with a sharp ion tail and a diffuse dust tail. Photographs taken with Schmidt cameras reveal exquisite detail and many streamers are visible in the tail (fig 10.5). As the comet moved towards the sun it entered the field of the LASCO C3 coronagraph on board the SOHO satellite. Images of the comet at perihelion were obtained by this instrument (fig 10.6) but terrestrial observers had to wait until May 9 for the comet to reappear from the sun’s glare. By this time it was a southern hemisphere morning object at around third magnitude. The comet faded rapidly as it moved away from the sun. Southern hemisphere observers picked it up on May 9th in a bright sky. By May 18th the comet was visible in a darker sky but the coma had faded to fourth magnitude. It faded past sixth magnitude around June 10th and by late August it was fainter than magnitude 10. Since Hyakutake made such a close approach to the Earth it was possible to see considerable detail in the inner coma with amateur-sized telescopes. The comet showed many interesting features close to the nucleus.
Of particular importance were a sunward fan and the very intense tailward-pointing jet that was seen around close approach. This was also the first bright comet to make a close approach since CCD cameras became widely available in the amateur community. Photographers had never been particularly successful in reproducing the details visible in the inner coma of comets since the wide variation in light levels exceeded the available dynamic range of printing papers. The ability of CCD cameras to operate over a very large range of brightness levels meant that they could record the fine detail in the inner coma which in former times was only available to visual observers. Advanced processing techniques such as unsharp masks and radial filters can be applied easily to electronic images (as described in chapter 8) and some of the amateur CCD results were stunning. Terry Platt obtained a sequence of CCD images on April 1st using an SX camera. The exposures were kept short to ensure that the CCD did not saturate on the bright inner coma and each frame was processed with an unsharp mask. The nine frames clearly show the anti-clockwise rotation of various features (fig. 10.7) and the effect is particularly strong when the frames are processed into a movie (see the CD-ROM). One of the most impressive features of this comet was its tail. Since Hyakutake made such a close approach its tail aspect changed rapidly from the end of March to mid-April. It was also possible to observe major changes in the tail structure over periods measured in hours. We do not normally have the opportunity to do this since comets with large tails are usually only visible in a dark sky for a short period. Hyakutake had the unusual property that, at the time of close approach, its massive tail was visible for the entire night. Hyakutake was certainly a Great Comet but its discoverer was a particularly modest man. 
In a statement released shortly after the discovery he said: “I am a bit perplexed by all the attention paid to me, when it is the comet that deserves the credit.”
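The unsharp-mask processing mentioned above can be expressed very compactly on a pixel array. A minimal pure-NumPy sketch of the idea, not the exact routine Platt used (a simple box blur stands in for whatever low-pass filter a given package applies):

```python
import numpy as np

def unsharp_mask(image, blur_size=5, amount=1.5):
    """Subtract a blurred copy of the image to suppress the smooth coma
    gradient and exaggerate fine jet and shell structure near the nucleus."""
    # Separable box blur: running mean along rows, then along columns.
    kernel = np.ones(blur_size) / blur_size
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
    # Add back a scaled copy of the high-frequency residual.
    return image + amount * (image - blurred)
```

Short exposures keep the bright inner coma below saturation, exactly as described for the SX camera sequence; the mask then lifts the low-contrast detail that would otherwise be invisible against the steep brightness gradient.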
FINEWEB-EDU
Friday, September 7, 2012

Modifying the targets of JDBC datasources through WLST script in WebLogic Server

This tutorial explains a WLST script to change or add the targets for JDBC data sources in WebLogic Server. The targeting can be done through the WebLogic admin console, but the script helps to automate the server configuration and reduce the manual effort when multiple data sources need to be configured.

The script will target the server MS3 and the cluster Cluster1 to the data source named “CRM6EAIReference”.

Define JDBCProperties.properties with the required configuration (a sample showing the configuration for multiple data sources):

domain1.total.DS=2
domain1.prefix.1=om
domain1.om.datasource.name.1=CRM6EAIReference
domain1.om.datasource.target.1=Clusters/Cluster1,Servers/MS3
domain1.prefix.2=soa
domain1.soa.datasource.name.2=CRM6EAISOAMetadataSource
domain1.soa.datasource.target.2=Clusters/SOACluster

Before executing the script, change the properties as required. If the target is a server it should be added as Servers/ServerName; if it is a cluster it should be defined as Clusters/ClusterName. Configure domain1.total.DS with the total number of data sources to be configured. Let us now define a WLST script file to change the targets using the property file defined above.
import re
import jarray
from java.io import FileInputStream
from java.util import Properties
from javax.management import ObjectName

def UntargetTargetJDBCResources():
    edit()
    startEdit()
    propInputStream = FileInputStream('JDBCProperties.properties')
    configProps = Properties()
    configProps.load(propInputStream)
    totalDataSource_to_untargetTarget = configProps.get("domain1.total.DS")
    i = 1
    while (i <= int(totalDataSource_to_untargetTarget)):
        prefix = configProps.get("domain1.prefix." + str(i))
        dsName = configProps.get("domain1." + prefix + ".datasource.name." + str(i))
        datasourceTargets = re.split(",", configProps.get("domain1." + prefix + ".datasource.target." + str(i)))
        targetArray = []
        for datasourceTarget in datasourceTargets:
            print 'DataSourceTarget', datasourceTarget
            if datasourceTarget == '':
                continue
            # Clear the existing targets before building the new list
            cd('/JDBCSystemResources/' + dsName)
            set('Targets', jarray.array([], ObjectName))
            target = datasourceTarget[datasourceTarget.index("/") + 1:len(datasourceTarget)]
            if datasourceTarget.startswith('Cluster'):
                targetArray.append(ObjectName('com.bea:Name=' + target + ',Type=Cluster'))
            elif datasourceTarget.startswith('Server'):
                targetArray.append(ObjectName('com.bea:Name=' + target + ',Type=Server'))
        print 'Targets: ', targetArray
        # Assign all collected targets to the data source in one call
        set('Targets', jarray.array(targetArray, ObjectName))
        print 'DataSource: ', dsName, ', Target has been updated Successfully !!!'
        print '========================================='
        i = i + 1
    save()
    activate()

def main():
    adminURL = 't3://localhost:7001'
    adminUserName = 'weblogic'
    adminPassword = 'weblogic1'
    connect(adminUserName, adminPassword, adminURL)
    UntargetTargetJDBCResources()
    print 'Successfully Modified the JDBC resources'
    disconnect()

main()

Script - The above script updates the JDBC data source targets based on the configuration in the property file.
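The property-file convention the script relies on can be illustrated outside WLST as well. A minimal plain-Python sketch (the helper name `parse_targets` is hypothetical) of how each Clusters/... or Servers/... entry resolves to a (name, type) pair:

```python
def parse_targets(props):
    """Resolve domain1.*.datasource.target.N entries into
    (datasource, [(target_name, target_type), ...]) pairs."""
    result = []
    total = int(props["domain1.total.DS"])
    for i in range(1, total + 1):
        prefix = props["domain1.prefix.%d" % i]
        ds = props["domain1.%s.datasource.name.%d" % (prefix, i)]
        targets = []
        for entry in props["domain1.%s.datasource.target.%d" % (prefix, i)].split(","):
            kind, name = entry.split("/", 1)          # e.g. "Clusters/Cluster1"
            targets.append((name, kind.rstrip("s")))  # "Clusters" -> "Cluster"
        result.append((ds, targets))
    return result

props = {
    "domain1.total.DS": "2",
    "domain1.prefix.1": "om",
    "domain1.om.datasource.name.1": "CRM6EAIReference",
    "domain1.om.datasource.target.1": "Clusters/Cluster1,Servers/MS3",
    "domain1.prefix.2": "soa",
    "domain1.soa.datasource.name.2": "CRM6EAISOAMetadataSource",
    "domain1.soa.datasource.target.2": "Clusters/SOACluster",
}
print(parse_targets(props))
```

The same splitting logic is what the WLST script performs with re.split before building the ObjectName array for each data source.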
Execute the script — <<Oracle_Home>>\oracle_common\common\bin\wlst.cmd ModifyJDBCResourcesTargets.py

The JDBC data source targets are now changed to match the enabled configuration. The WLST script can be used to automate the server configuration and reduce the manual effort needed.

6 comments:
1. Hi, how to get the cluster name in the WebLogic domain using WLST scripts? The requirement is to target the datasource on the available cluster in the domain. Thanks in advance.
2. How to target a datasource to multiple clusters using WLST?
   Replies
   1. The same script can be used - add the cluster names as a comma-separated value to the required datasource target, e.g. domain1.om.datasource.target.1=Clusters/SOACluster,Clusters/OSBCluster
   2. Instead of clusters, how can we assign two managed servers (AdminServer,MS1,MS2) to one datasource? I have tried separating them with a comma but it is not working. Is there any other way?
3. Hi, thanks for the wonderful script. I am using this script but I am facing an issue where I can only target one managed server or a cluster, but not both. Can you please help me with this? Thanks, Kumar
   Replies
   1. Thanks for reporting the issue; updated the script to support multiple targets. Let me know if the modified script is working.
ESSENTIALAI-STEM
The lottery was first used by British colonists to fund the construction of cities. Although the game initially faced negative criticism, it was later adopted by the government to finance public projects. In the United States, ten states banned lotteries between 1844 and 1859. Today, lotteries are legal in many states. However, there is no official definition for the game. It has been around for thousands of years. Here are some interesting facts about the history of lotteries. Lotteries are a popular way to divide property. The first recorded lotteries offered money prizes in exchange for tickets. Ancient Romans held public lotteries to raise funds for the poor and for town fortifications. In addition to this, the Roman emperors used lotteries to distribute slaves and property. A record from L’Ecluse in 1445 mentions a lottery that yielded 4,304 tickets valued at 7,000 florins, which is about US$170,000 in 2014. The first lotteries in Europe were created by Francis I in the 1500s and became popular by the 17th century. They were considered to be voluntary taxes at the time, and the money raised from them helped to build some of the country’s colleges. Private lotteries were very common in the United States and England, and they were used to sell products and properties. In 1832, the Boston Mercantile Journal reported that there were 420 lotteries in eight states. The practice of dividing property by lot dates back to ancient times. In the Old Testament, Moses instructed the people of Israel to take a census and divide the land by lot. In Roman times, lotteries were used to award property and to give away slaves. In ancient Rome, drawing lots was a popular form of dinner entertainment known as the apophoreta; the word means “that which is carried home.” Although the earliest known lotteries in Europe were held in the Low Countries, the lottery is still widely used today.
The first documented lotteries gave prizes in the form of money. These public lotteries were held in order to raise funds for towns and the poor, and there is evidence that they date back to at least 1445. For example, a record dated 9 May 1445 in L’Ecluse, France, mentions that a public lottery was held for the purpose of raising funds for walls and fortifications. This lottery included 4,304 tickets that paid out prizes in florins, or about $170,000 in today’s currency. Lotteries began as a way for towns to raise money for building works. In the 15th century, French towns held public lotteries to help the poor and improve the town. Lotteries are still in use in France, where they are legal. While many governments have banned the lottery, some states still allow it as a tax-raising measure. Some state governments also allow lotteries to advertise in their own newspapers.
FINEWEB-EDU
Pseudopostega subtila Pseudopostega subtila is a moth of the family Opostegidae. It was described by Donald R. Davis and Jonas R. Stonis in 2007. It is known from the state of Minas Gerais in south-eastern Brazil. The length of the forewings is about 3.8 mm. Adults have been recorded in December. Etymology The species name is derived from the Latin subtilis (meaning thin, slender, acute) in reference to the elongate, slender, caudal lobe of the male gnathos.
WIKI
New Gaol, Bristol The New Gaol (also sometimes known as The Old City Gaol) is in Cumberland Road, Spike Island, Bristol, England, near Bristol Harbour. History In June 1816, the 'shocking state' of Newgate Gaol in Bristol resulted in an Act of Parliament to facilitate the building of a New Gaol in Bedminster, at a cost of £60,000. The original New Gaol was designed by Henry Hake Seward and opened in 1820. In 1831, it was destroyed during the Bristol Riots and was rebuilt to designs by Richard Shackleton Pope, but was not fully completed until 1872. The gaol was closed in 1883 due to poor conditions and was largely demolished in 1898. In 1884, Horfield Prison was built to replace it. In 1821, three days after his eighteenth birthday, John Horwood was the first person to be hanged at the Gaol, for murdering Eliza Balsum by hurling a pebble at her; it struck her on the right temple and she then tumbled into a brook. English Heritage designated The Gaol entrance wall and gateway and the south-east perimeter wall as a Grade II listed building. It is now the centre-piece of a redevelopment project in this area of the city. Archives Papers related to the New Gaol (Ref. 17128) (online catalogue), and plans including Ref. 17567/5 (online catalogue) and 4312/76 (online catalogue) are held at Bristol Archives.
WIKI
Wikipedia talk:WikiProject Puerto Rico/Archives/2011/July Archive needed? I noticed how big this talk page has gotten. IMHO, a new archive page is needed to store some of the older conversations. Is there any way someone can archive, oh I dunno, the first 50 sections? Madgirl 15 (talk) 09:00, 6 July 2011 (UTC) * Done! Tony the Marine (talk) 09:31, 6 July 2011 (UTC) * I've added to have a bot archive after 30 days of no discussion in a section. DJ Magician Man (talk) 22:42, 6 July 2011 (UTC)
WIKI
Page:The Prime Minister by Hall Caine.djvu/161 The enemy Commander-in-Chief has asked for twelve hours' armistice to propose fresh terms of peace. Our own Commander has given him six. [As before.] It is the beginning of the end! I knew it must come soon! You have released the report? Yes. It will be all over the world to-morrow morning—before midnight, perhaps. [Rapturously.] To-night of all nights, too! What a Christmas greeting! Already I hear it crackling through the dark air all over Europe! Already I hear the Christmas bells ringing! Peace to men, after all the bloodshed and barbarity! We have a Cabinet at ten in the morning. You must be here, Burnley. I shall be. [Carried away, enthusiastically, with exaltation.] Our work comes now. We must hold the ground the free peoples of the world have won. No more brute force! No more military despotism! No more of the wail of death that has been echoing round the world! If it is to be peace it must be worth all the blood and all the tears that have been shed for it by the sons and daughters of this dear land. And it will be—it shall!
WIKI
What Organs Are Needed to Digest Food? The process of digestion begins in the mouth. Though digestion is probably not the first thing on your mind when sinking your teeth into a warm slice of veggie pizza, it is an important part of getting the nourishment you need. During digestion, your body breaks food down into nutrients that your body can absorb. The food moves through several different organs until it eventually leaves your body as waste -- a process that takes 40 hours, on average. Your digestive system is made up of the gastrointestinal tract, which includes the mouth, esophagus, stomach, small intestine and large intestine, and secondary, or accessory organs, which include the liver, gallbladder and pancreas. Mouth and Esophagus Digestion starts in your mouth. Your teeth break down food by mashing and grinding it into smaller pieces. This is called mechanical digestion. Glands in the mouth -- the salivary glands -- release an enzyme called salivary amylase that also begins to break down the food chemically. After you swallow the broken down food -- which is now called a bolus -- the muscles in your esophagus contract to move the bolus to your stomach. This process is called peristalsis. It takes the bolus about 8 seconds to travel through the esophagus. Stomach The stomach produces hydrochloric acid, enzymes and mucus that help break down food. The stomach churns and the muscles contract to mix food with these digestive juices to get it ready to enter your small intestine. By the time the food leaves the stomach, it is a semi-liquid substance called chyme. Intestines Most digestion and nutrient absorption happens in the small intestine. Much like the esophagus, the small intestine goes through peristalsis to mix enzymes from the liver, gallbladder and pancreas with partially digested food.
It is also here that the nutrients enter your bloodstream through small finger-like structures called villi and microvilli. By the time food reaches your large intestine, it has been almost completely digested. The large intestine does absorb water and electrolytes, however. The large intestine also forms the waste that will eventually leave your body. Liver The liver makes bile, a liquid that helps break down the fat you eat. Bile, which is a mixture of water, bile acids, cholesterol and phospholipids, breaks up large fat molecules into smaller ones. This helps your body digest and absorb fat better. Bile also helps mix fat with water so that it can enter your bloodstream correctly. The liver makes 500 to 1,000 milliliters of bile each day. Gallbladder The gallbladder, which is attached to the liver, is where your body stores bile after it's made by the liver. The gallbladder can store 30 to 50 milliliters of bile at a time. When there is fat in your small intestine, the gallbladder releases bile so that the fat can be digested properly. Pancreas The pancreas produces and releases enzymes that help break down protein, fat and carbohydrates. The pancreas also makes two hormones -- insulin and glucagon -- that help metabolize sugar and control blood sugar levels.
ESSENTIALAI-STEM
Alley of Stars of Kazakh Cinema The Alley of Stars of Kazakh Cinema was unveiled in 2011 within the territory of Kazakhfilm studio, Almaty, Kazakhstan. The event occurred during the VII Eurasia International Film Festival and was also associated with the 70th anniversary of Kazakhfilm. The list of persons with the first stars included Шакен Айманов, Султан-Ахмет Ходжиков, Мажит Бегалин, Абдулла Карсакбаев, Сералы Кожамкулов, Калибек Куанышбаев, Амина Умурзакова, Ануар Молдабеков, Идрис Ногайбаев, Нурмухан Жантурин, Кененбай Кожабеков. Not only Kazakh nationals have their stars there. In particular, there is a star for Sergey Eisenstein, who was evacuated to Almaty during World War II and shot the first part of the film Ivan the Terrible at the studio then known as Центральная Объединённая киностудия художественных фильмов (the Central United Film Studio, which was the merger of the Almaty studio with Mosfilm and Lenfilm, both evacuated to Almaty). The event included the presentation of the "Encyclopedia of Cinema of Kazakhstan". In 2000 the first attempt was made to establish the Alley of Stars in Almaty, near the Palace of the Republic. Eventually this alley became neglected and was dismantled in 2011. A similar Alley of Stars was established in 2007 in Karaganda, for various workers of culture, near the monument "Miner's Glory". A star per year was supposed to be installed. The first star was installed for Bibigul Tulegenova, followed by Roza Rymbayeva, the Dos Mukasan band, and Kayrat Baybosynov. This tradition was neglected as well. In 2014 the tradition was restored and a star for Yeskendir Khasangaliyev was installed.
WIKI
Commits

Anonymous committed 1ba68fa
Add awk lexer.

• Participants
• Parent commits 2f2a9d7

Comments (0)

Files changed (3)

File pygments/lexers/_mapping.py

  'AppleScriptLexer': ('pygments.lexers.other', 'AppleScript', ('applescript',), ('*.applescript',), ()),
  'AsymptoteLexer': ('pygments.lexers.other', 'Asymptote', ('asy', 'asymptote'), ('*.asy',), ('text/x-asymptote',)),
  'AutohotkeyLexer': ('pygments.lexers.other', 'autohotkey', ('ahk',), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)),
+ 'AwkLexer': ('pygments.lexers.other', 'Awk', ('awk', 'gawk', 'mawk', 'nawk'), ('*.awk',), ('application/x-awk')),
  'BBCodeLexer': ('pygments.lexers.text', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
  'BaseMakefileLexer': ('pygments.lexers.text', 'Makefile', ('basemake',), (), ()),
  'BashLexer': ('pygments.lexers.other', 'Bash', ('bash', 'sh', 'ksh'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass'), ('application/x-sh', 'application/x-shellscript')),

File pygments/lexers/other.py

  'BashSessionLexer', 'ModelicaLexer', 'RebolLexer', 'ABAPLexer',
  'NewspeakLexer', 'GherkinLexer', 'AsymptoteLexer', 'PostScriptLexer',
  'AutohotkeyLexer', 'GoodDataCLLexer',
- 'MaqlLexer', 'ProtoBufLexer', 'HybrisLexer']
+ 'MaqlLexer', 'ProtoBufLexer', 'HybrisLexer', 'AwkLexer']

  line_re = re.compile('.*?\n')

  (r'[a-zA-Z0-9_.]+\*?', Name.Namespace, '#pop')
  ],
  }
+
+class AwkLexer(RegexLexer):
+    """
+    For Awk scripts.
+    """
+
+    name = 'Awk'
+    aliases = ['awk', 'gawk', 'mawk', 'nawk']
+    filenames = ['*.awk']
+    mimetype = ['application/x-awk']
+
+    tokens = {
+        'commentsandwhitespace': [
+            (r'\s+', Text),
+            (r'#.*$', Comment.Single)
+        ],
+        'slashstartsregex': [
+            include('commentsandwhitespace'),
+            (r'/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/'
+             r'\B', String.Regex, '#pop'),
+            (r'(?=/)', Text, ('#pop', 'badregex')),
+            (r'', Text, '#pop')
+        ],
+        'badregex': [
+            ('\n', Text, '#pop')
+        ],
+        'root': [
+            (r'^(?=\s|/)', Text, 'slashstartsregex'),
+            include('commentsandwhitespace'),
+            (r'\+\+|--|\|\||&&|in|\$|!?~|'
+             r'(\*\*|[-<>+*%\^/!=])=?', Operator, 'slashstartsregex'),
+            (r'[{(\[;,]', Punctuation, 'slashstartsregex'),
+            (r'[})\].]', Punctuation),
+            (r'(break|continue|do|while|exit|for|if|'
+             r'return)\b', Keyword, 'slashstartsregex'),
+            (r'function\b', Keyword.Declaration, 'slashstartsregex'),
+            (r'(atan2|cos|exp|int|log|rand|sin|sqrt|srand|gensub|gsub|index|'
+             r'length|match|split|sprintf|sub|substr|tolower|toupper|close|'
+             r'fflush|getline|next|nextfile|print|printf|strftime|systime|'
+             r'delete|system)\b', Keyword.Reserved),
+            (r'(ARGC|ARGIND|ARGV|CONVFMT|ENVIRON|ERRNO|FIELDWIDTHS|FILENAME|FNR|FS|'
+             r'IGNORECASE|NF|NR|OFMT|OFS|ORFS|RLENGTH|RS|RSTART|RT|'
+             r'SUBSEP)\b', Name.Builtin),
+            (r'[$a-zA-Z_][a-zA-Z0-9_]*', Name.Other),
+            (r'[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?', Number.Float),
+            (r'0x[0-9a-fA-F]+', Number.Hex),
+            (r'[0-9]+', Number.Integer),
+            (r'"(\\\\|\\"|[^"])*"', String.Double),
+            (r"'(\\\\|\\'|[^'])*'", String.Single),
+        ]
+    }

File tests/examplefiles/test.awk

+#!/bin/awk -f
+
+BEGIN {
+    # It is not possible to define output file names here because
+    # FILENAME is not define in the BEGIN section
+    n = "";
+    printf "Generating data files ...";
+    network_max_bandwidth_in_byte = 10000000;
+    network_max_packet_per_second = 1000000;
+    last3 = 0;
+    last4 = 0;
+    last5 = 0;
+    last6 = 0;
+}
+{
+    if ($1 ~ /Average/)
+    { # Skip the Average values
+        n = "";
+        next;
+    }
+
+    if ($2 ~ /all/)
+    { # This is the cpu info
+        print $3 > FILENAME".cpu.user.dat";
+#        print $4 > FILENAME".cpu.nice.dat";
+        print $5 > FILENAME".cpu.system.dat";
+#        print $6 > FILENAME".cpu.iowait.dat";
+        print $7 > FILENAME".cpu.idle.dat";
+        print 100-$7 > FILENAME".cpu.busy.dat";
+    }
+    if ($2 ~ /eth0/)
+    { # This is the eth0 network info
+        if ($3 > network_max_packet_per_second)
+            print last3 > FILENAME".net.rxpck.dat"; # Total number of packets received per second.
+        else
+        {
+            last3 = $3;
+            print $3 > FILENAME".net.rxpck.dat"; # Total number of packets received per second.
+        }
+        if ($4 > network_max_packet_per_second)
+            print last4 > FILENAME".net.txpck.dat"; # Total number of packets transmitted per second.
+        else
+        {
+            last4 = $4;
+            print $4 > FILENAME".net.txpck.dat"; # Total number of packets transmitted per second.
+        }
+        if ($5 > network_max_bandwidth_in_byte)
+            print last5 > FILENAME".net.rxbyt.dat"; # Total number of bytes received per second.
+        else
+        {
+            last5 = $5;
+            print $5 > FILENAME".net.rxbyt.dat"; # Total number of bytes received per second.
+        }
+        if ($6 > network_max_bandwidth_in_byte)
+            print last6 > FILENAME".net.txbyt.dat"; # Total number of bytes transmitted per second.
+        else
+        {
+            last6 = $6;
+            print $6 > FILENAME".net.txbyt.dat"; # Total number of bytes transmitted per second.
+        }
+#        print $7 > FILENAME".net.rxcmp.dat"; # Number of compressed packets received per second (for cslip etc.).
+#        print $8 > FILENAME".net.txcmp.dat"; # Number of compressed packets transmitted per second.
+#        print $9 > FILENAME".net.rxmcst.dat"; # Number of multicast packets received per second.
+    }
+
+    # Detect which is the next info to be parsed
+    if ($2 ~ /proc|cswch|tps|kbmemfree|totsck/)
+    {
+        n = $2;
+    }
+
+    # Only get lines with numbers (real data !)
+    if ($2 ~ /[0-9]/)
+    {
+        if (n == "proc/s")
+        { # This is the proc/s info
+            print $2 > FILENAME".proc.dat";
+#            n = "";
+        }
+        if (n == "cswch/s")
+        { # This is the context switches per second info
+            print $2 > FILENAME".ctxsw.dat";
+#            n = "";
+        }
+        if (n == "tps")
+        { # This is the disk info
+            print $2 > FILENAME".disk.tps.dat"; # total transfers per second
+            print $3 > FILENAME".disk.rtps.dat"; # read requests per second
+            print $4 > FILENAME".disk.wtps.dat"; # write requests per second
+            print $5 > FILENAME".disk.brdps.dat"; # block reads per second
+            print $6 > FILENAME".disk.bwrps.dat"; # block writes per second
+#            n = "";
+        }
+        if (n == "kbmemfree")
+        { # This is the mem info
+            print $2 > FILENAME".mem.kbmemfree.dat"; # Amount of free memory available in kilobytes.
+            print $3 > FILENAME".mem.kbmemused.dat"; # Amount of used memory in kilobytes. This does not take into account memory used by the kernel itself.
+            print $4 > FILENAME".mem.memused.dat"; # Percentage of used memory.
+# It appears the kbmemshrd has been removed from the sysstat output - ntolia
+#            print $X > FILENAME".mem.kbmemshrd.dat"; # Amount of memory shared by the system in kilobytes. Always zero with 2.4 kernels.
+#            print $5 > FILENAME".mem.kbbuffers.dat"; # Amount of memory used as buffers by the kernel in kilobytes.
+            print $6 > FILENAME".mem.kbcached.dat"; # Amount of memory used to cache data by the kernel in kilobytes.
+#            print $7 > FILENAME".mem.kbswpfree.dat"; # Amount of free swap space in kilobytes.
+#            print $8 > FILENAME".mem.kbswpused.dat"; # Amount of used swap space in kilobytes.
+            print $9 > FILENAME".mem.swpused.dat"; # Percentage of used swap space.
+#            n = "";
+        }
+        if (n == "totsck")
+        { # This is the socket info
+            print $2 > FILENAME".sock.totsck.dat"; # Total number of used sockets.
+            print $3 > FILENAME".sock.tcpsck.dat"; # Number of TCP sockets currently in use.
+#            print $4 > FILENAME".sock.udpsck.dat"; # Number of UDP sockets currently in use.
+#            print $5 > FILENAME".sock.rawsck.dat"; # Number of RAW sockets currently in use.
+#            print $6 > FILENAME".sock.ip-frag.dat"; # Number of IP fragments currently in use.
+#            n = "";
+        }
+    }
+}
+END {
+    print " '" FILENAME "' done.";
+}
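The RegexLexer pattern used in the commit above — named states, each a list of (regex, token, optional next-state) rules tried in order — can be illustrated with a tiny self-contained Python sketch. This is a simplified stand-in, not Pygments itself, and the rule set is a toy subset:

```python
import re

# A stripped-down state machine in the spirit of Pygments' RegexLexer:
# each state maps to an ordered list of (pattern, token_name, next_state).
RULES = {
    "root": [
        (r"\s+", "Text", None),
        (r"#.*", "Comment", None),
        (r"\b(if|while|for|print)\b", "Keyword", None),
        (r'"(\\.|[^"\\])*"', "String", None),
        (r"\d+", "Number", None),
        (r"[A-Za-z_]\w*", "Name", None),
        (r".", "Punctuation", None),   # catch-all so the loop always advances
    ],
}

def tokenize(text, state="root"):
    pos, out = 0, []
    while pos < len(text):
        # Try the current state's rules in order; first match wins.
        for pattern, token, next_state in RULES[state]:
            m = re.compile(pattern).match(text, pos)
            if m:
                out.append((token, m.group()))
                pos = m.end()
                if next_state:
                    state = next_state
                break
    return out

toks = tokenize('if x > 10 print "hi"  # done')
print(toks)
```

The real AwkLexer adds the crucial extra state ('slashstartsregex') so that a `/` after an operator is read as a regex literal rather than division, which is exactly why several rules above transition into that state.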
ESSENTIALAI-STEM
Keto Diet

Why keto doesn’t work for some people over 50 (and how to fix it)

If you are over 50 and found that keto didn’t work for you, that might be because you tried an approach that wasn’t tailored to your age and personality. Keto could be one of the most effective diet plans for people over 50, but only if you do it right; so before you decide that keto doesn’t work, read on to learn five things that can cause people over 50 to fail at keto, and how to fix them.

1. You might be following a keto diet designed for younger people

Our eating habits tend to change over our lifetimes, and other factors come into play as we get older too, such as where we carry excess fat and how efficiently we metabolize food. One of the main reasons people over 50 may fail at keto is that they’re trying to follow a keto diet designed for younger people, instead of one designed for their own age group. While the basic principles of keto hold true for everyone, how they work in practice can be quite different. A regular keto diet “for everyone” tends to be based on the metabolism of younger people and the ways in which the younger body metabolizes fat. Therefore, all-purpose keto plans may not be the best option here. A personalized food plan is something drawn up just for you, while most generic keto diets and food plans are developed for “everyone,” with the hidden caveat that “everyone” is under 30. It’s a little like finding something “one size fits all” when clothes shopping; for some of us, it might as well say “one size fits all, except some people.” If you’re over 50 and failed at keto, you most likely tried a plan that not only wasn’t designed for you but was actually designed for someone very different from you; and that’s about as effective as trying to correct your eyesight by wearing someone else’s prescription glasses.
But with a personalized keto meal plan developed not just for people like you but for you personally, you can begin to see results and enjoy the foods you eat while you do. 2. Some of us are programmed to think that eating fatty food and losing weight are mutually exclusive People our age have a lot of life experience and tend to have fairly fixed ideas about things as a result. And haven’t we all spent our whole lives being programmed to know that that eating fatty food is bad? This type of mental barrier can be hard for over-50’s to get past, even for those who are really committed to making keto work.  The idea of eating dietary fat to lose body fat does seem like a strange concept. Back when keto really began to gain traction, the whole idea of this was viewed with great suspicion. However, a personalized keto diet plan can be not only effective, but also supported by hard science explaining why it is effective. The ketogenic diet (to give it its formal name) was originally developed by doctors to help to control serious health conditions (like epilepsy). couple Promising research is underway right now investigating the benefits of a ketogenic diet for a whole bunch of other health issues too, including Parkinson’s disease and Alzheimer’s disease,1 both of which are devastating health conditions most commonly diagnosed in the over-50s. You might have prepped a keto meal with great precision, but if you’re used to the idea that eating fat and losing weight don’t mix, you might find it difficult to resist cutting out some of that same fat that your keto diet relies on. If it sounds like eating fat feels like doing something wrong, consider giving it a couple of weeks to “get comfortable with being uncomfortable” where fat is concerned, and when the results begin to speak for themselves, you’ll find things get a whole lot easier. 3. 
You may not be ready to see it through One of the main reasons a keto diet appeals to such a wide variety of people is that following a personalized keto diet plan produces the first noticeable results rather fast. Most diets result in a degree of weight loss over time, but they can take months before the results are visibly noticeable to you and other people. Following the right keto diet plan, however, achieves noticeable results within just a month for most people; but it still doesn’t happen overnight. Winning at keto takes time. You need to understand how the body responds to a keto diet and trust the process. It takes a while to get into a ketogenic state in the first place, and waiting for this to happen is usually the hardest part. This is why many over-50s drop out of keto before they’ve even given it a chance to work. Most people don’t see a big change in their figure in the first couple of weeks of following a keto diet plan. But when you achieve ketosis and then maintain an active keto state, that’s the point at which your weight loss becomes evident. Keto can offer a direct route to maintainable, rapid weight loss, but you’re not going to wake up thin after a week! 4. You could be eating carbs without realizing it Most people could list a whole bunch of high-carb foods, like bread, pasta, and rice. As most people know, a keto diet means you need to limit your intake of carbs, but carbs are present in a whole range of other, less obvious foods too. Hidden carbs can completely sabotage your keto diet and slow your body’s journey into ketosis. Foods like milk and innocent-seeming condiments like ketchup often have far more carbs than you’d think. Even prepared meats like salami and bacon, which seem like they should be a safe choice for keto, sometimes pack in more carbs than you’d expect, depending on how they’re processed.
Following a personalized keto diet plan that’s designed to match your lifestyle and incorporate the foods you enjoy helps you avoid hidden carbs and stay on track. 5. …Or maybe you’re just not ready to lose weight This is a difficult subject for many over-50s to face, and nobody would ever deliberately start a diet with the intention of failing. However, a lot of people may say that they want to lose weight, but what they really want is to be slim and look good, rather than actually put the work in to get there. Many of us face pressure to slim down as we get older — hmm, from our adult kids, our physician, our partner — and anyone who is carrying just a few extra pounds may think that they “should” lose weight. This isn’t the same thing as wanting to lose weight though, and in turn, “wanting” isn’t the same as “doing.” Not having the right motivations: could this be the main reason why over-50s fail at keto? When such halfhearted attempts fail, the instinctive response from most people is “guess I’m at that age now where the weight just won’t come off,” or “It’s just how I am,” and all variants of “I tried keto and it doesn’t work.” That’s when it’s time to ask yourself: am I serious about my diet? Am I willing to put in the work? Keto works for over-50s just as effectively as it does for any other age group, and if you’re really serious about using it for weight loss, you should focus on balanced nutrition, get a personalized keto diet plan designed for you, and follow it through. Losing weight depends less on what’s in your fridge and more on what’s in your heart. If you’re ready to shed your excess pounds, keto may be the way to go. But if your heart isn’t in it, or you’re trying keto because other people are making you feel bad about your weight, then you’re right; weight loss may not be what you need. Can people over 50 lose weight with keto? Yes.
Millions of over-50s are out there right now smashing their weight loss goals with keto, without feeling hungry or cutting out all of the foods they enjoy. Are you ready to join them? Begin by taking this quick quiz on your current stats, weight loss goals, favorite foods, and meal prep preferences. You will then receive your own professionally developed personalized keto diet plan, to ensure that your weight loss journey is simple, speedy, and vitally, successful. See you on the slim side!
Fiddle Fire: 25 Years of the Charlie Daniels Band Fiddle Fire: 25 Years of the Charlie Daniels Band is a compilation album by American musician Charlie Daniels. Released on August 18, 1998, the album consists of re-recordings of a number of his hits. The compilation was reissued on July 12, 2005. Track listing * 1) "Texas" * 2) "The Devil Went Down to Georgia" * 3) "High Lonesome" * 4) "Fais Do Do" * 5) "Boogie Woogie Fiddle Country Blues" * 6) "The South's Gonna Do It" * 7) "Drinkin' My Baby Goodbye" * 8) "Fiddle Fire" * 9) "The Fiddle Player's Got the Blues" * 10) "Layla" * 11) "Orange Blossom Special" * 12) "Talk to Me Fiddle" Reception The album received four out of five stars from Michael B. Smith of Allmusic. He concludes that "Charlie Daniels displays his exceptional fiddle playing in this compilation of his best fiddle songs. There isn't a bad track on the disc. An excellent collection of Tennessee mountain-inspired fiddle-sawing."
Deploying Machine Learning Models With Docker Photo by Rahul Chakraborty on Unsplash I won’t be talking about how to create machine learning or deep learning models here; there are plenty of articles, blog posts, and tutorials on that subject, and I would recommend checking out Machine Learning Mastery if that is what you’re looking for. In this article, I will talk about preparing models so they can be deployed on any device and through any online deployment method. Why use Docker? Docker is a containerization service that allows websites, APIs, databases, and, in our case, data science models to be deployed anywhere and run with a few lines of code. Containers have a faster startup time and don’t take up as much memory as other methods, and they make it easy to update your models and test the changes. Dockerizing a machine learning model While each of your models will have its own quirks, the process usually follows these steps. 1. Train and save your model. 2. Create an API so data can be sent to your model and predictions returned. 3. Write a Dockerfile (and an optional docker-compose file) for your model. 4. Build and test a container for your model. 5. Deploy to an application hosting service. Training and saving your model Depending on what library/framework you are using, there are many ways to save your model. • If you are using TensorFlow, you will be using its built-in save-model methods. • PyTorch also has its own save-model method. • If you’re creating a model from scratch, I recommend pickling the model object and saving it to a pickle file. • If your framework is not mentioned here, I recommend searching for how to save models in that framework and selecting the first result. Creating an API for your model The API can be created using many different services.
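As a sketch of the pickle route mentioned above: the MeanModel class and the model.pkl file name are hypothetical stand-ins for illustration, not part of any particular framework.

```python
import pickle

# Hypothetical stand-in for a model built from scratch:
# any plain Python object can be pickled the same way.
class MeanModel:
    def __init__(self, values):
        self.mean = sum(values) / len(values)

    def predict(self, _x):
        # Deliberately trivial: always predicts the training mean.
        return self.mean

model = MeanModel([1.0, 2.0, 3.0])

# Save the trained model object to a pickle file...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and load it back later, e.g. inside the API container.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.predict(None))
```

The same dump/load pattern applies to any from-scratch model object; framework-specific save methods (TensorFlow, PyTorch) should be preferred when available.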
I will be using Flask because it is easy to read and translate into other languages, and because it is written in Python, the primary language for creating models in the first place. A simple template can be followed to create the API interface for your model. Writing the Dockerfile The Dockerfile is used to create the container that hosts the model and API. It may be tempting to also add the code that creates/trains the model, but it is best practice for an image to have one purpose, and that purpose is to host the model. Here is how a Dockerfile template for your model works, step by step. The first step adds our base image, which is pulled from a site called Docker Hub; a pulled image can come with its own pre-downloaded dependencies, just like the image you’re making here. The second step adds the requirements text file to the image. If you don’t know what that is, a requirements file lists the dependencies needed to load and run the model, including data-processing libraries and your framework. The third step installs the dependencies listed in the requirements file into the image. The fourth step adds your files to the image, including the API if you created it in the same folder as the model. A .dockerignore file works the same way as a .gitignore file: use it to prevent files that the model does not use from being loaded into the image. The port is exposed in step six, and the final command is run to start the Python API. Writing a docker-compose YML file An optional docker-compose file can make building the image and starting the container easier.
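Putting that step-by-step walkthrough together, a minimal Dockerfile might look like the following sketch; the app.py and requirements.txt file names, the Python base-image tag, and port 5000 are assumptions for illustration.

```dockerfile
# Step 1: base image pulled from Docker Hub, with Python pre-installed
FROM python:3.10-slim

WORKDIR /app

# Step 2: add the requirements file to the image
COPY requirements.txt .

# Step 3: install the dependencies listed in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Step 4: add the remaining files (API script, saved model, ...)
COPY . .

# Expose the port the API listens on
EXPOSE 5000

# Start the Python API when the container launches
CMD ["python", "app.py"]
```

A .dockerignore file alongside this would keep training data, notebooks, and other unused files out of the image.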
To give a quick explanation of this file: the version is the docker-compose file format version; the services are the different applications that make up the app (which could be a website and a database, or just the API); build tells Docker where to build each service from; and ports map the container’s ports to ports on the host so the API can be reached. Building the image and container The image is a read-only template and is used to make the container that hosts the API. If you created a docker-compose file, this step is easy; you can build the image and start the container in one command: docker-compose up If you didn’t create a docker-compose file, you need to build the image and then start the container yourself: docker build -t <image name> . followed by docker run --rm <image name> Make sure that everything runs correctly and that you can make a query. Deploying the model container There are a lot of options when it comes to deployment; Caprover, for example, provides a great overview of how it works.
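The docker-compose file described earlier might look like this sketch; the service name, port mapping, and build path are hypothetical and should match your own Dockerfile and API.

```yaml
version: "3.8"

services:
  # The API service hosting the model (name is arbitrary)
  model-api:
    # Build the image from the Dockerfile in the current directory
    build: .
    # Map host port 5000 to the container's exposed port 5000
    ports:
      - "5000:5000"
```

With this file in place, `docker-compose up` both builds the image and starts the container.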
Talk:Responses to Objectivism Deleted self-published source I deleted the stuff sourced from some professor named "Parrot." It's not a credible source, according to Wikipedia policy. It's self-published by Professor Parrot himself. WP:V says, "Anyone can create a website or pay to have a book published, and then claim to be an expert in a certain field. For that reason, self-published books, personal websites, and blogs are largely not acceptable as sources. Exceptions may be when a well-known, professional researcher in a relevant field, or a well-known professional journalist, has produced self-published material. In some cases, these may be acceptable as sources, so long as their work has been previously published by credible, third-party publications. However, exercise caution: if the information on the professional researcher's blog is really worth reporting, someone else will have done so." Rimric press is the publisher, which is Professor Parrott's own press. RJII 03:12, 17 June 2006 (UTC) Merge Unless there are any specific objections I will proceed with merging in a few days ≈ jossi ≈ t • @ 02:30, 27 June 2006 (UTC)
Britain to legislate to reform company auditing LONDON, Dec 19 (Reuters) - Britain will toughen up supervision of accountants after high profile company collapses at builder Carillion and retailer BHS undermined trust in auditing, it said on Thursday. Setting out its new legislative agenda in a Queen’s Speech on Thursday, the government said it will develop proposals on company audit and corporate reporting, including “a strong regulator” with the necessary powers to reform the sector. “These proposals aim to improve public trust in business, following three independent reviews commissioned in 2018,” it said. “It will also help workers employed by a large company in future to know how resilient it is.” Three government-backed reviews have proposed scrapping the Financial Reporting Council (FRC) and creating a new, more powerful regulator, the Audit, Reporting and Governance Authority or Arga. New laws are needed to implement many of the recommendations. (Reporting by Huw Jones, editing by James Davey)
Maryland Route 64 Maryland Route 64 (MD 64) is a state highway in the U.S. state of Maryland. The state highway runs 13.33 mi from U.S. Route 40 (US 40) in Hagerstown east to the Pennsylvania state line near Ringgold, where the highway continues north as Pennsylvania Route 997 (PA 997). MD 64 is an L-shaped route in northeastern Washington County, connecting Hagerstown with Smithsburg, Cavetown, and Chewsville and Smithsburg with Ringgold and Waynesboro, Pennsylvania. The state highway is maintained by the Maryland State Highway Administration except for the municipally-maintained portion within the city limits of Hagerstown. MD 64 was once a turnpike between Hagerstown and Smithsburg. The state highway was constructed from Smithsburg to the Pennsylvania state line in the mid 1910s and from Hagerstown to Smithsburg in the early 1920s. MD 64 was reconstructed in its entirety in the 1950s, resulting in bypasses of all four communities east of Hagerstown the highway serves. Route description MD 64 begins at an intersection with US 40 (Dual Highway) in a commercial area adjacent to the municipal golf course in the eastern part of the city of Hagerstown. The state highway heads north as Cleveland Avenue, a two-lane undivided street through a residential neighborhood. MD 64 turns east onto Jefferson Street, which heads east across Hamilton Run past Westwood Street, beyond which the highway leaves the city limits and becomes state-maintained. Jefferson Street becomes Jefferson Boulevard at Pangborn Boulevard. MD 64 crosses Antietam Creek and passes close to Antietam Hall as the highway passes between residential subdivisions. The state highway intersects Robinwood Drive, which passes through the suburb of Robinwood and Hagerstown Community College, before beginning to parallel CSX's Hanover Subdivision railroad line just west of Chewsville. 
MD 64 veers east away from the rail line as Track Side Drive, which is unsigned MD 804 and the old alignment of MD 64, continues to parallel the tracks. MD 804 provides access to the southern end of MD 62. MD 64 receives the other end of MD 804, Twin Springs Drive, on the east side of Chewsville. After passing between salvage yards, MD 64 continues east through farmland. The state highway passes between a pair of residential subdivisions before crossing Beaver Creek and reaching Cavetown. Old Georgetown Road parallels the north side of the highway as the old alignment while MD 64 bypasses the unincorporated village to the south, where the highway intersects MD 66 (Mapleville Road) and its name changes to Smithsburg Pike. MD 64 curves to the northeast, intersecting Cavetown Church Road, unsigned MD 844 and the old alignment of MD 77, and modern MD 77 (Foxville Road), which heads east opposite Leitersburg Smithsburg Road, which heads northwest into the town of Smithsburg. MD 64 intersects the southern end of MD 491 (Raven Rock Road) and crosses over the Hanover Subdivision before curving to the north. The state highway veers north and intersects Fruit Tree Drive and Water Street, which were formerly MD 92, then continues northwest and meets the northern end of MD 66 (Bradbury Avenue), which is the old alignment of MD 64 heading north from Smithsburg. MD 64 continues north, crossing Little Antietam Creek before reaching the community of Ringgold, where the highway intersects Windy Haven Road, the old alignment of MD 64, and MD 418 (Ringgold Pike). MD 64 receives the other end of the old alignment, Barkdoll Road, before reaching its eastern terminus at the Pennsylvania state line. The highway continues north as PA 997 (Anthony Highway) toward the borough of Waynesboro. MD 64 is a part of the National Highway System as a principal arterial from Eastern Boulevard in Hagerstown east to MD 77 near Smithsburg. 
History One of the predecessor highways of MD 64 was the Hagerstown and Smithsburg Turnpike between the two municipalities. The other major section of MD 64, the highway from Smithsburg to the Pennsylvania state line, was paved in 1916. The Hagerstown–Smithsburg highway was paved around 1923. The all-weather highway generally followed the old turnpike except for a deviation just west of Chewsville to avoid two grade crossings of the Western Maryland Railway. Within Hagerstown, MD 64 was marked from US 40 (now US 40 Alternate) north along Mulberry Street to Jefferson Street. By 1950, MD 64's western terminus was moved to the new US 40 and used Cannon Avenue to reach Jefferson Street. The state highway was moved to its present course along Cleveland Avenue in 1954. MD 64's name east from Hagerstown has varied, being named Cavetown Pike by 1950, becoming Smithsburg Pike by 1964, and changed to Jefferson Boulevard around 1983. MD 64 was rebuilt from end-to-end in the 1950s. The first section to be reconstructed was from the Hagerstown city limit to just west of Chewsville in 1952. The next section, from just west of Chewsville to just west of Cavetown, was completed in 1954 and featured the bypass of Chewsville. The third section was a bypass of Cavetown from just west of the community to Wolfsville Road, now MD 64's intersection with MD 77. That highway was completed in 1956. The old alignment through Chewsville was designated MD 804. The final two segments of MD 64 to be worked on were from Wolfsville Road to south of Ringgold, completing the bypass of Smithsburg; and from south of Ringgold to the Pennsylvania state line, including a bypass of Ringgold. Both projects were completed in 1958. The old alignment of MD 64 from Cavetown to north of Smithsburg, consisting of Water Street, Pennsylvania Avenue, and Bradbury Avenue, was remarked as a northern extension of MD 66.
#Include Once "windows.bi"

Type tBipdata
  hProcessHandle As HANDLE
  hWritePipe As HANDLE
  hReadPipe As HANDLE
End Type

Function bipOpen(PrgName As String, showmode As Short = SW_NORMAL) As tBipdata Ptr
  Dim As STARTUPINFO si
  Dim As PROCESS_INFORMATION pi
  Dim As SECURITY_ATTRIBUTES sa
  Dim As HANDLE hReadPipe, hWritePipe, hReadChildPipe, hWriteChildPipe
  Dim pPipeHandles As tBipdata Ptr

  'set security attributes
  sa.nLength = SizeOf(SECURITY_ATTRIBUTES)
  sa.lpSecurityDescriptor = NULL                'use default descriptor
  sa.bInheritHandle = TRUE

  'create one pipe for each direction
  CreatePipe(@hReadChildPipe, @hWritePipe, @sa, 0)   'parent to child
  CreatePipe(@hReadPipe, @hWriteChildPipe, @sa, 0)   'child to parent

  GetStartupInfo(@si)
  si.dwFlags = STARTF_USESTDHANDLES Or STARTF_USESHOWWINDOW
  si.wShowWindow = showmode                     'appearance of child process window
  si.hStdOutput = hWriteChildPipe
  si.hStdError = hWriteChildPipe
  si.hStdInput = hReadChildPipe

  CreateProcess(0, PrgName, 0, 0, TRUE, CREATE_NEW_CONSOLE, 0, 0, @si, @pi)
  CloseHandle(hWriteChildPipe)
  CloseHandle(hReadChildPipe)

  pPipeHandles = Allocate(SizeOf(tBipdata))     'area for storing the handles
  pPipeHandles->hProcessHandle = pi.hProcess    'handle to child process
  pPipeHandles->hWritePipe = hWritePipe
  pPipeHandles->hReadPipe = hReadPipe
  Return pPipeHandles                           'pointer to handle array
End Function

Sub bipClose(ByRef pPipeHandles As tBipdata Ptr)
  If pPipeHandles = 0 Then Return
  TerminateProcess(pPipeHandles->hProcessHandle, 0)
  CloseHandle(pPipeHandles->hWritePipe)
  CloseHandle(pPipeHandles->hReadPipe)
  DeAllocate(pPipeHandles)
  pPipeHandles = 0
End Sub

Function bipWrite(pPipeHandles As tBipdata Ptr, text As String, mode As String = "") As Integer
  Dim As Integer iNumberOfBytesWritten
  'Dim As String txt = text
  '? Len(text);" ";
  If pPipeHandles = 0 Then Return 0
  If LCase(mode) <> "b" Then                    'not binary mode
    text += Chr(13, 10)
  EndIf
  WriteFile(pPipeHandles->hWritePipe, StrPtr(text), Len(text), @iNumberOfBytesWritten, 0)
  Return iNumberOfBytesWritten
End Function

Function bipRead(pPipeHandles As tBipdata Ptr, timeout As UInteger = 100) As String
  'returns the whole pipe content until the pipe is empty or timeout occurs.
  ' timeout default is 100ms to prevent a deadlock
  Dim As Integer iNumberOfBytesRead, iTotalBytesAvail, iBytesLeftThisMessage
  Dim As String buffer, retText
  Dim As Double tout = Timer + Cast(Double, timeout) / 1000
  If pPipeHandles = 0 Then Return ""            'no valid pointer
  Do
    PeekNamedPipe(pPipeHandles->hReadPipe, 0, 0, 0, @iTotalBytesAvail, 0)
    If iTotalBytesAvail Then
      buffer = String(iTotalBytesAvail, Chr(0))
      ReadFile(pPipeHandles->hReadPipe, StrPtr(buffer), Len(buffer), @iNumberOfBytesRead, 0)
      retText &= buffer
    ElseIf Len(retText) Then
      Exit Do
    EndIf
  Loop Until Timer > tout
  Return retText
End Function

Function bipReadLine(pPipeHandles As tBipdata Ptr, separator As String = "a" & Chr(13, 10), timeout As UInteger = 100) As String
  'returns the pipe content till the first separator if any, or otherwise the whole pipe
  ' content on timeout. timeout default is 100ms to prevent a deadlock
  Dim As Integer iNumberOfBytesRead, iTotalBytesAvail, iBytesLeftThisMessage, endPtr
  Dim As String buffer, retText, mode
  Dim As Double tout = Timer + Cast(Double, timeout) / 1000
  If pPipeHandles = 0 Then Return ""            'no valid pointer
  mode = LCase(Left(separator, 1))
  separator = Mid(separator, 2)
  Do
    PeekNamedPipe(pPipeHandles->hReadPipe, 0, 0, 0, @iTotalBytesAvail, 0)
    If iTotalBytesAvail Then
      buffer = String(iTotalBytesAvail, Chr(0))
      PeekNamedPipe(pPipeHandles->hReadPipe, StrPtr(buffer), Len(buffer), @iNumberOfBytesRead, _
                    @iTotalBytesAvail, @iBytesLeftThisMessage)  'copy pipe content to buffer
      Select Case mode
      Case "a"                                  'any
        endPtr = InStr(buffer, Any separator)   'look for line end sign
      Case "e"                                  'exact
        endPtr = InStr(buffer, separator)       'look for line end sign
      End Select
      If endPtr Then                            'return pipe content till line end
        Select Case mode
        Case "a"
          Do While (InStr(separator, Chr(buffer[endPtr - 1]))) And (endPtr < Len(buffer))
            endPtr += 1
          Loop
          endPtr -= 1
        Case "e"
          endPtr += Len(separator)
        End Select
        retText = Left(buffer, endPtr)
        ReadFile(pPipeHandles->hReadPipe, StrPtr(buffer), endPtr, @iNumberOfBytesRead, 0)  'remove read bytes from pipe
        Select Case mode
        Case "a"
          Return RTrim(retText, Any separator)  'remove line end sign from returned string
        Case "e"
          Return Left(retText, Len(retText) - Len(separator))
        End Select
      EndIf
    EndIf
  Loop Until Timer > tout
  If iTotalBytesAvail Then                      'return all pipe content
    buffer = String(iTotalBytesAvail, Chr(0))
    ReadFile(pPipeHandles->hReadPipe, StrPtr(buffer), Len(buffer), @iNumberOfBytesRead, 0)
    Return buffer
  EndIf
  Return ""
End Function
Dollar Holds Steady The dollar index held steady above 112 on Monday as investors looked ahead to US manufacturing data and speeches from Federal Reserve officials for insight on the state of the world’s largest economy and to guide the rates outlook. Markit PMI and ISM manufacturing data for the US are due for release later on Monday, while Fed officials Raphael Bostic and Thomas Barkin are set to give separate speeches. The dollar has climbed steadily throughout the year as the US central bank aggressively raised interest rates to quell surging inflation. The Fed also forecasted rates to peak at 4.6% next year with no cuts until 2024, shooting down any dovish pivot that the markets were hoping for in the near term. Global recession fears and the US economy’s relative strength also boosted safe-haven demand for the dollar at the expense of other assets.
What Is Geographic Atrophy? Causes, Symptoms, Diagnosis, and Treatment In geographic atrophy, cells in the macula are progressively lost, resulting in distinct patches of atrophy, or cell death. National Eye Institute Geographic atrophy is a progressive and advanced form of age-related macular degeneration (AMD), a chronic eye condition affecting one in eight people age 60 or older worldwide. As a form of macular degeneration, geographic atrophy specifically targets the macula, the central portion of the retina that’s responsible for sharp, central vision. Geographic atrophy does not cause total blindness — but it can greatly impair your central vision, causing blind spots in the center of your vision while leaving your peripheral vision intact. Geographic Atrophy Basics Affecting nearly 20 million Americans, age-related macular degeneration is the leading cause of irreversible vision loss in people over age 60, according to the American Macular Degeneration Foundation. The vast majority of cases are of the “dry” type, in which small yellow deposits called drusen develop under the macula, leading to thinning and drying of the macula. Wet AMD, on the other hand, involves the production of new, abnormal blood vessels under the macula that are very fragile and leak blood and fluid. Geographic atrophy is a late-stage form of dry AMD. It’s not exactly rare, affecting about one million people in the United States, according to research published in 2023. It accounts for about 20 percent of all legal blindness caused by AMD. Geographic atrophy is characterized by the progressive loss of cells in the macula, resulting in the formation of distinct patches of atrophy (cell death). The atrophic patches often resemble a map or geographic shapes, hence the name. The areas of atrophy, called lesions, usually develop initially away from the fovea, the central area of the macula that’s responsible for the highest-quality vision, according to the Macular Society.
As geographic atrophy worsens, these areas expand or spread and start affecting the fovea, leading to further impairment of central vision. Signs and Symptoms Individuals with geographic atrophy often experience a gradual decline in central vision, most often beginning with blind spots or dark areas in their central vision (scotomas). Other common symptoms include: • Patchy vision that’s distorted or fragmented, with irregular patches of clear and blurred vision • Difficulty seeing in dim or low-light settings • Difficulty reading, driving, sewing, recognizing faces, and other tasks relying on central vision • Diminished contrast sensitivity, making it difficult to distinguish between shades of color or perceive fine details • Reduced vibrancy of colors People with geographic atrophy often have a gradual decline in central vision, making tasks such as reading difficult. Adobe Stock Causes and Risk Factors Geographic atrophy is the last stage of dry AMD, which is believed to arise from a combination of genetic, environmental, and lifestyle factors. Research suggests that geographic atrophy may result from chronic inflammation and immunological responses that damage photoreceptors, the pigmented layer of the retina, and other visual structures. Risk factors for geographic atrophy include: • Advanced age (60 or older) • Race (white) • Family history of macular degeneration or other eye-affecting genetic conditions • Smoking • Chronic conditions including obesity, cardiovascular disease, diabetes, and high cholesterol • Chronic inflammation • Poor vision (20/200 or worse) • Poor diet low in fruits and vegetables How Is Geographic Atrophy Diagnosed? Diagnosing geographic atrophy usually begins with your provider asking you about your symptoms, medical history, and family history.
They’ll then conduct a comprehensive eye examination and order various diagnostic tests, according to the American Academy of Ophthalmology.  These may include: • Visual acuity tests to evaluate your central vision by having you read an eye chart. • Microperimetry, a technique to evaluate visual function by stimulating the macula with varying intensities of light. • Fundus autofluorescence, the standard imaging technology to visualize retinal structures and detect geographic atrophy lesions. • Fluorescein angiography, which involves injecting a dye into the bloodstream to highlight any abnormal blood vessel growth. • Optical coherence tomography, a noninvasive imaging technique that identifies lesions by detecting the loss of retinal layers. • Multifocal electroretinography, an exam that measures the electrical activity in the retina when it is exposed to light of varying intensities. Treatment of Geographic Atrophy Currently, there is no cure for geographic atrophy. In 2023, however, the U.S. Food and Drug Administration approved the first medications to treat geographic atrophy and slow its progression: pegcetacoplan (Syfovre) and avacincaptad pegol intravitreal solution (Izervay). Both of the drugs calm the immune response to prevent further damage of retinal cells. Approved in February 2023, Syfovre is injected into the eyes every 25 to 60 days, depending on your individual response to the drug. Clinical trials showed that Syfovre, when administered on a monthly or bimonthly basis for 24 months, can reduce the rate of geographic atrophy lesion growth by up to 36 percent. 
Potential side effects of Syfovre include: • Eye infection or retinal detachment (separation of the layers of the retina) • Eye inflammation, which may cause eye redness, light sensitivity, and eye pain • Changes in vision, including blurred, distorted vision, flashing lights, and small specks floating in your vision • Blood in the white of the eye • Development of wet AMD Izervay was approved in August 2023. It works differently than Syfovre, but it is also given as an eye injection once a month. Clinical trials showed Izervay can reduce the rate of geographic atrophy lesions by 35 percent over 12 months. Izervay can produce similar side effects as Syfovre. A data analysis published in July 2024 showed that taking AREDS2 supplements — often recommended for intermediate-stage dry macular degeneration — can also slow the progression of geographic atrophy. AREDS2 tablets and capsules are sold over the counter and contain a formula of vitamin C, vitamin E, copper, zinc, lutein, and zeaxanthin. Taking a daily supplement could help people with geographic atrophy preserve their central vision for longer. Trials are under way for many other potential therapies for geographic atrophy, including: • Intravitreal brimonidine, which reduced lesion area growth within three months in a phase 2 clinical trial • Stem cell therapy • Intravitreal lampalizumab • Topical ocular tandospirone • Intravitreal Avacincaptad pegol • Oral ALK-001 • Oral emixustat hydrochloride • Oral doxycycline  Patient Resources Living with low vision usually requires learning new ways to carry out daily activities, from personal hygiene tasks, housekeeping, and cooking to carrying out work, school, parenting, or caregiving responsibilities. Your eye doctor can likely put you in touch with local resources that can help. For more help and guidance on accessing rehabilitation services, two organizations may be able to help. 
The Lighthouse Guild offers a variety of services to people with low vision, including in-person vision exams, technology training, and more in New York City, and telehealth consultations for those not in New York. Additionally, it has online support services for teens, young adults, adults, and parents of children with vision loss. Contact the Lighthouse Guild to see whether it can help with your needs. VisionAware, a part of the APHConnect Center, provides written information for people living with vision loss — as well as their families, caregivers, healthcare providers, and social service professionals — on eye diseases and disorders, along with tips for living with blindness or low vision. It produces webinars on topics related to blindness and low vision, and it maintains a directory of services, where individuals can find contact information for organizations and agencies that serve people who are blind or visually impaired in the United States and Canada. Common Questions & Answers Is geographic atrophy a rare disease? Geographic atrophy is not considered a rare disease. It affects about one million people in the United States and more than five million worldwide. Does geographic atrophy cause total blindness? No, geographic atrophy does not cause total blindness. It causes blind areas in your central vision, but does not affect your peripheral (side) vision. What does a person with geographic atrophy see? People with geographic atrophy tend to have dark areas in their central vision, or irregular patches of clear and blurred vision. How quickly does geographic atrophy progress? While the rate at which geographic atrophy progresses differs from one person to the next, it does typically progress, which is why talking to your ophthalmologist about treatment to slow progression is important. What is the treatment for geographic atrophy? Two treatments that slow the progression of geographic atrophy were approved in 2023. 
Both involve injections into the eye that are done monthly or every other month.

Resources We Trust

Editorial Sources and Fact-Checking

Everyday Health follows strict sourcing guidelines to ensure the accuracy of its content, outlined in our editorial policy. We use only trustworthy sources, including peer-reviewed studies, board-certified medical experts, patients with lived experience, and information from top institutions.

Sources
1. Vyawahare H et al. Age-Related Macular Degeneration: Epidemiology, Pathophysiology, Diagnosis, and Treatment. Cureus. September 26, 2022.
2. What Is Macular Degeneration. American Macular Degeneration Foundation.
3. Bakri SJ et al. Geographic Atrophy: Mechanism of Disease, Pathophysiology, and Role of the Complement System. Journal of Managed Care and Specialty Pharmacy. May 29, 2023.
4. Geographic Atrophy. American Macular Degeneration Foundation.
5. What Is Geographic Atrophy. Macular Society.
6. Geographic Atrophy. Cleveland Clinic. April 5, 2023.
7. Geographic Atrophy. American Academy of Ophthalmology.
8. Mukamal R. What to Know About Syfovre and Izervay for Geographic Atrophy. American Academy of Ophthalmology. February 18, 2024.
9. Frequently Asked Questions. Syfovre.
10. What to Expect. Izervay.
11. Keenan TDL et al. Oral Antioxidant and Lutein/Zeaxanthin Supplements Slow Geographic Atrophy Progression to the Fovea in Age-Related Macular Degeneration. Ophthalmology. July 16, 2024.
12. Kuppermann BD et al. Phase 2 Study of the Safety and Efficacy of Brimonidine Drug Delivery System (Brimo DDS) Generation 1 in Patients With Geographic Atrophy Secondary to Age-Related Macular Degeneration. Retina. January 2021.
13. NIH Discovery Brings Stem Cell Therapy for Eye Disease Closer to the Clinic. National Eye Institute. January 2, 2018.
Ghazala O'Keefe, MD, Medical Reviewer

Ghazala O'Keefe, MD, is an assistant professor of ophthalmology at Emory University School of Medicine in Atlanta, where she also serves as the section director for uveitis and as a fellowship director. A retina and uveitis specialist, she cares for both pediatric patients and adults with inflammatory and infectious eye diseases. She oversees the largest uveitis section in the Southeast and manages the care of complex patients with physicians from other disciplines. She is the lead editor of the EyeWiki uveitis section. She is a member of the executive committee of the American Uveitis Society and was inducted into the International Uveitis Study Group. She has served as the director of the Southeastern Vitreoretinal Seminar since 2019.

Joseph Bennington-Castro, Author

Joseph Bennington-Castro is a science writer based in Hawaii. He has written well over a thousand articles for the general public on a wide range of topics, including health, astronomy, archaeology, renewable energy, biomaterials, conservation, history, animal behavior, artificial intelligence, and many others. In addition to writing for Everyday Health, Bennington-Castro has also written for publications such as Scientific American, National Geographic online, USA Today, Materials Research Society, Wired UK, Men's Journal, Live Science, Space.com, NBC News Mach, NOAA Fisheries, io9.com, and Discover.
Probability of Having a Boy

December 11, 2017

Probability of having a boy: this question is asked with reference to different settings, most often the proportion of boys in a country. It is a very simple probability question in a software interview. It might be a little old to be asked again, but it is a good warm-up. The question goes like this:

In a country where everyone wants a boy, each family continues having babies until they have a boy. After a considerable amount of time, what is the proportion of boys to girls in the country? (Assume the probability of having a boy or a girl is the same.)

Take some time to think before looking at the explanation below.

Solution & Explanation:

It will always be 1:1 for a large number of couples. Take any number of couples N, e.g. 1024.

There will be 512 boys and 512 girls from the first deliveries (1:1). The 512 couples with boys stop having children, and the 512 couples with girls try again.
There will be 256 boys and 256 girls from the second deliveries (1:1). The 256 couples with boys stop, and the 256 couples with two girls try again.
There will be 128 boys and 128 girls from the third deliveries (1:1), then 64 and 64 from the fourth (1:1), and so on.

The ratio stays 1:1 at every round until every couple has a boy. In the end there will be N boys and N−1 girls, so the ratio is N:(N−1), which is effectively 1:1 for a large sample size N.

Mathematical Explanation:

Assume there are C couples; since every couple eventually has exactly one boy, there will be C boys. The expected number of girls can be calculated by the following method.
Number of girls = 0*(probability of 0 girls) + 1*(probability of 1 girl) + 2*(probability of 2 girls) + …
Number of girls = 0*(C*1/2) + 1*(C*1/2*1/2) + 2*(C*1/2*1/2*1/2) + …
Number of girls = 0 + C/4 + 2*C/8 + 3*C/16 + …
Number of girls = C

The last step uses the identity Σ k*x^k = x/(1−x)^2: at x = 1/2 the sum Σ k*(1/2)^(k+1) equals 1, so the expected number of girls is exactly C. It also becomes apparent if you just sum up the first 4-5 terms. The expected proportion of boys to girls is therefore 1:1.
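The 1:1 result is easy to check numerically. Below is a minimal Monte Carlo sketch of the stopping rule described above; the function name and parameters are illustrative, not part of the original question:

```python
import random

def simulate(num_couples, seed=0):
    """Each couple has children until the first boy (p = 1/2 per birth).

    Returns the total (boys, girls) across all couples.
    """
    rng = random.Random(seed)
    boys = girls = 0
    for _ in range(num_couples):
        while True:
            if rng.random() < 0.5:  # boy: this couple stops
                boys += 1
                break
            girls += 1              # girl: this couple tries again
    return boys, girls

boys, girls = simulate(100_000)
# Every couple contributes exactly one boy, so boys == num_couples;
# the girls total fluctuates around the same value, giving a ratio near 1.
print(boys, girls, girls / boys)
```

Note that the boy count is exact by construction (the stopping rule guarantees one boy per couple), while the girl count is random with the same expected value, which is the essence of the 1:1 answer.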
How is pleural effusion treated?

• Treatment of pleural effusion is based on the underlying condition and whether the effusion is causing severe respiratory symptoms, such as shortness of breath or difficulty breathing.
• Diuretics and other heart failure medications are used to treat pleural effusion caused by congestive heart failure or other medical causes. A malignant effusion may also require treatment with chemotherapy, radiation therapy, or a medication infusion within the chest.
• A pleural effusion that is causing respiratory symptoms may be drained using therapeutic thoracentesis or through a chest tube (called tube thoracostomy).
• For patients with pleural effusions that are uncontrollable or recur due to a malignancy despite drainage, a sclerosing agent (a type of drug that deliberately induces scarring) occasionally may be instilled into the pleural cavity through a tube thoracostomy to create fibrosis (excessive fibrous tissue) of the pleura (pleural sclerosis).
• Pleural sclerosis performed with sclerosing agents (such as talc, doxycycline, and tetracycline) is 50 percent successful in preventing the recurrence of pleural effusions.

Surgery

Pleural effusions that cannot be managed through drainage or pleural sclerosis may require surgical treatment. The two types of surgery include:

Video-assisted thoracoscopic surgery (VATS)
A minimally invasive approach that is completed through one to three small (approximately ½-inch) incisions in the chest. Also known as thoracoscopic surgery, this procedure is effective in managing pleural effusions that are difficult to drain or that recur due to malignancy. Sterile talc or an antibiotic may be inserted at the time of surgery to prevent the recurrence of fluid build-up.

Thoracotomy (also referred to as traditional, "open" thoracic surgery)
A thoracotomy is performed through a 6- to 8-inch incision in the chest and is recommended for pleural effusions when infection is present.
A thoracotomy is performed to remove all of the fibrous tissue and aids in evacuating the infection from the pleural space. Patients will require chest tubes for 2 days to 2 weeks after surgery to continue draining fluid. Your surgeon will carefully evaluate you to determine the safest treatment option and will discuss the possible risks and benefits of each treatment option.
There is only one crayfish that is native to Oregon: the Signal Crayfish, which has a characteristic white spot on the claw pivot. There are at least three other crayfish that are invasive to Oregon waters. (Read the DFW report on crayfish in Oregon.) To our knowledge, all of the crayfish pictured here are natives. Signal crayfish mate in autumn, and the female carries 200-400 eggs on her abdomen until they hatch in the spring. Juveniles hatch from these eggs and stay with their mother until after they molt three times. Crayfish, like all creatures with exoskeletons, have to shed their shell in order to grow. Signal crayfish typically reach sexual maturity after their second year and may have a lifespan of twenty years. Hatchling crayfish are tiny, no larger than #16 mayfly nymphs. Many of the trout we have caught on dull, tannish-brown nymphs may have mistaken them for baby crayfish. The tiny crayfish pictured above is not as long as a joint in my finger, yet I have seen crayfish that were smaller. (If you check out the second picture in Utilizing Crayfish Patterns, you will see one that is half this size.) You can bet that many of these small crayfish get eaten by fish and possibly larger aquatic insects. Diving birds, such as ouzels and mergansers, are sure to get their share. As crayfish grow larger, so do the predators that eat them, such as raccoons, and even humans. The way crayfish grow in size is to grow a new skin under their hard shell. Then they discard the hard shell and expand the new shell before it hardens. During this period they tend to hide, because they are vulnerable. In the soft-shell phase, Signal crayfish are dark olive with a bluish tinge. These dark colored vulnerable crayfish are considered to be delicacies by most trout and char that are large enough to eat them. Crayfish are native to nearly all freshwater in North America where game fish live. They are a large part of the diet of many trout and bass.
In many lakes in Oregon, crayfish supply much of the protein for trout over 14" long, and even many trout under 9". The first time I fished the Deschutes River in Oregon in 1966, the guy who took me fishing was an expert at finding crayfish and using them for bait. He had his 15-fish limit in no time. When the fish were butchered, stomach autopsies revealed that every redband trout over 14" had recently eaten at least one crayfish. Now of course, these trout are catch-and-release, but their diet probably hasn't changed significantly. In the late 1980s I was involved in a research project which caught 78 Eastern brook trout from one of the natural high lakes in the Mount Hood National Forest in Oregon. Most of these trout were under 11" long, but most had at least one crayfish in their stomach. Many seemingly barren lakes have big crayfish populations, and these crayfish become the major food source for trout. Crayfish are omnivorous. Our gill-net study showed that as soon as trout were caught in the net, crayfish climbed up the net and started to eat them. The fish were feeding on crayfish, and the crayfish were feeding on fish, forming a complete food chain between them. Even though there are more insects in weedy lakes, crayfish are at the top of the menu in these places too. Many crayfish live in rivers as well, and not only in the calm-water sections: there are crayfish in some of the fastest water, too. Fishing techniques using crayfish fly patterns vary with depth and water speed. Given the variability of sizes of crayfish available, and the range of aquatic habitats they inhabit, crayfish are worth studying by fly fishers. All types of fly tackle and fly presentations work, depending on the water being fished.
Nymph fishing with floating lines and long leaders, full sinking lines with various retrieves, and deep swinging presentations on two-hand rods with Skagit lines and sinking tips are all in the mix. Be sure to read our blog post "Utilizing Crayfish Patterns" by Jacob Noteboom.
Bunions

A bunion is a bony, painful swelling that often forms on the first joint of the big toe. Bunions can be extremely painful because the weight of your body rests on them each time you take a step. Everyday activities, such as walking and wearing shoes, can cause you extreme discomfort if you have a bunion.

The reasons a person develops a bunion can vary. Some patients form bunions due to genetic factors, complications of arthritis, or a weak foot structure. General aging can also play a role in the formation of a bunion. If you have a bunion, you may notice a bony bump on your big toe, experience swelling and redness, and the area may feel tender to the touch.

To help alleviate the pain that often comes with having a bunion, it's suggested to maintain a healthy weight to lessen the pressure on your toe, alternate heating and icing the affected area, wear wide-fitting shoes to leave plenty of space for your toes and minimize rubbing, and look into shoe inserts that can help position your foot correctly. Because bunions can result in other painful foot problems, such as hammertoes and bursitis, we recommend that you meet with a podiatrist for a professional diagnosis and for information regarding all your treatment options.

Bunion FAQ

What causes bunions?
This is a great question. There are several contributing factors to bunions. One is a family disposition: most patients have someone in their family who has had a bunion. Ill-fitting, narrow, pointed shoes also contribute to the development of a bunion. Rarely are bunions caused by trauma. The biggest factor is the function of the foot over time: bunions progress gradually, mostly due to the function of the tendons and ligaments that stabilize the big toe joint.

What is a bunion?
It is a gradual dislocation or malalignment of the big toe joint (metatarsophalangeal joint).
The big toe gradually drifts toward the outside of the foot, and the metatarsal bone becomes prominent on the inside of the foot. Bunions can range in severity from mild to severe. At the end stage of the deformity, the big toe and second toe overlap or underlap one another.

What are my treatment options for bunions?
Treatment for bunions is divided into two camps. Nonsurgical conservative treatment is the first camp. Treatment here consists of changing the types of shoes you wear so that the toe box is wider and accommodates the bunion deformity. Pads that go over the deformity can be purchased to decrease the pain associated with it, and toe spacers can be worn to separate the first and second toes from one another. Additionally, medications can be used to treat the pain; most commonly, oral or topical anti-inflammatory medications are used.

Surgical treatment is the other camp. When your bunion becomes painful or starts to impact daily activity and shoe wear, it is time to consider surgery. The goal of surgery is to realign the joint, reduce pain, and improve function for the patient.

Do bunions go away?
Despite what some marketing companies may say, bunions do not go away. The only way to correct the deformity is with surgery, which requires a realignment of the joint. I often compare it to a warped wall: the only way to straighten the wall is to structurally rebuild it. Bunions can be treated with accommodation, which is commonly done, but correction of the deformity requires surgery.

What are my surgery options for bunions?
There are approximately 100 different procedures for correcting a bunion deformity. Depending on the severity of the deformity, the surgeon should choose the most appropriate procedure for you.
It is important that you select a board-certified foot and ankle surgeon who has the experience to perform the surgery and get the best outcomes possible.

What is recovery like from bunion surgery?
Recovery can vary depending on the type of procedure that is performed. Generally speaking, it will require you to stay off your foot for a few days to a few weeks. You will need to ice your foot and ankle and elevate it during the initial postoperative period. You will be given medications to help control your pain after surgery, and you may be required to use crutches or a walker to keep weight off your foot. It will take several weeks before you can return to normal activities. For detailed questions about recovery, make sure you have an informed discussion with your surgeon. Also, visit our page about what to expect from your foot and ankle surgery.

What is 3D bunion correction?
3D bunion correction has been popularized by marketers in the medical industry. The term simply refers to correcting all the planes of the bunion deformity: both the movement and prominence of the bone and the rotation of the bone. Correction in all planes of the deformity allows for the best correction. However, not all bunion deformities require 3D correction, because not all bunion deformities have a rotational component. At Platte River Foot and Ankle Surgeons, we have been performing 3D bunion correction when appropriate for many years, with techniques that Dr. David Waters has personally developed and utilized with great success. It's best to have a consultation with an experienced surgeon about which bunion procedure would be best for the deformity you have.

What is minimally invasive bunion surgery?
Minimally invasive foot surgery has been around for many years. Recently it has been popularized for the treatment of bunions.
It involves making small incisions and using bone burrs to cut the bone, then repositioning the bone to correct the deformity. Plates and/or screws are still used to hold the bone in the corrected position while it heals; these are inserted through additional small incisions in the foot. MIS (minimally invasive surgery) is beneficial when it is selected for the right patient and the right deformity. At Platte River Foot and Ankle Surgeons, we perform MIS bunion surgery when it is the most appropriate procedure for the patient. We would love the opportunity to have you come in for a consultation and evaluate your bunion.

Who should I see for my bunions?
Bunions should be evaluated by a board-certified foot and ankle doctor or surgeon. They have the most expertise in this area and can provide you with the most comprehensive information about the deformity and treatment options. The American Board of Foot and Ankle Surgery is the certifying body in the United States.

Do orthotics correct bunions?
Orthotics are a great treatment option for mild to moderate bunion deformity. The goal of orthotics is to slow the progression of the deformity by controlling the function of your foot and ankle. You may need a custom or over-the-counter medical-grade orthotic, depending on the mechanics of your foot and ankle.
Toric Divisors

Introduction

Toric divisors are those divisors that are invariant under the torus action. They are formal sums of the codimension-one orbits, and these in turn correspond to the rays of the underlying fan.

Constructors

General constructors

divisor_of_character(v::NormalToricVarietyType, character::Vector{T}) where {T <: IntegerUnion}

Construct the torus-invariant divisor associated to a character of the normal toric variety v.

Examples

```julia
julia> divisor_of_character(projective_space(NormalToricVariety, 2), [1, 2])
Torus-invariant, non-prime divisor on a normal toric variety
```

toric_divisor(v::NormalToricVarietyType, coeffs::Vector{T}) where {T <: IntegerUnion}

Construct the torus-invariant divisor on the normal toric variety v as a linear combination of the torus-invariant prime divisors of v. The coefficients of this linear combination are passed as a list of integers.

Examples

```julia
julia> toric_divisor(projective_space(NormalToricVariety, 2), [1, 1, 2])
Torus-invariant, non-prime divisor on a normal toric variety
```

Addition, subtraction and scalar multiplication

Toric divisors can be added and subtracted via the usual + and - operators. Moreover, multiplication by scalars from the left is supported for scalars which are integers or of type ZZRingElem.

Special divisors

trivial_divisor(v::NormalToricVarietyType)

Construct the trivial divisor of a normal toric variety.

Examples

```julia
julia> v = projective_space(NormalToricVariety, 2)
Normal, non-affine, smooth, projective, gorenstein, fano, 2-dimensional toric variety without torusfactor

julia> trivial_divisor(v)
Torus-invariant, non-prime divisor on a normal toric variety
```

anticanonical_divisor(v::NormalToricVarietyType)

Construct the anticanonical divisor of a normal toric variety.
Examples

```julia
julia> v = projective_space(NormalToricVariety, 2)
Normal, non-affine, smooth, projective, gorenstein, fano, 2-dimensional toric variety without torusfactor

julia> anticanonical_divisor(v)
Torus-invariant, non-prime divisor on a normal toric variety
```

canonical_divisor(v::NormalToricVarietyType)

Construct the canonical divisor of a normal toric variety.

Examples

```julia
julia> v = projective_space(NormalToricVariety, 2)
Normal, non-affine, smooth, projective, gorenstein, fano, 2-dimensional toric variety without torusfactor

julia> canonical_divisor(v)
Torus-invariant, non-prime divisor on a normal toric variety
```

Properties of toric divisors

Equality of toric divisors can be tested via ==. To check if a toric divisor is trivial, one can invoke is_trivial; this checks if all coefficients of the toric divisor in question are zero. This must not be confused with a toric divisor being principal, for which we support the following:

is_principal(td::ToricDivisor)

Determine whether the toric divisor td is principal.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_principal(td)
false
```

Beyond this, we support the following properties of toric divisors:

is_ample(td::ToricDivisor)

Determine whether the toric divisor td is ample.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_ample(td)
false
```

is_basepoint_free(td::ToricDivisor)

Determine whether the toric divisor td is basepoint free.
Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_basepoint_free(td)
true
```

is_cartier(td::ToricDivisor)

Check if the divisor td is Cartier.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_cartier(td)
true
```

is_effective(td::ToricDivisor)

Determine whether the toric divisor td is effective, i.e. whether all of its coefficients are non-negative.

Examples

```julia
julia> P2 = projective_space(NormalToricVariety, 2)
Normal, non-affine, smooth, projective, gorenstein, fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(P2, [1,-1,0])
Torus-invariant, non-prime divisor on a normal toric variety

julia> is_effective(td)
false

julia> td2 = toric_divisor(P2, [1,2,3])
Torus-invariant, non-prime divisor on a normal toric variety

julia> is_effective(td2)
true
```

is_integral(td::ToricDivisor)

Determine whether the toric divisor td is integral.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_integral(td)
true
```

is_nef(td::ToricDivisor)

Determine whether the toric divisor td is nef.
Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_nef(td)
true
```

is_prime(td::ToricDivisor)

Determine whether the toric divisor td is a prime divisor.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_prime(td)
true
```

is_q_cartier(td::ToricDivisor)

Determine whether the toric divisor td is Q-Cartier.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_q_cartier(td)
true
```

is_very_ample(td::ToricDivisor)

Determine whether the toric divisor td is very ample.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_very_ample(td)
false
```

Attributes

coefficients(td::ToricDivisor)

Identify the coefficients of a toric divisor in the group of torus-invariant Weil divisors.
Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> D = toric_divisor(F4, [1, 2, 3, 4])
Torus-invariant, non-prime divisor on a normal toric variety

julia> coefficients(D)
4-element Vector{ZZRingElem}:
 1
 2
 3
 4
```

polyhedron(td::ToricDivisor)

Construct the polyhedron $P_D$ of a torus-invariant divisor $D := td$ as in section 4.3.2 of David A. Cox, John B. Little, Henry K. Schenck (2011). The lattice points of this polyhedron correspond to the global sections of the divisor.

Examples

The polyhedron of the divisor with all coefficients equal to zero is a point, if the ambient variety is complete. Changing the coefficients corresponds to moving hyperplanes. One direction moves the hyperplane away from the origin, the other moves it across. In the latter case there are no global sections anymore and the polyhedron becomes empty.

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor

julia> td0 = toric_divisor(F4, [0,0,0,0])
Torus-invariant, non-prime divisor on a normal toric variety

julia> is_feasible(polyhedron(td0))
true

julia> dim(polyhedron(td0))
0

julia> td1 = toric_divisor(F4, [1,0,0,0])
Torus-invariant, prime divisor on a normal toric variety

julia> is_feasible(polyhedron(td1))
true

julia> td2 = toric_divisor(F4, [-1,0,0,0])
Torus-invariant, non-prime divisor on a normal toric variety

julia> is_feasible(polyhedron(td2))
false
```

toric_variety(td::ToricDivisor)

Return the toric variety of a torus-invariant Weil divisor.

Examples

```julia
julia> F4 = hirzebruch_surface(NormalToricVariety, 4);

julia> D = toric_divisor(F4, [1, 2, 3, 4]);

julia> toric_variety(D)
Normal, non-affine, smooth, projective, gorenstein, non-fano, 2-dimensional toric variety without torusfactor
```
Stephanie White de Goede

Stephanie White was the first captain of the Canada women's national rugby union team in 1987, co-captained the national team at the first Women's Rugby World Cup in 1991, captained the team at the second World Cup in 1994, and also captained the first Canadian women's national rugby sevens team to take part in the Hong Kong Sevens invitational tournament in 1997.

Rugby career

Between 1987 and 1997, she earned a total of 17 caps for the national team. She also helped build women's rugby in Canada, serving as a director of the Alberta Women's Rugby Union and as the director of women's rugby on the Board of Directors of the Alberta Rugby Football Union in the late 1980s. She also sat on the British Columbia Rugby Union Board of Directors, bringing the West Coast Women's Rugby Association into the BCRU in the early 2000s. White was also the women's players' representative at the Rugby Canada strategic planning session in 1996 and served on the Rugby Canada Board of Directors from 2007 to 2013. She is the chairperson of the Monty Heald Fund, which aimed to eliminate pay-to-play for the senior women's team.

Honours

She is recognized in the player category as a 2018 inductee of the Rugby Canada Hall of Fame, alongside Ruth Hellerud-Brown (builder) and Maria Gallo (player). In 2017, she was given an Honorary Life Member award at the 2017 Rugby Canada Hall of Fame ceremony.

Personal

Her husband, Hans de Goede, and their children Sophie and Thyssen have represented Canada at the national level. Her second son, Jacobus, also plays rugby.
Apollo’s brain: The computer that guided man to the Moon

When Apollo 11 touched down in the Sea of Tranquility on July 20, 1969, it was more than a triumph of the human spirit; it was also the story of a cybernetic wonder called the Apollo Guidance Computer (AGC), which helped the Apollo astronauts safely navigate to the Moon and back. It was a computer so advanced for its time that the engineers who created it said they probably wouldn't have tried to do so if they'd known what they were getting themselves into. The Apollo Guidance Computer is one of the unsung successes of the Space Race. This is probably because it was so phenomenally successful, having had very few in-flight problems – and most of those were due to human error. Carried aboard both the Command Service Module (CSM) and the Lunar Module (LM), it flew on 15 manned missions, including nine Moon flights, six lunar landings, three Skylab missions, and the Apollo-Soyuz Test Mission in 1975. At the time it was the latest and most advanced fly-by-wire and inertial guidance system, the first digital flight computer, the most advanced miniature computer to date, the first computer to use silicon chips, and the first onboard computer on which the lives of the crew depended on it functioning as advertised. Not that the Apollo Guidance Computer was much to look at. At first glance, it appeared like a brass suitcase in two parts, measuring a total of 24 × 12.5 × 6.5 in (61 × 32 × 17 cm) and weighing in at 70 lb (32 kg). Inside, it isn't even very impressive by modern computer standards, having about as much oomph as a smart bulb, with a total of about 72 K of memory and a cycle time of roughly 12 microseconds.
It's also hard to make an accurate comparison with modern devices because the AGC wasn't a general purpose computer, but one that was literally hardwired for a particular task, which allowed it to perform at the level of the Commodore 64 or ZX Spectrum of the early 1980s – try to imagine getting to the Moon using a Commodore 64 to handle the navigation and not break into a cold sweat. A job for a computer The reason why all the Apollo missions carried at least one of these computers is that the Moon missions involved navigation problems that would have made Captain Cook go bug eyed. On Earth, navigation is, at its simplest, about finding one's way from one fixed point on the globe to another. For a trip to the Moon, it's like standing with a rifle on a turntable that's spinning at the center of a much larger turntable on which is a third turntable sitting on the rim, with all the tables spinning at different and varying speeds, and trying to hit the target by aiming at where it will be three days from now. Given the enormous number of variables, the above analogy only gives a small taste of the complexity of the equations that need to be solved for such a journey – and even then, the result will be a series of increasingly accurate estimates rather than a certain path. But what was certain from very early in the Apollo program was that space navigation is too complex and too counterintuitive for the astronauts to handle. In private, the engineers preferred that they not be allowed to have anything to do with it at all. On top of that, designing and building a computer for the Apollo missions began as little more than a lot of hand waving. Though the first of all the Apollo contracts to be awarded, the AGC was one of thousands of sub-projects that were all chasing after a program where the basics were still in flux and where no one even knew if it was a mission where one, two, or more spacecraft would be used for the Moon landing. 
A new technology Things were already bad enough, but the AGC was being developed at a time when not only computers, but the entire field of electronics was undergoing an astonishing evolution that no one could predict. When the Apollo program began, computers were still gigantic machines that took up whole rooms. There were only a handful in the entire world, and getting information into and out of one was so complex that it took a clerisy of top-level mathematicians to handle the job. However, it was a field that was evolving fast, and by the time work began on the AGC, the technology was set to explode into the computer revolution that we're still trying to get a handle on today. And it wasn't just computing technologies that were advancing apace, but basic electronics as well. In the late 1940s, transistors had sent radio valves the way of the buggy whip, and the printed circuit board was conquering the old wire-and-solder circuits. But both of these were threatened by the integrated circuit (IC), direct ancestor of the silicon chip, which hit the scene in 1958. All this new technology wasn't just having a synergistic effect on computer design, it was also convulsing the entire electronics industry as the IC blurred the line between electronics firms, who traditionally designed and built circuitry, and component suppliers, who just made the parts. The IC threw the whole question of who was designing and who was supplying into flux. This caused no end of trouble for the AGC. How do you design a computer that won't fly for six years when the technology keeps changing? Worse, how do you get industry support for a computer that has to remain in production and use for 10 years when the industry expects everything to change within 18 months? It did not encourage confidence.
MIT Instrumentation Laboratory It was into this maelstrom that MIT stepped in August 1961, when NASA decided to award the Apollo Guidance Computer contract to the MIT Instrumentation Laboratory instead of the agency's usual supplier, IBM. This was in large part because MIT, under Instrumentation Laboratory head Charles Stark Draper, had a strong track record in developing inertial guidance systems, with Eldon C. Hall having designed the latest of these, the guidance computer for the US Navy's Polaris missile. At first, there was trepidation about giving the contract to MIT, but Draper showed so much confidence that his team could deliver the computer to spec and on time that he volunteered to fly on the first mission. By 1962, it was agreed that MIT would spearhead the effort with the support of the AC Spark Plug Division of General Motors, Kollsman Instrument Corporation, and Raytheon, which would build the computer itself. The big hurdle was that the specifications for the AGC were a blank sheet of paper. No one had built anything like this and no one had any idea of how to go about it. So, as a starting point, MIT fell back on a four-volume Mars mission study from 1958 that postulated an unmanned 150 kg (331 lb) Mars probe that could navigate autonomously using star fixes as it did a flyby of the Red Planet, took photographs, and then looped back to Earth for recovery. This was a very long way from something suitable for a manned lunar landing, but it was a start. Soon the basic design began to emerge of a small, self-contained, low-power general computer that could handle all the navigation problems of a Moon voyage. Based on the technology from the Polaris missile, it would use a gyroscope and accelerometers combined with a sextant to fix the position of the spacecraft and keep it on course.
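The guidance scheme just described – accelerometer readings integrated over time, periodically corrected by optical star fixes – can be sketched in miniature. The following is an illustrative one-dimensional toy in Python, not AGC code; the function name and numbers are invented for the example:

```python
# Toy dead reckoning: integrate accelerometer samples to track velocity
# and position between star fixes (simple Euler integration, 1-D only).
def dead_reckon(pos, vel, accel_samples, dt):
    for a in accel_samples:
        vel += a * dt      # each accelerometer reading updates velocity
        pos += vel * dt    # velocity in turn updates position
    return pos, vel

# A burn at a constant 1 m/s^2 for ten one-second steps, starting from rest:
pos, vel = dead_reckon(0.0, 0.0, [1.0] * 10, 1.0)
print(vel, pos)  # 10.0 55.0 (the discrete sum 1 + 2 + ... + 10)
```

The real system did this in three dimensions with a gimballed inertial platform, and the star sightings served to bound the drift that pure integration inevitably accumulates.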
Beyond this, the new computer would have to conform to general Apollo specifications: it had to be rugged enough to withstand spaceflight and use the minimum number of transistors, which were still unreliable. In addition, it needed a simple control interface for the astronauts – though the engineers would have preferred the crew to just go along for the ride. Tyranny of hardware It seemed like a promising start, but it didn't last. By 1966, MIT was obviously in way over its head and NASA sent in a troubleshooter named Bill Tindall, who rode the team and became notorious for his blistering "Tindallgrams." These lengthy yet blunt missives outlined how MIT suffered from not being a proper contractor and lacked the requisite culture and discipline for a job like the AGC. The result was slow progress and needless duplication in the software, which was full of bugs and took up too much memory. But not all of the problems were organizational. We mentioned software, but in 1966, "software" was a new word and many computer professionals had trouble understanding the concept. In fact, no one was exactly certain what a computer could actually do. What were its strengths? What were its weaknesses? It was learn as you go. One of the lessons MIT learned was that the technology was still too primitive for the machine they'd envisaged. By 1966, it was clear that the AGC was just too small, with too little memory, too slow to handle enough tasks at one time, and unable to handle data the way analog circuits could. In other words, it was a return to the tyranny of hardware. The idea of a general computer controlling the mission was abandoned and the AGC was now a specialized machine relegated to backup status. Going forward, ground computers would feed navigation data to the Apollo spacecraft while the onboard computer stood by in case there was a breakdown in the communications link. That may seem like a big step down, but the AGC still had a vital function.
Ground control might be able to handle navigation, but there was still a time lag of over a second between Earth and the Moon, and when an Apollo spacecraft went behind the Moon all communications were cut off. Worse, this was during the depths of the Cold War and the Americans were worried that the Soviets would deliberately jam transmissions. How it worked One thing to bear in mind when looking at the AGC is that it was both cutting-edge in design and very old-fashioned in how it was built – and both presented their own challenges. Unlike modern computers, the AGCs were all handmade in a slow, laborious process that even partial automation and new testing methods did little to speed up or make easier. In all, it took 2,000 man-years to build the computers. One difficulty was that the AGC incorporated a lot of cutting-edge technologies; it was, for example, the first computer to rely on chip components for its logic circuitry – specifically, three-input NOR gates. These were just coming on the market, but by 1963 the MIT Instrumentation Laboratory was buying up 60 percent of the chip production in the United States. In its final form, the AGC was no longer a general-purpose computer, but one designed to carry out specific tasks, and was wired to do so. It consisted of two metal trays – one for the memory and one for the logic circuits – for a total of 30,000 components. Because of the limitations of the technology, and despite its complexity, the AGC was designed to be as simple as possible, with as few parts as possible, for lower weight and greater reliability. Since the AGC had to operate a quarter of a million miles from the nearest repair shop, this reliability was a top priority. At one point, there was a suggestion of installing a duplicate computer aboard the spacecraft, but this was turned down in favor of vigorous and aggressive testing, then hermetically sealing the components to keep out dust and moisture.
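Because all of the AGC's logic was built from that single element, it is worth seeing how every other gate falls out of NOR alone. A small illustrative sketch in Python (a logic demonstration, not AGC circuitry):

```python
# Every Boolean function can be composed from the AGC's one building
# block, the three-input NOR gate (unused inputs tied to 0).
def nor(a, b, c=0):
    return int(not (a or b or c))

def not_(a):
    return nor(a, a)            # NOR with both inputs tied together

def or_(a, b):
    return not_(nor(a, b))      # invert a NOR to get OR

def and_(a, b):
    return nor(not_(a), not_(b))  # De Morgan: AND from NORs and inverters

# Truth table for AND, built entirely from NOR gates:
print([and_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
```

This universality is exactly what made a single mass-produced chip type attractive: one part to qualify, test, and stockpile for the whole logic section.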
One major design concern was how to store programs and data in the AGC. Computer memories had come a long way from the days when data was fed in using punched paper tape or stored by sending sound waves through tubes filled with mercury, but the technology was still in its infancy and none of the current storage techniques were practical for Apollo. Instead, MIT came up with a novel approach where the software was literally woven into the memory banks. Using a technique called "rope memory," the engineers came up with a method where tiny rings of iron had wires running through them. When a wire ran through the center of the ring, it represented the binary number 1. When it ran outside, it was 0. To program these rope memories, MIT used what they dubbed the LOL method, for "little old ladies." This was because the programming was done by ex-textile workers, who skillfully sent wire-carrying needles through the iron rings. They were aided by an automated system that showed them which hole in the workpiece to insert the needle into, but it was still a highly skilled job that required concentration and patience. The result was an indestructible memory that could not be erased, altered, or corrupted. The disadvantage was that producing this memory was very hard to do and even harder to correct when an error was found. However, this was also an advantage because it meant that last-minute "good ideas" could be disregarded. Priority over time-sharing Of course, this wasn't the only memory in the AGC. There was also a 2,000-word RAM memory bank that acted like a scratch pad for temporary data while the computer was running a program. This was particularly important because of a special safety feature of the machine. In the 1960s, the common practice for a computer that was used by several people or ran multiple programs was time-sharing. In this, the computer would allocate microseconds of time to each of these and switch between them. 
This might have been fine for a university mainframe, but for Apollo it could have been fatal because the computer might end up preoccupied with trivia in a life or death situation or could crash in a manner all too familiar to computer users of today. That was when computer pioneer Halcombe Laning came up with a solution. Instead of timesharing, the AGC was programmed for priority. In other words, each program was numbered in order of importance at any particular point in the mission timeline. If a higher priority program needed the computer, the others would simply stop and wait for it to finish, then resume. Meanwhile, the temporary memory held the data up to the point of stoppage in a way similar to that of a modern computer going into sleep mode. This not only eliminated crashes, but also allowed the crew to interrupt a running program with new data as it came in. This data came from a number of devices, including the sextant, the telescope, the Inertial Measurement Unit (IMU) that consisted of a gyroscope and three accelerometers, the manual control used by the astronauts, the Command Module Rendezvous Radar, the Lunar Module Landing Radar, telemetry from Earth, the main engine, and the reaction control system. But the most important of the inputs was the Display and Keyboard (DSKY) unit with which the astronauts communicated with the computer. This user interface was so bulky that many people who see it today think that it's the computer itself, but it's actually nothing more than a collection of warning and status lights, buttons, and a numeric display. Designed by Alan Green of MIT, the DSKY seems, at first, to be very difficult to use (there are a number of simulators online, if you'd like to have a go). The astronauts thought so, too, but with practice, they were soon surprised by how good the device actually was and they became big fans of it. What was fascinating was the unique way the DSKY worked. 
Instead of typing in word commands or clicking on icons, the astronauts used a special numeric language of "nouns" and "verbs." A noun was an object and a verb was an action to be taken by the object. To operate, the astronaut would first press the unlock key that prevented accidental button pressing, then enter the number code for the noun and then the code for the verb. The result would be a command like "display gimbal angles" or "load star number." If the command required the astronaut to enter data, such as the star number, the computer would flash a ready signal and wait for the data. There was even a cheat sheet printed on one of the bulkheads listing computer commands. Of course, a computer is only as good as its software, and the AGC's software took 350 engineers the equivalent of 1,400 man-years to develop before the first Moon landing. The effort got off to a rocky start because the programmers had no specifications, and not having a solid grasp of the concept of software didn't help. Unlike today, the code was written by hand and then transferred to huge stacks of punch cards for testing. Despite this primitive method, the software was a huge leap forward and the first that had to handle real-time problem solving on which three men's lives would depend. It was basically a mix of assembly language and interpreted mathematical language that had to be constantly updated as the hardware, the mission, and the role of the astronauts continued to evolve. It was so intense a task that it became all-consuming for the MIT team, and it soon took its toll as the members' personal lives suffered, as evidenced by a staggering divorce rate. The Apollo Computer in action In practice, the AGC performed flawlessly on the Apollo flights, with the only problems arising from entering the wrong code or flipping the wrong switch. On Apollo 8, Command Module Pilot Jim Lovell conscientiously attended the computer as he fed in data from the CM sextant.
When the spacecraft reached the Moon in December 1968, the computer and NASA agreed on Apollo 8's position to within 2.5 km (1.6 mi), and on the return trip only one course correction was required. But it was on Apollo 11 that the AGC really showed its stuff. During the historic descent to the Sea of Tranquility, the computer suddenly had a fit because the rendezvous radar had been accidentally left on. The radar was flooding the computer with meaningless data; in a modern computer this would have resulted in a freeze or a crash. Instead, the AGC signaled "1202" for an overload error, switched off every program except the top priority, and restarted. A possible abort avoided, Mission Commander Neil Armstrong was given the GO command to proceed with the landing. Most people are unaware of that little story, and perhaps that is the best tribute a computer can ever have. It did the job.
Comments
Also, "rope memory" wasn't all that uncommon; we micro-coded an Interdata Mod 3 that way to emulate an IBM 2250 Display. Things were primitive enough; there's no need to exaggerate.
My father worked at MIT during the latter part of the time period in question. He saw these machines in person and knew many of the people directly responsible for their design, programming, construction and testing. He didn't work on them himself – he was a field engineer for IBM supporting various other computing resources at MIT – but it was an amazing endeavor. If anything the author understates its importance in enabling Americans to land on the moon before the Soviets, who could not accomplish the same feats of miniaturization. Thus they were stuck with the problem of not having a guidance computer like this one as their spacecraft lost line-of-sight telemetry during its pass around the dark side of the moon. We needed this computer in the spacecraft.
If you look at the burns you can see where they took place, and having the guidance computer on-board was crucial. The Wikipedia entry on the AGC is also quite good with a lot of footnotes.
FINEWEB-EDU
Convert to PDF Written by Kevin Tavolaro When you convert a document to PDF, you make it suitable for a number of uses that would otherwise be hampered by the original file format. PDF files are universal and can be opened and viewed on any computer and operating system, from Windows to Unix to Mac. The universal accessibility of these files makes them ideal for data distribution and transfer, as they can be opened and viewed by all potential recipients. In addition, PDF files are usually much more compact than their counterparts, thanks to compression. The smaller file size helps to increase the speed and efficiency of any data transfer. In order for PDF files to be universal, they must be able to present their data as a cohesive whole, unaffected by the computer environment or software. This means that all the individual components of a document are fused together into a solitary, stand-alone entity during the creation of a PDF file. For example, an MS Word document might contain text, images, specific alignment, and other variables, such as the type and size of font used in the file. In fact, the font displayed for most documents is contingent on that font's availability on the computer the document is being viewed on. If the font is not present on the recipient's computer, their word processing program will automatically select a substitute font and font size for the file. By converting a Word file to PDF, you can help to secure the data by fixing it in an unalterable, universally readable document. Why Convert to PDF? Many programs are now equipped to convert a file to PDF directly within the software. Software that includes this option for basic documents usually does so with an "export to PDF" selection in the file menu. Nearly all programs offered by Adobe Systems, the originators of PDF, include such a function, including specialty applications such as Photoshop and Illustrator.
If you want to convert a standard document format to PDF but your software lacks an "export to PDF" option, you will need a PDF converter. This can be anything from a small plug-in that lets you create PDF documents from other file formats via a virtual print-to-PDF option, to larger, specialized stand-alone PDF conversion programs that can process batch conversions and compile searchable files from scanned document images. Although PDF files can be created from existing documents, they are still difficult to rearrange and edit once created. This is by design, as the format is meant to preserve the data and appearance of the file across platforms. If PDF components could be easily manipulated, they would be subject to each computer's settings, resolution, operating system, and software, instead of maintaining the system and resolution independence that enables their cross-platform compatibility.
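To see why a PDF is self-contained, it helps to look at what a converter actually emits: a single file holding the page tree, font references, content streams, and a byte-offset index. The sketch below hand-rolls a minimal one-page PDF in Python; it is a simplified illustration of the format's structure under stated assumptions (ASCII text, one built-in font), not a production converter.

```python
# Build a minimal one-page PDF by hand: catalog, page tree, page, content
# stream, and font, followed by the xref byte-offset table and trailer.
def make_pdf(text):
    objs = [
        "1 0 obj<</Type/Catalog/Pages 2 0 R>>endobj",
        "2 0 obj<</Type/Pages/Kids[3 0 R]/Count 1>>endobj",
        "3 0 obj<</Type/Page/Parent 2 0 R/MediaBox[0 0 612 792]"
        "/Contents 4 0 R/Resources<</Font<</F1 5 0 R>>>>>>endobj",
        None,  # content stream, filled in below
        "5 0 obj<</Type/Font/Subtype/Type1/BaseFont/Helvetica>>endobj",
    ]
    stream = f"BT /F1 12 Tf 72 720 Td ({text}) Tj ET"
    objs[3] = f"4 0 obj<</Length {len(stream)}>>stream\n{stream}\nendstream endobj"
    out, offsets = "%PDF-1.4\n", []
    for o in objs:
        offsets.append(len(out))   # byte offset of each object (ASCII only)
        out += o + "\n"
    xref_pos = len(out)
    out += f"xref\n0 {len(objs) + 1}\n0000000000 65535 f \n"
    out += "".join(f"{off:010} 00000 n \n" for off in offsets)
    out += f"trailer<</Size {len(objs) + 1}/Root 1 0 R>>\nstartxref\n{xref_pos}\n%%EOF"
    return out.encode("latin-1")

with open("hello.pdf", "wb") as f:
    f.write(make_pdf("Hello, PDF"))
```

Everything a viewer needs – layout, font choice, and the text itself – is inside the one file, which is exactly the property that makes edited-in-place changes hard and cross-platform rendering reliable.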
ESSENTIALAI-STEM
Kummer's congruence In mathematics, Kummer's congruences are congruences involving Bernoulli numbers, found by Ernst Eduard Kummer (1851). Kubota and Leopoldt (1964) used Kummer's congruences to define the p-adic zeta function. Statement The simplest form of Kummer's congruence states that * $$ \frac{B_h}{h}\equiv \frac{B_k}{k} \pmod p \text{ whenever } h\equiv k \pmod {p-1}$$ where p is a prime, h and k are positive even integers not divisible by p−1, and the numbers B_h are Bernoulli numbers. More generally, if h and k are positive even integers not divisible by p − 1, then * $$ (1-p^{h-1})\frac{B_h}{h}\equiv (1-p^{k-1})\frac{B_k}{k} \pmod {p^{a+1}}$$ whenever * $$ h\equiv k\pmod {\varphi(p^{a+1})}$$ where φ(p^(a+1)) is the Euler totient function evaluated at p^(a+1), and a is a non-negative integer. At a = 0, the expression takes the simpler form seen above. The two sides of the Kummer congruence are essentially values of the p-adic zeta function, and the Kummer congruences imply that the p-adic zeta function for negative integers is continuous, and so can be extended by continuity to all p-adic integers.
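The first congruence is easy to check numerically. The sketch below computes Bernoulli numbers over the rationals with the standard recurrence and reduces B_h/h modulo p; the example pair h = 2, k = 8 with p = 7 satisfies h ≡ k (mod p − 1):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0 .. B_n via the recurrence sum_{j<=m} C(m+1, j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

def mod_p(q, p):
    """Reduce a rational a/b modulo p (valid when p does not divide b)."""
    return q.numerator * pow(q.denominator, -1, p) % p

p, h, k = 7, 2, 8   # h ≡ k (mod p-1), neither divisible by p-1
B = bernoulli(k)    # B[2] = 1/6, B[8] = -1/30
print(mod_p(B[h] / h, p), mod_p(B[k] / k, p))  # 3 3
```

Both sides reduce to the same residue, as the congruence predicts; by von Staudt–Clausen, p does not divide the denominator of B_h when p − 1 does not divide h, so the reduction is well defined.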
WIKI
Harry Knapp Harry Knapp may refer to: * Harry Shepard Knapp (1856–1928), Vice Admiral of the United States Navy * Harry K. Knapp (1865–1926), United States financier and horse racing executive
Carrozzeria Varesina Carrozzeria Varesina (established 1845 in Varese) was an Italian coachbuilder, known for its work on industrial vehicles such as double-decker buses for both touring and urban transport. Among its products were ten units of the "Filobus" Alfa Romeo 110AF trolleybus (1939). It also made bodies for car prototypes for companies such as Lancia and Zagato. A current company with the same name and area of expertise (established 1975) is based in Ospiate di Bollate.
Virgin Islands at the 1988 Summer Olympics The United States Virgin Islands competed at the 1988 Summer Olympics in Seoul, South Korea. 22 competitors, 19 men and 3 women, took part in 27 events in 6 sports.
Competitors The team's entries included Athletics (men's track & road events), Cycling (road), Equestrian (eventing), Sailing (men's and open classes), and Swimming.
Swimming
Men's 50m Freestyle
* Hans Foerster – Heat 24.72 (→ did not advance, 44th place)
* Ronald Pickard – Heat 25.01 (→ did not advance, 47th place)
Men's 100m Freestyle
* Hans Foerster – Heat 54.29 (→ did not advance, 54th place)
* Ronald Pickard – Heat 54.72 (→ did not advance, 58th place)
Men's 200m Freestyle
* Hans Foerster – Heat 2:01.94 (→ did not advance, 58th place)
* Kraig Singleton – Heat 2:06.45 (→ did not advance, 59th place)
Men's 100m Breaststroke
* Kristan Singleton – Heat 1:11.68 (→ did not advance, 55th place)
Men's 200m Breaststroke
* Kristan Singleton – Heat 1:00.97 (→ did not advance, 44th place)
* William Cleveland – Heat 1:01.10 (→ did not advance, 45th place)
Men's 200m Butterfly
* William Cleveland – Heat 2:13.19 (→ did not advance, 39th place)
* Kristan Singleton – Heat 2:19.68 (→ did not advance, 40th place)
Men's 200m Individual Medley
* Kraig Singleton – Heat 2:16.93 (→ did not advance, 46th place)
Men's 4 × 100 m Freestyle Relay
* Hans Foerster, Kraig Singleton, Kristan Singleton, and William Cleveland – Heat 3:43.23 (→ did not advance, 18th place)
Men's 4 × 200 m Freestyle Relay
* Hans Foerster, Kraig Singleton, Ronald Pickard, and William Cleveland – Heat 8:15.51 (→ did not advance, 13th place)
Men's 4 × 100 m Medley Relay
* William Cleveland, Kraig Singleton, Kristan Singleton, and Hans Foerster – Heat 4:15.03 (→ did not advance, 24th place)
Women's 100m Backstroke
* Tricia Duncan – Heat 1:10.37 (→ did not advance, 34th place)
Women's 200m Backstroke
* Tricia Duncan – Heat 2:33.97 (→ did not advance, 30th place)
H2Overdrive H2Overdrive is a powerboat arcade racing game developed by Specular Interactive and released in 2009 by Raw Thrills. It is considered a spiritual successor to 1999's Hydro Thunder. The game was also released in the People's Republic of China by UNIS (Universal Space). Gameplay Players race powerboats, with various levels of difficulty, against other boats as well as up to seven other players on nearby cabinets. During races, players can run into crates containing power-ups that range from speed boosts to MegaHull and HullCrusher abilities, which can be used to destroy opposing boats. On the cabinet, a throttle is used to control the boat instead of pedals, and a red button on the throttle triggers a boost. Unlike in Hydro Thunder, boats can now perform stunts and various acrobatic maneuvers. The game has a password entry system with a number keypad that stores game data as the player progresses; players can unlock achievements and increase their rank and level up to a maximum. The game is played on a 42" LCD screen. Development The game was developed by Specular Interactive, which consisted of former Midway San Diego employees, along with Hydro Thunder creator Steve Ranck. It was unveiled by Raw Thrills at the Amusement Trades Exhibition International in January 2009. The game is considered a spiritual successor to Hydro Thunder and was originally named Hydro Thunder 2 before being renamed; Midway had intended to create a sequel, but the plan was scrapped – Midway also declared bankruptcy the same year H2Overdrive was released. An unrelated game of the same name was developed for the PlayStation 2 by Crave Entertainment in 2003, though it was never released. A similar game titled Hydro Thunder Hurricane was, however, later released for the Xbox 360 console via Xbox Live Arcade.
Why Appian Is Buying a Process Mining Company Despite strong results, shares of Appian (NASDAQ: APPN) slipped following the company's second-quarter earnings on Aug. 6. The cloud-based low-code software company reported a 24% increase in overall revenue to $83 million, ahead of estimates at $79.1 million. Cloud-based subscription revenue -- the part of the business the company is most focused on -- jumped 44% to $42.5 million, its fastest growth in that category in several quarters. On the bottom line, the company's adjusted loss per share widened from $0.12 to $0.24, slightly worse than the expected loss of $0.23, as Appian stepped up investments in sales and marketing and other corporate expenses. However, the biggest news of the quarter wasn't the results but the acquisition of Lana Labs, a Berlin-based process mining company founded in 2016 with fewer than 100 employees. It's a cloud-native company that has been recognized by analysts for ease of use and customer satisfaction, though management called its revenue "negligible" on the earnings call. What is process mining? Process mining may be unknown to most investors, but it's a fast-growing field projected to grow 42% annually through 2028 to $6.9 billion as businesses adopt the technology to unlock greater efficiencies in operations and workflows. On the earnings call, CEO Matt Calkins said that process mining "is a technology that examines usage logs to discover what work people are actually doing and see the patterns in that work and find processes that have not yet been automated." In other words, process mining is a natural complement to Appian's core business, which helps businesses automate workflows and rapidly deploy applications through low-code software and robotic process automation.
In an interview with The Motley Fool, Calkins explained that process mining acts as a diagnostic tool that lets businesses know which areas need improvement, while the company's other products, like low-code software and robotic process automation, provide the tools to make those improvements. Calkins called the move into diagnostics "a true break from the history of the company" as it extends Appian's focus from workflows to diagnostics. The long game Historically, Appian has avoided making acquisitions. Calkins prefers to grow the company organically and sees its shares as a high-value asset that he's reluctant to spend. The decision to purchase Lana Labs came not because of Lana's customers, revenue, or market impact, but because of its technology. Calkins believes this approach to mergers and acquisitions differentiates the company from enterprises that make acquisitions to please Wall Street, gain publicity, or do something new. He said his top priority in acquisitions is "creating value," adding, "I'm looking for technology that will fit well with ours, that shares the values that should have high quality, and is written by good people who will stick around and fit in with our culture." Lana Labs won't have an immediate impact on the company's financials, and the acquisition is more of an investment project for now – the company plans to spend $3 million integrating Lana – but it will add significantly to the product offering and long-term potential. Appian has only one product, and adding value to that product is a smart way to increase customer retention, expand with existing customers, and attract new ones. With Lana's technology, Appian's goal is to show customers that process mining is not a stand-alone market but one that dovetails well with low-code and robotic process automation.
Additionally, the acquisition should strengthen Appian's relationships with its partners, as it gives them a new line of business to consult on, and some of its top partners, like KPMG and PwC, already have Lana practices. Low-code software, which allows businesses to deploy apps with little or no code, took a big step forward during the pandemic, and Appian is laying the groundwork for a more comprehensive company that can be an end-to-end solution for accelerating workflows and optimizing efficiencies. In an era of digital transformation, the Lana deal makes what already seemed like a promising growth path for the cloud stock an even longer one.
NEWS-MULTISOURCE
Maxim Sytch Maxim Sytch is an organizational scholar and Professor of Management and Organizations at the Ross School of Business at the University of Michigan. He is known for his work on the dynamics of knowledge and influence in interorganizational networks and in legal systems. His work has shown that small-world networks can be inherently unstable structures, how the dynamics of network communities can sustain invention, and how organizations can influence their legal environments and judicial arbiters. Sytch is an Associate Editor of Administrative Science Quarterly. Biography After completing his Ph.D. at the Kellogg School of Management at Northwestern University, Sytch held appointments at the Ross School of Business at the University of Michigan as an Assistant Professor of Management and Organizations from 2009 until 2014, and as Sanford R. Robertson Assistant Professor from 2012 until 2013. In 2014, he became Michael R. and Mary Kay Hallman Fellow and Associate Professor of Management and Organizations, and in 2021 he was appointed Professor of Management and Organizations. Since 2018, he has been an Associate Editor of Administrative Science Quarterly. Sytch's research has been covered by the BBC, Harvard Business Review, Phys.org, Reuters, and Yahoo News. Research Sytch has conducted research on the origins and evolutionary dynamics of the dual social structure of markets, which encompasses both collaborative and conflictual interorganizational relationships. His research also focuses on the roles of varying global network topologies in shaping performance consequences for entire communities of firms, the ways companies influence their legal environments by leveraging social relationships between lawyers and judicial arbiters, and how networked systems withstand shocks and recover from them.
Awards and honors * 2014 - Top 40 Business School Professors under 40 in the World, Poets and Quants * 2014 - Ross Executive Education Teaching Impact Award, Michigan Ross * 2014-2016 - Fellow, Michael R. and Mary Kay Hallman
Josh Dawsey Josh Dawsey is an American journalist who is a political investigations and enterprise reporter for The Washington Post. Education Dawsey received a B.A. in journalism from the University of South Carolina. Career Dawsey began his career as a reporter at The Wall Street Journal, first covering Governor Chris Christie before being assigned to New York to write about Mayor Bill de Blasio. He moved to Politico to become a White House reporter in 2016, before assuming the same role at The Washington Post in 2017. Dawsey won the White House Correspondents Association's Award for Deadline Reporting in 2018 and again in 2019. He was part of a team that won the 2022 Pulitzer Prize for Public Service for its coverage of the January 6 United States Capitol attack.
WIKI
Care advice issued as eating disorders spread | Society | The Guardian Press Association Wednesday 28 January 2004 06.42 EST New guidelines aimed at improving the care of people with eating disorders, particularly among young people, have been launched today by an NHS watchdog. The National Institute for Clinical Excellence (Nice) has set out treatment plans for anorexia, bulimia and binge eating, making specific recommendations for children and teenagers because of the rising numbers who have eating disorders. Nice, which decides which health treatments and technologies should be available on the NHS in England and Wales, called for eating disorder services to be tailored to the needs of young people and to involve other family members. The watchdog stressed the importance of psychological therapies in tackling eating disorders. Cognitive behavioural therapy, which aims to positively change beliefs and behaviour, is especially recommended for patients with bulimia. It warned that antidepressants should not be used as the sole or primary treatment for patients with anorexia because of their weakened state. But emaciated patients should be prioritised for treatment. The guidelines, produced in partnership with the National Collaborating Centre for Mental Health, also call for more awareness of eating disorders among GPs, because they are best placed to spot the symptoms early, due to their regular contact with families. The guidelines will be sent to family doctors and mental health specialists, while an advice and information booklet will also be produced for patients and families. The Eating Disorders Association (EDA) voiced concern about how the guidelines would be implemented because they are not accompanied by extra resources. The charity's chief executive, Susan Ringwood, said: We welcome the Nice guidelines but we are very concerned about how they will be implemented across the NHS. 
Services are still very patchy and some areas have little or no service provision. But Andrea Sutcliffe, the executive at Nice who led development of the guidelines, said they should help to iron out variations in the availability of NHS services for people with eating disorders across England and Wales. About 1.1 million people in the UK, including children as young as eight and some over 65, have an eating disorder. Simon Gowers, a consultant child and adolescent psychiatrist at the Cheshire and Merseyside eating disorders service for adolescents, said the prevalence of eating disorders was rising. They are particularly seen in young people - as many as 50% of people with anorexia nervosa are females between 13 and 19 years, he said.
NEWS-MULTISOURCE
Gautama Buddha left a profound impact on the progress of human civilization. Who was Gautama Buddha? Gautama Buddha, popularly known as the Buddha, was an ascetic, a religious leader, and a teacher who lived in ancient India. He is regarded as the founder of Buddhism and revered by Buddhists as an enlightened being who rediscovered an ancient path to freedom from ignorance, craving, and the cycle of rebirth and suffering. He taught for around 45 years and built a large following, both monastic and lay. His teachings are based on his insight into the arising of suffering or dissatisfaction and its ending — the state called Nirvana. His early life There remains significant debate regarding the actual time of Gautama Buddha's birth, although it is generally agreed he was born sometime around 563 BCE. Some Buddhist scriptures like Vishnu Bhagwat, Sthaviravali, and Harvansh point to his birth in 1760 BCE. Gautama Buddha was born into an aristocratic family in the Shakya clan, but he eventually renounced worldly life. Before his birth, his father, King Suddhodhana, was told by astrologers that his son would be a holy man. However, the king wanted his son to live a normal life and ensured that he did not receive any religious teaching. Suddhodhana kept his son in palaces replete with amenities and worldly luxuries, hoping that he would not be concerned with anything else. After becoming disillusioned with earthly life, Gautama Buddha eventually pursued a higher spiritual goal and became an ascetic, or sramana. He sees the outside world for the first time When Gautama left his palace to see the outside world for the first time, he was shocked by his encounter with human suffering. Gautama is said to have seen an old man. When his charioteer Chandaka explained that all people grew old, the prince went on further trips beyond the palace. 
He encountered a diseased man, a decaying corpse, and an ascetic who inspired him. Shortly after seeing the four sights, Gautama woke up at night and saw his female servants lying in unattractive, corpse-like poses, which shocked him. He discovered what he would later understand more deeply during his enlightenment: suffering and the end of suffering. His quest for enlightenment Moved by all the things he had experienced, Gautama decided to leave the palace in the middle of the night against his father’s will to live the life of a wandering ascetic. Accompanied by Chandaka and leaving behind his son Rahula and wife Yaśodhara, he traveled to the river Anomiya and cut off his hair. Then, leaving his servant and horse behind, he journeyed into the woods and changed into monk’s robes. Gautama traveled throughout the Gangetic plain, teaching and building a religious community. He taught a middle way between sensual indulgence and the severe asceticism found in the Indian śramaṇa movement. He taught training of the mind that included ethical training, self-restraint, and meditative practices such as jhana and mindfulness. He also criticized the practices of Brahmin priests, such as animal sacrifice and the caste system. Gautama later reconnected with his family, and his son Rahula became a monk and his spouse Yaśodhara a nun, both following his teachings. He achieves Nirvana At age 35, Gautama Buddha achieved enlightenment, or Nirvana, after a rigorous meditation phase lasting six years. It took place in Bodhgaya, and the tree under which he became enlightened is said to be still alive. He continued to travel for 45 years spreading the dharma called Buddhism. He stuck to the path of truth all along. He always advised his disciples about the need to be selfless. His view was that we should learn to be selfless, as nothing is permanent in life. Gautama Buddha died at the age of 80, and his funeral took place at Kushinagar. 
His mortal remains were taken to eight places post-cremation, and stupas were set up over the ashes. The oldest stupa among these is called Mahastupa. His teachings spread A couple of centuries after his death, he came to be known by the term Buddha, which means “Awakened One” or “Enlightened One.” Gautama Buddha’s teachings were compiled by the Buddhist community in the Vinaya, his codes for monastic practice, and the Suttas, texts based on his discourses. These were passed down in Middle Indo-Aryan dialects through an oral tradition. Later generations composed additional texts, such as systematic treatises known as Abhidharma, biographies of the Buddha, collections of stories about the Buddha’s past lives known as Jataka tales, and other discourses, i.e., the Mahayana sutras. Due to his influence on Indian religions, he came to be regarded in Vaishnavism as the ninth avatar of Vishnu.
FINEWEB-EDU
User:Anthony Payyapally Anthony Payyapally was born in Mumbai on 17 October 1983. His family is originally from Trichur district, Kerala; his father is from Kerala and his mother from Maharashtra. He completed his graduation from Mumbai University and now works in Kandahar, Afghanistan, as a Movement Coordinator supporting the US Army. He likes to write stories and has already written one, titled "My native place in Pakistan", which he submitted to many film production houses in Mumbai. He would like to work with Mr. Aamir Khan in his production house, and is trying to enter Bollywood as a writer and director.
WIKI
Looking to Invest in SpaceX? This Public Company Already Generates Revenue From Its Next-Gen Satellite Constellation Space-based broadband internet for the masses is being built over at Elon Musk's SpaceX, with the initial launch of the service -- dubbed Starlink -- expected by the end of 2020. The company's forward-thinking mentality on the commercialization of the final frontier is partially responsible for the surge in interest in investing in the movement. Problem is, SpaceX isn't a publicly traded company, leaving just a few names like Virgin Galactic (NYSE: SPCE) for investors to choose from when it comes to innovation in the burgeoning space economy. But there's another possibility: Iridium Communications (NASDAQ: IRDM), which just completed its first year operating its own broadband-speed service with global coverage. The company has yet to reach peak operational efficiency with its new satellite constellation, but solid progress is being made. Image source: Getty Images. Year one is in the books When I first bought in on Iridium a few years ago, it was all on the promise of the company being able to deploy its NEXT satellite constellation, a broadband-speed communications service with 100% coverage of the globe. The company already provided services -- primarily to government organizations, and the maritime and aviation industries -- but high-speed data services were something else entirely. Fast-forward a few years (and some 220% in share price advance), and NEXT is up and running. The service has helped Iridium reignite growth with new global Internet of Things connectivity capabilities and broadband internet for industries operating in remote locations (like aviation, maritime, mining, and government entities). A new air traffic control system based on the satellites, called Aireon, is also just getting going. With the first year of next-gen operations in the books, 18% total growth in billable subscribers led to a 10% increase in service revenues. 
Service | End of 2019 billable subscribers | End of 2018 billable subscribers | % growth
Commercial voice and data | 363,000 | 361,000 | 1%
Commercial IoT | 802,000 | 647,000 | 24%
Government voice and data | 57,000 | 54,000 | 6%
Government IoT | 78,000 | 59,000 | 32%
Data source: Iridium Communications. Expect more slow-and-steady progress ahead For 2020, management said to expect another 6% to 8% increase in service revenue to $474 million to $483 million, and operational earnings before interest, tax, depreciation, and amortization (EBITDA) to increase 7% to 10% to $355 million to $365 million. Not too shabby. More importantly, though, with the NEXT satellites now in orbit and Iridium having recently restructured the debt associated with getting the constellation into orbit, free cash flow (what's left after cash operating and capital expenses are paid for) ran at a positive $80.3 million in 2019. Management expects free cash flow to increase somewhere in the 20% ballpark in 2020. But what about that new SpaceX Starlink service coming online? Iridium CEO Matthew Desch had this to say on the last earnings call: "I would say that they're mostly focused on what I would call the commodity broadband space, trying to provide services that compete almost -- more with existing [very small aperture terminal (VSAT)] and fixed satellite services space, but even going beyond that to someone like StarLink or maybe someday Amazon Web Services, I would say that they're even going after what I would call the core fixed market, particularly consumer kind of markets that maybe are served by Hughes and ViaSat today or even by fiber and other kinds of solutions. Those are completely different markets from what Iridium is interested in or has been addressing. We've stayed away from those." 
Desch said he and the rest of Iridium are rooting for the progress of Starlink and others because it's good for the space industry and complementary to the service Iridium already provides -- namely, commercial and government broadband versus consumer broadband. That should put a potential investor's mind at ease for now about future disruption. For the time being, though, Iridium Communications looks like one of the better ways to invest in the growing space economy. John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Nicholas Rossolillo and his clients own shares of Iridium Communications. The Motley Fool owns shares of and recommends Amazon. The Motley Fool owns shares of Virgin Galactic Holdings Inc. The Motley Fool has a disclosure policy. The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
NEWS-MULTISOURCE
1901 Virginia Orange and Blue football team The 1901 Virginia Orange and Blue football team represented the University of Virginia as an independent during the 1901 college football season. Led by Westley Abbott in his first and only season as head coach, the team compiled a record of 8–2 and claimed a Southern championship. Several Virginia players were selected All-Southern, including Christie Benet, later a United States senator from South Carolina, and Bradley Walker, later a Nashville attorney and prominent referee. Other All-Southerns were captains Robert M. Coleman, Buck Harris, and Ed Tutwiler. Honors and awards * All-Southern: Christie Benet, Buck Harris, Ed Tutwiler, Robert M. Coleman, Bradley Walker.
WIKI
Accession Number: AD0785735 Title: Evaluation and Development of Electrical Anesthesia Device. Descriptive Note: Rept. no. 5 (Final comprehensive), Corporate Author: MEDICAL COLL OF WISCONSIN MILWAUKEE Report Date: 1974-05-24 Pagination or Media Count: 46.0 Abstract: A study was done in the monkey to compare: sine wave electroanesthesia (EA) currents at frequencies ranging from 100 Hz to 10 kHz; dual sine wave EA currents at three separation frequencies of 100 Hz, 500 Hz and 1 kHz; biased rectangular pulses with a 10% to 20% on time and repetition rates from 30 to 1,000 pulses per second; and rectangular interburst pulses with a 10% to 20% on time, repetition rates from 50 to 500 pulses per second, and interburst frequencies from 1 kHz to 50 kHz. Each of these waveforms was evaluated in monkeys given no drugs, Midocalm, Droperidol, Sodium Pentothal, Phenobarbital or Atropine. For the levels of drugs used, the most consistent physiologic findings were obtained with the barbiturate preanesthetic drugs. For anesthesia, the average current value of the 10% on time rectangular pulse was less than that of the 20% on time pulse. The least amount of average EA current was required for the rectangular interburst pulse currents, the next greatest amount for biased rectangular pulses, followed by the single sine wave and double sine wave currents. (Modified author abstract) Subject Categories: • Medical Facilities, Equipment and Supplies Distribution Statement: APPROVED FOR PUBLIC RELEASE
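The biased rectangular pulse waveform compared in this abstract is straightforward to prototype numerically. The following is a minimal sketch (not from the report): it builds a pulse train with a given fractional on-time and repetition rate riding on a DC bias. The function name and all parameter values are illustrative assumptions, not the study's actual stimulation settings.

```python
# Sketch of a biased rectangular pulse train like the EA waveform class
# described above: a pulse with a given duty cycle (fractional on-time)
# and repetition rate, offset by a DC bias. All values are illustrative.
def biased_rect_pulse(duty=0.10, rate_hz=500, bias=0.2, amplitude=1.0,
                      duration_s=0.01, fs=100_000):
    """Return (t, x): sample times and waveform values for the pulse train."""
    n = int(duration_s * fs)
    t = [k / fs for k in range(n)]
    # Sample is "on" while the fractional position within each period < duty.
    x = [(amplitude if (tk * rate_hz) % 1.0 < duty else 0.0) + bias for tk in t]
    return t, x

t, x = biased_rect_pulse()
# Fraction of "on" samples should match the requested 10% on-time.
on_fraction = sum(1 for v in x if v > 0.5) / len(x)  # ≈ 0.10
```

Sweeping `duty` between 0.10 and 0.20 and `rate_hz` between 30 and 1,000 covers the parameter ranges the abstract reports for this waveform class.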
ESSENTIALAI-STEM
Canton of Quillebeuf-sur-Seine The canton of Quillebeuf-sur-Seine is a former canton of the Eure département, in northwestern France. It had 6,228 inhabitants (2012). It was disbanded following the French canton reorganisation which came into effect in March 2015. It consisted of 14 communes, which joined the canton of Bourg-Achard in 2015. The canton comprised the following communes: * Aizier * Bouquelon * Bourneville * Marais-Vernier * Quillebeuf-sur-Seine * Saint-Aubin-sur-Quillebeuf * Sainte-Croix-sur-Aizier * Sainte-Opportune-la-Mare * Saint-Ouen-des-Champs * Saint-Samson-de-la-Roque * Saint-Thurien * Tocqueville * Trouville-la-Haule * Vieux-Port
WIKI
Talk:Focke Wulf Schnellflugzeug Where Eagles Dare This reference was removed as the helicopter in this movie was a Bell 47.
WIKI
Wikipedia:Possibly unfree files/2015 May 9 File:Old Dominion University Logo.png The result of the discussion was: Delete; deleted by AnomieBOT ⚡ 16:11, 17 May 2015 (UTC). A file with this name on Commons is now visible. * File:Old Dominion University Logo.png (delete | talk | history | logs). * procedural nomination: synching with c:Commons:Deletion requests/File:Old Dominion University Logo.png. Magog the Ogre (t • c) 03:29, 9 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. File:Harbhajan Singh.png The result of the discussion was: Delete; deleted by AnomieBOT ⚡ 16:11, 17 May 2015 (UTC) * File:Harbhajan Singh.png (delete | talk | history | logs). * Looks to be a screenshot of a television broadcast. Nev1 (talk) 11:00, 9 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. Files uploaded by User:Renuka hatti The result of the discussion was: Delete all as copyright status is unknown. Cheers, TLSuda (talk) 14:36, 17 May 2015 (UTC) * File:Athani factory.jpg (delete | talk | history | logs). * File:Athani sugar factory.jpg (delete | talk | history | logs). * File:Ugar sugars.jpg (delete | talk | history | logs). * File:Athani.Taluk.jpeg (delete | talk | history | logs). * File:Siddeshwar Temple.jpg (delete | talk | history | logs). * File:Ramtheerth temple.jpg (delete | talk | history | logs). * File:Hipparagi barrage.jpg (delete | talk | history | logs). * File:Murugendra shivayogi.jpg (delete | talk | history | logs). 
Most of these use Commons templates such as with seemingly nonsense content. There are also texts such as "From www.ugarsugar.com transfer was made by User:Renuka hatti." which also are associated with Wikipedia-to-Commons transfers, sourced to various different websites. One of the websites,, seems to be a Wikipedia mirror. Wikipedia content is typically free to use, but you must usually attribute the author, so if this is unattributed Wikipedia content, then it is a copyright violation. There is no indication that the other websites host free content. One file (File:Athani sugar factory.jpg) is a duplicate of File:Athani factory.jpg with an obviously bogus PD-old-100 copyright tag. --Stefan2 (talk) 13:12, 9 May 2015 (UTC) * File:Sugarcane Farm.jpeg (delete | talk | history | logs). This one is sourced to Wikimapia. Isn't Wikimapia content available under a free licence of some kind? You probably need to attribute the author, though, and I can't find out who the author is. --Stefan2 (talk) 13:12, 9 May 2015 (UTC) * Update: The uploader has now removed the puf template and added GFDL-self to some or all of the files. However, the other information provided on the file information pages suggests that the files aren't own works at all. --Stefan2 (talk) 17:26, 9 May 2015 (UTC) * Wikimapia is a site that uses Google Imagery as a base layer. I wouldn't recommend relying on it as a source without making further checks... ShakespeareFan00 (talk) 21:28, 9 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. File:Drew Peterson mugshot 2008.jpg The result of the discussion was: Delete; deleted by AnomieBOT ⚡ 16:11, 17 May 2015 (UTC) * File:Drew Peterson mugshot 2008.jpg (delete | talk | history | logs). 
* No PD-ILGov license exists; unclear that county employees' work is automatically PD in Illinois. (ESkog)(Talk) 16:54, 9 May 2015 (UTC) * Also the problem that Drew Peterson was actually arrested in May 2009, not 2008. Possibly this was taken when he was interviewed but not arrested, but the file name suggests he was arrested in 2008 when this is not the case (and indeed this is how the photo was captioned at enWiki). Nothing about this photo (or Peterson) is currently found on the Will County Sheriff's Web site to clarify the photo's history. Dwpaul Talk 01:52, 10 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. File:NRA Law Enforcement Explorer Badges-Bars.png The result of the discussion was: Delete; deleted as G7 by AnomieBOT ⚡ 23:00, 10 May 2015 (UTC) * File:NRA Law Enforcement Explorer Badges-Bars.png (delete | talk | history | logs). * NRA badge designs are not necessarily PD. ShakespeareFan00 (talk) 21:24, 9 May 2015 (UTC) * I wondered the same thing, but I thought I would give it a try. I will make this for deletion. --McChizzle (talk) 10:34, 10 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. File:New VCU Rams Logo.png The result of the discussion was: Delete; deleted by AnomieBOT ⚡ 16:11, 17 May 2015 (UTC) * File:New VCU Rams Logo.png (delete | talk | history | logs). * Rams horn TOO concern... ShakespeareFan00 (talk) 21:24, 9 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. 
Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section. File:ALEKSANDR BENUA -Costume for the play “The Bourgeois Gentleman.“ By Molière. 1920.jpg The result of the discussion was: Delete; deleted by AnomieBOT ⚡ 16:11, 17 May 2015 (UTC) * File:ALEKSANDR BENUA -Costume for the play “The Bourgeois Gentleman.“ By Molière. 1920.jpg (delete | talk | history | logs). * PUF query concerning exact interpretation of Georgian FOP. Sfan00 IMG (talk) 22:40, 9 May 2015 (UTC) * The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the media's talk page or in a deletion review). No further edits should be made to this section.
WIKI
BAML says buy equities as sentiment falls to 'extreme bear' territory LONDON (Reuters) - Bank of America Merrill Lynch’s “Bull & Bear” gauge of market sentiment has fallen to 1.8, a level the U.S. bank’s strategists describe as “extreme bear” territory that has triggered a “buy” signal for equities. This is the first time the “buy” signal for risk assets has been triggered since June 2016, when the Brexit vote sent global markets spiraling lower, the strategists said on Friday, adding it was “time to buy”. The note came as markets rallied strongly after blowout employment figures from the United States eased some investors’ concerns about slowing economic growth. BAML said global equities have tended to rise after their “buy” signal was triggered. Their median performance over the following three months - in the 15 occurrences since 2000 - has been 6.1 percent. Investors should buy the S&P 500, Chinese stocks, German stocks, U.S. small-caps, semiconductor and energy shares, U.S. and European high-yield bonds and emerging currencies, all of which are “very oversold,” the strategists said. They saw the S&P 500 rallying to 2,650 points. Global markets have had a torrid December and difficult start to 2019 as trading was marred by a slowing global economy and multiplying signs of the impact of a U.S.-China trade war, the most recent of which was a revenue warning from Apple Inc. (AAPL.O). In recommended sells, BAML pointed to U.S. Treasuries, Japan’s yen and healthcare and utility stocks considered “defensive”. BAML strategists said global equities have lost $19.9 trillion in market value since their peak in January 2018, close to the United States’ nominal gross domestic product, which stood at an annualized $20.66 trillion in the third quarter of 2018. 
Citing data from flows tracker EPFR, BAML said the past six weeks have seen record outflows from equities ($84 billion) and investment-grade bonds ($34 billion), while government bonds have seen inflows of $24 billion. Reporting by Helen Reid, Editing by Josephine Mason and Louise Heavens
NEWS-MULTISOURCE
G. Henle Verlag G. Henle Verlag is a German music publishing house specialising in Urtext editions of classical music. The catalogue includes works by composers from different periods, in particular composers from the Baroque to the early twentieth century whose works are no longer subject to copyright. In addition to sheet music, G. Henle Publishers also produces scholarly complete editions, books, reference works, and journals. Since 1995, the range has also included pocket scores (17 x 24 cm). In 2016 Henle began offering the Urtext editions in digital format in an app for iOS and Android tablets ("Henle Library"). History The publishing house G. Henle Verlag was founded on 20 October 1948 by Günter Henle with the permission of the US military government. The publishing house initially had offices in Duisburg and Munich. Under the direction of its founder, scholarly readings of musical sources and, based on this work, the editing, production and distribution of Urtext editions constituted the objective of the company from the very beginning. The typical blue book cover, which is still used today, was chosen at that time, as was the design of the title font, created by Joseph Lehnacker (1895–1965). In 2000 the logo and title design were fundamentally modernised by communication designer Rolf Müller (1940–2015). From the start, Günter Henle attached great importance to 'clear and precise engraving, a balanced arrangement of the pages, legible type-faces and aesthetically pleasing typesetting of the word parts'. For several decades, music engraving was entrusted to the Universitätsdruckerei H. Stürtz (Würzburg); later, other engravers in Leipzig and Darmstadt contributed. Towards the end of the 1990s, computer typesetting replaced manual engraving. Nearly all of the original engraving plates produced by the engraving workshops are still to be found in the publisher's archives. 
Because of Günter Henle's involvement in large-scale industry, the publishing house was initially ridiculed as a 'Klöckner music factory', Klöckner being a mass producer of steel and metal products. But it quickly developed into an important German music publisher. The first publications to appear were Urtext editions of Wolfgang Amadeus Mozart's piano sonatas in two volumes, edited by Walther Lampe, and Franz Schubert's Impromptus and Moments Musicaux, edited by Günter Henle and Walter Gieseking. The founder of the publishing house maintained close contacts and friendships with Gieseking and numerous important musicians of his time, including Claudio Arrau, Erich Kleiber, Yehudi Menuhin, Igor Oistrakh, Arthur Rubinstein and Rudolf Serkin. Today, the publishing catalogue comprises around 1500 Urtext editions and some 750 scholarly publications, making it the world's leading publisher of Urtext music editions. In 1955, the staff relocated to a newly acquired publishing building at Schongauer Strasse 24; 23 years later, in 1978, the publishing house acquired its present offices at Forstenrieder Allee 122 in Munich. The following year (1979), Günter Henle died, leading to the closure of the Duisburg offices and the expansion of the Munich offices. In 1993, an upper floor was added to the building; in 2005 the ground floor was modernised. The first managing director of the publishing house, alongside Günter Henle, was Friedrich Joseph Schaefer (1907–1981). He was succeeded in 1969 by Martin Bente (born 1936), first as commercial manager in the Munich branch, then from 1979 as managing director and head of publishing. In 2000, Wolf-Dieter Seiffert (born 1959) was appointed managing director and head of publishing; he had been an editor, and later chief editor, of the publishing house since 1990. The academic editing department plays a central role at the Henle publishing house. 
All Urtext editions are subject to strict scholarly and aesthetic criteria with regard to source and text criticism, correctness, beauty and user-friendly presentation of the music (pagination, fingerings etc.), as well as in terms of the stringency and comprehensibility of the prefaces and the critical apparatus. The following musicologists with doctorates have shaped or continue to shape the Urtext profile of the publishing house: Ewald Zimmermann (1910–1998), from 1953 to 1975 (also director of the Duisburg publishing office); Ernst Herttrich (born 1942) from 1970 to 1990; Ernst-Günter Heinemann (born 1945) from 1978 to 2010; Wolf-Dieter Seiffert (born 1959) from 1990 to 2000; Norbert Gertsch (born 1967) since 1997 (from 2009 also deputy head of publishing and programme director); Norbert Müllemann (born 1976) since 2008 (chief editor since 2017). In 1972 Günter Henle founded the Günter Henle Stiftung München [Günter Henle Foundation, Munich], which later became the owner of the publishing house. The Foundation was initially chaired by Henle himself and, after his death, by Walter Keim (1979–1981); he was followed, from 1981 to 1994, by Anne Liese Henle, wife of Günter Henle, and then, from 1994 to 2016, by C. Peter Henle (born 1938), son of Günter and Anne Liese Henle. Since 2016 Felix Henle (born 1968), son of C. Peter Henle, has been chairman of the Board of the Foundation. With the founding of the Joseph Haydn Institute in Cologne in 1955, in which Günter Henle played a decisive role, the Henle publishing house expanded beyond its Urtext editions to include the publication of other important scholarly complete editions and scholarly publications in book form. In 1981, when the publishing house appeared at the first German Music Fair in Tokyo, Japan, G. Henle USA Inc. was also founded in North America, initially as a joint venture based in St. Louis, Missouri. 
From 1985, this sales office was continued as a wholly owned subsidiary of the Munich parent company. Holger A. Siems (born 1942) became managing director, having been the sales manager of the publishing house since 1976. This branch was closed in 2007. Since then, G. Henle Verlag has been represented exclusively on the North American market by the Hal Leonard Corporation in Milwaukee, Wisconsin. The publishing house also presented itself in China when it attended the first International Book Fair in Beijing in 1986. In 1995, it granted its first licence for sheet music production to the Chinese state publisher People's Music Publishing House in Beijing. To this day, numerous Urtext editions of G. Henle Verlag for the Chinese market are published under licence with this partner firm, as well as with the Shanghai Music Publishing House. G. Henle Verlag is preparing a printed history of the publishing house for the anniversary year 2023. Urtext Editions At the heart of the G. Henle Verlag catalogue are the Urtext (original text) editions: musicologically researched, accurate musical texts for practising musicians, they contain an explanatory apparatus expounding on the sources consulted (autographs, copies, early printed editions) and on the readings chosen (‘Critical Report’). The Urtext programme covers almost the entire range of important piano music and chamber music for smaller orchestrations: complete piano works by J. S. Bach, Beethoven, Brahms, Chopin, Debussy, Joseph Haydn, W. A. Mozart, Schubert, Robert Schumann; in addition, numerous other selected piano works for two or four hands (including Dvořák, Granados, Grieg, Handel, Liszt, Mendelssohn, Rachmaninov, Ravel, Reger, Satie, Scarlatti, Scriabin and many more), as well as organ works and the entire standard repertoire for chamber music ensemble. In addition, there are complete editions of Beethoven's and Haydn's Lieder, and the principal song cycles of Robert Schumann. 
Urtext editions in small pocket format (the ‘Studien-Edition’ series), and several facsimile editions of composers’ manuscripts, are also a part of the Henle catalogue. Level of difficulty The entire repertoire of G. Henle Verlag for piano solo (author: Rolf Koenen), violin (Ernst Schliephake) and flute (András Adorján) has been classified in difficulty levels from 1 to 9: easy (1–3), medium (4–6), difficult (7–9). For example, the Prelude in C major from The Well-Tempered Clavier I has been classified as "moderately easy" (grade 2 out of 9), and the Toccata, Op. 7 by Robert Schumann as "very difficult" (9 out of 9). This classification is intended to make it easier to find suitable pieces for a particular level of ability. Complete Editions * Joseph Haydn: Works, ed. Joseph Haydn-Institut Köln. Munich, since 1958. Musicological complete edition of Joseph Haydn's works. The edition, which is planned for 111 volumes, comprises 34 series and is close to being completed. * Ludwig van Beethoven: Works. Complete edition, ed. Beethoven-Archiv Bonn. Munich, since 1961 (publications of the Beethoven-Haus Bonn). The edition is planned for 56 volumes. Three quarters of these have been published so far. * Johannes Brahms. New Edition of the Complete Works, ed. Johannes Brahms Gesamtausgabe in conjunction with the Gesellschaft der Musikfreunde in Wien. Munich, since 1996. Historical-critical new edition of the complete compositional works of Johannes Brahms, with editorial management in Kiel. The edition is planned for 65 volumes in 10 series. * Béla Bartók. BBCCE (Béla Bartók Complete Critical Edition), Munich (G. Henle Verlag) and Budapest (Editio Musica Budapest Zeneműkiadó), ed. Bartók Archives of the Hungarian Academy of Sciences. Since 2017. The edition is planned for 48 volumes and is trilingual (English, German, Hungarian; critical reports and footnotes in English only). Ludwig van Beethoven * Ludwig van Beethoven. Correspondence. 
Complete edition, ed. Sieghard Brandenburg, Munich 1996–1998. Volumes 1–6 cover letters from 1783 to 1827, volume 7 the index. Volume 8 (documents, subject index) is in preparation. * Beethoven aus der Sicht seiner Zeitgenossen [Beethoven as seen by his contemporaries], ed. Klaus Martin Kopitz and Rainer Cadenbach, 2 vols., Munich 2009. * Ludwig van Beethoven. Thematic Bibliographical Catalogue of Works, ed. by Kurt Dorfmüller, Norbert Gertsch and Julia Ronge with the additional support of Gertraut Haberkamp and the Beethoven-Haus Bonn, revised and significantly expanded new edition of the Catalogue of Works by Georg Kinsky and Hans Halm, Munich 2014. Johannes Brahms * Johannes Brahms. Thematic Bibliographical Catalogue of Works, ed. Margit L. McCorkle after joint preparatory work with Donald McCorkle, Munich 1984. Joseph Haydn * Haydn-Studies. Publications of the Joseph Haydn-Institut Köln, since 1965. The journal is published irregularly. Max Reger * Max Reger Catalogue of Works, ed. Susanne Popp for the Max-Reger-Institut in collaboration with Alexander Becker, Christopher Grafschmidt, Jürgen Schaarwächter and Stefanie Steiner, Munich 2011. Robert Schumann * Robert Schumann. Thematic Bibliographical Catalogue of Works, ed. Margit L. McCorkle, Munich 2003. Other publications * Répertoire International des Sources Musicales (RISM), ed. under the patronage of the International Musicological Society and the International Association of Music Libraries, Archives and Documentation Centres, Series B. Munich, since 1960. RISM is a worldwide catalogue of manuscript and printed musical sources preserved in all countries up to the year 1800 and beyond. The G. Henle Verlag publishes the systematically conceived Series B. * The Legacy of German Music, ed. by the Musikgeschichtliche Kommission e.V., Munich. Since 1964. * Kataloge Bayerischer Musiksammlungen [Catalogues of Bavarian Music Collections], ed. by the Bayerische Staatsbibliothek, Munich, since 1971. 
* Scholarly Compilations, books and periodicals
WIKI
Talk:Malicious caller identification Relevance Is this service relevant anymore since all metadata is saved? RobertBolan (talk) 01:02, 17 July 2014 (UTC) * It's still listed as an offered service by at least some phone companies (including mine). Meters (talk) 01:11, 17 July 2014 (UTC) could the metadata be compared to the callerID that was transmitted? Where can the details of the metadata be found? — Preceding unsigned comment added by <IP_ADDRESS> (talk) 18:54, 13 February 2019 (UTC)
WIKI
What Are Butyrate Supplements? Can They Improve Gut Health? Butyrate supplements have gained popularity in recent years for their potential to improve gut health. These supplements contain butyrate, a short-chain fatty acid that plays a crucial role in supporting the health of the gut and its microbiome. In this article, we will explore the science behind butyrate supplements, their connection to gut health, and their role in maintaining a healthy microbiome. We will also discuss the recommended dosage and usage, potential side effects and precautions, as well as the controversy surrounding these supplements. Understanding Butyrate Supplements Before delving into the benefits of butyrate supplements, it is important to understand what they are and how they work. Butyrate is a short-chain fatty acid that is naturally produced by bacteria in the gut during the fermentation of dietary fibers. It acts as an energy source for the cells lining the colon and plays a vital role in maintaining the integrity of the gut barrier. However, certain factors such as a poor diet, stress, and the use of antibiotics can disrupt the production of butyrate in the gut, leading to imbalances and potential health issues. Butyrate supplements have gained popularity in recent years due to their potential health benefits. These supplements provide an additional source of butyrate, helping to restore the balance in the gut and support overall gut health. By supplementing with butyrate, individuals can ensure that their gut receives an adequate supply of this important fatty acid, even in the presence of factors that may disrupt its production. The Science Behind Butyrate Research has shown that butyrate acts as a signaling molecule that can modulate various cellular processes in the gut. It has anti-inflammatory properties and can help reduce the production of pro-inflammatory cytokines, which play a role in the development of chronic gut conditions. 
Butyrate also promotes the production of antimicrobial peptides, which help to maintain a healthy balance of bacteria in the gut. Furthermore, butyrate has been found to have a positive impact on the gut barrier function. It helps to strengthen the tight junctions between the cells lining the gut, preventing the leakage of harmful substances into the bloodstream. This can be particularly beneficial for individuals with conditions such as leaky gut syndrome, where the integrity of the gut barrier is compromised. Additionally, butyrate has been shown to support the growth and differentiation of cells in the gut lining. It helps to promote the proliferation of healthy cells while inhibiting the growth of abnormal cells, which may have implications for individuals at risk of developing colorectal cancer. Key Components of Butyrate Supplements Butyrate supplements typically contain sodium, calcium, or magnesium salts of butyric acid. These salts allow for easier absorption in the gut, ensuring that the butyrate reaches its target cells. The supplements are available in various forms, including capsules, tablets, and powders, making them convenient and easy to incorporate into one's daily routine. It is important to note that the efficacy of butyrate supplements may vary depending on the individual's gut health and overall lifestyle factors. While supplements can provide a supplemental source of butyrate, it is also crucial to focus on maintaining a balanced diet rich in fiber and to address any underlying factors that may be affecting gut health. In conclusion, butyrate supplements offer a promising approach to support gut health and overall well-being. By understanding the science behind butyrate and the key components of these supplements, individuals can make informed decisions about incorporating them into their daily routine. 
However, it is always advisable to consult with a healthcare professional before starting any new supplement regimen to ensure it is appropriate for your specific needs. The Connection Between Butyrate and Gut Health Now that we understand the basics of butyrate supplements, let's explore how they can improve gut health. Butyrate, a short-chain fatty acid, plays a crucial role in supporting the health of the gut lining. It is produced by the fermentation of dietary fiber by beneficial bacteria in the colon. Butyrate strengthens the intestinal barrier, preventing the leakage of harmful substances into the bloodstream and reducing the risk of gut-related disorders. By promoting the production of tight junction proteins, butyrate helps maintain the integrity of the gut barrier and prevents the entry of toxins and pathogens into the body. This protective mechanism is essential for overall gut health. How Butyrate Affects the Gut Microbiota In addition to its role in maintaining gut barrier function, butyrate also has a significant impact on the gut microbiota. The gut microbiota refers to the trillions of microorganisms that reside in our digestive tract. These microorganisms play a crucial role in various aspects of our health, including digestion, immune function, and even mental health. Studies have shown that butyrate has the ability to modulate the composition and diversity of the gut microbiota. It promotes the growth of beneficial bacteria, such as Bifidobacteria and Lactobacilli, while inhibiting the growth of harmful bacteria. This shift in the microbial balance can have a positive impact on digestive health. Potential Benefits for Digestive Health Butyrate supplements have shown promising results in improving digestive health, particularly in individuals with inflammatory bowel diseases (IBD) such as ulcerative colitis and Crohn's disease. Ulcerative colitis and Crohn's disease are chronic inflammatory conditions that affect the digestive tract. 
They can cause symptoms such as abdominal pain, diarrhea, and rectal bleeding. Butyrate's anti-inflammatory properties help to reduce gut inflammation, providing relief from these symptoms. Furthermore, butyrate supplementation has been found to restore the balance of the gut microbiota in individuals with IBD. This restoration of microbial balance can lead to improved digestive function and a reduction in disease activity. In addition to its role in IBD, butyrate has also been studied for its potential benefits in other digestive disorders, such as irritable bowel syndrome (IBS) and colorectal cancer. While more research is needed in these areas, preliminary studies suggest that butyrate may have a positive impact on these conditions as well. In conclusion, butyrate supplements have shown great potential in improving gut health. By strengthening the gut barrier, modulating the gut microbiota, and reducing inflammation, butyrate can help alleviate symptoms of digestive disorders and promote overall digestive well-being. The Role of Butyrate in the Microbiome The gut microbiome refers to the community of microorganisms that resides in the digestive tract. It plays a crucial role in maintaining overall health and well-being. Butyrate has a significant impact on the composition and diversity of the gut microbiome. Butyrate, a short-chain fatty acid, is produced by certain beneficial bacteria in the gut, such as Faecalibacterium prausnitzii and Roseburia species. These bacteria ferment dietary fiber and produce butyrate as a byproduct. Butyrate acts as a fuel for these bacteria, helping them thrive and maintain a healthy gut environment. Butyrate also plays a key role in maintaining bacterial balance in the gut. When the gut microbiome is imbalanced, harmful bacteria can proliferate, leading to various health issues. By supplying the gut with butyrate through supplements, we can support the growth of beneficial bacteria and restore bacterial balance. 
Butyrate and Bacterial Balance Butyrate acts as a fuel for certain beneficial bacteria in the gut, such as Faecalibacterium prausnitzii and Roseburia species. These bacteria produce butyrate as a byproduct of fermentation and help maintain a healthy gut environment. By supplying the gut with butyrate through supplements, we can support the growth of these beneficial bacteria and restore bacterial balance in the gut. When the gut microbiome is imbalanced, harmful bacteria can overpopulate and cause inflammation and other health issues. Butyrate helps maintain a healthy balance by promoting the growth of beneficial bacteria, which can outcompete harmful bacteria for resources and space in the gut. Moreover, butyrate has been shown to have anti-inflammatory properties, which can further contribute to restoring bacterial balance. Inflammation in the gut can disrupt the delicate ecosystem of the microbiome, but butyrate helps reduce inflammation and create a more favorable environment for beneficial bacteria to thrive. Impact on Gut Flora Diversity A diverse gut microbiome is associated with better overall health. Butyrate has been found to promote the growth of diverse bacterial species, thus enhancing microbial diversity. This, in turn, can have positive effects on various aspects of health, including digestion, immunity, and even mental health. When the gut microbiome lacks diversity, it can lead to dysbiosis, a condition characterized by an imbalance in the microbial community. Dysbiosis has been linked to various health problems, including gastrointestinal disorders, metabolic disorders, and even mood disorders. Butyrate helps promote microbial diversity by providing energy and nutrients for a wide range of bacterial species. It acts as a prebiotic, selectively feeding beneficial bacteria and encouraging their growth. By supporting microbial diversity, butyrate contributes to a healthier gut environment and overall well-being. 
Furthermore, butyrate has been shown to have a positive impact on the integrity of the gut barrier. The gut barrier is a protective layer that prevents harmful substances from entering the bloodstream. By maintaining the integrity of the gut barrier, butyrate helps prevent the translocation of harmful bacteria and toxins, reducing the risk of systemic inflammation and related health issues. Taking Butyrate Supplements When considering taking butyrate supplements, it is important to understand the recommended dosage and usage as well as any potential side effects and precautions. Recommended Dosage and Usage The recommended dosage of butyrate supplements can vary depending on the individual and their specific needs. It is best to consult with a healthcare professional to determine the appropriate dosage for you. Generally, it is advised to start with a low dosage and gradually increase it over time to allow the body to adjust. Possible Side Effects and Precautions While butyrate supplements are generally considered safe, some individuals may experience minor side effects such as gastrointestinal discomfort, bloating, or diarrhea. It is important to follow the recommended dosage and consult a healthcare professional if you experience any adverse effects. Additionally, pregnant or breastfeeding women, as well as individuals with underlying medical conditions, should exercise caution and seek medical advice before starting any new supplement. The Controversy Surrounding Butyrate Supplements As with any dietary supplement, there is some debate in the medical community regarding the effectiveness and safety of butyrate supplements. The Debate in the Medical Community While there is promising research supporting the use of butyrate supplements for gut health, some experts argue that more high-quality studies are needed to fully understand their benefits and potential risks. 
Additionally, certain factors such as the variability in the composition of gut bacteria among individuals can influence the response to butyrate supplementation. Further research is necessary to determine the most effective dosage and formulation of these supplements. Addressing Common Misconceptions Butyrate supplements should not be seen as a cure-all for gut-related issues. They can be a beneficial addition to a healthy lifestyle that includes a balanced diet, regular exercise, and stress management. It is important to remember that individual responses may vary, and consulting with a healthcare professional is advisable before starting any new supplement or making significant changes to your health routine. In conclusion, butyrate supplements offer a potential means to improve gut health by supporting the gut lining, promoting a healthy microbiome, and reducing inflammation. However, further research is needed to fully understand their effectiveness and optimal usage. It is always best to consult with a healthcare professional before incorporating any new supplement into your routine to ensure it aligns with your individual health needs.
ESSENTIALAI-STEM
Exploring the Advantages and Challenges of Distributed File Systems and Object Storage, According to Gartner Distributed file systems and object storage are two technologies that have gained significant attention in recent years as businesses seek more efficient and scalable solutions for managing their data. According to Gartner, a leading research and advisory firm, these technologies offer numerous advantages but also come with their fair share of challenges. Let’s first explore the advantages of distributed file systems. One of the key benefits is improved performance and scalability. By distributing data across multiple nodes in a network, distributed file systems can handle large amounts of data and provide faster access to it. This is particularly beneficial for organizations dealing with big data or high-traffic workloads. Another advantage is increased fault tolerance and resilience. With data distributed across multiple nodes, a distributed file system can continue functioning even if one or more nodes fail. This ensures data availability and reduces the risk of data loss. Additionally, distributed file systems often provide built-in data redundancy mechanisms, further enhancing data reliability. Distributed file systems also offer better data management capabilities. They typically provide features like data replication, data migration, and data compression, which allow organizations to efficiently manage and organize their data. This is crucial for businesses that need to store and access large volumes of data while maintaining proper data governance. On the other hand, object storage also has its own set of advantages. One of the key advantages is its scalability. Object storage can handle immense amounts of data by distributing it across numerous storage nodes. This makes it suitable for organizations dealing with massive data growth, such as cloud service providers or media companies. Object storage also provides high durability and availability. 
By storing data in multiple locations, object storage ensures that data remains accessible even in the event of hardware failures or disasters. This makes it a reliable solution for organizations that require uninterrupted access to their data. Additionally, object storage offers simplified data management. Unlike traditional file systems, which organize data in a hierarchical structure, object storage uses a flat address space. This makes it easier to manage and locate data, especially when dealing with large-scale datasets. Object storage also supports metadata, allowing for efficient searching and indexing of data. Despite these advantages, both distributed file systems and object storage face their fair share of challenges. One of the main challenges is the complexity of implementation and management. These technologies often require specialized knowledge and expertise to set up and maintain, which can be a barrier for some organizations. Furthermore, compatibility with existing systems and applications can be a challenge. Integrating distributed file systems or object storage with legacy systems may require additional development work or modifications to existing applications. This can result in additional costs and complexities for organizations adopting these technologies. Another challenge is data security. Distributed file systems and object storage often involve data being distributed across multiple nodes or locations. This raises concerns about data privacy, confidentiality, and compliance with regulatory requirements. Organizations must implement robust security measures to protect their data and ensure compliance. In conclusion, distributed file systems and object storage offer significant advantages in terms of performance, scalability, fault tolerance, and data management. However, organizations must be aware of the challenges they may face during implementation and operation. 
By understanding these advantages and challenges, businesses can make informed decisions about adopting distributed file systems or object storage solutions that best suit their specific needs.
ESSENTIALAI-STEM
User:不肖生 Hello, and welcome to Wikinews. It would be helpful if you posted the english translation of your user name (Not Xiao-Sheng) to your user page for non chinese speakers. --Anonymous101 (talk &middot; contribs) (Note I have no link with the organization anonymous) 09:28, 5 April 2008 (UTC)
WIKI
DFS using Adjacency Matrix Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux as a strategy for solving mazes. Example: A depth-first search starting at A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory. Performing the same search without remembering previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and never reaching C or G. Iterative deepening is one technique to avoid this infinite loop and would reach all nodes. Output of a Depth-First Search: A convenient description of a depth-first search of a graph is in terms of a spanning tree of the vertices reached during the search. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. If the original graph is undirected then all of its edges are tree edges or back edges. 
DFS algorithm A standard DFS implementation puts each vertex of the graph into one of two categories:
• Visited
• Not Visited
The purpose of the algorithm is to mark each vertex as visited while avoiding cycles. The DFS algorithm works as follows:
• Start by putting any one of the graph’s vertices on top of a stack.
• Take the top item of the stack and add it to the visited list.
• Create a list of that vertex’s adjacent nodes. Add the ones which aren’t in the visited list to the top of the stack.
• Keep repeating steps 2 and 3 until the stack is empty.
Pseudocode:

DFS-iterative(G, s):            // where G is the graph and s is the source vertex
    let S be a stack
    S.push(s)                   // insert s in the stack
    mark s as visited
    while S is not empty:
        // pop a vertex from the stack to visit next
        v = S.top()
        S.pop()
        // push all the neighbours of v that are not yet visited
        for all neighbours w of v in graph G:
            if w is not visited:
                S.push(w)
                mark w as visited

DFS-recursive(G, s):
    mark s as visited
    for all neighbours w of s in graph G:
        if w is not visited:
            DFS-recursive(G, w)

DFS implementation with Adjacency Matrix Adjacency Matrix: An adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. Representation A common issue is how to represent a graph’s edges in memory. There are two standard methods for this task. An adjacency matrix uses an arbitrary ordering of the vertices from 1 to |V|. The matrix consists of an n × n binary matrix such that the (i, j)th element is 1 if (i, j) is an edge in the graph, 0 otherwise. An adjacency list consists of an array A of |V| lists, such that A[u] contains a linked list of vertices v such that (u, v) ∈ E (the neighbors of u). 
In the case of a directed graph, it’s also helpful to distinguish between outgoing and incoming edges by storing two different lists at A[u]: a list of v such that (u, v) ∈ E (the out-neighbors of u) as well as a list of v such that (v, u) ∈ E (the in-neighbors of u). What are the tradeoffs between these two methods? To help our analysis, let deg(v) denote the degree of v, or the number of vertices connected to v. In a directed graph, we can distinguish between out-degree and in-degree, which respectively count the number of outgoing and incoming edges.
• The adjacency matrix can check if (i, j) is an edge in G in constant time, whereas the adjacency list representation must iterate through up to deg(i) list entries.
• The adjacency matrix takes Θ(n²) space, whereas the adjacency list takes Θ(m + n) space.
• The adjacency matrix takes Θ(n) operations to enumerate the neighbours of a vertex v, since it must iterate across an entire row of the matrix. The adjacency list takes deg(v) time.
What’s a good rule of thumb for picking the implementation? One useful property is the sparsity of the graph’s edges. If the graph is sparse, and the number of edges is considerably less than the maximum (m ≪ n²), then the adjacency list is a good idea. If the graph is dense and the number of edges is nearly n², then the matrix representation makes sense because it speeds up look-ups without too much space overhead. Of course, some applications will have lots of space to spare, making the matrix feasible no matter the structure of the graphs. Other applications may prefer adjacency lists even for dense graphs. Choosing the appropriate structure is a balancing act of requirements and priorities. 
CODE:

// IN C++
#include <bits/stdc++.h>
using namespace std;

void DFS(int v, int **edges, int sv, int *visited) {
    cout << sv << endl;
    visited[sv] = 1;                     // mark the current vertex as visited
    for (int i = 0; i < v; i++) {
        if (edges[sv][i] == 1 && visited[i] == 0)
            DFS(v, edges, i, visited);   // recurse on each unvisited neighbour
    }
}

int main() {
    int v, e;
    cin >> v >> e;                       // number of vertices and edges
    // Dynamic 2-D array for the adjacency matrix
    int **edges = new int*[v];
    for (int i = 0; i < v; i++) {
        edges[i] = new int[v];
        for (int j = 0; j < v; j++)
            edges[i][j] = 0;
    }
    for (int i = 0; i < e; i++) {
        int f, s;
        cin >> f >> s;
        edges[f][s] = 1;
        edges[s][f] = 1;                 // undirected graph: mark both directions
    }
    int *visited = new int[v];
    for (int i = 0; i < v; i++)
        visited[i] = 0;
    /* Adjacency matrix code; if you want to print it as well, remove the comments:
    for (int i = 0; i < v; i++) {
        cout << endl;
        for (int j = 0; j < v; j++)
            cout << edges[i][j] << " ";
    }
    */
    // Here 0 is the starting vertex.
    DFS(v, edges, 0, visited);
}

Applications of DFS: Algorithms that use depth-first search as a building block include: • Finding connected components. • Topological sorting. • Finding 2-(edge or vertex)-connected components. • Finding 3-(edge or vertex)-connected components. • Finding the bridges of a graph. • Generating words in order to plot the limit set of a group. • Finding strongly connected components. • Planarity testing. • Solving puzzles with only one solution, such as mazes. (DFS can be adapted to find all solutions to a maze by only including nodes on the current path in the visited set.) • Maze generation may use a randomised depth-first search. • Finding connectivity in graphs. DFS pseudocode (recursive implementation): The pseudocode for DFS is shown below. In the init() function, notice that we run the DFS function on every node. This is because the graph might have two different disconnected parts, so to make sure that we cover every vertex, we run the DFS algorithm on every node.

DFS(G, u):
    u.visited = true
    for each v ∈ G.Adj[u]:
        if v.visited == false:
            DFS(G, v)

init():
    for each u ∈ G:
        u.visited = false
    for each u ∈ G:
        DFS(G, u)

Complexity of DFS: Space Complexity: The space complexity for BFS is O(w) where w is the maximum width of the tree. 
For DFS, which goes along a single ‘branch’ all the way down and uses a stack implementation, the height of the tree matters. The space complexity for DFS is O(h) where h is the maximum height of the tree. By Akhil Sharma
ESSENTIALAI-STEM
Talk:LVM What about Linux virtual machine? What about differences: Logical Volume Manager vs Linux Virtual Machine? —Preceding unsigned comment added by <IP_ADDRESS> (talk) 15:39, 12 November 2009 (UTC) There is no Linux Virtual Machine article. CristianChirita (talk) 13:49, 13 November 2009 (UTC)
WIKI
I have a function something like this:

char *foo(...)
{
    char *result;
    switch (...) {
    case 1:
        result = strdup(...);
        break;
    case 2:
        result = strdup(...);
        break;
    case 3:
        result = malloc(...);
        strcpy(result, "a");
        for (...) {
            strcat(result, ...);
        }
    }
    return result;
}

So there are a number of branches, most of which are pretty easy for splint to check, but there's one hard one (involving traversing a linked list) which it's not likely to be able to check. So I get the warning:

Returned storage result not completely defined (*result is undefined): (result)
Storage derivable from a parameter, return value or global is not defined. Use /*@out@*/ to denote passed or returned storage which need not be defined. (Use -compdef to inhibit warning)

So how do I say, within this one branch, that result really has been defined? I know I could switch off compdef checking for the whole function, but that seems undesirable in general (although probably OK for this case, since the part that splint probably can't check is the non-trivial part of the function). I think I have a similar situation (similar because I think I want to make a declaration about a variable in one branch) where a function is mostly returning structure elements, but on one branch it returns the static string "". It does this kind of thing:

    case MDT_ATTR:
        ns_uri = node_data->d.attr->ename->ns_uri;
        break;
    }
    if (ns_uri == NULL)
        ns_uri = "";
    return (ns_uri);

This produces the warning:

mdoctree.c:669:2: Clauses exit with ns_uri referencing unqualified static storage in true branch, local storage in continuation
The state of a variable is different depending on which branch is taken. This means no annotation can sensibly be applied to the storage. (Use -branchstate to inhibit warning)
mdoctree.c:669:2: Storage ns_uri becomes unqualified static

So how do I fix this? 
It feels like /*@observer@*/ is almost the right thing: if the function were only returning mysterious pointers which the callers shouldn't fiddle with, then declaring the result as /*@observer@*/ ought to work. But in this case, the warning seems to arise before that, so something else is required. -- Bruce Stephens [EMAIL PROTECTED] ACI Worldwide/MessagingDirect <URL:http://www.MessagingDirect.com/>
ESSENTIALAI-STEM
Andrew Collins (broadcaster) Andrew Collins is an English writer and broadcaster. He is the creator and writer of the Radio 4 sitcom Mr Blue Sky. His TV writing work includes EastEnders and the sitcoms Grass (which he co-wrote with Simon Day) and Not Going Out (which he initially co-wrote with Lee Mack). Collins has also worked as a music, television and film critic. Personal life Collins was briefly a member of the Labour Party between the late 1980s and early 1990s, leaving after Labour's defeat in the 1992 General Election. In 2007, he was made patron of Thomas's Fund, a Northampton-based music therapy charity for children with life-limiting illnesses. Career Collins started his career as a music journalist, writing for the NME, Vox, Select and Q (where he was editor, 1995–97). He also wrote for and edited film magazine Empire in 1995. He formed a double-act with fellow music journalist Stuart Maconie, presenting the Sony Award-winning BBC Radio 1 show Collins and Maconie's Hit Parade, after forging their style on a daily comedy strand on Mark Goodier's BBC Radio 1 drivetime show, and Collins & Maconie's Movie Club on ITV. In 1998, Collins published his first book, Still Suitable for Miners, an authorised biography of the singer-songwriter Billy Bragg. The book has been regularly updated, first in 2002, then again in 2007, 2013 and 2018. Collins often appeared on BBC, ITV and Channel 4 list shows, including the popular I Love the '80s programme. He stated on BBC Three's The Most Annoying TV Programmes We Love to Hate that he had appeared on 37 such list shows, and that this would be his last one. He subsequently appeared on Heroes Unmasked on BBC Three. He devoted a full chapter to the experience of appearing as a talking head on such shows in his third volume of autobiography, That's Me in the Corner, and continues to appear on similar shows (most recently, The Comedy Years on ITV in May 2019).
He has written three volumes of autobiography, humorous accounts of "growing up normal" in 1970s Northampton, struggling with art school in London in the 1980s, and forging a media career in the 1980s and 1990s: Where Did It All Go Right? (2003) (a Sunday Times and Smith's bestseller), Heaven Knows I'm Miserable Now (2004) and That's Me in the Corner (which draws its title from a line from the R.E.M. song Losing My Religion) published in May 2007. He produced a regular (generally weekly) podcast, the Collings & Herrin Podcast, with comedian Richard Herring, which began in February 2008 and ran for four years and was named "Podcast of the Week" in The Times in July 2008. Some episodes were recorded in front of a live audience. A hiatus from June 2011 to 4 November 2011 was due to what Herring joked was "Collins' duplicitous careerism". Herring announced that the November 2011 podcast would most likely be the last, as Collins had lost enthusiasm for it. Collins presented solo shows on BBC Radio 6 Music as well as presenting shows with Richard Herring before and during their podcast series. Collins then presented a Saturday morning radio show with Josie Long on BBC Radio 6 Music between July and December 2011. Herring felt that he had been unceremoniously replaced by Long, which contributed to the end of their collaborations. In 2010 Collins made a brief foray into standup comedy, performing a show at the Edinburgh Fringe called Secret Dancing... and other urban survival techniques. This was recorded and released on DVD. He co-wrote the first series of the sitcom Not Going Out for BBC One with Lee Mack, and co-wrote various episodes for the second, third and fourth series. The fifth was the first series he did not work on. The first series won the Rose D'Or for Best Comedy, and he and Mack won the RTS Breakthrough award. 
He worked on the team-written sitcom Gates for Sky Living in 2012, and re-teamed with Simon Day (with whom he'd co-written Grass for BBC Three and BBC2 in 2003) to co-write Colin, an episode of the anthology series Common Ground on Sky Atlantic in 2013. In recent years, Collins has moved into script editing. He was script editor on sitcoms The Persuasionists on BBC Two, Little Crackers (specifically Shappi Khorsandi's) on Sky1, the broadcast pilot of Man Down on Channel 4 (2013), two series of Badults on BBC Three (2013-2014), and the second series of Drifters for E4. In 2014, he acted as a script consultant on The Inbetweeners 2. Collins is currently the film editor for Radio Times. He wrote and filmed a weekly TV review column, Telly Addict, for The Guardian website, from May 2011 to April 2016. It returned in June 2016 on YouTube, now hosted and produced by UKTV. He took over the weekly radio show Saturday Night at the Movies on classical music station Classic FM in March 2015 (from presenter and composer Howard Goodall). In March 2023, Jonathan Ross replaced Collins as presenter. Mr Blue Sky Collins' first solo-written comedy, Mr Blue Sky for BBC Radio 4, starred Mark Benton and Rebecca Front and aired in May and June 2011. It was recommissioned for a second series in 2012. It focused on Harvey Easter (Benton), an eternally optimistic man in his 40s and his more realistic wife Jax (played in series two by Claire Skinner), and the rest of the family including son Robbie, daughter Charlie and grandmother Lou. Jim Bob of indie duo Carter The Unstoppable Sex Machine recorded a cover of "Mr. Blue Sky" by Electric Light Orchestra for the theme tune. In the Observer, radio critic Miranda Sawyer said "this series charms" and praised Benton's "lovely" performance. The List gave it 3/5, calling it "warmly cosy". The Guardian found it "full of warm, nicely observed lines". 
After its second series aired in April and May 2012 (Moira Petty in The Stage praised Benton's performance as "an essay in finely nuanced felicity"), Mr Blue Sky was not recommissioned for a third series. Books * Still Suitable for Miners: Billy Bragg: The Authorised Biography (1998, 2002, 2007, 2013, 2018 rev. ed.), Virgin Books ISBN 075355271X * Friends Reunited: Remarkable Real Life Stories from the Nation's Favourite Website (2003), Virgin Books ISBN 1-85227-039-X (ed.) * Where Did It All Go Right?: Growing Up Normal in the 70s (2003), Ebury Press ISBN 0-09-188667-8 * Heaven Knows I'm Miserable Now: My Difficult Student 80s (2004), Ebury Press ISBN 0-09-189691-6 * That's Me in the Corner: Adventures of an Ordinary Boy in a Celebrity World (2007) Ebury Press ISBN 0-09-189786-6 * Dads (2008), Contributor, (Edited by Sarah Brown and Gil McNeil) Ebury Press ISBN<PHONE_NUMBER> ISBN<PHONE_NUMBER>726 * Shouting at the Telly (2009), Contributor, (Edited by John Grindrod) Faber and Faber ISBN 0-571-24802-0 ISBN<PHONE_NUMBER>025 * Modern Delight (2009), Contributor, Faber and Faber ISBN<PHONE_NUMBER> ISBN<PHONE_NUMBER>254 * Grandparents: A Celebration (2009), Contributor, (Edited by Sarah Brown and Gil McNeil) Ebury Press ISBN<PHONE_NUMBER> ISBN<PHONE_NUMBER>783 * End of a Century: Nineties Album Reviews in Pictures (2015), Editor, SelfMadeHero ISBN 190683895X ISBN<PHONE_NUMBER>959 * Gogglebook: The Wit and Wisdom of Gogglebox (2015), Macmillan Books ISBN<PHONE_NUMBER> ISBN<PHONE_NUMBER>301
WIKI
Bhamipura Kalan Bhammipura Kalan is a village in Ludhiana District of Punjab State in India. It is 14 km south of Jagraon and about 55 km from Ludhiana city. The village consists mostly of Dhaliwal and Chahal families. The memorial of the founder of the Dhaliwal families, "Baba Sidh Bhoe", is also in this village. The village neighbours the villages of Deharka, Akhara, Dalla, Manuke, Bassuwal, Cheema, and Bhammipura Khurd. The famous Gurdwara Mehdiana Sahib is 5 km from the village.
WIKI
Polysphincta Polysphincta is a genus of the family Ichneumonidae, subfamily Pimplinae. Species The genus contains the following species:
WIKI
Lipschitz Etymology Named after Rudolf Lipschitz. Adjective * 1) (Of a real-valued real function $$f$$) Such that there exists a constant $$K$$ such that whenever $$x_1$$ and $$x_2$$ are in the domain of $$f$$, $$|f(x_1)-f(x_2)|\leq K|x_1-x_2|$$.
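A standard worked illustration (not part of the original entry): $$f(x)=\sin x$$ is Lipschitz with $$K=1$$, since by the mean value theorem $$|\sin x_1 - \sin x_2| = |\cos c|\,|x_1 - x_2| \leq 1\cdot|x_1 - x_2|$$ for some $$c$$ between $$x_1$$ and $$x_2$$.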
WIKI
Paid Notice: Deaths KRISMAN, COL. MICHAEL J., RET. KRISMAN--Col. Michael J., Ret. 91, Highland Falls, NY, on May 22 in West Point, NY. Beloved husband of Alys (nee Savage). Loving father of Michael Krisman, Alys Hart and Mary Ann Krisman-Hart. Devoted brother of Joseph C. Krisman. Loving grandfather and great-grandfather. Visitation June 1, 7-9 p.m. at William F. Hogan Funeral Home, 135 Main St, Highland Falls, NY. Funeral June 2, 10 a.m. Chapel of The Most Holy Trinity, West Point, NY. Donations to American Cancer Society.
NEWS-MULTISOURCE
Talk:European Astronaut Corps National flags I think that these flags are not very useful. Hektor 17:07, 17 July 2007 (UTC) * For what it's worth, I very much disagree; many readers will want to know which country ESA astronauts are from, and using flags makes this more understandable at a glance. <IP_ADDRESS> (talk) 00:58, 28 October 2023 (UTC)
WIKI
Code covered by the BSD License. Chebfun V4, 30 Apr 2009: numerical computation with functions instead of numbers. Editor's Notes: This file was selected as MATLAB Central Pick of the Week.

chebop_pwlinear

    function pass = chebop_pwlinear
    % This test constructs a piecewise-linear chebop and checks
    % the accuracy of the solution for the ODE:
    %   u'' + |x+.5|*u = |x| + |x-.5| + 2*sgn(x),
    %   u(-1) = 3, u(1) = 0.
    % NH 08/2010
    tol = 1e-9;
    d = [-1 1];
    x = chebfun(@(x) x, d);
    A = chebop(@(x,u) diff(u,2) + abs(x+.5).*u);
    A.lbc = @(u) u-3;
    A.rbc = @(u) u;
    f = abs(x) + abs(x-.5) + 2*sign(x);
    u = A\f;
    err = A*u-f;
    err = set(err,'imps',0*err.imps(1,:));
    pass = norm(err,inf) < tol;
ESSENTIALAI-STEM
Rubio: I still believe Trump can't be trusted with the nuclear codes Sen. Marco Rubio is sticking by his former attack line that Donald Trump can't be trusted with the nuclear codes, despite having previously come around to support Trump as the nominee. In an interview with The Weekly Standard, Rubio reaffirmed his statement from February — when he was still in the throes of a nasty primary battle — that America can't give "the nuclear codes of the United States to an erratic individual." "I stand by everything I said during the campaign," Rubio said on Thursday. Rubio's wobbly support of Trump comes after the presumptive Republican nominee's comments about Judge Gonzalo Curiel's Mexican heritage have caused a number of Republican lawmakers to rethink their endorsements of the billionaire. Rubio also recently walked back his prior pledge to speak on Trump's behalf at the Republican National Convention, saying last Monday that he would only talk about his own beliefs, and not as part of Trump's platform. On Thursday, Rubio appeared to lose patience when asked whether Trump may lose his vote. "I don't have anything new to add from what I've already said. I've talked about it all week long," he said.
NEWS-MULTISOURCE