Programming in C
MINISTRY OF GENERAL AND PROFESSIONAL EDUCATION OF THE RUSSIAN FEDERATION
MOSCOW STATE AVIATION TECHNOLOGY UNIVERSITY NAMED AFTER K.E. TSIOLKOVSKY
DEPARTMENT OF INFORMATION TECHNOLOGY
Coursework, second year, second semester.
Supervisor: Chernadsky
Submission date: _____________ Signature: _____________
Student: Litsentov D.B.
Group: 3IT-2-26
Moscow, 1998

Problem statement.
A list of the following form must be implemented:

Technical description of the program.
The program supports work with a list, which includes:
1. Creating a new list;
2. Adding an element to the list;
3. Displaying the list;
4. Saving the list data to a file;
5. Reading data from a file;
6. Deleting the list from the computer's memory;
7. Searching for an element in the list;
8. Sorting the list;
9. Deleting a list element.

Program specification.
Data can be entered into the program in two ways: from the keyboard or from a file. To build the list from a file, enter the name of the source file at the program's prompt. The program requires a PC-compatible computer and the Borland C++ compiler, version 3.01 or later. On a system with a different combination of characteristics, the test results may diverge somewhat, but nothing serious should go wrong.

Program text.

#include <iostream.h>
#include <fstream.h>

class List
{
protected:
    // Node of the chain held by each list element; declared protected so
    // that classes derived from List (TreeWork below) can reference it.
    struct Tree
    {
        int Body;
        Tree *LP;
        Tree *RP;
        Tree(int Bdy = 0) { Body = Bdy; LP = NULL; RP = NULL; }
        ~Tree() { Body = 0; LP = NULL; RP = NULL; }
    };
public:
    List(int Digit = 0);
    Tree *Root;
    List *LNext;
    List *LPrev;
};

// Build a chain of ten nodes holding the values Digit*10 .. Digit*10+9.
List::List(int Digit)
{
    Root = NULL;
    for (int i = Digit * 10; i < Digit * 10 + 10; i++)
    {
        Tree *PTree = new Tree(i);
        if (Root == NULL)
            Root = PTree;
        else
        {
            Tree *PTree1 = Root;
            while (PTree1->LP != NULL)   // walk to the end of the chain
                PTree1 = PTree1->LP;
            PTree1->LP = PTree;
        }
    }
}

class TreeWork : private List
{
public:
    void TreeWorkStart();
private:
    int ElementQuantity;
    int Mass;
    int i;
    List *BegP;
    List *PList;
    int MainMenu();
    int Work(int Task);
    int MakeNewList();
    int AddElements();
    int PrintList();
    void EraseList();
    int DeleteElement();
    int FindElement();
    int SubMenu();
    int SubWork(int Task);
    int SortByIncrease();
    int SortByDecrease();
    int SaveList();
    int OpenList();
protected:
    void GoThroughTree(Tree *L);
    void Erase(Tree *L);
};

int TreeWork::MainMenu()
{
    cout << endl << "Main Menu:" << endl << endl;
    cout << " 1. Make New List." << endl;
    cout << " 2. Add Element."   << endl;
    cout << " 3. Print List."    << endl;
What is the Relationship Between IoT and Big Data?
The Internet of Things (IoT) and big data are massive, complex ideas. While interrelated, they’re also distinct. The IoT consists of billions of devices that collect and communicate information, but big data encompasses a much wider landscape. To understand the relationship between IoT connectivity and big data, let’s first take a look at the role of big data and its key attributes.
What is Big Data?
True to its name, big data means tremendous amounts of information. It comes from a variety of sources, from connected devices to clicks from online consumers. The units used to measure big data (terabytes, petabytes, and exabytes) reflect its overwhelming nature. While advances in computing technology have enabled organizations to collect big data sets, computers lacked the power to process such amounts of information until recently. Today, businesses and other organizations are starting to sift through their data in search of actionable insights that can aid decision-making. Professionals able to work with data sets are in high demand. They use modeling software and statistical analysis to extract patterns, performance information, and potential problems. These analysts are the translators who turn big data into useful reports.
Increasingly, artificial intelligence (AI) and machine learning/deep learning technologies are aiding the process of big data analysis. They can compile data from multiple sources and use it to predict outcomes and make recommendations. For example, video streaming services such as Netflix and Amazon remember the movies you watch and recommend similar titles for future viewing.
The Four “V”s
To aid understanding of such an enormous concept, data scientists at IBM popularized the four “V”s of big data: volume, variety, velocity, and veracity.
Volume
The incredible amount of data collected today through sensors, online transactions, social media, and other mediums cannot be processed or even stored using traditional methods. According to some estimates, the accumulated volume of big data will be close to 44 zettabytes or 44 trillion gigabytes by 2020. Data sets are often so large that they cannot fit on a single server, and must instead be distributed between several storage locations. Data analytics software such as Hadoop is built to accommodate the need for distributed storage and aggregation.
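The distributed model that Hadoop-style tools implement can be sketched in miniature. The sketch below is an illustrative toy, not Hadoop's actual API: a data set too large for one server is split into shards, each shard is summarized independently, and the partial summaries are merged into one result. The event names are hypothetical.

```python
from collections import Counter

# A hypothetical event log, pre-split into "shards" the way a distributed
# store keeps data that is too large for any single server.
shards = [
    ["click", "view", "click"],
    ["view", "view", "purchase"],
    ["click", "purchase"],
]

def map_shard(shard):
    """Summarize one shard independently (the 'map' step)."""
    return Counter(shard)

def merge_counts(partials):
    """Combine the per-shard summaries into one result (the 'reduce' step)."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

# Each shard can be summarized on a different machine; only the small
# partial counts need to travel to the node doing the merge.
totals = merge_counts(map_shard(s) for s in shards)
```

The point of the split is that the expensive step (`map_shard`) runs where the data lives, and only small summaries cross the network.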
Variety
Today’s data comes in a wide range of types, from social media posts to video clips. In past decades, data was more clearly defined—for example, phone numbers, addresses, or ledger amounts—and could be collected easily into spreadsheets or tables. Today’s digital data often cannot be corralled into traditional structures. Powerful analytics software seeks to harness unstructured data, such as images and videos, and combine it with more straightforward data streams to provide additional insights.
Velocity
Currently, data is collected at a mind-boggling rate of 2.5 quintillion bytes per day. From hundreds of thousands of social media posts to more than 5 billion Google searches per day, accumulated data is streaming into servers at an unprecedented speed.
Veracity
Veracity refers to the truthfulness or accuracy of a particular set of data. That includes evaluation of the data source—is it trustworthy, or would it lead analysts astray? Poor data quality costs the U.S. around $3.1 trillion per year, so pursuing veracity is important. It includes seeking to eliminate duplication, limit bias, and process data in ways that make sense for the particular application or vertical. This is an area where human analysts and traditional statistical methodologies are still of great value. While AI is becoming more sophisticated, it cannot yet match the discernment of a trained human brain.
Big Data and IoT
In one sense, the IoT is a series of creeks and rivers that feed into an ocean of big data. The enormous collection of connected sensors, devices, and other “things” that represent the IoT—7 billion worldwide—is making a significant contribution to the volume of data collected. IoT use cases span a wide swath of sectors, from agriculture to smart devices to machinery. Sensors are used for asset management, fleet tracking, remote health monitoring, and more.
The tools created for big data and analytics are useful for corralling the influx of data streaming in from IoT devices. IoT-focused developers are creating platforms, software and applications that enterprises and organizations can use to manage their IoT devices and the data generated.
Distinct but Complementary
While both big data and the IoT refer to collecting large sets of data, only the IoT seeks to run analytics simultaneously to support real-time decisions. For example, an e-commerce company might track consumer habits over time and use that data to create tailored content and advertising for the customer. But in the case of an autonomous car, data cannot be put aside for later analysis. If it shows an impending accident, the machine needs to know those results without delay so it can make a split-second decision.
Many IoT devices rely on cloud computing, or communicating with a remote server, but in some verticals designers are applying the idea of edge data processing. In this model, the device retains power to process some data locally, ensuring minimal latency for time-sensitive operations.
While the focus of IoT is more on the immediate analysis and use of incoming data, big data tools can still aid some functions. Predictive analytics, for example, considers a machine’s performance and service alerts over time, building the library of data needed to anticipate upcoming problems. That means companies can be proactive about servicing their equipment. For example, they can ensure that spare parts or service personnel are on hand before a machine breaks down.
Types of data sources are another major distinction between the two. Big data analytics typically looks at human choices, especially in the online realm, in an effort to predict behavior and uncover patterns or trends. On the other hand, IoT is centered on machine-generated data, and its primary goals are machine-oriented—optimal equipment performance, predictive maintenance, and asset tracking, to name a few.
Common Goals
Big data and IoT are distinctive ideas, but they depend on each other for ultimate success. Both emphasize the need for converting data into tangible insights that can be acted upon.
One example of IoT working together with big data analytics comes from the shipping industry. Shipping companies are attaching IoT sensors to trucks, airplanes, boats, and trains to keep track of speed, stops, engine status, and other information. They can use that data to make immediate decisions and to anticipate forthcoming maintenance, but they also store accumulated information to get a big-picture view of the company’s performance over time. Ultimately, this combination of immediate IoT insights and long-term big data analytics results in cost savings, improved efficiency, and better use of environmental resources.
IoT and big data have an important relationship that will continue to develop as technology advances. Companies wishing to harness the power of data should carefully consider the devices they choose to deploy and the types of information they collect. Making an effort at the front end to gather only useful, applicable data—and designing internal systems to process it in sector-specific ways—will make the process of analytics that much easier.
LASIK
LASIK - FAQ
+What is LASIK?
LASIK stands for laser in-situ keratomileusis, a procedure that involves the use of a laser to reshape the cornea of the eye so that refractive capability is improved. This allows the correction of varying degrees of myopia (short-sightedness), hyperopia (far-sightedness) and astigmatism (“san guang”).
+How do I know if I qualify to undergo LASIK surgery?
You would have to be at least 18 years of age to ensure you have achieved a stable refraction status. You should not have any significant illnesses nor be on any medication that could adversely affect the surgery. You would have to be generally healthy with a refractive condition that is not overly severe or outside the accepted treatment limits of LASIK surgery. Importantly, you should have realistic expectations with a good understanding of the risks and benefits of LASIK surgery. Pregnancy is a definite disqualifier. Eligibility is best determined through a pre-LASIK evaluation with your doctor.
+Who will perform the LASIK surgery?
The doctor whom you selected as your LASIK surgeon will perform the LASIK surgery. Your LASIK surgeon will be a qualified practitioner, certified and registered with the relevant medical and professional bodies. Despite the surgery being performed with the aid of computer-guided lasers, your doctor will be in complete control of the entire surgery.
+What is the pre-LASIK evaluation like?
The pre-LASIK evaluation involves a thorough eye examination, information exchange and decision-making. The doctor will perform a complete eye examination along with computerised assessments of your eye’s corneal surface. Taking into account your work and lifestyle needs, the doctor will discuss with you relevant options for refractive correction. Recommendations arising from the discussion will be based on your understanding and acceptance of the risks and benefits of LASIK surgery.
+What do I need to do to prepare for LASIK surgery?
If you wear soft or hard contact lenses, you need to stop wearing them 3 and 14 days respectively prior to undergoing LASIK surgery. Your transportation should be arranged to and from the clinic on the day of surgery as you are not to drive. You must follow medication instructions strictly as prescribed by your doctor as they prevent infection and improve healing. You need to set aside a sufficient period of rest following surgery to allow for healing. This also involves scheduling and attending regular post-op reviews with your doctor after the surgery.
+Why do I need to lay off my contact lenses before the pre-LASIK evaluation and the surgery?
Contact lenses change the shape of your cornea. Removal of contact lenses will allow the cornea to return back to its original shape so that accurate eye readings can be taken.
+How long does LASIK surgery take?
The actual surgery for each eye takes between 5 and 10 minutes to complete.
+Are there any precautions I need to note prior to LASIK surgery?
Should you sustain any injury to the eye prior to surgery, inform your doctor before the surgery is performed. Any infection whether to the eye or the body as a whole should also be communicated to your doctor prior to the surgery.
+Is LASIK surgery painful?
No, the use of anaesthetic eye drops prevents pain in the eyes although minor discomfort may be felt through the surgery. Post-surgery, an itching sensation may develop but is usually not painful.
+What are the risks involved in undergoing LASIK surgery?
The risk of complications is inherent in any surgery and LASIK surgery is no exception. Complications can include infection, dry eyes and night vision problems. These will be discussed in greater detail by your doctor when you go through the pre-LASIK evaluation.
+What are the side effects of LASIK surgery?
Your vision will be blurred and unstable for some days after surgery. Night vision problems may occur with the experience of haloes, starbursts and glare following surgery. LASIK surgery may also cause dryness to the eye. Most of these effects are temporary and go away within months. In rare instances, some may persist but remain largely treatable.
+Will I be awake during LASIK surgery?
Yes, you will need to be awake and focus on the centre of a red blinking light for the surgery to take place.
+Can I blink during LASIK surgery?
Yes. However, there will be a special instrument to hold your eyelids to prevent your eyes from closing even when you blink during the surgery.
+How fast can I recover my vision?
In general, most patients can see right after their surgery and recover about 75% to 80% of their vision the very next day.
+Will I feel any discomfort after the surgery?
Yes, it is common to experience tearing and some discomfort for the first 4 to 5 hours. Sleeping pills will be prescribed to help you sleep away any discomfort you may experience after the surgery.
+Will my eyes be red after LASIK surgery?
Some patients may get red patches on the whites of the eyes. These are painless and will subside in about 2 to 4 weeks’ time. They are the equivalent of little bruises which spontaneously resolve.
+What are some things to note after LASIK surgery?
You are required to wear a protective shield while you are asleep to protect the eye for at least 3 days after surgery. Swimming and eye makeup are to be avoided for the first month and first week respectively.
+How long will I take to recover?
In general, most patients can return to work a day after the surgery depending on vision recovery and nature of work. Your vision should stabilise 2 months after the surgery. Regular post-op reviews with your doctor are nevertheless required to ascertain stability of vision.
+Will my eyes look the same after LASIK surgery?
Yes, they will look the same before and after LASIK surgery.
+What type of results can I expect after LASIK surgery?
Realistic expectations need to be developed on consultation with your doctor. In mild refractive conditions, LASIK surgery is usually capable of granting full correction of vision and freedom from artificial vision aids. In severe refractive conditions however, correction may not be complete and artificial vision aids may still be required.
+Are the effects of LASIK surgery permanent?
Corneal tissue is permanently removed with reshaping of the eye and these physical effects are permanent. Vision however, may still change with age due to effects on the lens, ciliary muscles and retina among others.
+If my vision is under- or over-corrected after my first LASIK surgery, can I undergo the surgery again?
An enhancement surgery can be performed after the first LASIK surgery provided eligibility criteria are fulfilled, including stability of vision and sufficient residual corneal thickness. A thorough discussion should be conducted with your doctor to determine the best option.
+How many post-op reviews will there be after LASIK surgery?
There will be 3 post-op reviews: the day after surgery, 2 weeks after surgery, and 3 months after surgery.
+When can I travel after LASIK surgery?
In general, you can travel by plane 3 to 5 days after surgery.
+When can I resume exercising?
You can visit the gym 2 or 3 days after the surgery. However, please avoid water and contact sports for 1 month.
+How long does it take for the corneal flap to heal?
The corneal flap takes 1 month to heal completely.
+If I do not qualify for LASIK surgery, what are my options?
If your condition is a refractive one, other options may be available to you. These include Epi-LASIK or an Implantable Contact Lens (ICL). Consult your doctor to assess these and other options to treat your condition.
LASIK surgery is considered an elective procedure and not an essential one. Hence, generally, government-based medical insurance will not cover the surgery. Your personal insurance provider will be able to advise you on whether LASIK surgery can be covered by your personal medical insurance.
Uranus: First Planet Discovered With A Telescope
WRITTEN BY Saumya Jaiswar, 2023-03-10 10:55:58, education
Discovery of Uranus' Rings
Astronomers aboard the Kuiper Airborne Observatory, a telescope carried on a jet aircraft, discovered Uranus' rings on 10 March 1977.
First Planet Discovered In The Modern Age
The 7th planet from the Sun was discovered in 1781; its discovery expanded the known limits of our solar system.
William Herschel
The man credited with the discovery of Uranus is the astronomer and musician William Herschel.
Herschel’s Telescopes
The key component of Herschel's reflecting telescope was its mirror, which gathered light from distant celestial objects and made the discovery of Uranus possible.
A New Planet
From its movement, it was concluded that Uranus was a planet rather than a star or a comet.
How Was Uranus Named?
The planet was named after Uranus, the Greek god of the sky, as suggested by Johann Bode.
Coldest planet in the Solar System
Uranus, the coldest planet in the Solar System, orbits at a distance of 2.88 billion km from the Sun.
Uranus has 27 moons
At present, astronomers have confirmed the existence of 27 natural satellites orbiting the planet.
Can be seen with Naked Eye
You might be surprised to find out that a telescope is not necessary to observe Uranus. Uranus is just within the range of brightness that the human eye can detect at magnitude 5.3.
For more such stories, stay tuned...
Bookmarks Not Imported
I recently did a whole reset with Opera, and it's now working as it should. However, when I did the Import Bookmarks from Firefox, I can only find a handful, when I have hundreds in my Firefox. Why? And where did it take what few showed from?
Also, even though I did nothing to import them from Safari, I do have a folder "From Safari", that is chock full of bookmarks. However, they appear to be incomplete, and are missing many of the newer ones. Again, I wonder where these came from, as I didn't import them, and they obviously aren't my current, up-to-date ones.
How do I import my current bookmarks from Firefox or Safari? And when I do, will importing them create lots of duplicates as well? Thanks for any help...
https://forums.opera.com/topic/56832/bookmarks-not-imported

Reply to Bookmarks Not Imported on Sun, 20 Nov 2022 05:13:15 GMT
@stevenjcee said in Bookmarks Not Imported:
Perhaps, when I uninstalled Opera I saved the Bookmarks folder, so that is why it's so full, and won't import correctly?
Yes. That's possible. As in, if you start with fresh "Bookmarks" and "BookmarksExtras" files in Opera's profile folder and then import, things might go better.
Actually I just noticed, when I open Bookmarks, there is a folder with "imported from Firefox" with random folder names: Current Tabs, Reading List, and Unfiled. Under the Unfiled heading, there are thousands of bookmarks in folders marked New, Newer, Newest!
If you have existing bookmarks, the importer puts imported bookmarks in an imported folder. So, that's normal. As for the folder names inside that, "unfiled" usually means "other bookmarks". As for the new, newer, newest folders, maybe that comes from Firefox's bookmarks.html you exported or from Firefox's places.sqlite bookmarks file if you directly imported from Firefox. I would go to opera://bookmarks, select all bookmarks and folders in the "unfiled" folder and drag them to Opera's "other bookmarks" folder. Then, for bookmarks and folders in New, Newer, and Newest that are now in Opera's "other bookmarks" folder, I would select all bookmarks in each and drag them to the root of the "other bookmarks" folder and then delete the New, Newer, and Newest folders. For the bookmarks bar, I would select and drag those to Opera's bookmarks bar.
Then, as for the other bookmark categories from Firefox, I would drag what I wanted from them to "other bookmarks" and then delete the whole "imported from Firefox" folder.
Long story short, Chromium and Firefox bookmark hierarchies are different.
However, if another Chromium-based browser imports from Firefox in a better way, import from Firefox into that browser first. Then import into Opera.
Reply to Bookmarks Not Imported on Sun, 20 Nov 2022 04:48:15 GMT
@stevenjcee said in Bookmarks Not Imported:
I've already uninstalled & reinstalled Opera,
Not sure about Mac, but uninstalling and reinstalling on Windows doesn't wipe out your data and cause you to start with a new profile unless you choose the option in the uninstaller to delete your data. You have not mentioned deleting your Opera data, so it sounds like you're still using your old Opera data (profile). If you have, then never mind. If you haven't, then it would be something to try to rule out issues with your profile.
Reply to Bookmarks Not Imported on Sat, 19 Nov 2022 20:01:27 GMT
@burnout426 I've already uninstalled & reinstalled Opera, to see if that would help, to no avail. At least now Opera functions; through many versions, it wouldn't stay open for even five minutes before stalling out, running up all the CPU usage, and having to be force quit. At this time, it seems the primary issue is just the importing bookmarks function, which I can live without if need be.
The other issue I had was the inability to sync my iMac (OS11.7.1) and my Macbook Pro (OS High Sierra), as every time I tried to do it, nothing would happen, and each computer asked for the code of the other, trapping me in an endless loop. So I erased all that, and will once again try to sync them, from a fresh place, so that one will give me a code I can then input in the other...
Actually I just noticed, when I open Bookmarks, there is a folder with "imported from Firefox" with random folder names: Current Tabs, Reading List, and Unfiled. Under the Unfiled heading, there are thousands of bookmarks in folders marked New, Newer, Newest!
Not sure why or how they got put where they are, or how to somehow organize them better. Why might they have been imported, but without the folder hierarchy of how they were organized in my browser? Also, I don't know how recent the most recent are, as every time I've attempted to import from Firefox, it would stall out Opera,...
I wonder if I should simply delete everything in the Imported From Firefox folder (or remove the file & save on my HD), and try importing, without them being there. Perhaps, when I uninstalled Opera I saved the Bookmarks folder, so that is why it's so full, and won't import correctly?
Reply to Bookmarks Not Imported on Sat, 19 Nov 2022 08:23:18 GMT
@stevenjcee Test with a new profile as mentioned above.
Reply to Bookmarks Not Imported on Sat, 19 Nov 2022 03:03:39 GMT
@burnout426 said in Bookmarks Not Imported:
Really sounds like you have a corrupted profile and or corrupted Opera program files. If so, no update will ever fix those type of issues.
OK, so if it is, and no update will fix it, what should I do?
Reply to Bookmarks Not Imported on Sun, 18 Sep 2022 04:07:11 GMT
@stevenjcee said in Bookmarks Not Imported:
Thanks, but I'm afraid it likely won't work. I've also tried importing Firefox bookmarks, and when I hit the import button, it doesn't even function, only a half circular arrow, and it freezes, immediately. Then I have to force quit it, before it runs out all my memory & CPU, and freezes my entire computer!
Still, providing a minimal bookmarks HTML file that others can try to import to see if they have the problem too or if it's just your Opera and your system would help. Haven't heard of anyone else having your issue.
Reply to Bookmarks Not Imported on Sun, 18 Sep 2022 04:05:07 GMT
@stevenjcee Go to the URL opera://about, take note of the "profile" path, and close Opera. Then, rename the profile folder as a test so that when Opera starts up, it'll create a new profile folder. Then, test importing into that. When done testing, if you don't want to keep the new profile, close Opera, delete the new profile folder, and rename your old profile back.
If still no different, uninstall Opera and reinstall, and try a new profile again.
For your existing profile, the Bookmarks and BookmarksExtras files that you can delete are in the profile folder. You delete them while Opera is closed.
Really sounds like you have a corrupted profile and or corrupted Opera program files. If so, no update will ever fix those type of issues.
Also, what operating system do you have on your laptop, and what version? What version of macOS do you have on your iMac? Is it an Intel iMac or an M1/ARM iMac? Or are you using a really old iMac?
Are you talking about regular Opera or Opera GX? What version of Opera? Version 91 of regular Opera on both computers?
Reply to Bookmarks Not Imported on Sun, 18 Sep 2022 00:06:23 GMT
@burnout426 Nothing has worked, and I tried to delete the Opera bookmarks folder, and can't find a single folder labeled as Opera anywhere on my computer! This is crazy. Each time I try to import, whether from the Safari bookmarks HTML file or importing Firefox's, Opera immediately freezes, runs out all my RAM, and has to be force quit... Never had these kinds of issues with any browser, ever...
Reply to Bookmarks Not Imported on Fri, 16 Sep 2022 05:07:55 GMT
@stevenjcee
Just updated to the newest Opera Version 91.something, and yet again, the bookmarks won't import my Safari ones! As soon as I select the file and click to import them, this damn thing freezes up, and immediately, all 16 GB of my RAM are gone! I try to upload a screenshot of this, and it reports an error! Does anything work right with this browser? I know it used to, years ago; what happened?
Reply to Bookmarks Not Imported on Fri, 09 Sep 2022 03:36:43 GMT
@burnout426 Thanks, but I'm afraid it likely won't work. I've also tried importing Firefox bookmarks, and when I hit the import button, it doesn't even function, only a half circular arrow, and it freezes immediately. Then I have to force quit it before it runs out all my memory & CPU and freezes my entire computer!
For some reason, on both my laptop & iMac, for the past few years, Opera just doesn't want to work! With each new update, I try it again, & most times it just freezes up within two minutes, has to be force quit, and that's it; it just won't work. This time it at least works, but won't import bookmarks or sync... Can't figure out why, as both my computers are using completely different OS versions, yet Opera won't work right, period... I'll just have to give up on it.
Reply to Bookmarks Not Imported on Tue, 06 Sep 2022 00:12:01 GMT
@stevenjcee If you want to email me the HTML file (my address is on my user page), I can see if Opera freezes for me and, if so, see if I can figure out what in the file Opera has a problem with.
Reply to Bookmarks Not Imported on Tue, 06 Sep 2022 01:33:04 GMT
@burnout426 Tried it again, had it import the selected HTML document of my Safari bookmarks, and it immediately stopped responding and ran out all my RAM & CPU!!! This is ridiculous; I want to use Opera but it's 100% dysfunctional! Can't import bookmarks, can't sync, and it constantly must be force quit!!
Reply to Bookmarks Not Imported on Tue, 30 Aug 2022 01:35:19 GMT
In Safari and Firefox, export your bookmarks to an HTML file. Then, at the URL opera://settings/importData in Opera, select "Bookmarks HTML File" in the drop-down and import the HTML file for Safari, and then repeat for Firefox if you want.
When imported, the bookmarks should appear in an "Imported" folder where you can then review them and move them to the folders you want (via the opera://bookmarks page).
As for duplicates, I think Opera will indeed import any duplicates. You'll then have to move any duplicates to the bookmarks trash and empty the trash. Or, at least for duplicates in the same folder, you can wait. Opera every once in a while will run a dedupe function to get rid of duplicates. You can even trigger the dedupe yourself. Go to the URL opera://about, take note of the "profile" path, close Opera, and edit the "Preferences" file in the profile folder with a JSON editor or text editor. There, you can change the root/opera/deduplication_last_successful_run timestamp (setting it to 0 might trigger it). Or, you can try an extension.
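For illustration, that timestamp edit could be scripted rather than done by hand. This is a sketch, not an official tool: it assumes the key layout root/opera/deduplication_last_successful_run described above, the Preferences path varies by install, and it should only ever be run on a backup copy while Opera is fully closed.

```python
import json
from pathlib import Path

def reset_dedupe_timestamp(preferences_path):
    """Set Opera's dedupe timestamp to 0 so the next run may re-trigger it.

    Assumes the JSON layout root["opera"]["deduplication_last_successful_run"]
    described in the post above. Back up the file first and only run this
    while Opera is closed, or Opera may overwrite the change on exit.
    """
    path = Path(preferences_path)
    prefs = json.loads(path.read_text(encoding="utf-8"))
    # Create the "opera" section if missing, then zero the timestamp.
    prefs.setdefault("opera", {})["deduplication_last_successful_run"] = 0
    path.write_text(json.dumps(prefs), encoding="utf-8")
```

Everything else in the Preferences file is left untouched, which matters because Opera keeps many unrelated settings in the same JSON document.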
As for missing bookmarks and bookmark folders, view the exported bookmark HTML files in a text editor to confirm that the missing ones are actually there. If so, perhaps you could submit the HTML file in a bug report explaining the situation and what items Opera's not importing or freezing at, and post the bug number here.
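If scanning a large export by eye is painful, the titles and URLs in a bookmarks HTML export (the Netscape-style format Safari and Firefox produce) can be listed with a short script. This is a rough sketch that assumes the export's standard `<A HREF="...">` entries; the file name at the bottom is a placeholder.

```python
from html.parser import HTMLParser

class BookmarkLister(HTMLParser):
    """Collect (url, title) pairs from a Netscape-format bookmarks HTML export."""

    def __init__(self):
        super().__init__()
        self.in_link = False       # True while inside an <A> tag
        self.current_url = None
        self.bookmarks = []        # list of (url, title) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.current_url = dict(attrs).get("href")

    def handle_data(self, data):
        # The text between <A ...> and </A> is the bookmark title.
        if self.in_link and data.strip():
            self.bookmarks.append((self.current_url, data.strip()))
            self.in_link = False

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

def list_bookmarks(html_text):
    parser = BookmarkLister()
    parser.feed(html_text)
    return parser.bookmarks

# Example usage with an exported file (path is a placeholder):
# for url, title in list_bookmarks(open("bookmarks.html", encoding="utf-8").read()):
#     print(title, "->", url)
```

Comparing the count and titles from the export against what actually lands in Opera would show whether the missing bookmarks were lost on export or on import.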
Another thing you could try is to delete the "Bookmarks" and "BookmarksExtras" files in Opera's profile folder (while Opera is closed). Maybe they're corrupted. Then, you can try importing from the HTML files to see if there's any difference. You could instead just rename your Opera profile folder to test with a new one and delete the new profile folder and rename the old one back when you're done testing.
If you're using Opera Sync, there could be an issue with it and the syncing of bookmarks. Logging out of sync, starting with fresh bookmark files, importing from the HTML files, confirming everything is there, and then logging into Sync could determine if that's the case. It might be necessary to reset sync at https://sync.opera.com/ and start with fresh profiles on all your devices to truly fix things in some cases.
I do have a folder "From Safari", that is chock full of bookmarks. However, they appear to be incomplete, and are missing many of the newer ones. Again, I wonder where these came from, as I didn't import them
On Windows at least, by default, when you install Opera, it'll import bookmarks from the default browser. In the installer, you have to click "options" and uncheck the import option to prevent it. Could be the same way on Mac.
]]>
https://forums.opera.com/post/288454https://forums.opera.com/post/288454Tue, 30 Aug 2022 01:35:19 GMT
<![CDATA[Reply to Bookmarks Not Imported on Tue, 30 Aug 2022 00:31:51 GMT]]>@stevenjcee Nobody has a solution? What's the point of posting any questions, if there are no responses?
]]>
https://forums.opera.com/post/288452https://forums.opera.com/post/288452Tue, 30 Aug 2022 00:31:51 GMT
<![CDATA[Reply to Bookmarks Not Imported on Mon, 25 Jul 2022 00:01:49 GMT]]>@stevenjcee Now, I've since tried importing them again, from Firefox, and each time, Opera stalls out. The first time, my Mac OS forced a restart, the next time, I manually forced quit Opera. What's going on, can it actually import bookmarks, or no?
]]>
https://forums.opera.com/post/286053https://forums.opera.com/post/286053Mon, 25 Jul 2022 00:01:49 GMT |
What is the difference between a "soft" and "hard" reset? When should I use them?
It is a sad fact of modern life that complex electronic devices such as PDAs and desktop PCs can suffer from strange little problems which go away if you restart the device, but there are often unique terms associated with this process whose meanings aren't immediately obvious.
What is a Soft Reset?
A "soft" reset is by far the most common type of reset any PDA user will come across, and is often useful in solving minor problems and strange behaviour. When you perform a soft reset of a PDA, you are essentially causing the device to stop everything it is running, and restart - much like rebooting a PC.
If you had a program open and were entering data into it when you performed a soft reset, you might lose the information you were entering; otherwise, a soft reset does not affect any information you have stored on the PDA at all, and can be performed quite safely.
Before performing a soft reset, I usually suggest making a quick backup of your data, just to be on the safe side!
Instructions for performing a soft reset can usually be found in the user manual that accompanied your PDA. In general, all that you need to do is locate the "reset hole" (which may be labelled), and gently push the tip of the stylus or a bent paperclip into it. Doing so pushes a tiny button inside the device which triggers the soft reset, so you shouldn't need much pressure. If you are unsure, check with the manufacturer of your PDA!
What is a Hard Reset?
Sometimes also referred to as a "factory reset", a hard reset is an extremely serious process, because performing a hard reset will always wipe all the data from your PDA and return it to the settings it originally had when purchased.
In general, only the manufacturer of your device will ever suggest performing a hard reset, and even then only in the direst of circumstances! Before even considering a hard reset, you should always take a thorough backup of your data, and make sure you have a copy of it in a safe place.
The instructions to perform a hard reset are again usually found in the user manual for your PDA, but if not, you should contact the manufacturer of your PDA and they will be able to help...but a hard reset should only be used as a last resort!
Someone mentioned a Warm Reset to me once, what does that do?
Palm OS® powered PDAs and Pocket PCs both share the above two types of reset (soft and hard), but Palm OS® powered PDAs are also able to perform a third, much less well-known, type of reset.
The warm reset is analogous to the "safe mode" of a desktop PC, and similar in many respects to a normal soft reset.
Normally, when a Palm starts up (is soft reset), it sends notifications to all the applications installed, informing them that the Palm is starting up, and any program designed to load and run in the background starts at this point. Performing a warm reset bypasses this notification step, restarting your Palm without any of the additional background applications and add-ons you may have installed.
Performing a warm reset is very similar to performing a soft reset, as noted above. The key difference is that you should hold the "up" button on your Palm (or "up" on the directional pad if your Palm has one), and while holding this button, perform a soft reset. Keep the "up" button held until the Palm preferences screen appears; your Palm has now been warm reset.
It should be noted that on some Palm OS® devices, such as the Tungsten T2, some normal Palm OS® functions may not be available after a warm reset. Since a warm reset should be used only to help diagnose and fix a problem, you should perform a normal soft reset when finished to restore everything to normal.
So why is this useful? There are several reasons why this might be handy, and here are a few examples:
Hopefully this information will be helpful in better understanding some of the different terms associated with PDAs.
int hashCode
The hash code for this object.
A hash code is a single integer which represents the state of the object that affects == comparisons.
All objects have hash codes. The default hash code represents only the identity of the object, the same way as the default == implementation only considers objects equal if they are identical (see identityHashCode).
If == is overridden to use the object state instead, the hash code must also be changed to represent that state.
Hash codes must be the same for objects that are equal to each other according to ==. The hash code of an object should only change if the object changes in a way that affects equality. There are no further requirements for the hash codes. They need not be consistent between executions of the same program and there are no distribution guarantees.
Objects that are not equal are allowed to have the same hash code, it is even technically allowed that all instances have the same hash code, but if clashes happen too often, it may reduce the efficiency of hash-based data structures like HashSet or HashMap.
If a subclass overrides hashCode, it should override the == operator as well to maintain consistency.
Source
int get hashCode => _JenkinsSmiHash.hash2(x.hashCode, y.hashCode); |
Glycemic Index Diet
The glycemic index ranks carbohydrates according to their effect on the body's blood glucose level. A high glycemic index means that the carbohydrate breaks down quickly and releases a large amount of glucose into the blood; a low glycemic index refers to carbohydrates which break down slowly and release glucose into the blood at a slower rate.
A glycemic index diet is often of particular importance for those people who suffer from diabetes. A low glycemic index diet is considered to improve long term blood glucose control. A low glycemic index diet is therefore useful for people who are wishing to reduce the demand for insulin their body makes.
On the contrary a high glycemic index diet is useful for those who need an increased injection of glucose; for example after exercise or someone experiencing hypoglycaemia. A high glycemic index diet is encouraged in situations where recovery is required and one is lacking energy.
Foods are given a glycemic index ranking which refers to their glucose-releasing effects. Generally, unrefined breads with higher fibre content have a lower GI ranking, whereas white bread has a higher GI ranking. The glycemic index has also been associated with weight control. Some experts believe that a high glycemic index diet increases the risk of obesity, and therefore of the health problems which result from it, such as cardiovascular disease. As a result, people with existing weight problems and cardiovascular problems should try to maintain a low glycemic index diet.
A low glycemic index diet maintained over a long period has been shown to reduce the chances of developing type 2 diabetes. It has also been known to help control type 2 diabetes by initiating a slower release of glucose into the blood, thereby reducing the demand for insulin. People suffering from type 2 diabetes are therefore advised to keep to a low glycemic index diet.
Research has shown that adding lipids to foods can reduce their glycemic index ranking. This, however, is risky, as lipids carry their own set of health problems if consumed excessively. One should try to maintain a balanced and healthy diet, and the glycemic index will take care of itself. It is only really necessary to pay particular attention to the glycemic index diet if you are suffering from a specific disease such as diabetes, or if you are overweight and experiencing cardiovascular issues. In these circumstances, researching and following a strict glycemic index diet can be very useful and is strongly advised by the majority of health professionals.
How To Get Started
You can get information about the glycemic index diet from your local pharmacist or GP. Any health professional should be able to help you out however so do not hesitate to ask a family member or friend in regards to the glycemic index diet. It is important to know exactly how the glycemic index works and what is right for you before initiating your own glycemic index diet.
Carbon dating radioisotopes
Several groups of scientists used carbon-14 dating to demonstrate that the age of the Shroud of Turin was only 600–700 years.
Basic principles of carbon dating: radiocarbon, or carbon-14, is an isotope of the element carbon that is unstable and weakly radioactive. Question: how is carbon dating done? Asked by: William Baker. Answer: carbon-14 (C14) is an isotope of carbon with 8 neutrons instead of the more common 6 neutrons.
Radiocarbon dating: the available mass of c12 might have on the c14/c12 ratios and thus on radiocarbon dating are shown in the radioactive carbon dating table. Grotto radiocarbon dating bison on the wall of the niaux cave (in ariège), drawn some 13,000 years ago direct carbon 14 dating of this painting was carried out by the tandetron laboratory in gif-sur-yvette, using a highly sensitive method able to measure extremely low amounts of radioisotopes. These radioisotopes can be used to treat certain types of secondary bone cancer this is when cancer has spread to the bones from somewhere else in the body.
Quizlet provides radioisotopes activities, flashcards and games start learning today for free log in sign up study sets matching radioisotopes carbon dating. Why do creationists keep saying carbon dating is we know that carbon dating works because it's been verified by other dating methods, like other radioisotopes. Rate is an acronym applied to a research project investigating radioisotope dating sponsored by the institute for creation research and the creation research society it stands for radioisotopes and the age of the earth this article summarizes the purpose, history, and intermediate findings of the.
What is radioactivity, half-life and radioisotopes march 12, 2016 may 8, broken pipes to carbon dating radioisotopes are also used in tracing leaks. Radioisotopes, (also known as is allowed to decay to produce a radioisotope) is used for carbon dating by archeologists, paleontologists,. How does carbon dating work all other radioisotopes have half-lives under 20 seconds, most less than 200 milliseconds the least stable isotope is 8c,. Carbon-14 dating proves the earth is “young writing on the subject of the presence of carbon-14 in many “ancient” samples, we are told:.
Unaware of the many fallacious assumptions used in the dating process, many people believe carbon-14 dating disproves the biblical. Carbon dating carbon-14(c-14) environmental uses of radioisotopes radioisotopes in environment studies determination of origin of water through tracers. Using radioisotopes ansto has been involved in dating the kelly gang's radiocarbon dating uses the naturally occurring radioisotope carbon-14 to. Isotopes in carbon dating the use of various radioisotopes allows the dating of biological and geological samples with a high degree of accuracy.
Radioactive carbon dating, or carbon-14 dating, is used to find the age of specimens. Radioisotopes are atoms which have an unstable nucleus.
Carbon dating undercuts evolution's long ages to 12 c atoms with extreme precision in very small samples of carbon, radioisotopes and the age. How old is the earth: radioisotope dating scientists estimate that the earth is about 45 billion years old, for that reason, they’re called radioisotopes. Using isotopes to understand the oceans input along with the different behaviours of these radioisotopes using isotopes to understand the oceans and climate.
; German forum: http://www.purebasic.fr/german/archive/viewtopic.php?t=2938&postdays=0&postorder=asc&start=0
; Author: bobobo (updated for PB4.00 by blbltheworm)
; Date: 26. November 2003
; OS: Windows
; Demo: Yes

#Window_0 = 0
#Button_weniger = 2
#Button_mehr = 3
#Combo = 4

OpenWindow(#Window_0, 279, 160, 180, 70, "Test", #PB_Window_SystemMenu)
CreateGadgetList(WindowID(#Window_0))
ButtonGadget(#Button_weniger, 10, 10, 60, 20, "weniger")
ButtonGadget(#Button_mehr, 75, 10, 60, 20, "mehr")
ComboBoxGadget(#Combo, 5, 35, 170, 200)
AddGadgetItem(#Combo, -1, "Elfriede - 030-123456789")
AddGadgetItem(#Combo, -1, "Erna - 030-123456789")
AddGadgetItem(#Combo, -1, "Arbeit - 030-123456789")
AddGadgetItem(#Combo, -1, "Kneipe - 030-123456789")
AddGadgetItem(#Combo, -1, "unterm_Tisch - 030-1236789")
SetGadgetState(#Combo, 0)

Repeat
  EventID = WindowEvent()
  If EventID = #PB_Event_Gadget
    GEID = EventGadget()
    altWert = wert
    If GEID = #Button_weniger : wert-100 : EndIf
    If GEID = #Button_weniger : SetGadgetState(#Combo, GetGadgetState(#Combo)-1) : EndIf
    If GEID = #Button_mehr : wert+100 : EndIf
    If GEID = #Button_mehr : SetGadgetState(#Combo, GetGadgetState(#Combo)+1) : EndIf
    If GetGadgetState(#Combo) < 0 : SetGadgetState(#Combo, 0) : EndIf
  EndIf
Until EventID = #PB_Event_CloseWindow
End

; IDE Options = PureBasic v4.00 (Windows - x86)
; Folding = -
; EnableXP
THE PROTEIN WORKS
Advice On Sports Nutrition
How much protein do I need for strength?
How much protein do I need for endurance?
Why do I need protein?
How many carbohydrates do I need in my diet?
What is ‘carb-loading’?
Do I need carbohydrates to increase muscle?
Are low carbohydrates diets good for fat loss?
Are there alternatives to low carbohydrate diets?
How many calories should I eat per day?
Can creatine monohydrate make me stronger?
How much creatine do I need to take?
What’s the best protein to take in the morning?
What’s the best protein to take at night?
How much protein do I need if I’m weight training?
A good question, and one that's still debated in the sports nutrition community to this day. Looking specifically at strength, speed and power athletes, the International Olympic Committee Consensus on Sports Nutrition states that 'strength or speed athletes require 1.7 grams of protein per kg of bodyweight per day.' To put this into an example, for a 90kg sprinter this equates to 153 grams of protein per day (1.7 grams x 90kg bodyweight = 153 grams of protein per day).
Another way of measuring this, and one that's used by sports nutritionists, is to state 'per day, consume 1 gram of protein per pound of bodyweight.' So, again using the example of the 90kg (198.4 pound) sprinter, this would equate to 198.4 grams per day (1 x 198.4 = 198.4 grams per day). It must also be noted that many bodybuilding experts recommend as much as 3 grams of protein per kg of bodyweight, so that same 90kg sprinter would, on this recommendation, be consuming 270 grams of protein per day (3 x 90kg = 270 grams of protein per day).
How much protein do I need if I’m an endurance athlete?
Interestingly, experts from McMaster University in Ontario, Canada found that endurance athletes require a protein intake greater than (or equal to) that of strength athletes. This is to ensure the athletes don't overtrain and their bodies have enough protein to repair after extreme endurance training and events.
Why do I need protein?
The main role of protein within the body is to help build, maintain and repair body tissue. The reason it’s so important for people who train and athletes is because they require more protein to help them cope with the demands of training. Protein is also used to make hormones, cellular messengers, enzymes, immune-system components and nucleic acids and without enough of it in your diet your body wouldn’t be able to create the biochemical substances needed for simple things we perhaps take for granted like cardiovascular function, muscle contraction, growth, and healing. All in all it’s pretty important and that’s why you need it.
How many carbohydrates do I need in my diet?
Again, this isn't a simple answer, so let's look at the different schools of thought surrounding carbohydrates. Firstly, for endurance athletes, or anyone concerned with sports performance, it's important to understand that carbohydrates are the body's primary fuel supply, so it's important to get enough of them in your diet. This means low carbohydrate diets aren't a good idea, since they will detrimentally affect your performance. Athletes, generally speaking, need around 5-7g of carbohydrate per kilogram of bodyweight, or 60 per cent of their daily calorie intake from carbohydrates. This will typically work out at around 1,500kcal from carbohydrate per day for most women and 1,800kcal for men.
What is ‘carb-loading’?
'Carb-loading' is a nutritional technique that requires an athlete to consume 8-10 grams of carbohydrates per kilogram of bodyweight, per day, for roughly 3 days before an event. This is to ensure their muscle glycogen levels are completely full and they therefore have enough muscular energy to complete the race.
Do I need carbohydrates when trying to increase muscle?
Carbohydrates are very important for those looking to increase muscle mass. This is because carbohydrates are needed both before a workout, to ensure you have the energy to complete a strenuous weights routine, and after a workout, to replenish muscle glycogen, spike insulin levels and therefore kick-start the recovery process that will shuttle protein and amino acids to the muscles as quickly as possible. Before a workout you can follow the same principles as those explained above for sports performance, but post-workout it's best to consume high glycaemic index, fast-releasing carbohydrates, since these are best for rapidly replenishing muscle glycogen and spiking insulin, which ultimately kick-starts the entire recovery process.
Are low carbohydrates diets good for fat loss?
The principle and theory of low carbohydrate diets is sound, and will work, in that cutting carbohydrates out of your diet will in turn reduce the amount of insulin you release. This is ideal for fat loss, since the hormone insulin has been shown to increase lipogenesis (the storing of fat) and reduce lipolysis (the burning of fat), so having less of it in your body is ideal for losing weight. But the problem is that carbohydrates are the body's and brain's primary source of fuel, so if we cut them out completely for too long we become tired and lethargic, can't train, our mood becomes affected and we lose all motivation. It's therefore simply not sustainable, and you can't stay on a very low carbohydrate diet for too long.
Are there alternatives to low carbohydrate diets?
Yes. The key is to find a balance and consume a moderate amount of carbohydrates. A good rule that many fat loss specialists recommend is to only have carbohydrates when you need them: for example, when you wake up to start your day, before a workout to fuel your training, and after training to replenish muscle glycogen and kick-start the recovery process. If you only consume a moderate amount of carbohydrates at these times, your body will be able to absorb them effectively and therefore won't release too much insulin, which would result in fat storage. This is a much more sustainable and proven way to lose weight by manipulating your carbohydrate intake.
How many calories should I eat per day?
Again, this will vary a lot depending on your age, height, weight, metabolic rate and how active you are during the day. But there is a way of estimating how many calories you need per day, through something called the Harris-Benedict formula. This equation estimates how many calories you need per day based on certain factors. The first thing it does is calculate your basal metabolism: how many calories you would burn per day just by staying alive, doing things like breathing and keeping your heart beating, but not doing any exercise at all.
For men this calculation is as follows:
• Metabolism (Men) = 66.5 + (13.75 x weight in kg) + (5.003 x height in cm) – (6.755 x age in years)
And for women it’s:
• Metabolism (Women) = 655.1 + (9.563 x weight in kg) + (1.850 x height in cm) – (4.676 x age in years)
Once you’ve found this number (the number of calories you burn per day just through your metabolism) next you have to multiply that number by the number below that correlates to how active you are, this will then give the number of calories you need per day:
• Not active (0 days a week exercise) = Daily calories needed = metabolism x 1.2
• Lightly active (1-2 days a week exercise) = Daily calories needed = metabolism x 1.375
• Moderately active (3-5 days a week exercise) = Daily calories needed = metabolism x 1.55
• Heavily active (6-7 days a week exercise) = Daily calories needed = metabolism x 1.725
• Very heavily active (exercising twice per day) Daily calories needed = metabolism x 1.9
Now once you have this number the next thing to do is to determine whether you want to bulk up, lose fat or stay the same weight. If you want to bulk up then it makes sense to add 500 calories to this number to create a ‘calorie surplus’. If you want to lose weight then deduct 500 calories from this number to create a ‘calorie deficit’. Then lastly to keep your weight the same simply consume this number of calories to achieve what is known as an ‘energy equilibrium’. Now it must be noted this is only an estimate and it will vary depending on your muscle mass, daily activity and other factors. However it is a good estimate and offers some form of guideline for you to follow.
How can creatine monohydrate make me stronger, quicker or bigger?
Creatine monohydrate can make you stronger, quicker and bigger by boosting the muscles' production of a substance called adenosine triphosphate (ATP). Adenosine triphosphate is basically our 'muscular energy', and it's needed whenever we perform any fast, strong or powerful movement such as a sprint, squat or bench press. The thing is, we only have enough adenosine triphosphate in our bodies to work at maximum intensity for roughly 5 to 7 seconds; after this you'll run out and either slow down during a sprint or fail on your 8th repetition when benching. This is why athletes supplement with creatine monohydrate: creatine increases adenosine triphosphate within the muscles and therefore increases the amount of time you are able to work at your maximum intensity. This ultimately means you can perform that extra repetition in the gym, or continue accelerating during a 100m sprint at the 70m mark instead of fading at the 60m mark.
How much creatine do I need to take?
Typically, creatine supplementation involves a loading phase of 20g per day, split between 4 servings, for 5-7 days, followed by a maintenance phase of 5g a day for the duration of a particular phase of training, 6 weeks for example. More recently, however, experts have suggested you don't need a loading phase and can in fact start using creatine at a set dose (roughly 5 grams) right from day 1. It's important to know that there are studies to support both approaches, so it's recommended you find the right one for you.
Regardless of which dosage method you use, it was shown at the Department of Physiology and Pharmacology at the Queen's Medical Centre in Nottingham that taking creatine with a high glycaemic index carbohydrate can increase creatine absorption, and therefore the positive effect it can have on your training. It's believed this is because the high glycaemic index carbohydrate increases levels of the hormone insulin in the body, which in turn helps to shuttle the creatine to the muscles far more effectively.
What’s the best protein to take in the morning?
When you wake up in the morning you’ve effectively been fasting for 7-10 hours since you’ve not been eating, what this means is your muscles are starved and are entering into a catabolic state. Therefore the best protein for first thing in the morning would be a quick absorbing protein such as Whey Protein 80 (concentrate) or Whey Protein 90 (isolate) since these are the best proteins to quickly reach the muscles, break the fast and therefore stop your body and muscles entering into a catabolic state.
What’s the best protein to take at night?
When you’re sleeping you’re effectively going to be fasting for 7-10 hours as well since you’re obviously not going to be eating. This potentially means that your muscles could enter a catabolic state and begin to breakdown, especially in the absence of protein and amino acids. That’s why it’s widely considered by experts that casein is the best form of protein before bed since it’s been shown to have a much slower absorption rate compared to other forms of protein which means it can ‘drip-feed’ amino acids and protein to your muscles right through the night.
1. The Memento Pattern
Memento pattern: capture an object's internal state, without violating encapsulation, and store that state outside the object, so that the object can later be restored to the saved state.
Use cases:
1. When you need to take snapshots of an object's state so that you can restore it to a previous state
2. When accessing an object's fields, getters or setters directly would break its encapsulation
Let's model an example: in a game, a character can save its hit points (HP) and later roll back to the last saved value.
#include <cstdio>
#include <memory>
class Monster{
public:
double HP;
double Atk;
public:
explicit Monster(double HP, double Atk)
:HP(HP), Atk(Atk){}
};
class Memento{
private:
double HP;
public:
explicit Memento(double HP)
:HP(HP){}
public:
double getHP(void) const{
return this->HP;
}
};
class Brave{
private:
double HP;
double Atk;
public:
explicit Brave(double HP, double Atk)
:HP(HP), Atk(Atk){}
public:
void fight(Monster &monster){
while(true){
if(this->HP <= 0){
printf("You are dead.\n");
break;
}
if(monster.HP <= 0){
printf("You wins.\n");
break;
}
this->HP -= monster.Atk;
monster.HP -= this->Atk;
}
}
void show(void) const{
printf("HP: %.2lf Atk: %.2lf\n", this->HP, this->Atk);
}
std::shared_ptr<Memento> saveState(void) const{
return std::make_shared<Memento>(this->HP);
}
void restoreState(const std::shared_ptr<Memento> &memento){
this->HP = memento->getHP();
}
};
class CareTaker{
private:
std::shared_ptr<Memento> memento;
public:
void setMemento(const std::shared_ptr<Memento> &memento){
this->memento = memento;
}
std::shared_ptr<Memento> getMemento(void) const{
return this->memento;
}
};
2. The Composite Pattern
Composite pattern: compose objects into tree structures to represent part-whole hierarchies. The composite pattern lets clients treat individual objects and compositions of objects uniformly.
Use cases:
1. When you need to model a tree-shaped object structure
2. When you want client code to handle both simple and complex elements in the same way
Let's model an example: the Linux file system.
#include <cstdio>
#include <string>
#include <memory>
#include <list>
#include <algorithm>
class File{
private:
std::string file_name;
public:
explicit File(const std::string &file_name)
:file_name(file_name){}
virtual ~File() = default;
public:
std::string fileName(void) const{
return this->file_name;
}
virtual void show(int depth) const = 0;
};
class Leaf : public File{
public:
explicit Leaf(const std::string &file_name)
:File(file_name){}
public:
void show(int depth = 1) const override{
printf("%s%s\n", std::string(depth, '-').c_str(), this->fileName().c_str());
}
};
class Composite : public File{
private:
std::list<std::shared_ptr<File>> files;
public:
explicit Composite(const std::string &file_name)
:File(file_name){}
public:
void addLeaf(const std::string &file_name){
this->files.emplace_back(std::make_shared<Leaf>(file_name));
}
std::shared_ptr<Composite> addComposite(const std::string &file_name){
auto file = std::make_shared<Composite>(file_name);
this->files.emplace_back(file);
return file;
}
void remove(const std::string &file_name){
// std::list offers a member remove_if; the previous erase(std::remove_if(...))
// call was missing the end iterator and would erase only a single element.
this->files.remove_if(
[&file_name](const std::shared_ptr<File> &file){ return file->fileName() == file_name; }
);
}
void show(int depth = 1) const override{
printf("%s%s\n", std::string(depth, '-').c_str(), this->fileName().c_str());
for(const auto &elem : this->files)
elem->show(depth + 2);
}
};
int main(void){
Composite root("root");
auto c_plus_plus = root.addComposite("C/C++");
c_plus_plus->addLeaf("main.cpp");
c_plus_plus->addLeaf("test.cpp");
auto python = root.addComposite("Python");
python->addLeaf("main.py");
python->addLeaf("test.py");
root.show();
return 0;
}
3. The Bridge Pattern
Bridge pattern: split a large class, or a set of closely related classes, into two independent hierarchies — abstraction and implementation — which can then be developed separately.
Use cases:
1. When you want to split up or reorganize a monolithic class that bundles several variants of some functionality (for example, a class that can talk to several different database servers)
2. When you want to extend a class along several independent dimensions
3. When you need to switch between implementations at runtime
Let's model an example: among geometric shapes we have rectangles and ellipses, and we now want to extend the hierarchy with colors (red and blue).
#include <algorithm>
#include <cstdio>
#include <memory>
#include <string>
class Color{
private:
std::string color;
public:
explicit Color(const std::string &color)
:color(color){}
virtual ~Color() = default;
public:
std::string getColor(void) const{
return this->color;
}
};
class Red : public Color{
public:
Red()
:Color("red"){}
};
class Blue : public Color{
public:
Blue()
:Color("blue"){}
};
class Shape{
private:
std::unique_ptr<Color> color;
std::string shape;
public:
explicit Shape(const std::string &shape)
:color(nullptr), shape(shape){}
virtual ~Shape() = default;
public:
void setColor(std::unique_ptr<Color> &&color){
this->color = std::move(color);
}
void getShape(void) const{
if(this->color == nullptr)
printf("%s\n", this->shape.c_str());
else
printf("%s %s\n", this->color->getColor().c_str(), this->shape.c_str());
}
};
class Rectangle : public Shape{
public:
Rectangle()
:Shape("rectangle"){}
};
class Ellipse : public Shape{
public:
Ellipse()
:Shape("ellipse"){}
};
int main(void){
Rectangle rectangle;
rectangle.getShape();
rectangle.setColor(std::make_unique<Blue>());
rectangle.getShape();
rectangle.setColor(std::make_unique<Red>());
rectangle.getShape();
return 0;
} |
Light therapy is a treatment method that involves delivering certain specific wavelengths of light to an area of skin affected by acne. Both regular and laser light have been used. When regular light is used immediately following the application of a sensitizing substance to the skin such as aminolevulinic acid or methyl aminolevulinate, the treatment is referred to as photodynamic therapy (PDT).[10][129] PDT has the most supporting evidence of all light therapies.[78] Many different types of nonablative lasers (i.e., lasers that do not vaporize the top layer of the skin but rather induce a physiologic response in the skin from the light) have been used to treat acne, including those that use infrared wavelengths of light. Ablative lasers (such as CO2 and fractional types) have also been used to treat active acne and its scars. When ablative lasers are used, the treatment is often referred to as laser resurfacing because, as mentioned previously, the entire upper layers of the skin are vaporized.[140] Ablative lasers are associated with higher rates of adverse effects compared with nonablative lasers, with examples being postinflammatory hyperpigmentation, persistent facial redness, and persistent pain.[8][141][142] Physiologically, certain wavelengths of light, used with or without accompanying topical chemicals, are thought to kill bacteria and decrease the size and activity of the glands that produce sebum.[129] The evidence for light therapy as a treatment for acne is weak and inconclusive.[8][143] Disadvantages of light therapy can include its cost, the need for multiple visits, time required to complete the procedure(s), and pain associated with some of the treatment modalities.[10] Various light therapies appear to provide a short-term benefit, but data for long-term outcomes, and for outcomes in those with severe acne, are sparse;[144] it may have a role for individuals whose acne has been resistant to topical medications.[10] A 2016 meta-analysis was 
unable to conclude whether light therapies were more beneficial than placebo or no treatment, nor how long potential benefits lasted.[145] Typical side effects include skin peeling, temporary reddening of the skin, swelling, and postinflammatory hyperpigmentation.[10]
Scars (permanent): People who get acne cysts and nodules often see scars when the acne clears. You can prevent these scars. Be sure to see a dermatologist for treatment if you get acne early — between 8 and 12 years old. If someone in your family had acne cysts and nodules, you also should see a dermatologist if you get acne. Treating acne before cysts and nodules appear can prevent scars.
Institut Pasteur
Journal article: PLoS Neglected Tropical Diseases, 2021
Early control of viral load by favipiravir promotes survival to Ebola virus challenge and prevents cytokine storm in non-human primates
Abstract
Ebola virus has been responsible for two major epidemics over the last several years and there has been a strong effort to find potential treatments that can improve the disease outcome. Antiviral favipiravir was thus tested on non-human primates infected with Ebola virus. Half of the treated animals survived the Ebola virus challenge, whereas the infection was fully lethal for the untreated ones. Moreover, the treated animals that did not survive died later than the controls. We evaluated the hematological, virological, biochemical, and immunological parameters of the animals and performed proteomic analysis at various timepoints of the disease. The viral load strongly correlated with dysregulation of the biological functions involved in pathogenesis, notably the inflammatory response, hemostatic functions, and response to stress. Thus, the management of viral replication in Ebola virus disease is of crucial importance in preventing the immunopathogenic disorders and septic-like shock syndrome generally observed in Ebola virus-infected patients.
Domains
Virology
pasteur-03236111, version 1 (26-05-2021). Licence: Attribution.
Cite:
Stéphanie Reynard, Emilie Gloaguen, Nicolas Baillet, Vincent Madelain, Jérémie Guedj, et al.. Early control of viral load by favipiravir promotes survival to Ebola virus challenge and prevents cytokine storm in non-human primates. PLoS Neglected Tropical Diseases, 2021, 15 (3), pp.e0009300. ⟨10.1371/journal.pntd.0009300⟩. ⟨pasteur-03236111⟩
You asked:
if it's 9:00 pst, what time is it gmt
• Greenwich Mean Time
• Pacific Standard Time
5:00:00am Greenwich Mean Time
5:00:00am Western European Time (the European timezone equal to UTC)
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
Top ways people ask this question:
• 9pm pst in gmt (81%)
• if it's 9:00 pst, what time is it gmt (13%)
• 9 pm pst in gmt (3%)
• if it's 21:00 pst, what time is it gmt (1%)
Other ways this question is asked:
• what is 9 o'clock pst in gmt
• if it is 9:00 pst what time is it gmt
• if it's 9:00pm pst, what time is it gmt
• what is 9pm pst in gmt
• if it's 9 pm in pst what time is it in gmt
• if it's 9:00 pm pst, what time is it gmt
• 9:00 pm pst in gmt
• 9 pm pst equivalent gmt
• 9:00 pm (pacific standard time - gmt
• if its 9pm pst what is it gmt
• 9:00 pm pst at gmt |
Arrays and objects destructuring in JavaScript
Posted on:October 5, 2020
Assume, that we have an object:
const obj = {
a: "a",
b: "b",
};
And we want to get the b property. There's a simple way:
obj.b; // 'b'
// or
obj["b"]; // 'b'
But we can also use a JavaScript ES6 feature called destructuring. So what is it?
Destructuring assignment
The most commonly used JavaScript data structures are Array and Object.
Arrays let us store data in an ordered collection; the syntax looks like:
const student = ["John", "Doe"];
Objects allow us to create an entity where we can store data by keys, for example:
const student = {
firstName: "John",
secondName: "Doe",
};
When we use an array or object, in most cases we need either all of the data or only part of it.
The ES6 destructuring feature gives us syntactic sugar for extracting only the fields we need in an easy way. It works really well when we're facing big objects or arrays and need only a few fields.
Object destructuring
Let’s get back to our example student object.
const student = {
firstName: "John",
secondName: "Doe",
};
The classic way of getting one of the field values is
student.firstName; // John
// or
student["firstName"]; // John
The new syntax lets us do it in what is, in my opinion, a cleaner way.
const { firstName } = student;
firstName; // John
The same thing we can do with secondName
const { firstName, secondName } = student;
firstName; // John
secondName; // Doe
One thing we need to remember is that we need to know which keys the object stores.
Also, it isn't a problem to rename a field on the fly (the new name goes after the colon):
const { firstName: first, secondName: second } = student;
first; // John
second; // Doe
But what if we have a more complex structure? Maybe something like this?
const student = {
firstName: "John",
secondName: "Doe",
parents: {
father: {
firstName: "Joe",
secondName: "Doe",
},
// ...
},
};
To get the father field, we can do it the old way (dot or bracket notation) or the new way:
const { parents: { father: { firstName } } } = student;
firstName; // Joe
Summary
Getting variables using destructuring makes our lives a bit easier. It looks nicer than the normal way and makes code easier to read.
These days I use destructuring more than the old way. What do you think about it?
How do I get my solution folder to revert to a previous tagged commit?
Deleted user Apr 01, 2019
I am trying to get my current solution folder to revert to exactly the state it was in some months ago. The commit I want to go back to is tagged but is not a branch.
If I double-click the tagged commit I want to go back to, SourceTree tells me I am creating a detached HEAD but says I can subsequently create a branch so I don't lose the changes I will make. I go ahead with this and select the 'Clean' checkbox to discard all changes.
When I then go to look at my solution folder in Windows Explorer, it is a hotchpotch of the old and the new. I do get back the folders that were in existence when I created the tagged version a few months ago but I also still have new folders that I have created since and did not exist back then. This gives me zero confidence that I can go ahead doing a patch for the old software.
What am I doing wrong?
1 answer
0 votes
minnsey (Atlassian Team), Apr 08, 2019
Create a new branch from the commit you want, then from the terminal run
git clean -f -d
see https://git-scm.com/docs/git-clean
This should remove any untracked files.
(As always its worth making a backup of your local repo folder before doing this ;) )
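Putting the answer's steps together, the sequence might look like this (the branch and tag names here are placeholders, not from the original question):

```shell
# Create a branch at the tagged commit, so any new work stays on a branch
git checkout -b old-release-patch my-old-tag

# Preview what clean would delete (dry run), then actually remove
# untracked files (-f) and untracked directories (-d)
git clean -n -d
git clean -f -d
```

After this, the working tree should contain exactly what was committed at the tag.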
README
Check this repository out into a folder below irrlicht/examples. Copy any of the source files and rename it to main.cpp to get it compiling with the project files.
What is this repository for?
Code snippets around the Irrlicht 3D engine. This project exists mainly to help me to keep my code snippets synchronized.
If you find anything useful in it, feel free to use this code (it's zlib-licensed, but if you need another license I can add that; or just use it, I won't sue you).
What everyday items contain beryllium (Bohr model)
Beryllium is one of the lightest metallic elements. Introductory chemistry resources typically cover its discovery, sources, characteristics (melting point, electron configuration, valence electrons), a Bohr atom model diagram, toxicity, and everyday uses. The human body contains trace amounts of beryllium (measured in parts per billion), though the body does not use the element for anything. Notable gemstones that contain beryllium include beryl (aquamarine). Beryllium oxide (BeO) ceramics have among the highest thermal conductivities of any oxide.
Niels Bohr proposed an early model of the atom as a central nucleus, containing protons and neutrons, orbited by electrons in shells. When looking at a Bohr model, we focus on the valence electrons, the electrons in the outermost shell; beryllium's diagram shows two inner-shell electrons and two valence electrons. James Chadwick later used beryllium in the discovery of the neutron: he directed a stream of alpha particles at a target made of beryllium.
Bohrium, not to be confused with beryllium, is a synthetic chemical element with symbol Bh and atomic number 107. It is named after the Danish atomic physicist Niels Bohr, and was discovered by Peter Armbruster, Gottfried Münzenberg, and colleagues.
Quick-start JavaScript projects with this Webpack project starter kit
Use my free, tried-and-tested Webpack project starter kit if you'd like to get your next web app or JavaScript project started quickly
Webpack starter kit diagram
I feel as if I've written a number of non-technical posts lately, mainly because I've had a lot of broader concepts floating around in my noggin. So if you're interested in articles on tech tests being relevant, continuous refactoring, or whether you should use Nest stuff in your smart home, you're all set.
But let's not do that here. Here, I'm going to share with you my quick and simple starter kit for Webpack based projects.
(Pssst! If you'd rather just investigate the GitHub repo for the Webpack project starter, go there via this link: https://github.com/bpk68/web-template)
Why Webpack (vs Gulp or Grunt)?
Gulp and Grunt have reigned victorious as JavaScript project favourites for many a year. As JavaScript task runners, both Gulp and Grunt do a great job of minifying code, cleaning CSS, transforming template files and, well, just about anything you can imagine as a task that can be run during a build.
Webpack - the bundling system for JS projects
However, they've fallen out of favour amongst the community in recent times as support for both their core product and their coupled ecosystems (e.g. Bower) is dwindling. A further nail in the coffin comes in the form of React, which offers an all-in-one starter solution – Create React App – using....drum roll...Webpack.
Although Webpack at its core is a bundler, not a task runner, it has been enjoying more use in place of Gulp and Grunt – appearing in more overall stacks than either of the two task runners, and is mentioned in nearly twice the number of job posts. It’s also far more popular on GitHub.
Why a Webpack starter kit?
Webpack works well out of the box with minimal configuration. However, depending on your situation, you'll almost always find yourself requiring a little bit more than the 'default' setup.
The problem I faced when I started regularly using Webpack to kick off projects is that it requires a lot of additional plugins and settings to get it to the point where it covers all the things I want my project to handle automatically. Things such as:
• Bundling and minification
• Code chunking and optimisation
• Linting support
• ES6/ES2015 language features via Babel
• Copying and moving assets as part of the build
• ...and some others
By setting up a blank project, with all of the above configured out of the box, I can focus on the productivity and excitement of a new project, without getting bogged down in repetitive set up files.
power
Photo by Fancycrave / Unsplash
Let me introduce the (finished?) Webpack starter kit
OK, so you're never really finished – there'll always be tweaking and changes to make as needs shift. But for now, the current state of the publicly available Webpack starter project is a great jumping off point for a web app project.
It includes all the configuration baked in and ready for deploying to a server of your choice. It even includes some non-Webpack packages, such as the amazing Semantic Release.
What's included?
It's a simple setup with relatively unbiased opinions. Here is a brief outline of the project with summaries of the what's and why's of some of the files:
• .babelrc - specifies the version of language support that babel brings. In this case, it's set to use the latest ES6/ES2015 language features.
• .npmignore - a handy cousin to the .gitignore file that ignores some project files/folders if you're doing an npm deploy or publish with this project.
• .releaserc.json - some good defaults for getting Semantic Release working.
• templates/index.html - a very simple html document where your bundles are added and your app starts from. You don't strictly need this file, but it's likely you'll prefer a little more control over what gets added into your final html output, whether that's meta info, other stylesheets, or perhaps micro-schema. This template gives you that control whilst retaining the auto-bundle-inclusion witchcraft of Webpack.
• src/index.js - the very start of your project, where all the magic JS happens.
• src/vendors.js - a good file to put your third-party libraries or load your vendor files in. The webpack.config.js files split out the code based on the main index.js and vendors.js files.
• config/ - we'll go into this in a moment, but for now, this is where your common (shared), development, and production configuration files for Webpack live.
Splitting the Webpack configuration
In a similar way that you can split out Grunt configs into multiple files, Webpack allows you to separate your common (shared), production, and development settings into distinct files. To achieve this, we can use the webpack-merge plugin – included in the starter kit.
Under the /config folder, you'll find these three files:
• webpack.common.js - although the largest of the config files, the shared/common config sets up the Webpack basics, such as how to chunk your bundled assets, creating file-path aliases, and how to handle certain files, as well as cleaning the /dist folder before deployment and choosing which html template to use to start your project.
• webpack.dev.js - in here you'll find rules to process CSS using the css-loader plugin, source-mapping options and a fully-functional web server to serve your in-development work from – localhost:8080 by default. It omits any optimisation or minification at this level.
• webpack.prod.js - for production, these settings uglify, minify, and optimise your JS and CSS assets, and the mini-css-extract plugin chops your CSS files into bite-sized chunks; a bit like Webpack does for your JS files.
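To illustrate how the merge fits together, here is a rough sketch of what a config/webpack.prod.js built with webpack-merge might look like. The specific options shown are illustrative, not the starter kit's actual settings, and with webpack-merge v5+ the import becomes `const { merge } = require('webpack-merge')`:

```javascript
// config/webpack.prod.js (sketch) - combine the shared settings
// with production-only overrides via webpack-merge
const merge = require('webpack-merge'); // v5+: const { merge } = require('webpack-merge')
const common = require('./webpack.common.js');

module.exports = merge(common, {
  mode: 'production',        // enables webpack's built-in production optimisations
  devtool: 'source-map',     // full source maps for debugging minified output
});
```

The development file follows the same shape, merging `common` with a dev server and looser source-map settings instead.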
Project plugins, add-ons and the package.json file
Finally, we have the package.json file for the project which includes a number of plugins to help Webpack do its stuff. Here's a breakdown of what does what:
• @babel/polyfill - polyfill from BabelJs that helps you to run the latest JS features (in our case, from ES6) now.
• semantic-release - Semantic Release helps to automate the scheduled release workflow including adding version numbers, updating release notes, and pushing releases to GitHub/npm. This can be deleted if it's not to your liking.
• babel-loader - this adds Babel support and transpilation to Webpack during bundling of JS files.
• clean-webpack-plugin - simply empties the /dist folder during a build to remove any previous code/assets from the last build.
• html-webpack-plugin - an official Webpack plugin that helps to give you more control over the creation of html files from which to serve your bundles.
• copy-webpack-plugin - copies files from the source directory into the build/output directory. You can view the npm package here.
• mini-css-extract-plugin - the mini-css-extract-plugin splits your CSS into separate files – a little like how the JS files can be chunked. It supports on-demand loading of CSS and sourcemaps.
• optimize-css-assets-webpack-plugin - optimises and minifies CSS assets during the production build.
• uglifyjs-webpack-plugin - nice and simple, this one takes your beautiful JS files and transforms them into less-readable, uglified versions that are smaller and harder to understand (keep your coding secrets safe...ish).
• style, css, file, image-loader - these are all official Webpack add-ons that help you to process different file types based on some rules (this is set in the modules section of Webpack config files). For example, we catch CSS files, run them through the style-loader helper to bundle them into our final build output.
Using the Webpack starter project
Hopefully, it should be really easy. Here's how to get started:
1. Fork the repo (get it here: https://github.com/bpk68/web-template) or download the raw files.
2. Run yarn install (assuming you have yarn installed on your machine) to add the npm packages.
3. Edit any settings or config files to suit.
4. Edit src/index.js to create something magical.
5. Then run yarn start to deploy the files locally and spin up a local webserver.
6. Browse to http://localhost:8080 to view your work.
Updates, comments, feedback, changes
This is a handy starter project that I created for my own purposes, but I know people run into the same issues as I do and I hope that this gets people moving faster.
If you have any comments, add them into the Disqus form below. If you have issues, problems, or general feature requests then start a pull request or issue on GitHub and I'll do my best to look at it really quickly.
Now get out there and make something amazing!
Rob Kendal
About Rob Kendal
Rob Kendal is an award-winning front-end developer and marketer who likes simple, organised thinking and making clever things. You can find him working at IAM Cloud. Say hi on me@robkendal.co.uk.
Comments
Receive awesome news in the mail
If you'd like to be notified of the latest updates and articles, pop your email address in the box below and get ready for update goodness. I only send emails out once a month and promise to never spam you.
Read more |
Vitamins are required for normal growth, metabolism and good health. Their task is to metabolize other nutrients to provide energy and start reactions in the body. They are found in fruits, vegetables and other food, but may be missing due to a number of reasons. The USDA (United States Department of Agriculture) recommends a bare minimum requirement of vitamin supplements to prevent deficiencies.
There are two kinds of vitamins classified according to their solubility. The fat soluble vitamins are A, E, D and K, and can be stored in the body. They contain carbon, hydrogen and oxygen. The water soluble vitamins contain nitrogen, and sometimes sulfur, in addition to these three. Water soluble vitamins include vitamin C or ascorbic acid and vitamins of the B group: thiamine or vitamin B1, riboflavin or vitamin B2, niacin or vitamin B3, pantothenic acid or vitamin B5, pyridoxine or vitamin B6, biotin or vitamin B7, folate/folic acid or vitamin B9 and vitamin B12. They cannot be stored in the body.
It is important to be aware of the multiple functions of vitamins and effects of deficiencies to understand the role of vitamin supplements. Vitamins allow nutrients to be digested and absorbed and convert carbohydrates and fats into energy. They help to metabolize nutrients, produce antibodies to strengthen immunity and develop resistance to diseases. Vitamins strengthen cells, bind tissues, form bones, blood cells and genetic material, hormones and chemicals of the nervous system and combine with proteins to produce enzymes. Each group of vitamins performs more specific roles.
Article Source: http://EzineArticles.com/6803529
#include "pthread_impl.h"

int pthread_mutex_unlock(pthread_mutex_t *m)
{
    pthread_t self;
    int waiters = m->_m_waiters;
    int cont;
    int robust = 0;

    if (m->_m_type != PTHREAD_MUTEX_NORMAL) {
        if (!m->_m_lock)
            return EPERM;
        self = pthread_self();
        if ((m->_m_lock&0x1fffffff) != self->tid)
            return EPERM;
        if ((m->_m_type&3) == PTHREAD_MUTEX_RECURSIVE && m->_m_count)
            return m->_m_count--, 0;
        if (m->_m_type >= 4) {
            robust = 1;
            self->robust_list.pending = &m->_m_next;
            *(void **)m->_m_prev = m->_m_next;
            if (m->_m_next) ((void **)m->_m_next)[-1] = m->_m_prev;
        }
    }
    cont = a_swap(&m->_m_lock, 0);
    if (robust) self->robust_list.pending = 0;
    if (waiters || cont<0) __wake(&m->_m_lock, 1, 0);
    return 0;
}
HELP mac OS - error message then program terminates
Discussion in 'Web Graphics Creator' started by Madsy003, Jun 7, 2009.
1. Madsy003
Hi,
Help needed!! I just purchased this software today and installed it fine on my macbook. Then I open the program and each time within about a minute or two the application closes and comes up with the message 'THE APPLICATION WEB GRAPHICS CREATOR QUIT UNEXPECTEDLY.'
I have repaired the permissions folder already, but still get this error message!! It's really frustrating as I cannot use the software because it keeps closing unexpectedly!
If anyone could assist that would be fantastic!!
Cheers,
madsy
2. Doc
Hi madsy welcome to the forum. Sorry I can not offer any practical advice other than uninstalling the application and reinstalling (I am a PC user rather than a Mac user) However I am sure that one of our Mac gurus will be able to offer advice on how to solve the problem so keep checking for a reply.
3. mugwump
Have you deleted the Preference file? Usually in your Home directory, /Users/(your_home)/Library/Preferences. I only have The Logo Creator, and I didn't see a Pref. file, so maybe this isn't such a good suggestion.
I, too, would say re-install, but after you run "fix permissions" again, and maybe even reboot.
Do you have any USB devices that could be unplugged before you do the installation? And then run the software without the USB stuff attached and see what happens.
Any chance you have access to another Mac with 10.5.6 or earlier on it? If so, try to install it and run it there. There may be something "hinky" with 10.5.7 and the software . . .
Sorry I can't be more helpful . . .
Mugwump
I have an XBox joypad:
Bus 005 Device 004: ID 045e:0289 Microsoft Corp. Xbox Controller S
When I start Bastion from the terminal the following is output:
Number of joysticks: 1
Number of buttons for joystick: 0 - 10
Number of axes for joystick: 0 - 6
Number of PovHats for joystick: 0 - 1
When I load up the game it displays a message "press any key" and at this point, if I press a button on the joypad it advances to the main menu. However, the up/down/left/right controls do not work and the button will not operate the menu. When I enter the control configuration, the joypad section is disabled and displays a message "joypad not detected." If I enter the control customization and try to reconfigure one of the controls, noises can be heard when pressing joypad buttons, but the input is otherwise ignored.
Further information which may or may not be relevant:
• My controller is an original Xbox controller, not a 360 controller. XNA games on Windows apparently only work with Xbox360 controllers because they use xinput rather than direct input, see eg here.
• My controller works (almost) properly with MonoGame trunk samples, but Bastion uses a modified MonoGame and crashes when run against trunk, so I can't add debugging to see where the problem is.
• Bug can also be reproduced with a Xbox 360 wired controller.
I have the same issue with a 360 controller, precise 64 bit – psylockeer Jun 3 '12 at 14:08
Same here... :( – alemur Jun 3 '12 at 14:36
Same here. (huh, there is a character minimum) – senshikaze Jun 4 '12 at 18:33
Accepted answer (2 upvotes):
This is fixed in the latest package from the Software Centre.
SuperGiant Games has not included joystick support for the linux release. Perhaps in a later update.
too bad :-( thanks anyway – psylockeer Jun 8 '12 at 23:50
You can try using the qjoypad to do joystick to keyboard emulation:
http://www.playdeb.net/updates/ubuntu/12.04/?q=qjoypad
You need to install the playdeb ppa's to get it to show up in the software centre, all of the instructions are on the playdeb site. it works awesome.
Gastric sleeve surgery is generally considered a safe treatment. However, the level of risk varies from person to person: a patient's existing diseases have a large effect on it.
Is Gastric Sleeve Safe
There are various forms of bariatric surgery, but they all try to shrink the stomach and occasionally interfere with nutrient absorption in order to help the patient lose weight.
Bariatric surgery is a digestive-system treatment that aids weight loss in obese people. The procedure decreases calorie consumption by shrinking the stomach, and some types of bariatric surgery also reduce nutrient absorption. A successful bariatric surgery, regardless of approach, results in significant weight loss.
What is the purpose of bariatric surgery?
Bariatric surgery helps obese people lose weight by reducing stomach capacity and appetite. Obesity is defined as having a body mass index (BMI) of 30 or above. Other variables, such as muscle mass and waist circumference, also affect the diagnosis of obesity.
Bariatric surgery is typically considered when a person's BMI reaches 40 or higher. It can also be prescribed for people with a BMI of 30 to 40 who have diabetes, high blood pressure, fatty liver, or sleep apnea.
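Since the thresholds above are all stated in terms of BMI, a quick sketch may help: BMI is weight in kilograms divided by the square of height in meters. The functions below simply restate the criteria from the text; the names are illustrative only, and this is of course not medical advice:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def surgery_considered(weight_kg: float, height_m: float,
                       has_comorbidity: bool = False) -> bool:
    """Mirror the thresholds quoted in the text: BMI of 40 or higher,
    or BMI 30-40 together with a condition such as diabetes,
    high blood pressure, fatty liver, or sleep apnea."""
    b = bmi(weight_kg, height_m)
    return b >= 40 or (30 <= b < 40 and has_comorbidity)

print(round(bmi(120, 1.75), 1))             # 39.2
print(surgery_considered(120, 1.75))        # False (BMI 30-40, no comorbidity)
print(surgery_considered(120, 1.75, True))  # True
```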
Obesity can lead to a variety of health issues, including:
• Diabetes
• Coronary heart disease
• Hypertension
• Hepatic (liver) disease
• Digestive issues
• Incontinence
• Obstructive sleep apnea
• Backache
• Psychological issues associated with obesity
Obesity is a serious health problem worldwide. Bariatric surgery is the most important and long-term weight loss treatment for obese individuals. However, patients should consider bariatric surgery only after they have exhausted all other weight loss alternatives such as diet, exercise, and medication.
1-Gastric banding that can be adjusted laparoscopically
This is a surgical technique in which the doctor places an inflatable band around the top of the stomach.
This divides the stomach into two halves, producing a small pouch above the main stomach with a short duct connecting it.
This causes the passage of food into the main stomach to be delayed, resulting in less consumption.
The gastric aperture can be controlled by inflating and deflating the band through a port beneath the skin.
This process is reversible, since the band and port may be removed when no longer required.
2-Gastric sleeve surgery or gastric sleeve
This is a restrictive laparoscopic procedure in which the surgeon removes around 75 to 85 percent of the stomach, leaving just a small stapled pouch.
This reduces food consumption while having no effect on nutrient absorption.
It also reduces production of the stomach hormone that stimulates appetite (ghrelin), thereby decreasing hunger.
This treatment is sometimes used as the first step in a series of weight-loss operations.
This procedure cannot be reversed.
3-Roux-en-Y or gastric bypass surgery
This is a restrictive/malabsorptive procedure that is performed in two stages.
The surgeon starts by stapling the stomach, creating a small pouch.
They then cut the small intestine and connect the bottom piece straight to the sac, bypassing the bulk of the stomach and small intestine.
The bypassed portion is then linked to the lower part of the small intestine, allowing digestive juices to enter.
Bypass causes changes in gut flora and hormones, which leads to malabsorption and calorie restriction.
Reversing this is a difficult procedure, but it is doable if medically necessary.
4-Duodenal switch biliopancreatic diversion
This is a restrictive/malabsorptive treatment that is done in two stages.
First, the surgeon performs gastric sleeve surgery, removing a large part of the stomach.
The resulting pouch is then attached to the end of the small intestine, bypassing most of it.
The bypassed portion is connected lower down so that digestive juices can enter.
This procedure is permanent.
Throughout the process
The details of your operation depend on your specific condition as well as on the hospital's or doctor's practice. Some sleeve gastrectomies use a traditional large (open) abdominal incision. More often, however, gastric sleeve surgery is performed laparoscopically, which involves inserting tiny instruments through several small incisions in the upper abdomen.
You will be given general anesthesia prior to your surgery. Anesthesia is medication that keeps you asleep and comfortable throughout the operation.
During gastric sleeve surgery, the surgeon creates a narrow tube by vertically stapling the stomach and removes the wider, curved part of the stomach.
In most cases, the procedure takes between one and two hours. After surgery, you wake up in a recovery area where medical personnel watch you for any complications.
After the process
You will follow a special diet after a sleeve gastrectomy. For the first seven days, the diet consists of sugar-free, non-carbonated liquids; for the next three weeks, pureed foods; and from four weeks after surgery, regular meals. For the rest of your life, you’ll need to take a multivitamin twice a day, a calcium supplement once a day, and an injectable vitamin B-12 yearly.
In the first few months after weight loss surgery, you will have several doctor appointments to monitor your health. You may need laboratory testing, blood work, and other diagnostic procedures.
As your body responds to rapid weight reduction, you may notice the following changes in the first three to six months after sleeve gastrectomy:
• An ache in the body
• Feeling tired, as though you have the flu
• Feeling chilly
• Skin that is parched
• Hair loss and thinning
• Alterations in mood
Is Gastric Sleeve Safe?
A search for “is gastric sleeve safe” returns many pages of results.
As with any major operation, sleeve gastrectomy surgery has both immediate and long-term health concerns.
Although the gastric sleeve is generally safe, sleeve gastrectomy surgery may carry the following risks:
• Excessive bleeding
• Infection
• Adverse effects of anesthesia
• Clots in the blood
• Breathing or lung issues
• Leakage from the cut edge of the stomach
Sleeve gastrectomy surgery may have the following long-term risks and complications:
• Blockage of the gastrointestinal tract
• Hernias
• Gastroesophageal reflux disease
• Low blood sugar levels (hypoglycemia)
• Malnutrition
• Vomiting
Complications from sleeve gastrectomy surgery might be deadly in rare cases.
Everyone Wants to Know: Is Gastric Sleeve Safe?
A common question that most people have about weight loss surgeries is as follows: “is gastric sleeve safe?”. Gastric sleeve surgery is a type of bariatric surgery for weight loss that has become increasingly popular in recent years. The procedure involves removing a large portion of the stomach and reshaping it into a smaller “sleeve”. It has been hailed as an effective way to reduce weight and improve health, but there are still many questions about its safety. The safety of gastric sleeve surgery depends largely on the patient’s health, the skill of the surgeon, and the follow-up care.
Generally speaking, gastric sleeve surgery is among the safest weight loss surgeries when performed on otherwise healthy patients with a BMI over 40. People who are significantly overweight or obese are at higher risk for complications, so they should discuss their options with their doctor before committing to the procedure. After the procedure, patients should also follow their doctor’s dietary and exercise recommendations to help ensure a successful outcome. It is important for patients to receive counseling and support before and after the surgery in order to cope with these issues. So, the answer to the question “is gastric sleeve safe?” is quite simple: yes, as long as a qualified doctor performs it.
4.18: Putting It Together- Examining Relationships- Quantitative Data
Let’s Summarize
• We use a scatterplot to graph the relationship between two quantitative variables. In a scatterplot, each dot represents an individual. We always plot the explanatory variable on the horizontal x-axis.
• When we explore a relationship between two quantitative variables using a scatterplot, we describe the overall pattern (direction, form, and strength) and deviations from the pattern (outliers).
• When the form of a relationship is linear, we use the correlation coefficient, r, to measure the strength and direction of the linear relationship. The correlation ranges between −1 and 1. If the pattern is linear, an r-value near −1 indicates a strong negative linear relationship and an r-value near +1 indicates a strong positive linear relationship. Following are some cautions about interpreting correlation:
• Always make a scatterplot before interpreting r. Correlation is affected by outliers and should be used only when the pattern in the data is linear.
• Association does not imply causation. Do not interpret a high correlation between explanatory and response variables as a cause-and-effect relationship.
• Beware of lurking variables that may be explaining the relationship seen in the data.
• The line that best summarizes a linear relationship is the least-squares regression line. The least-squares line is the best fit for the data because it gives the best predictions with the least amount of error. The most common measurement of overall error is the sum of the squares of the errors, SSE. The least-squares line is the line with the smallest SSE.
• We use the least-squares regression line to predict the value of the response variable from a value of the explanatory variable. Avoid making predictions outside the range of the data. (This is called extrapolation.)
• We have two methods for finding the equation of the least-squares regression line, Predicted y = a + b * x:
• We can use technology to find the equation directly.
• We can use summary statistics for x and y together with the correlation. With this method, we calculate the slope b and the y-intercept a as b = r · (s_y / s_x) and a = ȳ − b · x̄.
• The slope of the least-squares regression line is the average change in the predicted values when the explanatory variable increases by 1 unit.
• When we use a regression line to make predictions, there is error in the prediction. We calculate this error as Observed value − Predicted value. This prediction error is also called a residual.
• We use residual plots to determine whether a linear model is a good summary of the relationship between the explanatory and response variables. In particular, we look for any unexpected patterns in the residuals that may suggest that the data is not linear in form.
• We have two numeric measures to help us judge how well the regression line models the data:
• The square of the correlation, r², is the proportion of the variation in the response variable that is explained by the least-squares regression line.
• The standard error of the regression, se , gives a typical prediction error based on all of the data. It roughly measures the average distance of the data from the regression line. In this way, it is similar to the standard deviation, which roughly measures average distance from the mean.
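To make the summary concrete, here is a small numeric illustration (the five data points are invented for the example) that computes r, the least-squares slope and intercept from summary statistics, r², and one residual:

```python
import statistics as st

# Invented example data: explanatory variable x, response variable y
x = [1, 2, 3, 4, 5]
y = [2.0, 2.9, 4.1, 4.9, 6.2]

n = len(x)
x_bar, y_bar = st.mean(x), st.mean(y)
s_x, s_y = st.stdev(x), st.stdev(y)

# Correlation coefficient r
r = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / ((n - 1) * s_x * s_y)

# Slope and intercept from summary statistics, as in the formulas above
b = r * s_y / s_x      # slope = 1.04
a = y_bar - b * x_bar  # intercept = 0.90

print(f"r = {r:.4f}, r^2 = {r * r:.4f}")  # r^2: share of variation explained
print(f"Predicted y = {a:.2f} + {b:.2f} * x")

# Residual = observed value - predicted value, here for the first data point
residual = y[0] - (a + b * x[0])
print(f"residual at x = {x[0]}: {residual:+.2f}")  # +0.06
```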
4.18: Putting It Together- Examining Relationships- Quantitative Data is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
Golang : Get hardware information such as disk, memory and CPU usage
Tags : golang cpu-usage cross-platform system-process-status operating-system
In this tutorial, we will learn how to extract hardware-level information with Golang. Accessing hardware-level information such as the CPU usage percentage and disk partition details can be useful when, for example, you need to query a hardware ID or serial number without physically opening up the casing, or when you need the hardware information for tracking and troubleshooting purposes.
For my own personal use, I need it to query a couple of Raspberry Pi computers, Linux desktops and a Mac.
In the code example below, we will use the cross-platform github.com/shirou/gopsutil package to find out the CPU utilization level and other system process statuses with Golang. The queried information will be exposed via a web interface.
Before you start, please install the package:
$>go get github.com/shirou/gopsutil/...
Here you go!
package main
import (
"fmt"
"github.com/shirou/gopsutil/cpu"
"github.com/shirou/gopsutil/disk"
"github.com/shirou/gopsutil/host"
"github.com/shirou/gopsutil/mem"
"github.com/shirou/gopsutil/net"
"net/http"
"runtime"
"strconv"
)
func dealwithErr(err error) {
if err != nil {
fmt.Println(err)
//os.Exit(-1)
}
}
func GetHardwareData(w http.ResponseWriter, r *http.Request) {
runtimeOS := runtime.GOOS
// memory
vmStat, err := mem.VirtualMemory()
dealwithErr(err)
// disk - start from "/" mount point for Linux
// might have to change for Windows!!
// don't have a Windows machine to test this out; if the detected OS == windows
// then use "\" instead of "/"
diskStat, err := disk.Usage("/")
dealwithErr(err)
// cpu - get CPU number of cores and speed
cpuStat, err := cpu.Info()
dealwithErr(err)
percentage, err := cpu.Percent(0, true)
dealwithErr(err)
// host or machine kernel, uptime, platform Info
hostStat, err := host.Info()
dealwithErr(err)
// get interfaces MAC/hardware address
interfStat, err := net.Interfaces()
dealwithErr(err)
html := "<html>OS : " + runtimeOS + "<br>"
html = html + "Total memory: " + strconv.FormatUint(vmStat.Total, 10) + " bytes <br>"
html = html + "Free memory: " + strconv.FormatUint(vmStat.Free, 10) + " bytes<br>"
html = html + "Percentage used memory: " + strconv.FormatFloat(vmStat.UsedPercent, 'f', 2, 64) + "%<br>"
// get disk serial number.... strange... not available from disk package at compile time
// undefined: disk.GetDiskSerialNumber
//serial := disk.GetDiskSerialNumber("/dev/sda")
//html = html + "Disk serial number: " + serial + "<br>"
html = html + "Total disk space: " + strconv.FormatUint(diskStat.Total, 10) + " bytes <br>"
html = html + "Used disk space: " + strconv.FormatUint(diskStat.Used, 10) + " bytes<br>"
html = html + "Free disk space: " + strconv.FormatUint(diskStat.Free, 10) + " bytes<br>"
html = html + "Percentage disk space usage: " + strconv.FormatFloat(diskStat.UsedPercent, 'f', 2, 64) + "%<br>"
// since my machine has one CPU, I'll use the 0 index
// if your machine has more than 1 CPU, use the correct index
// to get the proper data
html = html + "CPU index number: " + strconv.FormatInt(int64(cpuStat[0].CPU), 10) + "<br>"
html = html + "VendorID: " + cpuStat[0].VendorID + "<br>"
html = html + "Family: " + cpuStat[0].Family + "<br>"
html = html + "Number of cores: " + strconv.FormatInt(int64(cpuStat[0].Cores), 10) + "<br>"
html = html + "Model Name: " + cpuStat[0].ModelName + "<br>"
html = html + "Speed: " + strconv.FormatFloat(cpuStat[0].Mhz, 'f', 2, 64) + " MHz <br>"
for idx, cpupercent := range percentage {
html = html + "Current CPU utilization: [" + strconv.Itoa(idx) + "] " + strconv.FormatFloat(cpupercent, 'f', 2, 64) + "%<br>"
}
html = html + "Hostname: " + hostStat.Hostname + "<br>"
html = html + "Uptime: " + strconv.FormatUint(hostStat.Uptime, 10) + "<br>"
html = html + "Number of processes running: " + strconv.FormatUint(hostStat.Procs, 10) + "<br>"
// another way to get the operating system name
// both darwin for Mac OSX, For Linux, can be ubuntu as platform
// and linux for OS
html = html + "OS: " + hostStat.OS + "<br>"
html = html + "Platform: " + hostStat.Platform + "<br>"
// the unique hardware id for this machine
html = html + "Host ID(uuid): " + hostStat.HostID + "<br>"
for _, interf := range interfStat {
html = html + "------------------------------------------------------<br>"
html = html + "Interface Name: " + interf.Name + "<br>"
if interf.HardwareAddr != "" {
html = html + "Hardware(MAC) Address: " + interf.HardwareAddr + "<br>"
}
for _, flag := range interf.Flags {
html = html + "Interface behavior or flags: " + flag + "<br>"
}
for _, addr := range interf.Addrs {
html = html + "IPv6 or IPv4 addresses: " + addr.String() + "<br>"
}
}
html = html + "</html>"
w.Write([]byte(html))
}
func SayName(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello, I'm a machine and my name is [whatever]"))
}
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", SayName)
mux.HandleFunc("/gethwdata", GetHardwareData)
http.ListenAndServe(":8080", mux)
}
Sample output (point your browser to http://localhost:8080/gethwdata):
OS : darwin
Total memory: 4294967296 bytes
Free memory: 24223744 bytes
Percentage used memory: 76.96%
Total disk space: 119174365184 bytes
Used disk space: 109002297344 bytes
Free disk space: 9909923840 bytes
Percentage disk space usage: 91.46%
CPU index number: 0
VendorID: GenuineIntel
Family: 6
Number of cores: 2
Model Name: Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz
Speed: 3292.00 MHz
Current CPU utilization: [0] 55.49%
Current CPU utilization: [1] 27.71%
Current CPU utilization: [2] 55.33%
Current CPU utilization: [3] 26.79%
Hostname: Sweets-Mac-Pro.local
Uptime: 9045
Number of processes running: 211
OS: darwin
Platform: darwin
Host ID(uuid): FE3619E6-270B-34A6-BBD7-BED74EC32693
------------------------------------------------------
Interface Name: lo0
Interface behavior or flags: up
Interface behavior or flags: loopback
Interface behavior or flags: multicast
IPv6 or IPv4 addresses: {"addr":"::1/128"}
IPv6 or IPv4 addresses: {"addr":"127.0.0.1/8"}
IPv6 or IPv4 addresses: {"addr":"fe80::1/64"}
------------------------------------------------------
Interface Name: gif0
Interface behavior or flags: pointtopoint
Interface behavior or flags: multicast
------------------------------------------------------
Interface Name: stf0
------------------------------------------------------
Interface Name: en3
Hardware(MAC) Address: 18:a6:f7:16:e8:0b
Interface behavior or flags: up
Interface behavior or flags: broadcast
Interface behavior or flags: multicast
IPv6 or IPv4 addresses: {"addr":"fe80::1aa6:f7ff:fe16:e80b/64"}
IPv6 or IPv4 addresses: {"addr":"192.168.1.65/24"}
Happy coding!
References:
https://www.socketloop.com/tutorials/golang-http-server-example
https://www.socketloop.com/tutorials/golang-detect-os-operating-system
http://stackoverflow.com/questions/11356330/getting-cpu-usage-with-golang?rq=1
https://github.com/shirou/gopsutil
By Adam Ng
PEMF Helmholtz coils
A while ago I was contacted by a doctor with a keen interest in the field lines of PEMF coils.
This doctor was using a device that came with two coils, each 2" in diameter, and he was under the impression that when the coils were placed opposite each other, the electromagnetic field lines penetrating the area between them would be homogeneous regardless of the distance between the coils.
He wondered whether this could really be correct, because he did not understand how changing the distance between the coils would influence the electromagnetic field lines between the two PEMF coils.
The German physicist Hermann von Helmholtz designed a configuration that creates an almost uniform magnetic field between two coaxial coils placed exactly opposite each other.
Taking advantage of the question I received, I explained to this doctor that the so-called Helmholtz configuration is obtained only under specific conditions: with two small-diameter coils, a uniform PEMF field results only when the separation between the coils equals half of a coil diameter.
Because he also wanted to treat larger body parts of his patients, such as an arm, a leg or even the stomach, he reasoned that he needed a PEMF system with large-diameter coils in order to treat a large area between the two coils with the same electromagnetic field intensity, which is only possible in a Helmholtz configuration.
Our discussion resulted in a special coil set-up which worked just fine in this PEMF Helmholtz configuration. The generated electromagnetic field lines now passed through the patient's entire stomach with the same homogeneous intensity! Here follow some explanations of how this works.
Helmholtz coil configuration for PEMF
In this picture two PEMF coils are wired in series.
This means that the electrical current from the source goes into the first coil, comes out, then goes into the second coil in the same direction, comes out, and returns to its source. The resulting PEMF field lines leave the coils perpendicular to them (at a right angle to the coils), in the direction of the line marked X.
However if we want to obtain an evenly spread PEMF field, where the intensity is completely equal in the whole area between the two coils, specific conditions must be met.
The distance between the 2 coils must be equal to half of the diameter of the coils, notice the red arrows.
Helmholtz coils distances
This explains why it is impossible to obtain an even field with two 2" coils placed opposite each other if the distance between them is more than half a coil diameter.
As such, the PEMF field distribution is already disturbed when the distance between such coils exceeds 1".
Helmholtz field lines between 2 pemf coils
PEMF field lines between helmholtz coils
These pictures show how we are able to generate a homogeneous PEMF field between the tips of the two green arrows between the coils. We can now see that the PEMF field lines are traveling straight between the two coils.
Having said this, if we need to bridge a distance of at least 9 inches between two coils, this requires coil diameters of at least 18 inches! See the red arrows in the picture at the top.
This requires a very large coil area as well as sufficient power to bridge the distance between the two coils. Driving two of these large PEMF coils calls for a powerful electrical current, which in turn requires a powerful PEMF generator.
PEMF treatment with Helmholtz coils
In the picture above we can see how a Helmholtz PEMF configuration can be obtained, provided the PEMF generator has sufficient energy output. Two flat coils (each with a diameter of 18") positioned opposite each other create a homogeneous field between them that is able to penetrate the body completely. Values of 40 millitesla (mT) can even be reached.
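As a rough sanity check on the power requirement, the on-axis field at the midpoint of an ideal Helmholtz pair (coil radius R, N turns per coil, separation equal to R) is B = (4/5)^(3/2) · μ0 · N · I / R. The sketch below inverts this for the current needed to reach 40 mT with 18-inch-diameter coils; the 100-turn winding is an assumed value for illustration, not a specification of any real device:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def helmholtz_center_field(n_turns: int, current_a: float, radius_m: float) -> float:
    """On-axis magnetic field (tesla) at the midpoint of an ideal
    Helmholtz pair whose separation equals the coil radius."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current_a / radius_m

def current_for_field(b_tesla: float, n_turns: int, radius_m: float) -> float:
    """Current (amperes) needed to reach a target midpoint field."""
    return b_tesla * radius_m / ((4 / 5) ** 1.5 * MU0 * n_turns)

radius = 0.229   # 18-inch diameter -> 9-inch (~0.229 m) radius
turns = 100      # assumed turns per coil, for illustration only
target = 0.040   # 40 mT, the value mentioned above

amps = current_for_field(target, turns, radius)
print(f"Required current: {amps:.0f} A")  # on the order of 100 A
```

With these assumed numbers the required current comes out on the order of a hundred amperes, which is consistent with the point above that large Helmholtz coils call for a powerful generator.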
PHP Script to generate a Google Sitemap automatically
The other day I needed to create a script that could automatically generate a Google Sitemap. The key requirement was that the sitemap be recreated automatically, for example every time we added a post or on a regular schedule such as nightly.
The script to create the automatic Google Sitemap was actually pretty straight forward. The key thing is deciding what values to include in your xml entry block for each item.
So far the script has worked out really well. In fact, it builds several hundred entries in a very short amount of time. The Google Sitemap generator was designed to create a very large file with thousands of entries, but you can tweak the script as appropriate to fit your needs.
The Google Sitemap Generator script is included below. I hope you enjoy it.
/*
* SiteMap Generator
* This script is designed to make it very easy to create a custom Sitemap for your site from a set of data objects
* Simply customize your object interactions
* Author: Ben Hall
* Url: http://www.benhallbenhall.com
* License: Free to use commercial or private with a link back to http://www.benhallbenhall.com
*/
/////////////////////////////////////////////////////////////
//// Setup
/////////////////////////////////////////////////////////////
// Replace the following code with whatever code you want to use to get a batch of objects from your data store.
// 1. The Object Class file :: Replace this wil your own Object Class
include("ObjectClass.php");
// 2. Change this code to represent whatever call you need to use to get a batch of objects
function getBatch($start, $end){
return Object::getBatch($start, $end);
}
// 3. Change this code to represent pulling data out of a single object and putting it into a single sitemap entry
// Note :: change frequency and priority are default to a standard value - feel free to customize as appropriate.
function writeRecord($object){
$html = '<url>';
$html .= '<loc>'.$object->url.'</loc>'; //The canonical URL of the item
$html .= '<lastmod>'.Util::formatDate($object->lastModified,"Y-m-d").'T'.Util::formatDate($object->lastModified,"H:i:s-04:00").'</lastmod>';
$html .= '<changefreq>weekly</changefreq>';
$html .= '<priority>0.5</priority>';
$html .= '</url>';
return $html;
}
// 4. If needed you can set the following variables to help tune the script.
$memory = '64M'; //Increase or lower the memory value as appropriate (useful if your sitemap is very large)
$sitemapFile = "sitemap.xml"; //The location of the Sitemap to build
$batchSize = 1000; //Number of objects to retrieve at a single time
$sleepLength = 1; //Number of seconds to sleep in between batches (so as to not harm your server)
$showErrors = true; //Boolean, set to True or False to show or hide errors
/////////////////////////////////////////////////////////////
//// Code Body
/////////////////////////////////////////////////////////////
//Setup the variables
ini_set('memory_limit', $memory);
$start = $end = 0;
$moreToDo = true;
//Error reporting
if($showErrors){
error_reporting(E_ALL);
ini_set('display_errors','On');
}
//Open the file handler
$fh = fopen($sitemapFile, 'w');
//Write the headers on the XML file
$html = '<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.google.com/schemas/sitemap/0.84 http://www.google.com/schemas/sitemap/0.84/sitemap.xsd">';
fwrite($fh, $html);
//Loop through the objects: get a batch and write each entry
while($moreToDo){
//Setup variables for the loop
$start = $end;
$end = $end + $batchSize;
//Get a batch of objects from your data store (replace with your own call)
$objects = getBatch($start, $end);
$objectCount = count($objects);
//Turn off looping if the batch returned fewer objects than the batch size
if($objectCount < $batchSize){
$moreToDo = false;
}
//Build the XML for each entry
foreach($objects as $object){
$entry = writeRecord($object);
fwrite($fh, $entry);
}
//Sleep briefly to not harm your server
sleep($sleepLength);
}
//Close up the end of the File
$html = '</urlset>';
fwrite($fh, $html);
fclose($fh);
echo '<p>All Done. <a href="'.$sitemapFile.'">Click here to view</a></p>';
Clements Vaporizer
WLMD ID: amxu
Australian inventor and engineer Hubert Ingham Clements (1886-1969) founded his manufacturing company in 1908. One of the first medical devices the company made was an endotracheal insufflation anesthesia apparatus. This technique, popular through the first half of the 20th century, delivered anesthetic vapor through a tube inserted in the patient's trachea (windpipe). The example shown here was probably made between 1930 and 1950.
The cylinder at the top is an ether reservoir, with a drip feed tube that leads to a vaporizing chamber below. The lower cylinder surrounds the vaporizing chamber with a water bath that was heated by electricity. A motor (not present in this example) drew air into the chamber, where it became charged with the ether vapor, and then out again toward the patient. It was noted that this design required the water to be so hot that it could potentially cause an unsafe concentration of the vapor. But Clements' products were well respected, and over 900 of its insufflation anesthesia apparatus were sold before the company changed hands in 1967.
Catalog Record: Clements Vaporizer
Access Key: amxu
Accession No.: 2007-08-12-1
Title: [Clements Endotracheal Anesthesia Apparatus].
Author: Clements, Hubert I. (Ingham), 1886-1969.
Corporate Author: H. I. Clements.
Title variation: Alt Title
Title: Clements Anaesthetic Apparatus.
Title variation: Alt Title
Title: Clements Ether Vaporizer.
Publisher: Sydney, Australia : H. I. Clements, [between 1917 and 1962].
Physical Description: 1 endotracheal insufflation anesthesia apparatus : metals, glass, rubber, paint ; 30.5 x 22 dia. cm.
Subject: Anesthesia, Inhalation – instrumentation.
Subject: Anesthesia Machines.
Subject: Ether, Ethyl.
Web Link: https://www.woodlibrarymuseum.org/museum/item/1010/clements-vaporizer
Note Type: General
Notes: The title is based on the common elements in the names of various models of the apparatus. The cataloged object is not an exact match to any known description. The first year in the date range is based on the date of the company’s entry into the field of anesthesia equipment manufacture. The second year in the date range is based on the date when the company ceased production of ether apparatus; this example’s low serial number (315 out of a total of 933) makes it likely that it was produced somewhat earlier. In this description the front of the apparatus is considered that side on which the flowmeter and the thermometer can be read.
Note Type: With
Notes: 1 wooden stand : 5.5 x 26.5 dia. cm.
Note Type: Citation
Notes: Ball C. The Lidwell machine. Anaesthesia and Intensive Care. February, 1990;18(1):4.
Note Type: Citation
Notes: Clements Company File. Archives. Located at: Wood Library-Museum of Anesthesiology, Schaumburg, Illinois.
Note Type: Citation
Notes: Elliott Brothers Limited. Illustrated Catalogue, Surgical Instruments and Appliances, Aseptic Hospital Furniture, Electric Appliances, Diathermy Apparatus, etc., and Hospital Supplies, 5th ed. Sydney: Elliott Brothers Limited, 1929.
Note Type: Citation
Notes: Holland R. Hubert Ingham Clements: a pioneer of Australian anesthesia. Anaesth Intensive Care. June, 2005;33(Supplement 1):4-6.
Note Type: Citation
Notes: Holland R. Decline and fall–a tragedy in three acts. Anaesth Intensive Care. June, 2007;35(Supplement1): 11-16.
Note Type: Citation
Notes: Kaye G, Orton RH, Renton DG. Anaesthetic Methods. Melbourne: Ramsay (Surgical) Pty. Ltd., 1946:158-159.
Note Type: Physical Description
Notes: One endotracheal insufflation anesthesia apparatus; The height of the apparatus alone is approximately 32 centimeters; When seated in the wooden stand, the total height is approximately 35.5 centimeters; The apparatus could not be used while seated in the stand; The upper component of the apparatus is a cylindrical ether reservoir; At the bottom of this cylinder is a drip feed tube that connects to a second cylinder below; This lower cylinder is attached to a circular metal base plate;
On top of the ether reservoir there is a vertical post with a horizontal arm; Mounted in a hex nut, this post can turn 360 degrees both to the left and the right; Also on top of the reservoir, to the left of the post, is an ether fill port which is fitted with a screw cap; The back of the reservoir has an inset area that holds a glass tube for viewing the level of the ether; Behind this tube, the inset portion of the wall of the reservoir is painted red; At the bottom of the reservoir is a drip feed that has a glass window on the front and another on the back; The reservoir and drip feed form one solid column, which is screwed into the cylinder below; This column can be easily removed and replaced;
The lower cylinder contains a vaporizing chamber which is surrounded by a space that can be filled with water (a water bath); The bottom of the drip feed connects to the vaporizing chamber; This cylinder holds four components: 1) a round thermometer mounted on the front, 2) a flowmeter mounted on the left, 3) an ether drain stopcock mounted directly below the flowmeter, and 4) a vertical metal tube, connected to a stopcock, mounted on the right (the fill port and drain cock of the water bath);
The flowmeter is marked, from top to bottom, “12, 6, 3”; Near the top of the flowmeter, and extending to the left, is a short metal tube that would connect to a breathing tube;
The thermometer dial has a red needle indicator, and is marked with two numbered rings; The outer ring reads, from left to right: “-40, -20, 0, 20, 40, 60, 80, 100, 120, 140”; The inner ring reads, from left to right: “-30, -10, 10, FRZ, 50, 70, 90, 110, 130”; The center of the dial is marked “Tele-Tru Cub [new line] Thermometer [new line] Made in U. S. A.”;
The lower cylinder is connected to a round base plate by three screws; The base plate holds a toggle switch, mounted directly in front of the thermometer; The switch is marked, from left to right: “HEAT OFF”; A metal label is screwed to the plate directly in front of the thermometer, between the cylinder and the toggle switch; This label reads: “IMPORTANT, HEAT IS NECESSARY [new line] KEEP HOT”;
Two more metal labels, each screwed to the plate, are located directly below the stopcock on the left; That label closest to the cylinder reads: “BEFORE USING APPARATUS [new line] BE SURE [new line] TO DRAIN OFF ANY ETHER, THAT MAY BE IN [new line] VAPORIZER CHAMBER, BY MEANS OF [new line] COCK “A” ABOVE”; The outer of these two labels reads: “BE SURE [new line] TO HAVE AIR PASSING THRU [new line] FLOWMETER BEFORE ETHER [new line] DRIP IS TURNED ON, OTHER [new line] WISE ETHER WOULD ACCU- [new line] MULATE IN VAPOR CHAMBER”;
To the right of the right-hand stopcock, a fourth metal label reads: “SCIENTIFIC APPARATUS [new line] MADE BY [new line] H. I. CLEMENTS [new line] Engineers Sydney [new line] No. 315”;
The base plate has three screws that do not attach to anything; These screws extend approximately 2 centimeters below the plate, and act as supporting “feet” for the unit; An enclosed electrical component is attached to the underside of the plate; The electrical component is controlled by the toggle switch mounted on the surface of the plate; A length of rubber-encased electrical cable exits this component; This cable has been cut short.
Note Type: Reproduction
Notes: Photographed by Mr. Steve Donich, January 12, 2016.
Note Type: Acquisition
Note Type: Historical
Notes: In use throughout the first half of the 20th century, endotracheal insufflation anesthesia delivered anesthetic vapor through a tube inserted in the patient’s trachea (windpipe). Some insufflation apparatus was coupled with a component for warming the anesthetic, and some with a motor to supply forced air and/or suction. The Clements apparatus was accompanied by a motor, and could be purchased with an optional suction component.
Australian inventor and engineer Hubert Ingham Clements (1886-1969) founded an automotive repair business, H. I. Clements, in 1908. He became a respected leader in the automotive industry. After WWII, the company changed its name to H. I. Clements & Son. The company was sold in 1967, and continued making medical equipment under the name H. I. Clements Pty. Ltd. (i.e. Proprietary Limited) until 1987, when the name was changed to Phoenix. Phoenix was dissolved in 1994 (Holland, 2007, p. 14.)
According to an unsigned and undated history of the company, Mr. Clements made his first medical device, a suction pump, in 1912. Holland states that the company “forsook the motor industry altogether and concentrated on medical and scientific apparatus”, selling its anesthesia and suction equipment to “almost every hospital in Australia” (2005, pp. 5-6.)
An advertisement dated 1938 states that the company then had “21 years of experience in the manufacture of anaesthesia apparatus.” Holland states that Clements’ first anesthesia apparatus was produced at a commensurate date, 1917, when, “in collaboration with Dr. Mark Lidwill, he designed and manufactured a mechanical vaporizer, a machine which remained in use for many years under Lidwill’s name” (2005, p. 4.) Set against this account, Ball gives Lidwill sole credit for designing an insufflation anesthesia apparatus in 1913, which was “manufactured by” Elliott Brothers, of Sydney. Elliott’s 1929 catalog, in the WLM collection, features a Lidwill apparatus. The cataloger was unable to determine whether both of these authors refer to the same machine.
According to a brief company history by Michael G. Cooper, M.D., dated June, 1994, Clements began manufacturing insufflation anesthesia apparatus in the 1930s. Holland states that Clements’ various models “included warmed-water versions, but from the 1950s electric heating gradually became more popular.” (2007, p. 13.)
Cooper states that the company’s production of ether/air apparatus ended in 1962. Holland states that the production run was under 1000, the last one being sold in 1967 (2007, p. 13.)
In 1946, Kaye described, but did not identify, an apparatus which is very similar to that made by Clements. In that description, an ether reservoir is set above a vaporizing chamber, with a glass window in the drip feed that stands between the two. The vaporizing chamber is surrounded by a water bath. Air is taken in through an inlet, passes through the chamber and out again toward the patient. Kaye noted that the disadvantages of this design were that “the water must be really hot”, and that “if the supply of air be turned off without closing the screw valve”, the patient would receive vapor “of boiling ether”, which “may well lead to a calamity” (p. 158.)
Note Type: Exhibition
Notes: Selected for the WLM website.
@proceedings{52131,
  title     = {Ecosystem stress response: Understanding effects on the benthic invertebrate community of Alberta oil-sands wetlands},
  year      = {2003},
  month     = {10/2003},
  note      = {No. 2510.},
  pages     = {1 page},
  publisher = {Canadian Technical Report of Fisheries and Aquatic Sciences},
  abstract  = {The environmental stress response of invertebrates was examined using wetlands in the Alberta oil-sands region as a model. Wetlands in this region occur naturally or they have been affected by oil-sands mining process materials such as mine-tailings, or saline process water. These materials can be toxic to aquatic organisms due to their high concentrations of sulphate ions, ammonia, polycyclic aromatic hydrocarbons (PAHs) and naphthenic acids. Wetlands are classified as either young or mature, and as having low or high sediment organic content. This study examined food web dynamics and structure in wetlands using stable isotopes to determine the effects of stress on ecological communities. Primary and secondary production in the wetlands was measured along with invertebrate diversity in order to determine a relationship. The maximum trophic position was determined using stable carbon and nitrogen isotopes to indicate food chain length, which is influenced by energetic constraints, ecosystem size and stressors. The study quantifies the dynamics of vital links between the responses to environmental pressures in aquatic systems and the effects on terrestrial ecosystems.},
  keywords  = {benthic community, ecology, field, invertebrates, wetlands},
  url       = {https://inis.iaea.org/search/search.aspx?orig_q=RN:35038231},
  author    = {Wytrykush, C. M. and Ciborowski, J. J. H.}
}
MySQL 8.0 Reference Manual
1.7.3.2 FOREIGN KEY Constraints
Foreign keys let you cross-reference related data across tables, and foreign key constraints help keep this spread-out data consistent.
MySQL supports ON UPDATE and ON DELETE foreign key references in CREATE TABLE and ALTER TABLE statements. The available referential actions are RESTRICT, CASCADE, SET NULL, and NO ACTION (the default).
SET DEFAULT is also supported by the MySQL Server but is currently rejected as invalid by InnoDB. Since MySQL does not support deferred constraint checking, NO ACTION is treated as RESTRICT. For the exact syntax supported by MySQL for foreign keys, see Section 13.1.20.5, “FOREIGN KEY Constraints”.
MATCH FULL, MATCH PARTIAL, and MATCH SIMPLE are allowed, but their use should be avoided, as they cause the MySQL Server to ignore any ON DELETE or ON UPDATE clause used in the same statement. MATCH options do not have any other effect in MySQL, which in effect enforces MATCH SIMPLE semantics full-time.
MySQL requires that foreign key columns be indexed; if you create a table with a foreign key constraint but no index on a given column, an index is created.
You can obtain information about foreign keys from the INFORMATION_SCHEMA.KEY_COLUMN_USAGE table. An example of a query against this table is shown here:
mysql> SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
> FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
> WHERE REFERENCED_TABLE_SCHEMA IS NOT NULL;
+--------------+---------------+-------------+-----------------+
| TABLE_SCHEMA | TABLE_NAME | COLUMN_NAME | CONSTRAINT_NAME |
+--------------+---------------+-------------+-----------------+
| fk1 | myuser | myuser_id | f |
| fk1 | product_order | customer_id | f2 |
| fk1 | product_order | product_id | f1 |
+--------------+---------------+-------------+-----------------+
3 rows in set (0.01 sec)
Information about foreign keys on InnoDB tables can also be found in the INNODB_FOREIGN and INNODB_FOREIGN_COLS tables, in the INFORMATION_SCHEMA database.
InnoDB and NDB tables support foreign keys.
Application Note
Benchling for Protein and Peptide Therapeutics R&D
This application note outlines the critical needs and complexities of protein and peptide R&D and how Benchling has helped address these challenges for leading protein and peptide therapeutic companies.
Overview
Proteins and peptides make up a wide variety of therapeutic molecules ranging from enzymes to cytokines. These therapeutics have several biological functions and interact with various biological pathways. New classes of proteins and peptides continue to be of significant research interest, but there are several hurdles to cross for companies seeking to develop new protein and peptide therapies.
This application note provides:
• An outline of the complexities present in protein and peptide research
• How Benchling’s capabilities can solve these complexities
Thursday, May 19, 2022
What Are The Hours For Intermittent Fasting
How Safe Is Intermittent Fasting
Combining 12 hour intermittent fasting with the Keto (ketogenic) diet or other carbohydrate-restricted diets aims to achieve ketosis, which is an accumulation of acidic ketones in the blood. Although the outcomes can be impressive, ketosis can cause liver, brain, and kidney damage. It is unsafe for individuals with chronic disorders such as diabetes and heart disease. Your blood sugar should also be stable before you follow this diet plan. Therefore, it is important to talk with your doctor and seek guidance from a licensed dietitian before beginning 12 hour intermittent fasting.
Easier Intermittent Fasting Plan
In my opinion, 12 hour fasting is the easiest and least painful way to go about fasting. If you are following an otherwise nutrient-dense, clean-eating diet, it should actually be easy to do.
Truly, if you're interested in embracing intermittent fasting, this 12 hour intermittent fasting plan is the most pleasant way to go.
As mentioned, I have been doing it more or less by accident for years.
Keep Track Of Your Journey
It helps keep track of progress and helps with staying motivated by finding ways to keep up the momentum with the nutrition goals.
Trifecta App is a weight loss app that helps you to track your progress and stay motivated. It also motivates challenges, rewards, and competitions.
Alternatively, you can write your progress in a notebook or journal.
Don’t Miss: Is Intermittent Fasting An Eating Disorder
How Does The 16:8 Diet Work
The 16:8 diet works on an hourly basis. So each day you can eat within an 8 hour time frame and fast for the remaining 16 hours. The best part? You don't have to restrict your calories when eating during the 8 hour window. As long as you eat healthily in your 8 hour time frame, you'll see the weight drop off.
Experts say that the 16:8 diet's restricted schedule gives our bodies the chance to process the nutrients stored in foods and burn away calories. Plus, you won't go hungry like you do on those two fasting days on the 5:2 either.
As Tom Jenane, nutrition and fitness expert, explains, “The 16:8 diet is a brilliant form of intermittent fasting that has proven results in a number of cases.”
Jennifer Aniston is a famous fan of intermittent fasting.
As well as Jennifer Aniston, other Hollywood names who have seen success after following the 16:8 diet include Hugh Jackman, who reportedly used the diet to get in shape for his Wolverine films and actress Jennifer Love Hewitt.
What Is Fasting And How Does It Work
A good rule of thumb for those wanting to fast to improve their metabolic and overall health is to leave at least 12 hours between meals (ideally more) on a regular basis, according to Dr Adam Collins, Principal Teaching Fellow in Nutrition at the University of Surrey.
Time-restricted eating is often used for weight loss, and some people increase the daily fasting period to 14 hours or 16 hours. There are other forms of intermittent fasting, including the 5:2 approach, which involves very restricted eating on two days of the week, with normal eating on the other five. The 4:3, or every-other-day, approach involves similar restrictions on alternate days. “Whichever one you choose, make sure you're doing it consistently,” Dr Collins advises.
Managing Your Weight With Intermittent Fasting And Time Restricted Eating
Intermittent fasting and time restricted eating are techniques to help people lose weight, improve their overall health, and some say simplify their lifestyles. Intermittent fasting and time restricted eating are becoming increasingly popular so you are likely to hear about them more and more.
Many studies are emerging that show that both IF and TRE can potentially lead to longer life, improve your health, and even enhance your brain function.
Those who are underweight or have a history of eating disorders should consult with their doctor before trying intermittent fasting or time restricted eating. Those with diabetes or poor blood sugar regulation should also consult their doctor before trying intermittent fasting or time restricted eating.
Fasting is done today for religious reasons and is practiced by many faiths. Advocates of intermittent fasting argue that not eating for periods of time is more natural than having several meals a day along with snacks. The argument is that our ancestors did not have unlimited access to fast food, refrigerators, and convenience stores. As a result, we evolved to go without eating for periods of time until we could find food. So, intermittent fasting and time restricted eating attempt to recreate that potentially natural process.
Commonly cited benefits include:
• An increase in human growth hormone production, which improves fat loss and muscle growth.
• Improved cellular repair processes throughout the body.
Improved Workouts Thanks To Intermittent Fasting
According to PubMed, training with limited carbohydrate availability can stimulate adaptations in muscle cells to facilitate energy production via fat oxidation.
This means that when you work out during the fasting window of your 12 hour intermittent fasting, your body gets better at burning fat for energy, since there is no food to pull from.
Is Fasting The Same As Reducing Calories
No, fasting is not the same as reducing calories. While fasting and calorie reduction are effective ways to lose weight, they do not always have the same effects. For example, while fasting has been shown to improve mood and cognitive function in some studies, it may be difficult for people with low blood sugar levels during a fast.
How To Do The 12 Hour Intermittent Fasting
When following this IF method, the two meals are 12 hours apart, and you must finish your last meal before the 12 hour fasting interval begins. For instance, if your eating window is 8 a.m. to 8 p.m., you should eat breakfast at or after 8 a.m. and finish dinner before 8 p.m. As you begin, you will have hunger pangs, but your body will adjust with time.
Do not jump straight into 12 hour intermittent fasting. Your body takes time to adjust to not eating, especially if you usually have three meals and two snacks a day. Begin by skipping some meals; then, once you are used to going without food for hours, slowly increase the period to the optimal 12 hour window. Ensure that you are comfortable going for long without eating, because 12 hour intermittent fasting should not make you miserable; rather, you should enjoy doing it.
Fasting For 2 Days A Week
People following the 5:2 diet eat standard amounts of healthful food for 5 days and reduce calorie intake on the other 2 days.
During the 2 fasting days, men generally consume 600 calories and women 500 calories.
Typically, people separate their fasting days in the week. For example, they may fast on a Monday and Thursday and eat normally on the other days. There should be at least 1 non-fasting day between fasting days.
There is limited research on the 5:2 diet, which is also known as the Fast diet. A study involving 107 overweight or obese women found that restricting calories twice weekly and continuous calorie restriction both led to similar weight loss.
The study also found that this diet reduced insulin levels and improved insulin sensitivity among participants.
A small-scale study looked at the effects of this fasting style in 23 overweight women. Over the course of one menstrual cycle, the women lost 4.8 percent of their body weight and 8.0 percent of their total body fat. However, these measurements returned to normal for most of the women after 5 days of normal eating.
Cons Of Intermittent Fasting For Women
“Despite the benefits found in research, it’s important to consider the context and remember that it’s not appropriate for all people at all times,” says Greaves. “Women of reproductive age need to be particularly careful with intermittent fasting as their bodies are more sensitive to stressors like prolonged fasting and caloric restriction.”
Greaves explains, “Intermittent fasting itself is a stressor on the body, and in the context of our modern day life that’s already filled with chronic emotional, physiological and environmental stressors, IF might do more harm than good. Fasting increases cortisol which can lead to blood sugar dysregulation, increased insulin resistance, lean muscle loss, fatigue and disruptions to thyroid function over time. In the short-term fasting may lower thyroid stimulating hormone, but elevated cortisol on a persistent basis can reduce the conversion of thyroid hormone.”
“Fasting can also lead to undereating, which we know negatively influences female hormones in a variety of ways,” Greaves says. The caloric restriction caused by intermittent fasting could lead to loss of the menstrual cycle and interfere with fertility.
Intermittent Fasting May Affect Men And Women Differently
There is some evidence that intermittent fasting may not be as beneficial for some women as it is for men.
One study showed that blood sugar control actually worsened in women after three weeks of intermittent fasting, which was not the case in men.
There are also many anecdotal stories of women who have experienced changes to their menstrual cycles after starting intermittent fasting.
Such shifts occur because female bodies are extremely sensitive to calorie restriction.
When calorie intake is low, such as from fasting for too long or too frequently, a small part of the brain called the hypothalamus is affected.
This can disrupt the secretion of gonadotropin-releasing hormone, a hormone that helps release two reproductive hormones: luteinizing hormone and follicle-stimulating hormone.
For these reasons, women should consider a modified approach to intermittent fasting, such as shorter fasting periods and fewer fasting days.
Summary
Intermittent fasting may not be as beneficial for women as it is for men. To reduce any adverse effects, women should take a mild approach to fasting: shorter fasts and fewer fasting days.
Intermittent fasting not only benefits your waistline but may also lower your risk of developing a number of chronic diseases.
Other Plans Besides Intermittent Fasting 20:4 For Beginners
If you're new to intermittent fasting, it involves eating during a window of time and fasting for the other part of the day. Quite often, because it makes the most sense, participants schedule their fasting window while sleeping. After all, it's hard to be tempted by food while you are snoozing for six to eight hours.
If you're curious about the numbers, 20:4 intermittent fasting means fasting for 20 hours while eating within four hours. Please note that doesn't mean you eat for the entire four-hour time period. It simply means being able to eat within that time frame.
That being said, there are recommended plans for those who are just starting with intermittent fasting. We suggest starting from the top and working your endurance down to the bottom for time-centric intermittent fasting.
• 14/10 – This plan involves 10 hours of eating with a 14-hour fasting window.
• 16/8 – This is one of the most popular methods for beginners, where participants fast for 16 hours and consume for eight.
• 18/6 – Another popular plan, this schedule has a six-hour window for eating.
When you are building endurance, you can take intermediate steps. For example, when weighing intermittent fasting 16/8 vs 20/4, you can build up your tolerance over time. Start with 16/8, move to 17/7, then to 18/6, and go hour by hour until you build up to an intermittent fasting 20/4 schedule.
Intermittent Fasting Can Be Hard But Maybe It Doesn't Have To Be
Initial human studies that compared fasting every other day to eating less every day showed that both worked about equally well for weight loss, though people struggled with the fasting days. So, I had written off IF as no better or worse than simply eating less, only far more uncomfortable. My advice was to just stick with a sensible diet.
New research is suggesting that not all IF approaches are the same, and some are actually very reasonable, effective, and sustainable, especially when combined with a nutritious plant-based diet. So I'm prepared to take my lumps on this one.
We have evolved to be in sync with the day/night cycle, i.e., a circadian rhythm. Our metabolism has adapted to daytime food, nighttime sleep. Nighttime eating is well associated with a higher risk of obesity.
Based on this, researchers from the University of Alabama conducted a study with a small group of obese men with prediabetes. They compared a form of intermittent fasting called “early time-restricted feeding,” where all meals were fit into an early eight-hour period of the day, with meals spread out over 12 hours. Both groups maintained their weight, but after five weeks the eight-hour group had dramatically lower insulin levels and significantly improved insulin sensitivity, as well as significantly lower blood pressure. The best part? The eight-hour group also had significantly decreased appetite. They weren't starving.
Could Leaving 12 Hours Between Dinner And Breakfast Benefit Health
Whether you're trying to lengthen your night-time break from eating by having an earlier dinner or later breakfast, or going a step further and following an intermittent fasting plan, some scientists believe there are benefits beyond weight loss to giving your digestive system a break. They argue that for many people it can improve metabolic and overall health.
What Is Intermittent Fasting And Does It Really Work
The best diet is the one where you are healthy, hydrated and living your best life.
By Crystal Martin
Generally, intermittent fasting is a diet strategy that involves alternating periods of eating and extended fasting. “There's quite a bit of debate in our research community: How much of the benefits of intermittent fasting are just due to the fact that it helps people eat less? Could you get the same benefits by just cutting your calories by the same amount?” said Courtney M. Peterson, Ph.D., an assistant professor in the Department of Nutrition Sciences at the University of Alabama at Birmingham who studies time-restricted feeding, a form of intermittent fasting.
We asked Dr. Peterson and a few other experts to help us sort out the real from the scam on intermittent fasting.
Try Sticking To The Following Foods On The 16:8 Diet:
• Whole grains: Ones like rice, oats, barley, wholegrain pasta and quinoa will keep you fuller for longer.
• Protein: Meat, poultry, fish, eggs, nuts and seeds will keep you full.
• Fruit: Apples, bananas, berries, oranges and pears will offer good vitamin sustenance.
• Vegetables: Broccoli and leafy greens are especially good for making sure you're eating enough fibre.
• Healthy fats: Olive oil, coconut oil, avocados.
Is It Ok To Skip Breakfast
Yes, Varady said. The notion that omitting a morning meal is bad for your waistline likely began with studies sponsored by cereal companies, and most of that research looked at the effects of breakfast skipping on cognition in children, she noted: “I'm not sure how that all got translated to body weight.”
What Is 18:6 Intermittent Fasting
18:6 involves fasting for 18 hours out of the day, leaving you with a six-hour eating window. This could mean eating lunch at 12:30 p.m., a snack at 3 p.m., then finishing dinner by 6:30 p.m. This is a much more rigid form of intermittent fasting and definitely best saved for experienced fasters who’ve tried other methods. This plan might be right for you if your weight loss has stalled doing 16:8 or if you tend to overeat with a longer eating window.
Don’t Miss: Is Fasting In The Morning Good For Weight Loss
How Do I Try Intermittent Fasting
There are four popular fasting approaches: periodic fasting, time-restricted feeding, alternate-day fasting and the 5:2 diet. Time-restricted feeding, sometimes called daily intermittent fasting, is perhaps the easiest and most popular fasting method. Daily intermittent fasters restrict eating to certain time periods each day, say 11 in the morning to 7 at night. The fasting period is usually around 12 or more hours that, helpfully, includes time spent sleeping overnight. Periodic fasting will feel most familiar: no food or drinks with calories for 24-hour periods. Another type of fast, alternate-day fasting, requires severe calorie reduction every other day. Lastly, the 5:2 method was popularized by author Kate Harrison's book “The 5:2 Diet” and requires fasting on two nonconsecutive days a week.
Are There Any Side Effects Of Intermittent Fasting 20/4
Yes, there are. Some downsides of participating in a fasting eating plan include:
• May lead to a binge-eating disorder – This is a serious eating disorder in which you frequently consume unusually large amounts of food and feel unable to stop eating. Fasting has been linked to an increase in the onset of binge-eating.
• May lead to nutrient deficiencies – Since you are eating during a very small window, you are unlikely to consume the recommended servings of fruits and vegetables per day. This can cause nutrient deficiencies that can lead to anemia, fatigue, weakness, poor eye health and immunity, short-term memory loss, diarrhea, dementia, skin disorders, muscle loss, osteoporosis, and depression.
• It has no basis in science – As we mentioned above, this diet was developed by an ex-military man who based all his findings on his experience during active duty. He is neither a scientist, a nutritionist, nor a dietitian. On the other hand, most of the benefits mentioned have been linked to fasting in general or to other forms of intermittent fasting, not the 20/4 method specifically.
Learn to Fix Internet Explorer Script Error
The other main task that this software performs is disk partitioning. Besides this, a lot of handy built-in applications, like a disk cleaner, disk cloner, remote desktop client, Firefox browser, etc., also come with it. In order to find bad sectors, this software provides a Run Test section that you can access from the Manage tab. In the Run Test section, you get three different tests, namely Offline Data Collection, Short Self-Test, and Extended Self-Test.
This is the only way to make your hard drive usable without having to lose your essential data files. CheckDisk Portable is a free and portable piece of software to check hard drives for bad sectors on Windows. Using it, you can check the system hard drive, portable hard drives, flash drives, etc. for bad sectors.
Select the option to schedule a disk check when you restart the computer. To use this technique, open "This PC" in File Explorer and right-click on any drive (HDD or SSD) you need to examine. Then select Properties and click on the Tools tab.
Next, let's see detailed steps for how to run CHKDSK in Windows 10 to fix hard drive errors with third-party software. It is recommended to use MiniTool Partition Wizard Free Edition, which makes running CHKDSK on Windows 10 a breeze. If you can, you may also run a CHKDSK repair with other third-party software, but be sure to choose a reliable one. Running CHKDSK in Windows 10 from an elevated Command Prompt performs a couple of functions. Although running it may take some time, it prevents the hard disk from being damaged and data from being lost in the long term. It is recommended to run it whenever Windows has shut down abnormally or the hard disk behaves abnormally.
It may sound odd, but memory problems can in fact be responsible for a vast array of different errors on a system, where testing your RAM might be the last thing you think of. Once you exit the BIOS settings and restart the system, it will load Recoverit's interface instead. You can simply select the drive you wish to scan, preview the extracted data, and restore it to a secure location. Once the process is complete, you will be notified so that you can safely remove the newly created bootable media. Simply confirm your choice by selecting the "Yes" prompt and restart your system with default BIOS settings. Using the correct keys, navigate to the "Exit" tab.
If your selected drive is a system partition that is in use, Windows will let you schedule a disk check on the next restart. If your target drive is an external or non-boot internal disk, the CHKDSK process will begin as soon as we enter the command above. CHKDSK is a powerful tool provided by Microsoft that can help diagnose hard drive issues, from minor fragmented portions of the drive up to the most problematic bad sectors. Once in a while, Windows 10 persists in telling you to scan the drive for errors even after you have restarted. This could mean several things, each calling for a different action.
IO inside
From HaskellWiki
Latest revision as of 10:24, 8 March 2020
Haskell I/O has always been a source of confusion and surprises for new Haskellers. While simple I/O code in Haskell looks very similar to its equivalents in imperative languages, attempts to write somewhat more complex code often result in a total mess. This is because Haskell I/O is really very different internally. Haskell is a pure language and even the I/O system can't break this purity.
The following text is an attempt to explain the details of Haskell I/O implementations. This explanation should help you eventually master all the smart I/O tricks. Moreover, I've added a detailed explanation of various traps you might encounter along the way. After reading this text, you will receive a "Master of Haskell I/O" degree that is equal to a Bachelor in Computer Science and Mathematics, simultaneously.
If you are new to Haskell I/O you may prefer to start by reading the Introduction to IO page.
Haskell is a pure language
Haskell is a pure language, which means that the result of any function call is fully determined by its arguments. Pseudo-functions like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell. Moreover, Haskell functions can't have side effects, which means that they can't effect any changes to the "real world", like changing files, writing to the screen, printing, sending data over the network, and so on. These two restrictions together mean that any function call can be replaced by the result of a previous call with the same parameters, and the language guarantees that all these rearrangements will not change the program result!
Let's compare this to C: optimizing C compilers try to guess which functions have no side effects and don't depend on mutable global variables. If this guess is wrong, an optimization can change the program's semantics! To avoid this kind of disaster, C optimizers are conservative in their guesses or require hints from the programmer about the purity of functions.
Compared to an optimizing C compiler, a Haskell compiler is a set of pure mathematical transformations. This results in much better high-level optimization facilities. Moreover, pure mathematical computations can be much more easily divided into several threads that may be executed in parallel, which is increasingly important in these days of multi-core CPUs. Finally, pure computations are less error-prone and easier to verify, which adds to Haskell's robustness and to the speed of program development using Haskell.
Haskell's purity allows the compiler to call only functions whose results are really required to calculate the final value of a top-level function (e.g., main) - this is called lazy evaluation. It's a great thing for pure mathematical computations, but how about I/O actions? A function like
putStrLn "Press any key to begin formatting"
can't return any meaningful result value, so how can we ensure that the compiler will not omit or reorder its execution? And in general: how can we work with stateful algorithms and side effects in an entirely lazy language? This question had many different solutions proposed while Haskell was being developed (see History of Haskell), though a solution based on monads is now the standard.
What is a monad?
What is a monad? It's a concept from mathematical category theory. In order to understand how monads are used to solve the problem of I/O and side effects, you don't need to know category theory. It's enough to just know elementary mathematics.
Let's imagine that we want to implement the well-known 'getchar' function in Haskell. What type should it have? Let's try:
getchar :: Char
get2chars = [getchar, getchar]
What will we get with 'getchar' having just the 'Char' type? You can see all the possible problems in the definition of 'get2chars':
1. Because the Haskell compiler treats all functions as pure (not having side effects), it can avoid "unnecessary" calls to 'getchar' and use one returned value twice.
2. Even if it does make two calls, there is no way to determine which call should be performed first. Do you want to return the two chars in the order in which they were read, or in the opposite order? Nothing in the definition of 'get2chars' answers this question.
How can these problems be solved, from the programmer's perspective? Let's introduce a fake parameter of 'getchar' to make each call "different" from the compiler's point of view:
getchar :: Int -> Char
get2chars = [getchar 1, getchar 2]
Right away, this solves the first problem mentioned above - now the compiler will make two calls because it sees that the calls have different parameters. The whole 'get2chars' function should also have a fake parameter, otherwise we will have the same problem calling it:
getchar :: Int -> Char
get2chars :: Int -> String
get2chars _ = [getchar 1, getchar 2]
Now we need to give the compiler some clue to determine which function it should call first. The Haskell language doesn't provide any way to express order of evaluation — except for data dependencies! How about adding an artificial data dependency which prevents evaluation of the second 'getchar' before the first one? In order to achieve this, we will return an additional fake result from 'getchar' that will be used as a parameter for the next 'getchar' call:
getchar :: Int -> (Char, Int)
get2chars _ = [a, b] where (a, i) = getchar 1
                           (b, _) = getchar i
So far so good — now we can guarantee that 'a' is read before 'b' because reading 'b' needs the value ('i') that is returned by reading 'a'!
We've added a fake parameter to 'get2chars' but the problem is that the Haskell compiler is too smart! It can believe that the external 'getchar' function is really dependent on its parameter but for 'get2chars' it will see that we're just cheating because we throw it away! Therefore it won't feel obliged to execute the calls in the order we want.
How can we fix this? How about passing this fake parameter to the 'getchar' function? In this case the compiler can't guess that it is really unused.
get2chars i0 = [a, b] where (a, i1) = getchar i0
                            (b, i2) = getchar i1
Furthermore, 'get2chars' has the same purity problems as the 'getchar' function. If you need to call it two times, you need a way to describe the order of these calls. Consider this:
get4chars = [get2chars 1, get2chars 2] -- order of 'get2chars' calls isn't defined
We already know how to deal with these problems: 'get2chars' should also return some fake value that can be used to order calls:
get2chars :: Int -> (String, Int)
get4chars i0 = (a++b) where (a, i1) = get2chars i0
                            (b, i2) = get2chars i1
But what should the fake return value of 'get2chars' be? If we use some integer constant, the excessively smart Haskell compiler will guess that we're cheating again. What about returning the value returned by 'getchar'? See:
get2chars :: Int -> (String, Int)
get2chars i0 = ([a, b], i2) where (a, i1) = getchar i0
                                  (b, i2) = getchar i1
Believe it or not, but we've just constructed the whole "monadic" Haskell I/O system.
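To see this machinery actually run, here is a toy, self-contained simulation of the scheme above; 'input' is a hypothetical fixed string standing in for the keyboard, and the Int baton is just an index into it:

```haskell
-- Toy model: the Int "baton" is an index into a fixed input string,
-- standing in for the real keyboard. All names here are illustrative.
input :: String
input = "haskell"

getchar :: Int -> (Char, Int)
getchar i = (input !! i, i + 1)   -- returns the char and the next baton

get2chars :: Int -> (String, Int)
get2chars i0 = ([a, b], i2) where (a, i1) = getchar i0
                                  (b, i2) = getchar i1

main :: IO ()
main = print (get2chars 0)   -- the data dependency forces 'a' before 'b'
```

Because reading 'b' needs 'i1' from reading 'a', no compiler can reorder or merge the two 'getchar' calls.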
Welcome to the RealWorld, baby
Warning: The following story about IO is incorrect in that it cannot actually explain some important aspects of IO (including interaction and concurrency). However, some people find it useful to begin developing an understanding.
The 'main' Haskell function has the type:
main :: RealWorld -> ((), RealWorld)
where 'RealWorld' is a fake type used instead of our Int. It's something like the baton passed in a relay race. When 'main' calls some IO function, it passes the "RealWorld" it received as a parameter. All IO functions have similar types involving RealWorld as a parameter and result. To be exact, "IO" is a type synonym defined in the following way:
type IO a = RealWorld -> (a, RealWorld)
So, 'main' just has type "IO ()", 'getChar' has type "IO Char" and so on. You can think of the type "IO Char" as meaning "take the current RealWorld, do something to it, and return a Char and a (possibly changed) RealWorld". Let's look at 'main' calling 'getChar' two times:
getChar :: RealWorld -> (Char, RealWorld)
main :: RealWorld -> ((), RealWorld)
main world0 = let (a, world1) = getChar world0
                  (b, world2) = getChar world1
              in ((), world2)
Look at this closely: 'main' passes the "world" it received to the first 'getChar'. This 'getChar' returns some new value of type RealWorld that gets used in the next call. Finally, 'main' returns the "world" it got from the second 'getChar'.
1. Is it possible here to omit any call of 'getChar' if the Char it read is not used? No, because we need to return the "world" that is the result of the second 'getChar' and this in turn requires the "world" returned from the first 'getChar'.
2. Is it possible to reorder the 'getChar' calls? No: the second 'getChar' can't be called before the first one because it uses the "world" returned from the first call.
3. Is it possible to duplicate calls? In Haskell semantics - yes, but real compilers never duplicate work in such simple cases (otherwise, the programs generated will not have any speed guarantees).
As we already said, RealWorld values are used like a baton which gets passed between all routines called by 'main' in strict order. Inside each routine called, RealWorld values are used in the same way. Overall, in order to "compute" the world to be returned from 'main', we should perform each IO procedure that is called from 'main', directly or indirectly. This means that each procedure inserted in the chain will be performed just at the moment (relative to the other IO actions) when we intended it to be called. Let's consider the following program:
main = do a <- ask "What is your name?"
          b <- ask "How old are you?"
          return ()

ask s = do putStr s
           readLn
Now you have enough knowledge to rewrite it in a low-level way and check that each operation that should be performed will really be performed with the arguments it should have and in the order we expect.
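For reference, here is one possible low-level rewrite in a toy model; 'World', "putStr'", "readLn'" and "ask'" are illustrative stand-ins (a list of pending input lines plus an output log), not GHC's actual RealWorld:

```haskell
-- A toy, runnable model of the "world-passing" rewrite. 'World' is
-- (remaining stdin lines, output log) -- an illustrative stand-in
-- for the opaque RealWorld, not the real implementation.
type World = ([String], [String])

putStr' :: String -> World -> ((), World)
putStr' s (ins, outs) = ((), (ins, outs ++ [s]))

readLn' :: World -> (String, World)
readLn' (l : ls, outs) = (l, (ls, outs))
readLn' ([],     outs) = ("", ([], outs))

ask' :: String -> World -> (String, World)
ask' s world0 = let (_, world1) = putStr' s world0
                in  readLn' world1

main :: IO ()
main = let world0      = (["Alice", "42"], [])
           (a, world1) = ask' "What is your name?" world0
           (b, world2) = ask' "How old are you?" world1
       in print (a, b, snd world2)
```

Tracing the batons shows each prompt is logged before its answer is consumed, in exactly the order written.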
But what about conditional execution? No problem. Let's define the well-known 'when' operation:
when :: Bool -> IO () -> IO ()
when condition action world =
  if condition
    then action world
    else ((), world)
As you can see, we can easily include IO procedures (actions) in, or exclude them from, the execution chain depending on data values. If 'condition' is False when 'when' is called, 'action' will never be called, because real Haskell compilers, again, never call functions whose results are not required to calculate the final result (here, the final "world" value of 'main').
Loops and more complex control structures can be implemented in the same way. Try it as an exercise!
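For instance, a simple counting loop can be built from plain recursion plus 'when' (here the standard Control.Monad.when, which behaves like the definition above):

```haskell
import Control.Monad (when)

-- A counting loop built from recursion and 'when': the "world" baton
-- threaded by the IO monad keeps the prints in order.
countdown :: Int -> IO ()
countdown n = when (n > 0) $ do
  print n
  countdown (n - 1)

main :: IO ()
main = countdown 3   -- prints 3, then 2, then 1
```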
Finally, you may want to know how much it costs to pass these RealWorld values around the program. It's free! These fake values exist solely for the compiler while it analyzes and optimizes the code; when it gets to assembly code generation, it "suddenly" realizes that this type is like "()", so all these parameters and result values can be omitted from the final generated code. Isn't it beautiful?
'>>=' and 'do' notation
All beginners (including me) start by thinking that 'do' is some magic statement that executes IO actions. That's wrong - 'do' is just syntactic sugar that simplifies the writing of procedures that use IO (and also other monads, but that's beyond the scope of this tutorial). 'do' notation eventually gets translated to statements passing "world" values around like we've manually written above and is used to simplify the gluing of several IO actions together. You don't need to use 'do' for just one statement; for instance,
main = do putStr "Hello!"
is desugared to:
main = putStr "Hello!"
Let's examine how to desugar a 'do' with multiple statements in the following example:
main = do putStr "What is your name?"
          putStr "How old are you?"
          putStr "Nice day!"
The 'do' statement here just joins several IO actions that should be performed sequentially. It's translated to sequential applications of one of the so-called "binding operators", namely '>>':
main = (putStr "What is your name?")
       >> ( (putStr "How old are you?")
            >> (putStr "Nice day!")
          )
This binding operator just combines two IO actions, executing them sequentially by passing the "world" between them:
(>>) :: IO a -> IO b -> IO b
(action1 >> action2) world0 =
  let (a, world1) = action1 world0
      (b, world2) = action2 world1
  in (b, world2)
If defining operators this way looks strange to you, read this definition as follows:
action1 >> action2 = action
  where
    action world0 = let (a, world1) = action1 world0
                        (b, world2) = action2 world1
                    in (b, world2)
Now you can substitute the definition of '>>' at the places of its usage and check that the program constructed by the 'do' desugaring is actually the same as what we could write by manually manipulating "world" values.
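As a sketch of that substitution, here is a runnable toy version in which 'Log', 'put' and the '>>!' operator are illustrative stand-ins for RealWorld, putStr and '>>':

```haskell
-- Toy check that the '>>' definition sequences actions. 'Log' is an
-- illustrative output log standing in for RealWorld.
type Log = [String]

put :: String -> Log -> ((), Log)
put s log0 = ((), log0 ++ [s])

-- the '>>' definition from the text, written for the toy type
(>>!) :: (Log -> (a, Log)) -> (Log -> (b, Log)) -> Log -> (b, Log)
(act1 >>! act2) log0 =
  let (_, log1) = act1 log0
      (b, log2) = act2 log1
  in (b, log2)

main :: IO ()
main = print (snd ((put "What is your name?"
               >>! put "How old are you?"
               >>! put "Nice day!") []))
```

Running it shows the three strings appear in the log in exactly the order the actions were chained.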
A more complex example involves the binding of variables using "<-":
main = do a <- readLn
          print a
This code is desugared into:
main = readLn
       >>= (\a -> print a)
As you should remember, the '>>' binding operator silently ignores the value of its first action and returns as an overall result the result of its second action only. On the other hand, the '>>=' binding operator (note the extra '=' at the end) allows us to use the result of its first action - it gets passed as an additional parameter to the second one! Look at the definition:
(>>=) :: IO a -> (a -> IO b) -> IO b
(action1 >>= action2) world0 =
  let (a, world1) = action1 world0
      (b, world2) = action2 a world1
  in (b, world2)
First, what does the type of the second "action" (more precisely, a function which returns an IO action), namely "a -> IO b", mean? By substituting the "IO" definition, we get "a -> RealWorld -> (b, RealWorld)". This means that the second action actually has two parameters: the value of type 'a' actually used inside it, and the value of type RealWorld used for sequencing of IO actions. That's always the case - any IO procedure has one more parameter compared to what you see in its type signature. This parameter is hidden inside the definition of the type alias "IO".
Second, you can use these '>>' and '>>=' operations to simplify your program. For example, in the code above we don't need to introduce the variable, because the result of 'readLn' can be sent directly to 'print':
main = readLn >>= print
And third - as you see, the notation:
do x <- action1
   action2
where 'action1' has type "IO a" and 'action2' has type "IO b", translates into:
action1 >>= (\x -> action2)
where the second argument of '>>=' has the type "a -> IO b". It's the way the '<-' binding is processed - the name on the left-hand side of '<-' just becomes a parameter of subsequent operations represented as one large IO action. Note also that if 'action1' has type "IO a" then 'x' will just have type "a"; you can think of the effect of '<-' as "unpacking" the IO value of 'action1' into 'x'. Note also that '<-' is not a true operator; it's pure syntax, just like 'do' itself. Its meaning results only from the way it gets desugared.
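You can check this desugaring equivalence yourself; 'fakeReadLn' below is a hypothetical stand-in for readLn so the program runs without input:

```haskell
-- 'fakeReadLn' is an illustrative stand-in for readLn, so the
-- example runs without any actual input.
fakeReadLn :: IO Int
fakeReadLn = return 42

-- the sugared version, using 'do' and '<-' ...
sugared :: IO Int
sugared = do x <- fakeReadLn
             return (x + 1)

-- ... and its desugaring via '>>=' and a lambda
desugared :: IO Int
desugared = fakeReadLn >>= (\x -> return (x + 1))

main :: IO ()
main = do
  a <- sugared
  b <- desugared
  print (a, b)   -- prints (43,43)
```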
Look at the next example:
main = do putStr "What is your name?"
          a <- readLn
          putStr "How old are you?"
          b <- readLn
          print (a,b)
This code is desugared into:
main = putStr "What is your name?"
       >> readLn
       >>= \a -> putStr "How old are you?"
       >> readLn
       >>= \b -> print (a,b)
I omitted the parentheses here; both the '>>' and the '>>=' operators are left-associative, but lambda-bindings always stretch as far to the right as possible, which means that the 'a' and 'b' bindings introduced here are valid for all remaining actions. As an exercise, add the parentheses yourself and translate this procedure into the low-level code that explicitly passes "world" values. I think it should be enough to help you finally realize how the 'do' translation and binding operators work.
Oh, no! I forgot the third monadic operator - 'return'. It just combines its two parameters - the value passed and "world":
return :: a -> IO a
return a world0 = (a, world0)
How about translating a simple example of 'return' usage? Say,
main = do a <- readLn
          return (a*2)
Programmers with an imperative language background often think that 'return' in Haskell, as in other languages, immediately returns from the IO procedure. As you can see from its definition (and even just from its type!), such an assumption is totally wrong. The only purpose of 'return' is to "lift" some value (of type 'a') into the result of a whole action (of type "IO a"), and therefore it should generally be used only as the last executed statement of some IO sequence. For example, try to translate the following procedure into the corresponding low-level code:
main = do a <- readLn
          when (a>=0) $ do
            return ()
            print "a is negative"
and you will realize that the 'print' statement is executed even for non-negative values of 'a'. If you need to escape from the middle of an IO procedure, you can use the 'if' statement:
main = do a <- readLn
          if (a>=0)
            then return ()
            else print "a is negative"
Moreover, Haskell layout rules allow us to use the following layout:
main = do a <- readLn
          if (a>=0) then return ()
            else do
              print "a is negative"
              ...
that may be useful for escaping from the middle of a longish 'do' statement.
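To see the 'return' trap concretely, here is a runnable variant of the earlier example in which a fixed non-negative value stands in for readLn (illustrative, so it needs no input):

```haskell
import Control.Monad (when)

main :: IO ()
main = do
  let a = 5 :: Int                 -- stand-in for 'a <- readLn'
  when (a >= 0) $ do
    return ()                      -- does NOT exit the do block
    putStrLn "a is negative"       -- still executed, even though a is 5
```

The message is printed despite 'a' being positive, because 'return ()' merely lifts a value; it does not short-circuit the sequence.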
Last exercise: implement a function 'liftM' that lifts operations on plain values to operations on monadic ones. Its type signature:
liftM :: (a -> b) -> (IO a -> IO b)
If that's too hard for you, start with the following high-level definition and rewrite it in low-level fashion:
liftM f action = do x <- action
                    return (f x)
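One possible answer, sketched in the pretend model where IO a = RealWorld -> (a, RealWorld); the names "RealWorld'", 'ToyIO', "liftM'" and 'readWorld' are illustrative so the sketch actually compiles and runs:

```haskell
-- Toy stand-ins for the pretend model from this tutorial.
type RealWorld' = Int
type ToyIO a = RealWorld' -> (a, RealWorld')

-- low-level liftM: run the action, thread the world, apply f to the result
liftM' :: (a -> b) -> ToyIO a -> ToyIO b
liftM' f action world0 =
  let (x, world1) = action world0
  in  (f x, world1)

-- a toy action that "reads" the current world value and advances it
readWorld :: ToyIO Int
readWorld w = (w, w + 1)

main :: IO ()
main = print (liftM' (* 2) readWorld 10)   -- prints (20,11)
```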
Mutable data (references, arrays, hash tables...)
As you should know, every name in Haskell is bound to one fixed (immutable) value. This greatly simplifies understanding algorithms and code optimization, but it's inappropriate in some cases. As we all know, there are plenty of algorithms that are simpler to implement in terms of updatable variables, arrays and so on. This means that the value associated with a variable, for example, can be different at different execution points, so reading its value can't be considered as a pure function. Imagine, for example, the following code:
main = do let a0 = readVariable varA
              _  = writeVariable varA 1
              a1 = readVariable varA
          print (a0, a1)
Does this look strange? First, the two calls to 'readVariable' look the same, so the compiler can just reuse the value returned by the first call. Second, the result of the 'writeVariable' call isn't used so the compiler can (and will!) omit this call completely. To complete the picture, these three calls may be rearranged in any order because they appear to be independent of each other. This is obviously not what was intended. What's the solution? You already know this - use IO actions! Using IO actions guarantees that:
1. the execution order will be retained as written
2. each action will have to be executed
3. the result of the "same" action (such as "readVariable varA") will not be reused
So, the code above really should be written as:
import Data.IORef
main = do varA <- newIORef 0 -- Create and initialize a new variable
          a0 <- readIORef varA
          writeIORef varA 1
          a1 <- readIORef varA
          print (a0, a1)
Here, 'varA' has the type "IORef Int" which means "a variable (reference) in the IO monad holding a value of type Int". newIORef creates a new variable (reference) and returns it, and then read/write actions use this reference. The value returned by the "readIORef varA" action depends not only on the variable involved but also on the moment this operation is performed so it can return different values on each call.
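Data.IORef also provides 'modifyIORef', which combines a read and a write into one operation; a minimal sketch using an IORef as a counter:

```haskell
import Data.IORef

main :: IO ()
main = do counter <- newIORef (0 :: Int)
          modifyIORef counter (+1)   -- read-modify-write in one step
          modifyIORef counter (+1)
          n <- readIORef counter
          print n                    -- prints 2
```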
Arrays, hash tables and any other _mutable_ data structures are defined in the same way - for each of them, there's an operation that creates a new "mutable value" and returns a reference to it. Then special read and write operations in the IO monad are used. The following code shows an example using mutable arrays:
import Data.Array.IO
main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)
a <- readArray arr 1
writeArray arr 1 64
b <- readArray arr 1
print (a, b)
Here, an array of 10 elements with 37 as the initial value at each location is created. After reading the value of the first element (index 1) into 'a' this element's value is changed to 64 and then read again into 'b'. As you can see by executing this code, 'a' will be set to 37 and 'b' to 64.
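The standard libraries don't fix one hash-table implementation (packages such as hashtables provide real ones), but the same create/read/write pattern can be sketched with an IORef holding a Data.Map as a simple mutable dictionary; the names below are our own illustration, not a library API:

```haskell
import Data.IORef
import qualified Data.Map as Map

main :: IO ()
main = do table <- newIORef (Map.empty :: Map.Map String Int)
          modifyIORef table (Map.insert "x" 1)   -- the "write" operation
          modifyIORef table (Map.insert "y" 2)
          m <- readIORef table                   -- the "read" operation
          print (Map.lookup "x" m)               -- prints Just 1
```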
Other state-dependent operations are also often implemented as IO actions. For example, a random number generator should return a different value on each call. It looks natural to give it a type involving IO:
rand :: IO Int
Moreover, when you import C routines you should be careful - if this routine is impure, i.e. its result depends on something in the "real world" (file system, memory contents...), internal state and so on, you should give it an IO type. Otherwise, the compiler can "optimize" repetitive calls of this procedure with the same parameters!
For example, we can write a non-IO type for:
foreign import ccall
sin :: Double -> Double
because the result of 'sin' depends only on its argument, but we need an IO type for
foreign import ccall
tell :: Int -> IO Int
If you declare 'tell' as a pure function (without IO), you may get the same file position on every call!
IO actions as values
By this point you should understand why it's impossible to use IO actions inside non-IO (pure) procedures. Such procedures just don't get a "baton"; they don't know any "world" value to pass to an IO action. The RealWorld type is an abstract datatype, so pure functions also can't construct RealWorld values by themselves, and it's a strict type, so 'undefined' also can't be used. So, the prohibition of using IO actions inside pure procedures is just a type system trick (as it usually is in Haskell).
But while pure code can't _execute_ IO actions, it can work with them as with any other functional values - they can be stored in data structures, passed as parameters, returned as results, collected in lists, and partially applied. But an IO action will remain a functional value because we can't apply it to the last argument - of type RealWorld.
In order to _execute_ the IO action we need to apply it to some RealWorld value. That can be done only inside some IO procedure, in its "actions chain". And real execution of this action will take place only when this procedure is called as part of the process of "calculating the final value of world" for 'main'. Look at this example:
main world0 = let get2chars = getChar >> getChar
((), world1) = putStr "Press two keys" world0
(answer, world2) = get2chars world1
in ((), world2)
Here we first bind a value to 'get2chars' and then write a binding involving 'putStr'. But what's the execution order? It's not defined by the order of the 'let' bindings, it's defined by the order of processing "world" values! You can arbitrarily reorder the binding statements - the execution order will be defined by the data dependency with respect to the "world" values that get passed around. Let's see what this 'main' looks like in the 'do' notation:
main = do let get2chars = getChar >> getChar
putStr "Press two keys"
get2chars
return ()
As you can see, we've eliminated two of the 'let' bindings and left only the one defining 'get2chars'. The non-'let' statements are executed in the exact order in which they're written, because they pass the "world" value from statement to statement as we described above. Thus, this version of the function is much easier to understand because we don't have to mentally figure out the data dependency of the "world" value.
Moreover, IO actions like 'get2chars' can't be executed directly because they are functions with a RealWorld parameter. To execute them, we need to supply the RealWorld parameter, i.e. insert them in the 'main' chain, placing them in some 'do' sequence executed from 'main' (either directly in the 'main' function, or indirectly in an IO function called from 'main'). Until that's done, they will remain like any function, in partially evaluated form. And we can work with IO actions as with any other functions - bind them to names (as we did above), save them in data structures, pass them as function parameters and return them as results - and they won't be performed until you give them the magic RealWorld parameter!
Example: a list of IO actions
Let's try defining a list of IO actions:
ioActions :: [IO ()]
ioActions = [(print "Hello!"),
(putStr "just kidding"),
(getChar >> return ())
]
I used additional parentheses around each action, although they aren't really required. If you still can't believe that these actions won't be executed immediately, just recall the real type of this list:
ioActions :: [RealWorld -> ((), RealWorld)]
Well, now we want to execute some of these actions. No problem, just insert them into the 'main' chain:
main = do head ioActions
ioActions !! 1
last ioActions
Looks strange, right? Really, any IO action that you write in a 'do' statement (or use as a parameter for the '>>'/'>>=' operators) is an expression returning a result of type 'IO a' for some type 'a'. Typically, you use some function that has the type 'x -> y -> ... -> IO a' and provide all the x, y, etc. parameters. But you're not limited to this standard scenario - don't forget that Haskell is a functional language and you're free to compute the functional value required (recall that "IO a" is really a function type) in any possible way. Here we just extracted several functions from the list - no problem. This functional value can also be constructed on-the-fly, as we've done in the previous example - that's also OK. Want to see this functional value passed as a parameter? Just look at the definition of 'when'. Hey, we can buy, sell, and rent these IO actions just like we can with any other functional values! For example, let's define a function that executes all the IO actions in the list:
sequence_ :: [IO a] -> IO ()
sequence_ [] = return ()
sequence_ (x:xs) = do x
sequence_ xs
No black magic - we just extract IO actions from the list and insert them into a chain of IO operations that should be performed one after another (in the same order that they occurred in the list) to "compute the final world value" of the entire 'sequence_' call.
With the help of 'sequence_', we can rewrite our last 'main' function as:
main = sequence_ ioActions
Haskell's ability to work with IO actions as with any other (functional and non-functional) values allows us to define control structures of arbitrary complexity. Try, for example, to define a control structure that repeats an action until it returns the 'False' result:
while :: IO Bool -> IO ()
while action = ???
Most programming languages don't allow you to define control structures at all, and those that do often require you to use a macro-expansion system. In Haskell, control structures are just trivial functions anyone can write.
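If you want to check your solution to the exercise above, here is one possible definition of 'while', together with a small driver that counts to three (the driver code is our own):

```haskell
import Data.IORef

-- Repeat the action as long as it returns True
while :: IO Bool -> IO ()
while action = do b <- action
                  if b then while action
                       else return ()

main :: IO ()
main = do i <- newIORef (0 :: Int)
          while $ do n <- readIORef i
                     writeIORef i (n + 1)
                     return (n + 1 < 3)   -- stop after the third iteration
          readIORef i >>= print           -- prints 3
```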
Example: returning an IO action as a result
How about returning an IO action as the result of a function? Well, we've done this each time we've defined an IO procedure - they all return IO actions that need a RealWorld value to be performed. While we usually just execute them as part of a higher-level IO procedure, it's also possible to just collect them without actual execution:
main = do let a = sequence ioActions
b = when True getChar
c = getChar >> getChar
putStr "These 'let' statements are not executed!"
These assigned IO procedures can be used as parameters to other procedures, or written to global variables, or processed in some other way, or just executed later, as we did in the example with 'get2chars'.
But how about returning a parameterized IO action from an IO procedure? Let's define a procedure that returns the i'th byte from a file represented as a Handle:
readi h i = do hSeek h AbsoluteSeek i
hGetChar h
So far so good. But how about a procedure that returns the i'th byte of a file with a given name without reopening it each time?
readfilei :: String -> IO (Integer -> IO Char)
readfilei name = do h <- openFile name ReadMode
return (readi h)
As you can see, it's an IO procedure that opens a file and returns... another IO procedure that will read the specified byte. But we can go further and include the 'readi' body in 'readfilei':
readfilei name = do h <- openFile name ReadMode
let readi h i = do hSeek h AbsoluteSeek i
hGetChar h
return (readi h)
That's a little better. But why do we add 'h' as a parameter to 'readi' if it can be obtained from the environment where 'readi' is now defined? An even shorter version is this:
readfilei name = do h <- openFile name ReadMode
let readi i = do hSeek h AbsoluteSeek i
hGetChar h
return readi
What have we done here? We've built a parameterized IO action involving local names inside 'readfilei' and returned it as the result. Now it can be used in the following way:
main = do myfile <- readfilei "test"
a <- myfile 0
b <- myfile 1
print (a,b)
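To actually run this, the file must exist first; here is a self-contained variant that creates its own input file (the name "test.tmp" and its contents are our choice):

```haskell
import System.IO

-- Open the file once, return an action that reads the i'th byte
readfilei :: String -> IO (Integer -> IO Char)
readfilei name = do h <- openFile name ReadMode
                    let readi i = do hSeek h AbsoluteSeek i
                                     hGetChar h
                    return readi

main :: IO ()
main = do writeFile "test.tmp" "abcdef"   -- create the file first
          myfile <- readfilei "test.tmp"
          a <- myfile 0
          b <- myfile 1
          print (a, b)                    -- prints ('a','b')
```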
This way of using IO actions is very typical for Haskell programs - you just construct one or more IO actions that you need, with or without parameters, possibly involving the parameters that your "constructor" received, and return them to the caller. Then these IO actions can be used in the rest of the program without any knowledge about your internal implementation strategy. One thing this can be used for is to partially emulate the OOP (or more precisely, the ADT) programming paradigm.
Example: a memory allocator generator
As an example, one of my programs has a module which is a memory suballocator. It receives the address and size of a large memory block and returns two procedures - one to allocate a subblock of a given size and the other to free the allocated subblock:
memoryAllocator :: Ptr a -> Int -> IO (Int -> IO (Ptr b),
Ptr c -> IO ())
memoryAllocator buf size = do ......
let alloc size = do ...
...
free ptr = do ...
...
return (alloc, free)
How is this implemented? 'alloc' and 'free' work with references created inside the memoryAllocator procedure. Because the creation of these references is a part of the memoryAllocator IO actions chain, a new independent set of references will be created for each memory block for which memoryAllocator is called:
memoryAllocator buf size = do start <- newIORef buf
end <- newIORef (buf `plusPtr` size)
...
These two references are read and written in the 'alloc' and 'free' definitions (we'll implement a very simple memory allocator for this example):
...
let alloc size = do addr <- readIORef start
writeIORef start (addr `plusPtr` size)
return addr
let free ptr = do writeIORef start ptr
What we've defined here is just a pair of closures that use state available at the moment of their definition. As you can see, it's as easy as in any other functional language, despite Haskell's lack of direct support for impure functions.
The following example uses the procedures returned by memoryAllocator to simultaneously allocate/free blocks in two independent memory buffers:
main = do buf1 <- mallocBytes (2^16)
buf2 <- mallocBytes (2^20)
(alloc1, free1) <- memoryAllocator buf1 (2^16)
(alloc2, free2) <- memoryAllocator buf2 (2^20)
ptr11 <- alloc1 100
ptr21 <- alloc2 1000
free1 ptr11
free2 ptr21
ptr12 <- alloc1 100
ptr22 <- alloc2 1000
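As a self-contained illustration of these closures, here is a toy version where 'free' simply resets the allocation pointer, as in the simplified definition above; we use nullPtr as a stand-in buffer and only compute offsets (never dereferencing anything), so the program is safe to run:

```haskell
import Data.IORef
import Foreign.Ptr (Ptr, nullPtr, plusPtr, minusPtr)

-- Toy bump allocator: 'free' just resets the allocation pointer
memoryAllocator :: Ptr () -> Int -> IO (Int -> IO (Ptr ()), Ptr () -> IO ())
memoryAllocator buf _size = do
    start <- newIORef buf
    let alloc n = do addr <- readIORef start
                     writeIORef start (addr `plusPtr` n)
                     return addr
        free ptr = writeIORef start ptr
    return (alloc, free)

main :: IO ()
main = do (alloc, free) <- memoryAllocator nullPtr 65536
          p1 <- alloc 100
          p2 <- alloc 100
          print (p2 `minusPtr` p1)   -- prints 100
          free p1                    -- reset back to p1
          p3 <- alloc 100
          print (p3 `minusPtr` p1)   -- prints 0
```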
Example: emulating OOP with record types
Let's implement the classical OOP example: drawing figures. There are figures of different types: circles, rectangles and so on. The task is to create a heterogeneous list of figures. All figures in this list should support the same set of operations: draw, move and so on. We will represent these operations as IO procedures. Instead of a "class" let's define a structure containing implementations of all the procedures required:
data Figure = Figure { draw :: IO (),
move :: Displacement -> IO ()
}
type Displacement = (Int, Int) -- horizontal and vertical displacement in points
The constructor of each figure's type should just return a Figure record:
circle :: Point -> Radius -> IO Figure
rectangle :: Point -> Point -> IO Figure
type Point = (Int, Int) -- point coordinates
type Radius = Int -- circle radius in points
We will "draw" figures by just printing their current parameters. Let's start with a simplified implementation of the 'circle' and 'rectangle' constructors, without actual 'move' support:
circle center radius = do
let description = " Circle at "++show center++" with radius "++show radius
return $ Figure { draw = putStrLn description }
rectangle from to = do
let description = " Rectangle "++show from++"-"++show to
return $ Figure { draw = putStrLn description }
As you see, each constructor just returns a fixed 'draw' procedure that prints parameters with which the concrete figure was created. Let's test it:
drawAll :: [Figure] -> IO ()
drawAll figures = do putStrLn "Drawing figures:"
mapM_ draw figures
main = do figures <- sequence [circle (10,10) 5,
circle (20,20) 3,
rectangle (10,10) (20,20),
rectangle (15,15) (40,40)]
drawAll figures
Now let's define "full-featured" figures that can actually be moved around. In order to achieve this, we should provide each figure with a mutable variable that holds each figure's current screen location. The type of this variable will be "IORef Point". This variable should be created in the figure constructor and manipulated in IO procedures (closures) enclosed in the Figure record:
circle center radius = do
centerVar <- newIORef center
let drawF = do center <- readIORef centerVar
putStrLn (" Circle at "++show center
++" with radius "++show radius)
let moveF (addX,addY) = do (x,y) <- readIORef centerVar
writeIORef centerVar (x+addX, y+addY)
return $ Figure { draw=drawF, move=moveF }
rectangle from to = do
fromVar <- newIORef from
toVar <- newIORef to
let drawF = do from <- readIORef fromVar
to <- readIORef toVar
putStrLn (" Rectangle "++show from++"-"++show to)
let moveF (addX,addY) = do (fromX,fromY) <- readIORef fromVar
(toX,toY) <- readIORef toVar
writeIORef fromVar (fromX+addX, fromY+addY)
writeIORef toVar (toX+addX, toY+addY)
return $ Figure { draw=drawF, move=moveF }
Now we can test the code which moves figures around:
main = do figures <- sequence [circle (10,10) 5,
rectangle (10,10) (20,20)]
drawAll figures
mapM_ (\fig -> move fig (10,10)) figures
drawAll figures
It's important to realize that we are not limited to including only IO actions in a record that's intended to simulate a C++/Java-style interface. The record can also include values, IORefs, pure functions - in short, any type of data. For example, we can easily add to the Figure interface fields for area and origin:
data Figure = Figure { draw :: IO (),
move :: Displacement -> IO (),
area :: Double,
origin :: IORef Point
}
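A sketch of a 'circle' constructor filling in this extended record; the area formula and the test in 'main' are our own illustration:

```haskell
import Data.IORef

type Point        = (Int, Int)
type Displacement = (Int, Int)

data Figure = Figure { draw   :: IO (),
                       move   :: Displacement -> IO (),
                       area   :: Double,
                       origin :: IORef Point }

circle :: Point -> Int -> IO Figure
circle center radius = do
    centerVar <- newIORef center
    let drawF = do c <- readIORef centerVar
                   putStrLn (" Circle at " ++ show c
                             ++ " with radius " ++ show radius)
        moveF (dx, dy) = do (x, y) <- readIORef centerVar
                            writeIORef centerVar (x + dx, y + dy)
    return $ Figure { draw   = drawF,
                      move   = moveF,
                      area   = pi * fromIntegral radius ^ 2,  -- a plain value
                      origin = centerVar }                    -- the IORef itself

main :: IO ()
main = do fig <- circle (10, 10) 5
          print (floor (area fig) :: Int)    -- prints 78 (pi * 5^2)
          move fig (1, 1)
          readIORef (origin fig) >>= print   -- prints (11,11)
```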
Exception handling (under development)
Although Haskell provides a set of exception raising/handling features comparable to those in popular OOP languages (C++, Java, C#), this part of the language receives much less attention. This is for two reasons. First, you just don't need to worry as much about them - most of the time it just works "behind the scenes". The second reason is that Haskell, lacking OOP inheritance, doesn't allow the programmer to easily subclass exception types, therefore limiting flexibility of exception handling.
The Haskell RTS raises more exceptions than traditional languages - pattern match failures, calls with invalid arguments (such as head []) and computations whose results depend on special values undefined and error "...." all raise their own exceptions:
example 1:
main = print (f 2)
f 0 = "zero"
f 1 = "one"
example 2:
main = print (head [])
example 3:
main = print (1 + (error "Value that wasn't initialized or cannot be computed"))
As a result, Haskell programs can raise run-time exceptions in many more situations than programs in traditional languages.
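Although this section is still a stub, here is a sketch of catching such run-time exceptions with Control.Exception; 'evaluate' forces the pure value inside IO so that the exception is raised where 'try' can catch it:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do r <- try (evaluate (head ([] :: [Int])))
                 :: IO (Either SomeException Int)
          case r of
            Left _  -> putStrLn "caught an exception"  -- head [] raises one
            Right x -> print x
```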
Interfacing with C/C++ and foreign libraries (under development)
While Haskell is great for algorithm development, speed isn't its strongest point. We can combine the best of both worlds, though, by writing the speed-critical parts of a program in C and the rest in Haskell. We just need a way to call C functions from Haskell and vice versa, and to marshal data between the two worlds.
We also need to interact with the C world to use Windows/Linux APIs and to link against various libraries and DLLs. Even interfacing with other languages usually requires going through the C world as the "common denominator". Chapter 8 of the Haskell 2010 report provides a complete description of interfacing with C.
We will learn the FFI via a series of examples. These examples include C/C++ code, so a C/C++ compiler needs to be installed; the same is true if you want to include your own C/C++ code in a program (a C/C++ compiler is not required when you only link against existing libraries that provide APIs with the C calling convention). On Unix (and Mac OS?) systems, the GHC installation typically uses the system-wide default C/C++ compiler. On Windows, there is no default compiler, so GHC is typically shipped with a C compiler, and you may find on the download page a GHC distribution bundled with C and C++ compilers. Alternatively, you may find and install a GCC/MinGW version compatible with your GHC installation.
If you need to make your C/C++ code as fast as possible, you may compile it with the Intel compilers instead of GCC. However, these compilers are not free; moreover, on Windows, code compiled by the Intel compilers may not interact correctly with GHC-compiled code unless one of them is put into a DLL (due to object-file incompatibility).
More links:
C->Haskell
A lightweight tool for implementing access to C libraries from Haskell.
HSFFIG
Haskell FFI Binding Modules Generator (HSFFIG) is a tool that takes a C library include file (.h) and generates Haskell Foreign Functions Interface import declarations for items (functions, structures, etc.) the header defines.
MissingPy
MissingPy is really two libraries in one. At its lowest level, MissingPy is a library designed to make it easy to call into Python from Haskell. It provides full support for interpreting arbitrary Python code, interfacing with a good part of the Python/C API, and handling Python objects. It also provides tools for converting between Python objects and their Haskell equivalents. Memory management is handled for you, and Python exceptions get mapped to Haskell Dynamic exceptions. At a higher level, MissingPy contains Haskell interfaces to some Python modules.
HsLua
A Haskell interface to the Lua scripting language
Calling functions
First, we will learn how to call C functions from Haskell and Haskell functions from C. The first example consists of three files:
main.hs:
{-# LANGUAGE ForeignFunctionInterface #-}
main = do print "Hello from main"
c_function
haskell_function = print "Hello from haskell_function"
foreign import ccall safe "prototypes.h"
c_function :: IO ()
foreign export ccall
haskell_function :: IO ()
evil.c:
#include <stdio.h>
#include "prototypes.h"
void c_function (void)
{
printf("Hello from c_function\n");
haskell_function();
}
prototypes.h:
extern void c_function (void);
extern void haskell_function (void);
This may be compiled and linked in one step with ghc:
ghc --make main.hs evil.c
Or, you may compile C module(s) separately and link in .o files (this may be preferable if you use make and don't want to recompile unchanged sources; ghc's --make option provides smart recompilation only for .hs files):
ghc -c evil.c
ghc --make main.hs evil.o
You may use gcc/g++ directly to compile your C/C++ files, but I recommend doing the linking via ghc because it adds all the libraries required for the execution of Haskell code. For the same reason, even if your main routine is written in C/C++, I recommend calling it from the Haskell function main - otherwise you'll have to explicitly init/shutdown the GHC RTS (run-time system).
We use the "foreign import" specification to import foreign routines into our Haskell world, and "foreign export" to export Haskell routines into the external world. Note that the import statement creates a new Haskell symbol (from the external one), while the export statement uses a Haskell symbol previously defined. Technically speaking, both types of statements create a wrapper that converts the names and calling conventions from C to Haskell or vice versa.
All about the "foreign" statement
The "ccall" specifier in foreign statements means the use of the C (not C++!) calling convention. This means that if you want to write the external function in C++ (instead of C) you should add an extern "C" specification to its declaration - otherwise you'll get linking errors. Let's rewrite our first example to use C++ instead of C:
prototypes.h:
#ifdef __cplusplus
extern "C" {
#endif
extern void c_function (void);
extern void haskell_function (void);
#ifdef __cplusplus
}
#endif
Compile it via:
ghc --make main.hs evil.cpp
where evil.cpp is just a renamed copy of evil.c from the first example. Note that the new prototypes.h is written so that it can be compiled both as C and as C++ code. When it's included from evil.cpp, it's compiled as C++ code. When GHC compiles main.hs via the C compiler (enabled by the -fvia-C option), it also includes prototypes.h but compiles it in C mode. That's why you need to specify .h files in "foreign" declarations - depending on which Haskell compiler you use, these files may be included to check the consistency of the C and Haskell declarations.
The quoted part of the foreign statement may also be used to import or export a function under another name--for example,
foreign import ccall safe "prototypes.h CFunction"
c_function :: IO ()
foreign export ccall "HaskellFunction"
haskell_function :: IO ()
specifies that the C function called CFunction will become known as the Haskell function c_function, while the Haskell function haskell_function will be known in the C world as HaskellFunction. This renaming is required when the C name doesn't conform to Haskell naming requirements.
Although the Haskell FFI standard describes several other calling conventions in addition to ccall (e.g. cplusplus, jvm, dotnet), current Haskell implementations support only ccall and stdcall. The latter, also called the "Pascal" calling convention, is used to interface with WinAPI:
foreign import stdcall unsafe "windows.h SetFileApisToOEM"
setFileApisToOEM :: IO ()
And finally, about the safe/unsafe specifier: a C function imported with the "unsafe" keyword is called directly and the Haskell runtime is stopped while the C function executes (when there are several OS threads executing the Haskell program, only the current OS thread is delayed). Such a call cannot re-enter the Haskell world by calling back into any Haskell function - the Haskell RTS is just not prepared for such an event. However, unsafe calls are as quick as calls in the C world. They are ideal for "momentary" calls that quickly return to the caller.
When "safe" is specified, the C function is called in a safe environment - the Haskell execution context is saved, so it's possible to call back into Haskell and, if the C call takes a long time, another OS thread may be started to execute Haskell code (of course, in threads other than the one that called the C code). This has its own price, though - around 1000 CPU ticks per call.
You can read more about interaction between FFI calls and Haskell concurrency in [7].
Marshalling simple types
Calling by itself is relatively easy; the real problem of interfacing languages with different data models is passing data between them. In this case, there is no guarantee that Haskell's Int is represented in memory the same way as C's int, nor Haskell's Double the same way as C's double, and so on. While on *some* platforms they are the same and you can write throw-away programs relying on this, portability requires you to declare imported and exported functions using the special types described in the FFI standard, which are guaranteed to correspond to C types. These are:
import Foreign.C.Types ( -- equivalent to the following C type:
CChar, CUChar, -- char/unsigned char
CShort, CUShort, -- short/unsigned short
CInt, CUInt, CLong, CULong, -- int/unsigned/long/unsigned long
CFloat, CDouble...) -- float/double
Now we can import and export typeful C/Haskell functions:
foreign import ccall unsafe "math.h"
c_sin :: CDouble -> CDouble
Note that pure C functions (those whose results depend only on their arguments) are imported without IO in their return type. The "const" specifier in C is not reflected in Haskell types, so appropriate compiler checks are not performed.
All these numeric types are instances of the same classes as their Haskell cousins (Ord, Num, Show and so on), so you may perform calculations on these data directly. Alternatively, you may convert them to native Haskell types. It's very typical to write simple wrappers around imported and exported functions just to provide interfaces having native Haskell types:
-- |Type-conversion wrapper around c_sin
sin :: Double -> Double
sin = realToFrac . c_sin . realToFrac
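Assembled into a complete, runnable program; here the conversions use realToFrac, which goes directly between Double and CDouble (the final check in 'main' is our own test):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Prelude hiding (sin)
import Foreign.C.Types (CDouble)

foreign import ccall unsafe "math.h sin"
    c_sin :: CDouble -> CDouble        -- pure, so no IO in the type

-- | Type-conversion wrapper with native Haskell types
sin :: Double -> Double
sin = realToFrac . c_sin . realToFrac

main :: IO ()
main = print (abs (sin (pi / 2) - 1) < 1e-9)   -- prints True
```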
Memory management
Marshalling strings
import Foreign.C.String ( -- representation of strings in C
CString, -- = Ptr CChar
CStringLen) -- = (Ptr CChar, Int)
foreign import ccall unsafe "string.h"
c_strlen :: CString -> IO CSize -- CSize is defined in Foreign.C.Types and corresponds to size_t
-- |Type-conversion wrapper around c_strlen
strlen :: String -> Int
strlen = ....
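The missing body can be sketched with withCString, which marshals a Haskell String to a temporary NUL-terminated C string; we return the result in IO here, since withCString is itself an IO action (a pure String -> Int version would need unsafePerformIO, discussed below):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize)

foreign import ccall unsafe "string.h strlen"
    c_strlen :: CString -> IO CSize

-- | Type-conversion wrapper around c_strlen
strlen :: String -> IO Int
strlen s = withCString s (fmap fromIntegral . c_strlen)

main :: IO ()
main = strlen "Hello, FFI!" >>= print   -- prints 11
```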
Marshalling composite types
A C array may be manipulated in Haskell as a StorableArray.
There is no built-in support for marshalling C structures or for using C constants in Haskell; these are implemented in the c2hs preprocessor, though.
Binary marshalling (serialization) of data structures of arbitrary complexity is implemented in the Binary library.
Dynamic calls
DLLs
Because I don't have experience using DLLs, can someone else write this section? Ultimately, we need to cover the following tasks:
• using DLLs of 3rd-party libraries (such as ziplib)
• putting your own C code into a DLL to use in Haskell
• putting Haskell code into a DLL which may be called from C code
Dark side of IO monad
unsafePerformIO
Programmers coming from an imperative language background often look for a way to execute IO actions inside a pure procedure. But what does this mean? Imagine that you're trying to write a procedure that reads the contents of a file with a given name, and you try to write it as a pure (non-IO) function:
readContents :: Filename -> String
Defining readContents as a pure function will certainly simplify the code that uses it. But it will also create problems for the compiler:
1. This call is not inserted in a sequence of "world transformations", so the compiler doesn't know at what exact moment you want to execute this action. For example, if the file has one kind of contents at the beginning of the program and another at the end - which contents do you want to see? You have no idea when (or even if) this function is going to get invoked, because Haskell sees this function as pure and feels free to reorder the execution of any or all pure functions as needed.
2. Attempts to read the contents of files with the same name can be factored (i.e. reduced to a single call) despite the fact that the file (or the current directory) can be changed between calls. Again, Haskell considers all non-IO functions to be pure and feels free to omit multiple calls with the same parameters.
So, implementing pure functions that interact with the Real World is considered to be Bad Behavior. Good boys and girls never do it ;)
Nevertheless, there are (semi-official) ways to use IO actions inside of pure functions. As you should remember this is prohibited by requiring the RealWorld "baton" in order to call an IO action. Pure functions don't have the baton, but there is a special "magic" procedure that produces this baton from nowhere, uses it to call an IO action and then throws the resulting "world" away! It's a little low-level magic. This very special (and dangerous) procedure is:
unsafePerformIO :: IO a -> a
Let's look at its (possible) definition:
unsafePerformIO :: (RealWorld -> (a, RealWorld)) -> a
unsafePerformIO action = let (a, world1) = action createNewWorld
in a
where 'createNewWorld' is an internal function producing a new value of the RealWorld type.
Using unsafePerformIO, you can easily write pure functions that do I/O inside. But don't do this without a real need, and remember to follow this rule: the compiler doesn't know that you are cheating; it still considers each non-IO function to be a pure one. Therefore, all the usual optimization rules can (and will!) be applied to its execution. So you must ensure that:
1. The result of each call depends only on its arguments.
2. You don't rely on side-effects of this function, which may be not executed if its results are not needed.
Let's investigate this problem more deeply. Function evaluation in Haskell is determined by a value's necessity - the language computes only the values that are really required to calculate the final result. But what does this mean with respect to the 'main' function? To "calculate the final world's" value, you need to perform all the intermediate IO actions that are included in the 'main' chain. By using 'unsafePerformIO' we call IO actions outside of this chain. What guarantee do we have that they will be run at all? None. The only time they will be run is if running them is required to compute the overall function result (which in turn should be required to perform some action in the 'main' chain). This is an example of Haskell's evaluation-by-need strategy. Now you should clearly see the difference:
- An IO action inside an IO procedure is guaranteed to execute as long as it is (directly or indirectly) inside the 'main' chain - even when its result isn't used (because the implicit "world" value it returns will be used). You directly specify the order of the action's execution inside the IO procedure. Data dependencies are simulated via the implicit "world" values that are passed from each IO action to the next.
- An IO action inside 'unsafePerformIO' will be performed only if result of this operation is really used. The evaluation order is not guaranteed and you should not rely on it (except when you're sure about whatever data dependencies may exist).
I should also say that inside an 'unsafePerformIO' call you can organize a small internal chain of IO actions with the help of the same binding operators and/or 'do' syntactic sugar we've seen above. For example, here's a particularly convoluted way to compute the integer that comes after zero:
one :: Int
one = unsafePerformIO $ do var <- newIORef 0
modifyIORef var (+1)
readIORef var
and in this case ALL the operations in this chain will be performed as long as the result of the 'unsafePerformIO' call is needed. To ensure this, the actual 'unsafePerformIO' implementation evaluates the "world" returned by the 'action':
unsafePerformIO action = let (a,world1) = action createNewWorld
in (world1 `seq` a)
(The 'seq' operation strictly evaluates its first argument before returning the value of the second one [8]).
inlinePerformIO
inlinePerformIO has the same definition as unsafePerformIO, but with the addition of an INLINE pragma:
-- | Just like unsafePerformIO, but we inline it. Big performance gains as
-- it exposes lots of things to further inlining
{-# INLINE inlinePerformIO #-}
inlinePerformIO action = let (a, world1) = action createNewWorld
in (world1 `seq` a)
Semantically, inlinePerformIO = unsafePerformIO, insofar as either of them has any semantics at all.
The difference, of course, is that inlinePerformIO is even less safe than unsafePerformIO. While GHC will try not to duplicate or common up different uses of unsafePerformIO, inlinePerformIO is aggressively inlined. So you can really only use it where the IO content is properly pure, such as reading from an immutable memory buffer (as in the case of ByteStrings). Things like allocating new buffers, however, should not be done inside inlinePerformIO, since the allocation can easily be floated out and performed just once for the whole program; you would then end up with many things sharing the same buffer, which would be bad.
So the rule of thumb is that IO things wrapped in unsafePerformIO have to be externally pure while with inlinePerformIO it has to be really really pure or it'll all go horribly wrong.
That said, here's some really hairy code. This should frighten any pure functional programmer...
write :: Int -> (Ptr Word8 -> IO ()) -> Put ()
write !n body = Put $ \c buf@(Buffer fp o u l) ->
if n <= l
then write' c fp o u l
else write' (flushOld c n fp o u) (newBuffer c n) 0 0 0
where {-# NOINLINE write' #-}
write' c !fp !o !u !l =
-- warning: this is a tad hardcore
inlinePerformIO
(withForeignPtr fp
(\p -> body $! (p `plusPtr` (o+u))))
`seq` c () (Buffer fp o (u+n) (l-n))
It's used like this:
word8 w = write 1 (\p -> poke p w)
This does not adhere to my rule of thumb above. Don't ask exactly why we claim it's safe :-) (and if anyone really wants to know, ask Ross Paterson who did it first in the Builder monoid)
unsafeInterleaveIO
But there is an even stranger operation called 'unsafeInterleaveIO' that gets the "official baton", makes its own pirate copy, and then runs an "illegal" relay-race in parallel with the main one! I can't talk further about its behavior without causing grief and indignation, so it's no surprise that this operation is widely used in countries that are hotbeds of software piracy such as Russia and China! ;) Don't even ask me - I won't say anything more about this dirty trick I use all the time ;)
One can use unsafePerformIO (not unsafeInterleaveIO) to perform I/O operations not in a predefined order, but on demand. For example, the following code:
do let c = unsafePerformIO getChar
do_proc c
will perform the getChar call only when the value of c is actually required by the code, i.e. the call will be performed lazily, like any ordinary Haskell computation.
Now imagine the following code:
do let s = [unsafePerformIO getChar, unsafePerformIO getChar, unsafePerformIO getChar]
do_proc s
The three chars inside this list will be computed on demand too, which means that their values will depend on the order in which they are consumed. That is usually not what we want.
unsafeInterleaveIO solves this problem - it performs I/O only on demand, but lets you define the exact *internal* execution order for the parts of your data structure. That is why I wrote that unsafeInterleaveIO makes an illegal copy of the baton.
First, unsafeInterleaveIO takes an (IO a) action as a parameter and returns a value of type 'a':
do str <- unsafeInterleaveIO myGetContents
Second, unsafeInterleaveIO doesn't perform any action immediately; it only creates a box of type 'a' which, when its value is requested, will perform the action specified as its parameter.
Third, this action by itself may compute the whole value immediately or... use unsafeInterleaveIO again to defer calculation of some sub-components:
myGetContents = do
c <- getChar
s <- unsafeInterleaveIO myGetContents
return (c:s)
This code will be executed only at the moment when the value of str is actually demanded. At that moment, getChar will be performed (with its result assigned to c) and one more lazy IO box will be created - for s. This box again contains a link to the myGetContents call.
Then a list cell is returned that contains the one char just read, plus a link to the myGetContents call as the way to compute the rest of the list. Only at the moment when the next value in the list is required will this operation be performed again.
As the final result, we get the inability to read the second char in the list before the first one, but lazy reading overall. Bingo!
PS: of course, real code should include EOF checking. Also note that you can read many chars/records at each call:
myGetContents = do
c <- replicateM 512 getChar
s <- unsafeInterleaveIO myGetContents
return (c++s)
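For completeness, here is one way (an illustrative sketch, using 'isEOF' from System.IO) to add the EOF check mentioned above to the single-char version:

```haskell
import System.IO (isEOF)
import System.IO.Unsafe (unsafeInterleaveIO)

-- myGetContents with an end-of-file check: return the empty list at EOF
-- instead of calling getChar on an exhausted input stream.
myGetContents :: IO String
myGetContents = do
  eof <- isEOF
  if eof
    then return []
    else do c <- getChar
            s <- unsafeInterleaveIO myGetContents
            return (c:s)
```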
A safer approach: the ST monad
We said earlier that we can use unsafePerformIO to perform computations that are totally pure but nevertheless interact with the Real World in some way. There is, however, a better way! One that remains totally pure and yet allows the use of references, arrays, and so on -- and it's done using, you guessed it, type magic. This is the ST monad.
The ST monad's version of unsafePerformIO is called runST, and it has a very unusual type.
runST :: (forall s . ST s a) -> a
The s variable in the ST monad is the state type. Moreover, all the fun mutable stuff available in the ST monad is quantified over s:
newSTRef :: a -> ST s (STRef s a)
newArray_ :: Ix i => (i, i) -> ST s (STArray s i e)
So why does runST have such a funky type? Let's see what would happen if we wrote
makeSTRef :: a -> STRef s a
makeSTRef a = runST (newSTRef a)
This fails, because newSTRef a doesn't work for all state types s -- it only works for the s from the return type STRef s a.
This is all sort of wacky, but the result is that you can only run an ST computation where the output type is functionally pure, and makes no references to the internal mutable state of the computation. The ST monad doesn't have access to I/O operations like writing to the console, either -- only references, arrays, and suchlike that come in handy for pure computations.
Important note -- the state type doesn't actually mean anything. We never have a value of type s, for instance. It's just a way of getting the type system to do the work of ensuring purity for us, with smoke and mirrors.
It's really just type system magic: secretly, on the inside, runST runs a computation with the real world baton just like unsafePerformIO. Their internal implementations are almost identical: in fact, there's a function
stToIO :: ST RealWorld a -> IO a
The difference is that ST uses type system magic to forbid unsafe behavior like extracting mutable objects from their safe ST wrapping, but allowing purely functional outputs to be performed with all the handy access to mutable references and arrays.
So here's how we'd rewrite our function using unsafePerformIO from above:
oneST :: ST s Int -- note that this works correctly for any s
oneST = do var <- newSTRef 0
           modifySTRef var (+1)
           readSTRef var
one :: Int
one = runST oneST
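The same pattern scales to more useful computations. For example, here is a pure summing function that uses a mutable accumulator internally (just an illustration - for real code a simple fold would do):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Externally pure: callers see only an ordinary Int, with the mutation
-- safely confined inside runST.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
```

Here sumST [1,2,3] evaluates to 6, and the type checker guarantees that no STRef can leak out of the computation.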
Welcome to the machine: the actual GHC implementation
A little disclaimer: I should say that I'm not describing here exactly what a monad is (I don't even completely understand it myself) and my explanation shows only one _possible_ way to implement the IO monad in Haskell. For example, the hbc Haskell compiler and the Hugs interpreter implement the IO monad via continuations [9]. I also haven't said anything about exception handling, which is a natural part of the "monad" concept. You can read the "All About Monads" guide to learn more about these topics.
But there is some good news: first, the understanding of the IO monad you've just acquired will work with any implementation and with many other monads. You just can't work with RealWorld values directly.
Second, the IO monad implementation described here is really used in the GHC, yhc/nhc (jhc, too?) compilers. Here is the actual IO definition from the GHC sources:
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
It uses the "State# RealWorld" type instead of our RealWorld, it uses the "(# #)" strict tuple for optimization, and it adds an IO data constructor around the type. Nevertheless, there are no significant changes from the standpoint of our explanation. Knowing the principle of "chaining" IO actions via fake "state of the world" values, you can now easily understand and write low-level implementations of GHC I/O operations.
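For instance, here is roughly how 'return' and '>>=' look at this level. This is a sketch paraphrasing GHC's own GHC.Base; the exact code differs between GHC versions, and a local IO' newtype is used here to avoid clashing with the real one:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts (State#, RealWorld)

-- Same shape as GHC's real IO type.
newtype IO' a = IO' (State# RealWorld -> (# State# RealWorld, a #))

-- 'return': pass the state token through unchanged.
returnIO :: a -> IO' a
returnIO x = IO' (\s -> (# s, x #))

-- '>>=': run the first action, then feed its result together with the
-- new state token to the continuation.
bindIO :: IO' a -> (a -> IO' b) -> IO' b
bindIO (IO' m) k = IO' (\s -> case m s of
                                (# s', a #) -> case k a of IO' n -> n s')
```

The strict unboxed tuple forces the state token to be threaded through every step, which is exactly the "baton passing" described throughout this tutorial.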
The Yhc/nhc98 implementation
data World = World
newtype IO a = IO (World -> Either IOError a)
This implementation makes the "World" disappear somewhat [10], and returns either a result of type "a" or, if an error occurs, an "IOError". The lack of the World on the right-hand side of the function type is only possible because the compiler knows special things about the IO type and won't over-optimise it.
Further reading
[1] This tutorial is largely based on the Simon Peyton Jones' paper Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell. I hope that my tutorial improves his original explanation of the Haskell I/O system and brings it closer to the point of view of beginning Haskell programmers. But if you need to learn about concurrency, exceptions and FFI in Haskell/GHC, the original paper is the best source of information.
[2] You can find more information about concurrency, FFI and STM at the GHC/Concurrency#Starting points page.
[3] The Arrays page contains exhaustive explanations about using mutable arrays.
[4] Look also at the Using monads page, which contains tutorials and papers really describing these mysterious monads.
[5] An explanation of the basic monad functions, with examples, can be found in the reference guide A tour of the Haskell Monad functions, by Henk-Jan van Tuyl.
[6] Official FFI specifications can be found on the page The Haskell 98 Foreign Function Interface 1.0: An Addendum to the Haskell 98 Report
[7] Using FFI in multithreaded programs described in paper Extending the Haskell Foreign Function Interface with Concurrency
[8] This particular behaviour is not a requirement of Haskell 2010, so the operation of 'seq' may differ between various Haskell implementations - if you're not sure, staying within the IO monad is the safest option.
[9] How to Declare an Imperative by Phil Wadler provides an explanation of how this can be done.
[10] The RealWorld type can even be replaced e.g. Functional I/O Using System Tokens by Lennart Augustsson.
Do you have more questions? Ask in the haskell-cafe mailing list.
To-do list
If you are interested in adding more information to this manual, please add your questions/topics here.
Topics:
• fixIO and 'mdo'
• Q monad
Questions:
• split '>>='/'>>'/return section and 'do' section, more examples of using binding operators
• IORef detailed explanation (==const*), usage examples, syntax sugar, unboxed refs
• explanation of how the actual data "in" mutable references are inside 'RealWorld', rather than inside the references themselves ('IORef','IOArray',&c.)
• control structures developing - much more examples
• unsafePerformIO usage examples: global variable, ByteString, other examples
• how 'unsafeInterLeaveIO' can be seen as a kind of concurrency, and therefore isn't so unsafe (unlike 'unsafeInterleaveST' which really is unsafe)
• discussion about different senses of "safe"/"unsafe" (like breaking equational reasoning vs. invoking undefined behaviour (so can corrupt the run-time system))
• actual GHC implementation - how to write low-level routines on example of newIORef implementation
This manual is collective work, so feel free to add more information to it yourself. The final goal is to collectively develop a comprehensive manual for using the IO monad. |
Overhead cover
From Biology-Online Dictionary | Biology-Online Dictionary
overhead cover
material (organic or inorganic) that provides protection to fish or other aquatic animals from above, generally includes material overhanging the stream less than a particular distance above the water surface. Values less than 0.5 m (1.5 feet) and less than 1 m (3 feet) have been used. |
BS EN 30-1-4:2012 - Domestic cooking appliances burning gas. Safety. Appliances having one or more burners with an automatic burner control system
BS EN 30-1-4:2012
Domestic cooking appliances burning gas. Safety. Appliances having one or more burners with an automatic burner control system
Status : Current, Under review Published : May 2012
This European Standard specifies the construction and performance characteristics as well as the requirements and methods of test for the safety and marking of domestic cooking appliances, capable of using the combustible gases defined in EN 30-1-1:2008+A2:2010, that have one or more burners with an automatic burner control system, referred to in the text as "appliances". This European Standard includes specific requirements and methods of test that are applicable to burners having an automatic burner control system, whether or not the appliance is equipped with a fan for the supply of combustion air to, and/or the evacuation of the products of combustion from the burner concerned. These specific requirements and methods of test are only applicable when the burner has an automatic burner control system and do not apply to burners having automatic ignition that fall within the scope of EN 30-1-1:2008+A2:2010. This European Standard is intended to be used in conjunction with EN 30-1-1:2008+A2:2010 and, where appropriate, other parts of EN 30-1 covering appliances having: - forced-convection ovens and/or grills; - a glass ceramic hotplate. It does not cover all of the safety requirements and methods of test that are specific to forced-convection ovens and/or grills and glass ceramic hotplates. Unless specifically excluded hereafter, this standard applies to these appliances or their component parts, whether or not the component parts are independent or incorporated into a single appliance, even if the other heating components of the appliance use electrical energy (e.g. combined gas-electric cookers). This European Standard includes requirements covering the electrical safety of equipment incorporated in the appliance that is associated with the use of gas. It does not include requirements covering the electrical safety of electrically heated component parts of their associated equipment ). 
This European Standard does not apply to: - outdoor appliances; - appliances connected to a combustion products evacuation duct; - appliances having a pyrolytic gas oven; - appliances having automatic burner control systems that: - have a second safety time (see EN 298:2003), or - control one or more burners that incorporate a separate ignition burner; - appliances having an uncovered burner or a non-enclosed covered burner (see 3.1.1) that utilises a fan for the supply of its combustion air; - appliances having enclosed covered burners that are not equipped with an automatic burner control system; - appliances having one or more burners that are capable of remote operation (type1), unless the burner(s) concerned are: - oven burners equipped with an automatic burner control system, or - oven burners of time-controlled ovens that are designed for a delayed start without the user being present; - appliances having one or more burners that are capable of remote operation (type 2), unless the burner(s) concerned are: - oven, grill or hotplate burners equipped with automatic burner control systems, or - oven burners of time-controlled ovens that are designed for a delayed start without the user being present; - appliances supplied at pressures greater than those defined in 7.1.3; - appliances equipped with air-gas ratio controls; - appliances incorporating one or more hotplate or grill burners that enable the user to program the delayed start of a cooking cycle. This European Standard does not cover the requirements relating to automatic on-off cycling multi-ring hotplate burners for which specific requirements are under consideration. This European Standard does not cover the requirements relating to third family gas cylinders, their regulators and their connection. This European Standard only covers type testing.
Standard NumberBS EN 30-1-4:2012
TitleDomestic cooking appliances burning gas. Safety. Appliances having one or more burners with an automatic burner control system
StatusCurrent, Under review
Publication Date31 May 2012
Normative References(Required to achieve compliance to this standard)IEC 60730-2-9, EN 60335-2-102:2006/AMD 1:2010, EN 30-1-1:2008/AMD 2:2010, IEC 60335-2-102:2004/AMD 1:2008, EN 257, EN 30-1-3:2003/AMD 1:2006, EN 60730-2-9, EN 60335-2-102:2006, EN 161, EN 126, IEC 60335-2-102:2004, EN 88-1:2011, EN 30-1-3:2003, EN 30-1-1:2008, EN 30-1-2:2012, EN 298:2003
Informative References(Provided for Information)IEC 60335-2-9, EN 60335-2-3, 2009/142/EC
ReplacesBS EN 30-1-4:2002
International RelationshipsEN 30-1-4:2012
Draft Superseded By11/30213548 DC
DescriptorsPerformance, Domestic, Ovens (cooking appliances), Automatic control systems, Control systems, Gas-powered devices, Safety devices, Hotplates (cookers), Marking, Cooking appliances, Grills (cooking), Hobs, Type testing, Burners, Cookers, Performance testing
ICS97.040.20
Title in FrenchAppareils de cuisson domestiques utilisant les combustibles gazeux. Sécurité. Appareils comportant un ou plusieurs brûleurs avec système automatique de commande des brûleurs
Title in GermanHaushalt-Kochgeräte für gasförmige Brennstoffe. Sicherheit. Geräte mit einem oder mehreren Brenner(n) mit Feuerungsautomat
CommitteeGSE/35
ISBN978 0 580 69887 3
PublisherBSI
FormatA4
DeliveryYes
Pages106
File Size1.33 MB
Price£298.00
//! This is `#[proc_macro_error]` attribute to be used with
//! [`proc-macro-error`](https://docs.rs/proc-macro-error/). There you go.
extern crate proc_macro;
use crate::parse::parse_input;
use crate::parse::Attribute;
use proc_macro::TokenStream;
use proc_macro2::{Literal, Span, TokenStream as TokenStream2, TokenTree};
use quote::{quote, quote_spanned};
use crate::settings::{Setting::*, *};
mod parse;
mod settings;
type Result<T> = std::result::Result<T, Error>;
struct Error {
span: Span,
message: String,
}
impl Error {
fn new(span: Span, message: String) -> Self {
Error { span, message }
}
fn into_compile_error(self) -> TokenStream2 {
let mut message = Literal::string(&self.message);
message.set_span(self.span);
quote_spanned!(self.span=> compile_error!{#message})
}
}
#[proc_macro_attribute]
pub fn proc_macro_error(attr: TokenStream, input: TokenStream) -> TokenStream {
match impl_proc_macro_error(attr.into(), input.clone().into()) {
Ok(ts) => ts,
Err(e) => {
let error = e.into_compile_error();
let input = TokenStream2::from(input);
quote!(#input #error).into()
}
}
}
fn impl_proc_macro_error(attr: TokenStream2, input: TokenStream2) -> Result<TokenStream> {
let (attrs, signature, body) = parse_input(input)?;
let mut settings = parse_settings(attr)?;
let is_proc_macro = is_proc_macro(&attrs);
if is_proc_macro {
settings.set(AssertUnwindSafe);
}
if detect_proc_macro_hack(&attrs) {
settings.set(ProcMacroHack);
}
if settings.is_set(ProcMacroHack) {
settings.set(AllowNotMacro);
}
if !(settings.is_set(AllowNotMacro) || is_proc_macro) {
return Err(Error::new(
Span::call_site(),
"#[proc_macro_error] attribute can be used only with procedural macros\n\n \
= hint: if you are really sure that #[proc_macro_error] should be applied \
to this exact function, use #[proc_macro_error(allow_not_macro)]\n"
.into(),
));
}
let body = gen_body(body, settings);
let res = quote! {
#(#attrs)*
#(#signature)*
{ #body }
};
Ok(res.into())
}
#[cfg(not(always_assert_unwind))]
fn gen_body(block: TokenTree, settings: Settings) -> proc_macro2::TokenStream {
let is_proc_macro_hack = settings.is_set(ProcMacroHack);
let closure = if settings.is_set(AssertUnwindSafe) {
quote!(::std::panic::AssertUnwindSafe(|| #block ))
} else {
quote!(|| #block)
};
quote!( ::proc_macro_error::entry_point(#closure, #is_proc_macro_hack) )
}
// FIXME:
// proc_macro::TokenStream does not implement UnwindSafe until 1.37.0.
// Considering this is the closure's return type the unwind safety check would fail
// for virtually every closure possible, the check is meaningless.
#[cfg(always_assert_unwind)]
fn gen_body(block: TokenTree, settings: Settings) -> proc_macro2::TokenStream {
let is_proc_macro_hack = settings.is_set(ProcMacroHack);
let closure = quote!(::std::panic::AssertUnwindSafe(|| #block ));
quote!( ::proc_macro_error::entry_point(#closure, #is_proc_macro_hack) )
}
fn detect_proc_macro_hack(attrs: &[Attribute]) -> bool {
attrs
.iter()
.any(|attr| attr.path_is_ident("proc_macro_hack"))
}
fn is_proc_macro(attrs: &[Attribute]) -> bool {
attrs.iter().any(|attr| {
attr.path_is_ident("proc_macro")
|| attr.path_is_ident("proc_macro_derive")
|| attr.path_is_ident("proc_macro_attribute")
})
} |
Splunk® Add-on for Microsoft Active Directory
Install and use the Splunk Add-on for Microsoft Active Directory
About the Splunk Add-on for Microsoft Active Directory
Version 1.0.0 (TA-Microsoft-AD)
Vendor Products Microsoft Active Directory
Visible No. This add-on does not contain any views.
The Splunk Add-on for Microsoft Active Directory (AD) lets you collect Active Directory and Domain Name Server debug logs from Windows hosts that act as domain controllers for a supported version of Windows Server.
The Splunk Add-on for Microsoft Active Directory requires that you configure Active Directory audit policy. This is because AD does not log certain events by default. After the Splunk platform indexes the events, you can analyze the data.
This add-on provides the inputs and CIM-compatible knowledge to use with other Splunk apps, such as the Splunk Apps for Microsoft Exchange and Windows Infrastructure.
Download the Splunk Add-on for Microsoft Active Directory from Splunkbase at http://splunkbase.splunk.com/app/3207.
Discuss the Splunk Add-on for Microsoft Active Directory on Splunk Answers at http://answers.splunk.com/answers/app/3207.
This documentation applies to the following versions of Splunk® Add-on for Microsoft Active Directory: 1.0.0
Fatigue is a tired and run-down feeling that persists even though you're technically getting enough sleep. At Peninsula Integrative Cardiology, Daniel Rieders, MD, FACC, FHRS, CCDS, IFMCP, evaluates your overall health to determine the root cause of your fatigue. He'll then develop a customized, integrative therapy plan to restore your energy and vitality. Call the San Ramon, California, office to set up your appointment or schedule online.
When should I be concerned about feelings of fatigue?
It’s normal to feel overworked, overwhelmed, and overtired on occasion. But you should be concerned about fatigue that causes unrelenting exhaustion that permeates every day. Fatigue isn’t relieved by rest and interferes with your daily life.
Fatigue is not just the tiredness you feel after a poor night’s sleep. It’s a nearly constant state of weariness. Fatigue interferes with concentration, energy, and motivation. It also impacts your emotional and psychological wellbeing.
What causes fatigue?
Fatigue has many possible causes, including emotional and physical factors. Fatigue is a symptom of many medical conditions. Fatigue shows up in response to systemic inflammation, autoimmune disease, and hormonal imbalances.
Problems with your heart can also cause fatigue, including heart failure and coronary artery disease.
Adrenal fatigue or adrenal insufficiency can also cause overwhelming fatigue. Intense stress can lead to this condition, characterized by nervousness, sleep disturbances, lightheadedness, digestive problems, and body aches.
Dr. Rieders also considers whether your fatigue is due to:
• Certain medications, such as antihistamines and cough medicines
• Anemia
• Chronic infection
• Diabetes
• Fibromyalgia
• Thyroid disorders
• Inflammatory bowel disease (IBD)
Nutritional deficiencies are also often responsible for overwhelming fatigue.
How do you evaluate fatigue?
Dr. Rieders takes a holistic approach when assessing the reasons for your fatigue and developing a treatment plan. He’ll consider your health, the environment, and lifestyle choices and habits, such as:
• Use of drugs or alcohol
• Excess or not enough physical activity
• Lack of sleep
• Unhealthy eating habits and poor lifestyle choices
In addition, he uses his vast medical expertise to run any necessary screenings, lab tests, and blood tests to look for possible causes. These tests can reveal inflammatory markers, nutritional deficiencies, or hormone problems.
How is fatigue treated?
Dr. Rieders addresses your fatigue using a multi-disciplinary approach. He may offer vitamins and supplements, healthy lifestyle changes, and medical nutrition. He can also help rebalance your hormones or provide certain medications, when needed, like anti-inflammatories and corticosteroids.
If you have a heart condition that’s causing fatigue, Dr. Rieders has the training to provide management and support. He can also help you create a healthy relationship with environmental stressors, so you respond in a healthy way.
If you’re suffering crippling fatigue, seek the holistic care offered at Peninsula Integrative Cardiology. Call today to set up an in-person or telemedicine appointment or schedule online. |
Friction control through surface texturing (FriText)
Research Project, 2019 – 2021
Purpose and goal
The main objective of the project is to determine a surface texturing method to control the coefficient of friction and protect surfaces against wear, in order to improve the tribological properties of machinery components in e.g. engines or power generation equipment. The project includes two main goals, which are closely connected: the first is to develop modifications of existing abrasive processes for texturing surfaces, while the second is to develop a method for evaluating surfaces after texturing.
Expected results and effects
This project can significantly extend the lifetime of mechanical components, reduce energy and fuel consumption, improve control over the friction coefficient (and hence reduce friction losses and give better wear protection), and have a positive impact on CO2 emissions reduction. One of the driving motivations of the FFI Sustainable Production sub-program is to reduce the automotive industry's CO2 emissions from a life-cycle perspective, and this project is a next step towards that goal.
Planned approach and implementation
The work in the project is divided into 7 work packages (WPs):
• WP1 Project management and coordination
• WP2 Simulation of friction-dependent texture patterns
• WP3 Modelling and simulation of surface texturing methods
• WP4 Experimental testing of different surface texturing methods
• WP5 Development of a textured surface evaluation method
• WP6 Alternative surface texturing methods
• WP7 Demonstrators and dissemination
The structure is designed to facilitate synergies among the partners and an efficient implementation of the concepts, together with the necessary guidelines.
Participants
Peter Krajnik (contact)
Chalmers, Industrial and Materials Science, Materials and manufacture
Collaborations
Royal Institute of Technology (KTH)
Stockholm, Sweden
Funding
VINNOVA
Project ID: 2017-05540
Funding Chalmers participation during 2019–2021
More information
Latest update
2019-12-29 |
The mediawiki package
[Tags:bsd3, library, program]
A complete Haskell binding to the MediaWiki API
Properties
Versions 0.2, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 0.2.6
Dependencies base, HTTP, mime, network, pretty, utf8-string, xml [details]
License BSD3
Author Sigbjorn Finne <sof@forkIO.com>
Maintainer Sigbjorn Finne <sof@forkIO.com>
Category Web
Uploaded Tue Nov 18 17:40:33 UTC 2008 by SigbjornFinne
Distributions NixOS:0.2.6
Downloads 1330 total (19 in the last 30 days)
Votes 0
Status Docs uploaded by user
Build status unknown [no reports yet]
Hackage Matrix CI
Modules
[Index]
Flags
NameDescriptionDefaultType
new-base
Build with new smaller base library
DisabledAutomatic
Use -f <flag> to enable a flag, or -f -<flag> to disable that flag. More info
Downloads
Readme for mediawiki
Readme for mediawiki-0.2.1
= mediawiki - Accessing MediaWiki from Haskell =
'mediawiki' is a Haskell package providing a comprehensive binding to
the programmatic interface to MediaWiki (aka, 'the MediaWiki API') -
http://www.mediawiki.org/wiki/API
The binding is allegedly complete (as of 2008-11-17), letting you write
applications in Haskell that access and (if enabled by the target wiki)
manipulate content on MediaWiki pages.
= Getting started =
For some code samples showing you how to get started using this
API binding, have a look in the examples/ directory.
= Building and installing =
This package is provided in Cabal form, so the only thing you need
to do to get going is:
foo% runghc Setup configure
foo% runghc Setup build
foo% runghc Setup install
The package depends on a bunch of other packages, so you need
to have them built & installed as well. They are:
* HTTP: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/HTTP
* xml: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/xml
* mime: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/mime
* utf8-string: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/utf8-string
= Feedback / questions =
Please send them to sof@forkIO.com , and I'll try to respond to them
as best/quickly as possible. |
Solved
To IOStream gurus
Posted on 2001-09-13
13
531 Views
Last Modified: 2013-12-14
Hi,
I am trying to implement a logging system using iostreams. My question is: how can I add a date/time stamp on an iostream basis? What I want to achieve is the following:
cmylog << "Test" << std::endl;
And the output shall be
10:00 Test
(I will be able to configure time stamp format but it is beyond my question, do not worry about it)
Please advise
P.S. I will increase points for working solution.
0
Comment
Question by:proskig
13 Comments
LVL 8
Expert Comment
by:mnashadka
ID: 6479119
One way would be to store the data in a std::ostringstream, and when the stream insertion operator is called, you can see if the inserted data is the first data in the ostringstream (by checking ostringstream.str().empty()). Then, when you see std::endl come in, dump the ostringstream.str() to the file and clear it. This is the approach I took when I had a similar requirement. Hope this helps.
LVL 5
Author Comment
by:proskig
ID: 6479138
mnashadka: What's the correct way to recognize std::endl? I mean flushing the buffer? Another question is: which operator should I overload?
LVL 8
Expert Comment
by:mnashadka
ID: 6479203
Actually, probably an easier way would be to just have 2 stream insertion operators, like:
template<class T>
cmylog &operator<<(T &value)
{
    stream << value;
    return *this;
}
And then another one to dump the ostringstream to the file
cmylog &operator<<(std::ostream &(*end_of_line) (std::ostream &))
{
    file << time() << stream.str() << end_of_line;
    stream.str(""); // Clear the ostringstream
    return *this;
}
Of course, the stream variable is the ostringstream and file is the ofstream.
LVL 7
Expert Comment
by:peterchen092700
ID: 6479213
I once did something similar, providing my own class with this (pseudo-)code:
class mystream
{
ostream * os;
}
// default template to "redirect" all output to the ostream
template <class T>
mystream & operator << (mystream & s, T const & x)
{
s->os << x;
}
// specific overload for endl:
mystream & operator << (mystream & s, std::endl const & x)
{
s->os << x;
s->os << ... // output time for new line
}
Peter
Note 1: I found the streams extremely slow for this purpose. Since we had to log *tons* of data, I finally rewrote the thing to use printf-style calls (which is less fun, but more readable and faster).
Note 2: I'm not quite sure about the const & - just tell me if there are problems.
LVL 8
Expert Comment
by:mnashadka
ID: 6479262
peterchen, what happens if you don't have a new line for a long time? This is assuming that the next log statement will be coming within seconds. Just curious.
LVL 5
Author Comment
by:proskig
ID: 6479604
I do have problems with specialization for std::endl
class mystream
{
std::ostream *os;
};
// default template to "redirect" all output to the ostream
template <class T>
mystream & operator << (mystream & s, T const & x)
{
*(s.os) << x;
}
// specific overload for endl:
mystream & operator << (mystream & s, std::endl & x)
{
*(s.os) << x;
}
Produces error
error C2321: syntax error : unexpected 'endl'
for MSVC
LVL 7
Expert Comment
by:peterchen092700
ID: 6480172
mnashadka: you're right - in this case, you need to use a flag
class mystream
{
    ostream * os;
    bool needWriteTime; // init to true in CTor
};
// default template to "redirect" all output to the ostream
template <class T>
mystream & operator << (mystream & s, T const & x)
{
    if (s.needWriteTime) {
        *(s.os) << "10:00 "; // ;)
        s.needWriteTime = false;
    }
    *(s.os) << x;
    return s;
}
// specific overload for endl:
mystream & operator << (mystream & s, std::endl const & x)
{
    *(s.os) << x;
    s.needWriteTime = true;
    return s;
}
proskig: I'll look into it, I'm myself not sure about the syntax...
LVL 8
Expert Comment
by:mnashadka
ID: 6480222
cmylog &cmylog::operator<<(std::ostream &(*end_of_line) (std::ostream &))
is the syntax for endl
LVL 8
Expert Comment
by:mnashadka
ID: 6480238
I still think it's better to buffer the output, though. If you don't and your application is (or becomes) multithreaded, your log messages will be interspersed and you won't be able to tell them apart (I know this from experience). But that's just my 2 cents.
LVL 7
Expert Comment
by:peterchen092700
ID: 6480300
Hmmm... found a solution, but... see below...
typedef ostream & (tEndl)(ostream & os);
mystream & operator << (mystream & s, tEndl x)
{
    tEndl * endlfunc = endl;
    x(*s.os); // s.os << endl;
    if (x == endlfunc) // ***
        (*s.os) << "10:40 ";
    return s;
}
I initially thought the endl manip was declared as a class - but no, it's a function.
The point I find a bit awkward in an STL environment is that at ***, we actually have to compare two function pointers to see if the manipulator passed is actually "endl", or something else (like flush). Although I see no *technical* problem with this.
P.S. next comment: working sample (console app)
LVL 7
Accepted Solution
by:
peterchen092700 earned 200 total points
ID: 6480303
#include "stdafx.h"
#include <ios>
#include <iostream>
using namespace std;
class mystream
{
public:
ostream * os;
bool needtime;
mystream() { needtime = true; }
};
static int time; // instead of time....
// default template to "redirect" all output to the ostream
template <class T>
mystream & operator << (mystream & s, T const & x)
{
if (s.needtime) {
++time;
(*s.os) << time << " ";
s.needtime = false;
}
(*s.os) << x;
return s;
}
int f(int);
int f(int,int);
// specific overload for endl:
//typedef std::basic_ostream<char,std::char_traits<char> > & (tEndl)(std::basic_ostream<char, std::char_traits<char> > & os);
typedef ostream & (tEndl)(ostream & os);
mystream & operator << (mystream & s, tEndl x)
{
x(*s.os); // s.os << x;
tEndl * endlfunc = endl/*<char, std::char_traits<char> >*/;
if ( x == endl)
s.needtime = true;
return s;
}
int main(int argc, char* argv[])
{
mystream s;
s.os = &cout;
s << "Hello World " << endl;
s << "Yeah yeah yeah " << flush << "It works " << endl; // flush: to test another manip
return 0;
}
// it's not a clean example... but shows the principles
LVL 5
Author Comment
by:proskig
ID: 6480420
Cool Thanks!
LVL 5
Author Comment
by:proskig
ID: 6480431
Cool thanks.
|
You can add source control system functionality to Dreamweaver by writing a GetNewFeatures handler that returns a set of menu items and corresponding C functions. For example, if you write a Sourcesafe library and want to let Dreamweaver users see the history of a file, you can write a GetNewFeatures handler that returns the History menu item and the C function name of history. Then, in Windows, when the user right-clicks a file, the History menu item is one of the items on the menu. If a user selects the History menu item, Dreamweaver calls the corresponding function, passing the selected files to the DLL. The DLL displays the History dialog box so the user can interact with it in the same way as Sourcesafe.
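As a rough illustration only — the identifiers, the 32-character menu-label width, and the signature below are assumptions for this sketch, not the actual Dreamweaver Source Control API — a GetNewFeatures-style handler fills parallel arrays pairing menu labels with the C functions to call:

```cpp
#include <cstring>

// Hypothetical feature-function type: receives the files the user selected.
typedef int (*ScFeatureFn)(const char **paths, int numPaths);

// Would open the History dialog in a real source-control DLL.
static int history(const char **paths, int numPaths) {
    (void)paths;
    return numPaths > 0;  // succeed only if files were selected
}

// Registers extra menu items: one label per feature, plus the function
// invoked when the user picks that item. Returns the feature count.
int GetNewFeatures(char menuItems[][32], ScFeatureFn fns[], int maxFeatures) {
    if (maxFeatures < 1) return 0;
    std::strcpy(menuItems[0], "History");
    fns[0] = history;
    return 1;
}
```

The host application would call GetNewFeatures once at startup, add the returned labels to its context menu, and dispatch selected files to the matching function pointer.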
|
Intel CPU bug, how does this affect us?
Hime
New Member
It's probably feasible for a user to retrieve data from memory within the VPS container from other users.
KH-Jonathan
CTO
Staff member
Does this vulnerability impact both Linux and the virtualisation software?
It impacts basically everything.
https://spectreattack.com/ has a very good Q/A section.
We're monitoring patches that are being worked on and that have been released thus far. We use the live kernel patching system KernelCare on most of our infrastructure and are eagerly awaiting patches from them, and more information in general.
As of now I've not seen any information about attacks happening in the wild, and hopefully things will stay that way for a while, since this was discovered by a white-hat group with the details shrouded in secrecy.
|
Perfect Code: Error Handling in Libraries
Programmers use third-party libraries in their programs every day, such as HTTP clients or parsers. Besides performing their main functions, all these libraries also handle the errors that arise along the way. And the more side effects a library has (network interaction, file access), the more error-handling code it contains, and the more complex that code is.
In this article we will look at the principles behind error handling inside libraries. This will help you distinguish good libraries from bad ones. You will be able to build better interactions with them, and even design libraries of your own.
Before we begin, let's settle the terminology. Unlike a program, a library cannot be used directly, for example from a terminal. A library is code in a particular language that is called by other code in the same language. We say that a library has a client. The client is whoever uses the library:
// An HTTP client for JS; we will use this library as our example
import axios from 'axios';
// From axios's point of view, this file contains client code,
// that is, code that uses axios.
const runProgram = async () => {
  const url = 'https://ru.hexlet.io';
  // The library call happens in the client code
  const response = await axios.get(url);
  console.log(response.body);
}
The code located inside the library, in turn, is called library code. This distinction matters, because each of these parts has its own area of responsibility.
Libraries themselves are often implemented as a function, a set of functions, a class, or a set of classes. Error handling does not differ between these cases, so for simplicity all the examples below are built on functions.
About errors
What should be considered an error, and what should not? Imagine a function that searches for a character inside a string and does not find it. Is that an error?
// This function searches for a character in a string and returns its index
// This call will find nothing
'lala'.indexOf('j'); // -1
Such behavior is normal for this function. If the value is absent, everything is still fine: the function did its job anyway and returned something meaningful.
What about the HTTP request in the example above? How should the function axios.get behave when it cannot load the requested page? From the function's point of view, this situation is not normal. If a function cannot fulfil its main purpose, that is an error. These are exactly the errors we will talk about. Below are concrete examples of what to do and what to avoid when using and designing libraries.
Terminating the process
Every programming language provides a way to prematurely stop the operating-system process in which the code runs. This is usually done with a function whose name contains the word exit. Calling this function stops the program as a whole.
if (/* something went wrong */) {
  process.exit();
}
There is exactly one reason why such code is unacceptable in any library: what is a fatal error for a particular library is not necessarily a fatal error for the whole program. In the worst case, the program would warn the user about the failed attempt to load the site and ask them to try again. Such behavior would be impossible to implement with a library that halts the execution of the entire program.
Simply put, a library must not decide for the program when it should terminate. That is not its area of responsibility. The library's job is to notify the client code about the error; what happens next is not its concern. Notification can be done with exceptions:
import axios from 'axios';
const runProgram = async () => {
  const url = 'https://ru.hexlet.io';
  try {
    const response = await axios.get(url);
    // Do something useful with response
    console.log(response.body);
  } catch (e) {
    // For debugging, it is good to print the full information
    console.error(e.message);
    // exit must be called at the program level,
    // because it is important to set the right exit code (non-zero);
    // only then can the outside world tell that there was an error
    process.exit(1);
  }
}
Self-check question: is it possible to write tests for a library that stops the process?
Faking success
Sometimes a developer tries to hide all, or almost all, errors from the client code. In that case the code catches every possible error (exception) and always returns some result. Below is a hypothetical example of how the function axios.get would look in that case:
// Heavily simplified code inside axios.get
const get = (url) => {
  // In reality there is complex asynchronous code here performing the HTTP request.
  // Let's skip it and look at the place where the error occurs.
  // Below is a simplified example of handling it.
  if (error) {
    // The general idea: this code returns some result
    // that is hard or impossible to distinguish from a successfully completed request
    const response = { body: null };
    return response;
  }
}
The main problem with this solution: it hides problems and makes it impossible, or nearly impossible, to catch the error outside, in the client code:
import axios from 'axios';
const runProgram = async () => {
  const url = 'https://ru.hexlet.io';
  // How do we find out that an error happened here and warn the user?
  const response = await axios.get(url);
  console.log(response.body);
}
The right solution is to use exceptions.
Suppressing errors
This approach is very similar to the previous one. Code that suppresses an error looks roughly like this:
// Heavily simplified code inside axios.get
const get = (url) => {
  if (error) {
    console.error(`Something was wrong during http request: ${error}`);
    return null;
  }
}
What is going on here? The developer prints an error message to the console and returns, for example, null. Such code appears when a programmer has not yet fully grasped what a library is. The main question this code should raise: how will the client code learn about the error? The answer here is: it won't. A library like this cannot be used correctly. Imagine if axios.get behaved that way:
import axios from 'axios';
const runProgram = async () => {
  const url = 'https://ru.hexlet.io';
  // On error, something is printed to the console.
  // What if there is no console at all?
  // And even if there is, how do we handle the error?
  const response = await axios.get(url);
  console.log(response.body);
}
Sometimes the situation is even worse. Inside the library, code is used that throws exceptions, which is correct, but the programmer catches and suppresses them.
import { promises as fs } from 'fs';
// The client of this function (library) will never learn that an error occurred
const getData = async (filepath) => {
  try {
    const json = await fs.readFile(filepath);
    return JSON.parse(json);
  } catch (e) {
    console.log(e.message);
    // Lots of variants here: returning {}, null, '' and so on
    return {};
  }
}
The right solution: throw exceptions and do not suppress them.
Return codes
By itself this is not a mistake, but in many situations return codes are undesirable. On why exceptions are preferable in most error situations, see the excellent article on Habr.
Self-check question: how should a validation function behave when it finds errors: should it throw an exception, or return the errors, for example as an array?
Exceptions
As a rule, this is the most adequate way to work with errors in most languages. (There are, however, languages with entirely different schemes.) If an error is fatal, either it already is an exception, or an exception should be thrown:
if (error) {
  throw new Error(/* the more useful information here, the better */);
}
[Источник](https://ru.hexlet.io/blog/posts/sovershennyy-kod-obrabotka-oshibok-v-bibliotekah)
|
by Sai gowtham
How to find the sum of numbers in an array javascript
In this tutorial we are going to learn two ways to find the sum of an array of numbers by using JavaScript.
First way: using a for...of loop.
let arr = [1,2,3,4,5];
let sum = 0;
for (let num of arr){
sum = sum + num
}
console.log(sum) // 15
In the above code, we used a for...of loop to get each number directly instead of its index.
On each iteration, we add the current number to the sum variable.
Array.reduce() method
JavaScript has a built-in reduce method that helps calculate the sum of the numbers in an array.
let arr = [1,2,3,4,5];
const sum = arr.reduce((result,number)=> result+number);
console.log(sum) // 15
In the above code, on each iteration the reducer function stores the running sum in the result parameter and finally returns the result value.
The reduce() method takes a callback function as its first argument and runs that callback on each element present in the array.
The callback function takes two parameters: the accumulated result and the current element.
|
add error handling section
This commit is contained in:
LordMZTE 2022-10-25 14:56:34 +02:00
parent 2519235497
commit 8305a761a4
Signed by: LordMZTE
GPG Key ID: B64802DC33A64FF6
1 changed files with 4 additions and 0 deletions
@ -33,6 +33,10 @@ title = "Why python is a bad language"
- **Very bad** performance.
- Python is hard to package. Of course tools exist that can do it, but they are slow and large as they always include the interpreter as opposed to compiling the code or using some sort of faster intermediate language. Packaged python also includes the source code, which may be undesirable.
{{ sec_header(name="Error Handling") }}
- It has exceptions! The second most common mistake in OOP languages after `null`! Exceptions make error handling inherently unsafe, as there is no knowing when an exception might come flying at you!
- `ExceptionGroup`s (introduced in 3.11, preview version at the time of writing) make this mess of exceptions even more intertwined. Error spaghetti, anyone?
{{ sec_header(name="Dynamic Typing") }}
- Passing an invalid type into a function may cause unpredictable behaviour. Manual type checks are annoying, and type hints are still just hints.
- It is often unclear what type a function is expecting, thus it can be hard to know how to call it, especially if it is undocumented. |
Int J Biochem Cell Biol. 2003 Sep;35(9):1301-5.
Osteoblasts: novel roles in orchestration of skeletal architecture.
Author information
1
School of Veterinary Science, University of Melbourne, Parkville, Victoria 3010, Australia. ejmackie@unimelb.edu.au
Abstract
Osteoblasts are located on bone surfaces and are the cells responsible for bone formation through secretion of the organic components of bone matrix. Osteoblasts are derived from mesenchymal osteoprogenitor cells found in bone marrow and periosteum. Following a period of secretory activity, osteoblasts undergo either apoptosis or terminal differentiation to form osteocytes surrounded by bone matrix. Osteoblasts secrete a characteristic mixture of extracellular matrix proteins including type I collagen as the major component as well as proteoglycans, glycoproteins and gamma-carboxylated proteins. Cells of the osteoblast lineage also provide factors essential for differentiation of osteoclasts (bone-resorbing cells). By regulating osteoclast differentiation and activity in response to systemic influences, osteoblasts not only play a central role in regulation of skeletal architecture, but also in calcium homeostasis. Inadequate osteoblastic bone formation in relation to osteoclastic resorption results in osteoporosis, a disease characterised by enhanced skeletal fragility. Cellfacts: Osteoblasts are the cells responsible for bone formation. Osteoblasts indirectly control levels of bone resorption. Osteoblasts play a key role in the pathophysiology of osteoporosis and the resulting fractures, which constitute a major public health burden in developed countries.
PMID:
12798343
[Indexed for MEDLINE]
|
Deconvolution/1D
From Rosetta Code
Task
Deconvolution/1D
You are encouraged to solve this task according to the task description, using any language you may know.
The convolution of two functions F and H of an integer variable is defined as the function G satisfying
G(n) = \sum_{m=-\infty}^{\infty} F(m) H(n-m)
for all integers n. Assume F(n) can be non-zero only for 0 ≤ n < | F | , where | F | is the "length" of F, and similarly for G and H, so that the functions can be modeled as finite sequences by identifying f_0, f_1, f_2, \dots with F(0), F(1), F(2), \dots, etc. Then for example, values of | F | = 6 and | H | = 5 would determine the following value of g by definition.
\begin{array}{lllllllllll}
g_0 &= &f_0h_0\\
g_1 &= &f_1h_0 &+ &f_0h_1\\
g_2 &= &f_2h_0 &+ &f_1h_1 &+ &f_0h_2\\
g_3 &= &f_3h_0 &+ &f_2h_1 &+ &f_1h_2 &+ &f_0h_3\\
g_4 &= &f_4h_0 &+ &f_3h_1 &+ &f_2h_2 &+ &f_1h_3 &+ &f_0h_4\\
g_5 &= &f_5h_0 &+ &f_4h_1 &+ &f_3h_2 &+ &f_2h_3 &+ &f_1h_4\\
g_6 &= & & &f_5h_1 &+ &f_4h_2 &+ &f_3h_3 &+ &f_2h_4\\
g_7 &= & & & & &f_5h_2 &+ &f_4h_3 &+ &f_3h_4\\
g_8 &= & & & & & & &f_5h_3 &+ &f_4h_4\\
g_9 &= & & & & & & & & &f_5h_4
\end{array}
We can write this in matrix form as:
\left(
\begin{array}{l}
g_0 \\
g_1 \\
g_2 \\
g_3 \\
g_4 \\
g_5 \\
g_6 \\
g_7 \\
g_8 \\
g_9 \\
\end{array}
\right) = \left(
\begin{array}{lllll}
f_0\\
f_1 & f_0\\
f_2 & f_1 & f_0\\
f_3 & f_2 & f_1 & f_0\\
f_4 & f_3 & f_2 & f_1 & f_0\\
f_5 & f_4 & f_3 & f_2 & f_1\\
& f_5 & f_4 & f_3 & f_2\\
& & f_5 & f_4 & f_3\\
& & & f_5 & f_4\\
& & & & f_5
\end{array}
\right) \; \left(
\begin{array}{l}
h_0 \\
h_1 \\
h_2 \\
h_3 \\
h_4 \\
\end{array} \right)
or
g = A \; h
For this task, implement a function (or method, procedure, subroutine, etc.) deconv to perform deconvolution (i.e., the inverse of convolution) by constructing and solving such a system of equations represented by the above matrix A for h given f and g.
• The function should work for G of arbitrary length (i.e., not hard coded or constant) and F of any length up to that of G. Note that | H | will be given by | G | − | F | + 1.
• There may be more equations than unknowns. If convenient, use a function from a library that finds the best fitting solution to an overdetermined system of linear equations (as in the Multiple regression task). Otherwise, prune the set of equations as needed and solve as in the Reduced row echelon form task.
• Test your solution on the following data. Be sure to verify both that deconv(g,f) = h and deconv(g,h) = f and display the results in a human readable form.
h = [-8,-9,-3,-1,-6,7]
f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1]
g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7]
[edit] BBC BASIC
Like several others, this is a translation of the D solution.
*FLOAT 64
DIM h(5), f(15), g(20)
h() = -8,-9,-3,-1,-6,7
f() = -3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1
g() = 24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7
PROCdeconv(g(), f(), x())
PRINT "deconv(g,f) = " FNprintarray(x())
x() -= h() : IF SUM(x()) <> 0 PRINT "Error!"
PROCdeconv(g(), h(), y())
PRINT "deconv(g,h) = " FNprintarray(y())
y() -= f() : IF SUM(y()) <> 0 PRINT "Error!"
END
DEF PROCdeconv(g(), f(), RETURN h())
LOCAL f%, g%, i%, l%, n%
f% = DIM(f(),1) + 1
g% = DIM(g(),1) + 1
DIM h(g% - f%)
FOR n% = 0 TO g% - f%
h(n%) = g(n%)
IF n% < f% THEN l% = 0 ELSE l% = n% - f% + 1
IF n% THEN
FOR i% = l% TO n% - 1
h(n%) -= h(i%) * f(n% - i%)
NEXT
ENDIF
h(n%) /= f(0)
NEXT n%
ENDPROC
DEF FNprintarray(a())
LOCAL i%, a$
FOR i% = 0 TO DIM(a(),1)
a$ += STR$(a(i%)) + ", "
NEXT
= LEFT$(LEFT$(a$))
Output:
deconv(g,f) = -8, -9, -3, -1, -6, 7
deconv(g,h) = -3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1
[edit] C
Using FFT:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <complex.h>
double PI;
typedef double complex cplx;
void _fft(cplx buf[], cplx out[], int n, int step)
{
if (step < n) {
_fft(out, buf, n, step * 2);
_fft(out + step, buf + step, n, step * 2);
for (int i = 0; i < n; i += 2 * step) {
cplx t = cexp(-I * PI * i / n) * out[i + step];
buf[i / 2] = out[i] + t;
buf[(i + n)/2] = out[i] - t;
}
}
}
void fft(cplx buf[], int n)
{
cplx out[n];
for (int i = 0; i < n; i++) out[i] = buf[i];
_fft(buf, out, n, 1);
}
/* pad array length to power of two */
cplx *pad_two(double g[], int len, int *ns)
{
int n = 1;
if (*ns) n = *ns;
else while (n < len) n *= 2;
cplx *buf = calloc(sizeof(cplx), n);
for (int i = 0; i < len; i++) buf[i] = g[i];
*ns = n;
return buf;
}
void deconv(double g[], int lg, double f[], int lf, double out[]) {
int ns = 0;
cplx *g2 = pad_two(g, lg, &ns);
cplx *f2 = pad_two(f, lf, &ns);
fft(g2, ns);
fft(f2, ns);
cplx h[ns];
for (int i = 0; i < ns; i++) h[i] = g2[i] / f2[i];
fft(h, ns);
for (int i = 0; i >= lf - lg; i--)
out[-i] = h[(i + ns) % ns] / ns; /* scale by the FFT length */
free(g2);
free(f2);
}
int main()
{
PI = atan2(1,1) * 4;
double g[] = {24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7};
double f[] = { -3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1 };
double h[] = { -8,-9,-3,-1,-6,7 };
int lg = sizeof(g)/sizeof(double);
int lf = sizeof(f)/sizeof(double);
int lh = sizeof(h)/sizeof(double);
double h2[lh];
double f2[lf];
printf("f[] data is : ");
for (int i = 0; i < lf; i++) printf(" %g", f[i]);
printf("\n");
printf("deconv(g, h): ");
deconv(g, lg, h, lh, f2);
for (int i = 0; i < lf; i++) printf(" %g", f2[i]);
printf("\n");
printf("h[] data is : ");
for (int i = 0; i < lh; i++) printf(" %g", h[i]);
printf("\n");
printf("deconv(g, f): ");
deconv(g, lg, f, lf, h2);
for (int i = 0; i < lh; i++) printf(" %g", h2[i]);
printf("\n");
}
Output
f[] data is : -3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
deconv(g, h): -3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
h[] data is : -8 -9 -3 -1 -6 7
deconv(g, f): -8 -9 -3 -1 -6 7
[edit] Common Lisp
Uses the routine (lsqr A b) from Multiple regression and (mtp A) from Matrix transposition.
;; Assemble the mxn matrix A from the 2D row vector x.
(defun make-conv-matrix (x m n)
(let ((lx (cadr (array-dimensions x)))
(A (make-array `(,m ,n) :initial-element 0)))
(loop for j from 0 to (- n 1) do
(loop for i from 0 to (- m 1) do
(setf (aref A i j)
(cond ((or (< i j) (>= i (+ j lx)))
0)
((and (>= i j) (< i (+ j lx)))
(aref x 0 (- i j)))))))
A))
;; Solve the overdetermined system A(f)*h=g by linear least squares.
(defun deconv (g f)
(let* ((lg (cadr (array-dimensions g)))
(lf (cadr (array-dimensions f)))
(lh (+ (- lg lf) 1))
(A (make-conv-matrix f lg lh)))
(lsqr A (mtp g))))
Example:
(setf f #2A((-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1)))
(setf h #2A((-8 -9 -3 -1 -6 7)))
(setf g #2A((24 75 71 -34 3 22 -45 23 245 25 52 25 -67 -96 96 31 55 36 29 -43 -7)))
(deconv g f)
#2A((-8.0)
(-9.000000000000002)
(-2.999999999999999)
(-0.9999999999999997)
(-6.0)
(7.000000000000002))
(deconv g h)
#2A((-2.999999999999999)
(-6.000000000000001)
(-1.0000000000000002)
(8.0)
(-5.999999999999999)
(3.0000000000000004)
(-1.0000000000000004)
(-9.000000000000002)
(-9.0)
(2.9999999999999996)
(-1.9999999999999991)
(5.0)
(1.9999999999999996)
(-2.0000000000000004)
(-7.000000000000001)
(-0.9999999999999994))
[edit] D
T[] deconv(T)(in T[] g, in T[] f) pure nothrow {
int flen = f.length;
int glen = g.length;
auto result = new T[glen - flen + 1];
foreach (int n, ref e; result) {
e = g[n];
immutable lowerBound = (n >= flen) ? n - flen + 1 : 0;
foreach (i; lowerBound .. n)
e -= result[i] * f[n - i];
e /= f[0];
}
return result;
}
void main() {
import std.stdio;
immutable h = [-8,-9,-3,-1,-6,7];
immutable f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1];
immutable g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,
-96,96,31,55,36,29,-43,-7];
writeln(deconv(g, f) == h, " ", deconv(g, f));
writeln(deconv(g, h) == f, " ", deconv(g, h));
}
Output:
true [-8, -9, -3, -1, -6, 7]
true [-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1]
[edit] Fortran
This solution uses the LAPACK95 library.
! Build
! Windows: ifort /I "%IFORT_COMPILER11%\mkl\include\ia32" deconv1d.f90 "%IFORT_COMPILER11%\mkl\ia32\lib\*.lib"
! Linux:
program deconv
! Use gelsd from LAPACK95.
use mkl95_lapack, only : gelsd
implicit none
real(8), allocatable :: g(:), href(:), A(:,:), f(:)
real(8), pointer :: h(:), r(:)
integer :: N
character(len=16) :: cbuff
integer :: i
intrinsic :: nint
! Allocate data arrays
allocate(g(21),f(16))
g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7]
f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1]
! Calculate deconvolution
h => deco(f, g)
! Check result against reference
N = size(h)
allocate(href(N))
href = [-8,-9,-3,-1,-6,7]
cbuff = ' '
write(cbuff,'(a,i0,a)') '(a,',N,'(i0,a),i0)'
if (any(abs(h-href) > 1.0d-4)) then
write(*,'(a)') 'deconv(f, g) - FAILED'
else
write(*,cbuff) 'deconv(f, g) = ',(nint(h(i)),', ',i=1,N-1),nint(h(N))
end if
! Calculate deconvolution
r => deco(h, g)
cbuff = ' '
N = size(r)
write(cbuff,'(a,i0,a)') '(a,',N,'(i0,a),i0)'
if (any(abs(r-f) > 1.0d-4)) then
write(*,'(a)') 'deconv(h, g) - FAILED'
else
write(*,cbuff) 'deconv(h, g) = ',(nint(r(i)),', ',i=1,N-1),nint(r(N))
end if
contains
function deco(p, q)
real(8), pointer :: deco(:)
real(8), intent(in) :: p(:), q(:)
real(8), allocatable, target :: r(:)
real(8), allocatable :: A(:,:)
integer :: N
! Construct derived arrays
N = size(q) - size(p) + 1
allocate(A(size(q),N),r(size(q)))
A = 0.0d0
do i=1,N
A(i:i+size(p)-1,i) = p
end do
! Invoke the LAPACK routine to do the work
r = q
call gelsd(A, r)
deco => r(1:N)
end function deco
end program deconv
Results:
deconv(f, g) = -8, -9, -3, -1, -6, 7
deconv(h, g) = -3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1
[edit] Go
Translation of: D
package main
import "fmt"
func main() {
h := []float64{-8, -9, -3, -1, -6, 7}
f := []float64{-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1}
g := []float64{24, 75, 71, -34, 3, 22, -45, 23, 245, 25, 52, 25, -67, -96,
96, 31, 55, 36, 29, -43, -7}
fmt.Println(h)
fmt.Println(deconv(g, f))
fmt.Println(f)
fmt.Println(deconv(g, h))
}
func deconv(g, f []float64) []float64 {
h := make([]float64, len(g)-len(f)+1)
for n := range h {
h[n] = g[n]
var lower int
if n >= len(f) {
lower = n - len(f) + 1
}
for i := lower; i < n; i++ {
h[n] -= h[i] * f[n-i]
}
h[n] /= f[0]
}
return h
}
Output:
[-8 -9 -3 -1 -6 7]
[-8 -9 -3 -1 -6 7]
[-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1]
[-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1]
Translation of: C
package main
import (
"fmt"
"math"
"math/cmplx"
)
func main() {
h := []float64{-8, -9, -3, -1, -6, 7}
f := []float64{-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1}
g := []float64{24, 75, 71, -34, 3, 22, -45, 23, 245, 25, 52, 25, -67, -96,
96, 31, 55, 36, 29, -43, -7}
fmt.Printf("%.1f\n", h)
fmt.Printf("%.1f\n", deconv(g, f))
fmt.Printf("%.1f\n", f)
fmt.Printf("%.1f\n", deconv(g, h))
}
func deconv(g, f []float64) []float64 {
n := 1
for n < len(g) {
n *= 2
}
g2 := make([]complex128, n)
for i, x := range g {
g2[i] = complex(x, 0)
}
f2 := make([]complex128, n)
for i, x := range f {
f2[i] = complex(x, 0)
}
gt := fft(g2)
ft := fft(f2)
for i := range gt {
gt[i] /= ft[i]
}
ht := fft(gt)
it := 1 / float64(n)
out := make([]float64, len(g)-len(f)+1)
out[0] = real(ht[0]) * it
for i := 1; i < len(out); i++ {
out[i] = real(ht[n-i]) * it
}
return out
}
func fft(in []complex128) []complex128 {
out := make([]complex128, len(in))
ditfft2(in, out, len(in), 1)
return out
}
func ditfft2(x, y []complex128, n, s int) {
if n == 1 {
y[0] = x[0]
return
}
ditfft2(x, y, n/2, 2*s)
ditfft2(x[s:], y[n/2:], n/2, 2*s)
for k := 0; k < n/2; k++ {
tf := cmplx.Rect(1, -2*math.Pi*float64(k)/float64(n)) * y[k+n/2]
y[k], y[k+n/2] = y[k]+tf, y[k]-tf
}
}
Output:
Some results have errors out in the last decimal place or so. Only one decimal place shown here to let results fit in 80 columns.
[-8.0 -9.0 -3.0 -1.0 -6.0 7.0]
[-8.0 -9.0 -3.0 -1.0 -6.0 7.0]
[-3.0 -6.0 -1.0 8.0 -6.0 3.0 -1.0 -9.0 -9.0 3.0 -2.0 5.0 2.0 -2.0 -7.0 -1.0]
[-3.0 -6.0 -1.0 8.0 -6.0 3.0 -1.0 -9.0 -9.0 3.0 -2.0 5.0 2.0 -2.0 -7.0 -1.0]
Haskell
import Data.List
h, f, g :: [Double]
h = [-8,-9,-3,-1,-6,7]
f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1]
g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7]
scale x ys = map (x*) ys
deconv1d :: (Fractional a) => [a] -> [a] -> [a]
deconv1d xs ys = takeWhile (/=0) $ deconv xs ys
where [] `deconv` _ = []
(0:xs) `deconv` (0:ys) = xs `deconv` ys
(x:xs) `deconv` (y:ys) =
q : zipWith (-) xs (scale q ys ++ repeat 0) `deconv` (y:ys)
where q = x / y
Check:
*Main> h == deconv1d g f
True
*Main> f == deconv1d g h
True
J
This solution borrowed from Formal power series:
Ai=: (i.@] =/ i.@[ -/ i.@>:@-)&#
divide=: [ +/ .*~ [:%.&.x: ] +/ .* Ai
Sample data:
h=: _8 _9 _3 _1 _6 7
f=: _3 _6 _1 8 _6 3 _1 _9 _9 3 _2 5 2 _2 _7 _1
g=: 24 75 71 _34 3 22 _45 23 245 25 52 25 _67 _96 96 31 55 36 29 _43 _7
Example use:
g divide f
_8 _9 _3 _1 _6 7
g divide h
_3 _6 _1 8 _6 3 _1 _9 _9 3 _2 5 2 _2 _7 _1
That said, note that this particular implementation is slow since it uses extended precision intermediate results. It will run quite a bit faster for this example with no notable loss of precision if floating point is used. In other words:
divide=: [ +/ .*~ [:%. ] +/ .* Ai
Java
This example is untested. Please check that it's correct, debug it as necessary, and remove this message.
Translation of: Go
import java.util.Arrays;
public class Deconvolution1D {
public static double[] deconv(double[] f, double[] g) {
double[] h = new double[g.length - f.length + 1];
for (int n = 0; n < h.length; n++) {
h[n] = g[n];
int lower = Math.max(n - f.length + 1, 0);
for (int i = lower; i < n; i++)
h[n] -= h[i] * f[n-i];
h[n] /= f[0];
}
return h;
}
public static void main(String[] args) {
double[] h = {-8, -9, -3, -1, -6, 7};
double[] f = {-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1};
double[] g = {24, 75, 71, -34, 3, 22, -45, 23, 245, 25, 52, 25, -67, -96,
96, 31, 55, 36, 29, -43, -7};
System.out.println(Arrays.toString(h));
System.out.println(Arrays.toString(deconv(g, f)));
System.out.println(Arrays.toString(f));
System.out.println(Arrays.toString(deconv(g, h)));
}
}
Output:
[-8.0, -9.0, -3.0, -1.0, -6.0, 7.0]
[-8.0, -9.0, -3.0, -1.0, -6.0, 7.0]
[-3.0, -6.0, -1.0, 8.0, -6.0, 3.0, -1.0, -9.0, -9.0, 3.0, -2.0, 5.0, 2.0, -2.0, -7.0, -1.0]
[-3.0, -6.0, -1.0, 8.0, -6.0, 3.0, -1.0, -9.0, -9.0, 3.0, -2.0, 5.0, 2.0, -2.0, -7.0, -1.0]
Lua
Using metatables:
function deconvolve(f, g)
local h = setmetatable({}, {__index = function(self, n)
if n == 1 then self[1] = g[1] / f[1]
else
self[n] = g[n]
for i = 1, n - 1 do
self[n] = self[n] - self[i] * (f[n - i + 1] or 0)
end
self[n] = self[n] / f[1]
end
return self[n]
end})
local _ = h[#g - #f + 1]
return setmetatable(h, nil)
end
Tests:
local f = {-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1}
local g = {24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7}
local h = {-8,-9,-3,-1,-6,7}
print(unpack(deconvolve(f, g))) --> -8 -9 -3 -1 -6 7
print(unpack(deconvolve(h, g))) --> -3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
Mathematica
This function creates a sparse array for the A matrix and then solves it with a built-in function. It may fail for overdetermined systems, though. Fast approximate methods for deconvolution are also built into Mathematica. See Deconvolution/2D+
deconv[f_List, g_List] :=
Module[{A =
SparseArray[
Table[Band[{n, 1}] -> f[[n]], {n, 1, Length[f]}], {Length[g], Length[f] - 1}]},
Take[LinearSolve[A, g], Length[g] - Length[f] + 1]]
Usage:
f = {-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1};
g = {24, 75, 71, -34, 3, 22, -45, 23, 245, 25, 52, 25, -67, -96, 96, 31, 55, 36, 29, -43, -7};
deconv[f,g]
Gives the output:
{-8, -9, -3, -1, -6, 7}
MATLAB
The deconvolution function is built into MATLAB as "deconv(a,b)", where "a" is a vector holding the convolved values and "b" is one of the two vectors that were convolved to produce "a". To check that this operates according to the task spec, we can test the criteria above:
>> h = [-8,-9,-3,-1,-6,7];
>> g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7];
>> f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1];
>> deconv(g,f)
ans =
-8.0000 -9.0000 -3.0000 -1.0000 -6.0000 7.0000
>> deconv(g,h)
ans =
-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
Therefore, "deconv(a,b)" behaves as expected.
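For readers without MATLAB, the same check can be run anywhere a JavaScript console is available. This is a sketch of the direct polynomial long-division method used by the Go and Java versions above, not part of the original page:

```javascript
// Direct deconvolution: solve g = conv(f, h) for h, term by term.
// h[n] = (g[n] - sum of already-known h[i] * f[n-i]) / f[0]
function deconv(g, f) {
  const h = new Array(g.length - f.length + 1);
  for (let n = 0; n < h.length; n++) {
    h[n] = g[n];
    const lower = Math.max(n - f.length + 1, 0);
    for (let i = lower; i < n; i++) h[n] -= h[i] * f[n - i];
    h[n] /= f[0];
  }
  return h;
}

const h = [-8, -9, -3, -1, -6, 7];
const f = [-3, -6, -1, 8, -6, 3, -1, -9, -9, 3, -2, 5, 2, -2, -7, -1];
const g = [24, 75, 71, -34, 3, 22, -45, 23, 245, 25, 52, 25, -67, -96,
           96, 31, 55, 36, 29, -43, -7];
console.log(deconv(g, f)); // recovers h: [-8, -9, -3, -1, -6, 7]
console.log(deconv(g, h)); // recovers f
```

With this task's integer data every intermediate value is exactly representable, so the results come out as exact integers.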
Perl 6
Works with: Rakudo 2010.12
Translation of Python, using a modified version of the Reduced Row Echelon Form subroutine rref() from here.
sub deconvolve (@g, @f) {
my $h = 1 + @g - @f;
my @m;
@m[^@g]>>.[^$h] >>+=>> 0;
@m[^@g]>>.[$h] >>=<< @g;
for ^$h -> $j { for @f.kv -> $k, $v { @m[$j + $k][$j] = $v } }
return rref( @m )[^$h]>>.[$h];
}
sub convolve (@f, @h) {
my @g = 0 xx + @f + @h - 1;
@g[^@f X+ ^@h] >>+=<< (@f X* @h);
return @g;
}
# Reduced Row Echelon Form simultaneous equation solver.
# Can handle over-specified systems of equations.
# (n unknowns in n + m equations)
sub rref ($m is rw) {
return unless $m;
my ($lead, $rows, $cols) = 0, +$m, +$m[0];
# Trim off over specified rows if they exist.
# Not strictly necessary, but can save a lot of
# redundant calculations.
if $rows >= $cols {
$m = trim_system($m);
$rows = +$m;
}
for ^$rows -> $r {
$lead < $cols or return $m;
my $i = $r;
until $m[$i][$lead] {
++$i == $rows or next;
$i = $r;
++$lead == $cols and return $m;
}
$m[$i, $r] = $m[$r, $i] if $r != $i;
my $lv = $m[$r][$lead];
$m[$r] >>/=>> $lv;
for ^$rows -> $n {
next if $n == $r;
$m[$n] >>-=>> $m[$r] >>*>> $m[$n][$lead];
}
++$lead;
}
return $m;
# Reduce a system of equations to n equations with n unknowns.
# Looks for an equation with a true value for each position.
# If it can't find one, assumes that it has already taken one
# and pushes in the first equation it sees. This assumption
# will always be successful except in some cases where an
# under-specified system has been supplied, in which case,
# it would not have been able to reduce the system anyway.
sub trim_system ($m is rw) {
my ($vars, @t) = +$m[0]-1, ();
for ^$vars -> $lead {
for ^$m -> $row {
@t.push( $m.splice( $row, 1 ) ) and last if $m[$row][$lead];
}
}
while (+@t < $vars) and +$m { @t.push( $m.splice( 0, 1 ) ) };
return @t;
}
}
my @h = (-8,-9,-3,-1,-6,7);
my @f = (-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1);
my @g = (24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7);
.say for ~@g, ~convolve(@f, @h),'';
.say for ~@h, ~deconvolve(@g, @f),'';
.say for ~@f, ~deconvolve(@g, @h),'';
Output:
24 75 71 -34 3 22 -45 23 245 25 52 25 -67 -96 96 31 55 36 29 -43 -7
24 75 71 -34 3 22 -45 23 245 25 52 25 -67 -96 96 31 55 36 29 -43 -7
-8 -9 -3 -1 -6 7
-8 -9 -3 -1 -6 7
-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1
PicoLisp
(load "@lib/math.l")
(de deconv (G F)
(let A (pop 'F)
(make
(for (N . H) (head (- (length F)) G)
(for (I . M) (made)
(dec 'H
(*/ M (get F (- N I)) 1.0) ) )
(link (*/ H 1.0 A)) ) ) ) )
Test:
(setq
F (-3. -6. -1. 8. -6. 3. -1. -9. -9. 3. -2. 5. 2. -2. -7. -1.)
G (24. 75. 71. -34. 3. 22. -45. 23. 245. 25. 52. 25. -67. -96. 96. 31. 55. 36. 29. -43. -7.)
H (-8. -9. -3. -1. -6. 7.) )
(test H (deconv G F))
(test F (deconv G H))
Python
Works with: Python version 3.x
Inspired by the TCL solution, and using the ToReducedRowEchelonForm function to reduce to row echelon form from here
def ToReducedRowEchelonForm( M ):
if not M: return
lead = 0
rowCount = len(M)
columnCount = len(M[0])
for r in range(rowCount):
if lead >= columnCount:
return
i = r
while M[i][lead] == 0:
i += 1
if i == rowCount:
i = r
lead += 1
if columnCount == lead:
return
M[i],M[r] = M[r],M[i]
lv = M[r][lead]
M[r] = [ mrx / lv for mrx in M[r]]
for i in range(rowCount):
if i != r:
lv = M[i][lead]
M[i] = [ iv - lv*rv for rv,iv in zip(M[r],M[i])]
lead += 1
def pmtx(mtx):
print ('\n'.join(''.join(' %4s' % col for col in row) for row in mtx))
def convolve(f, h):
g = [0] * (len(f) + len(h) - 1)
for hindex, hval in enumerate(h):
for findex, fval in enumerate(f):
g[hindex + findex] += fval * hval
return g
def deconvolve(g, f):
lenh = len(g) - len(f) + 1
mtx = [[0 for x in range(lenh+1)] for y in g]
for hindex in range(lenh):
for findex, fval in enumerate(f):
gindex = hindex + findex
mtx[gindex][hindex] = fval
for gindex, gval in enumerate(g):
mtx[gindex][lenh] = gval
ToReducedRowEchelonForm( mtx )
return [mtx[i][lenh] for i in range(lenh)] # h
if __name__ == '__main__':
h = [-8,-9,-3,-1,-6,7]
f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1]
g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7]
assert convolve(f,h) == g
assert deconvolve(g, f) == h
R
Here we won't solve the system but use the FFT instead. The method :
• extend vector arguments so that they are the same length, a power of 2 larger than the length of the solution,
• solution is ifft(fft(a)*fft(b)), truncated.
conv <- function(a, b) {
p <- length(a)
q <- length(b)
n <- p + q - 1
r <- nextn(n, f=2)
y <- fft(fft(c(a, rep(0, r-p))) * fft(c(b, rep(0, r-q))), inverse=TRUE)/r
y[1:n]
}
deconv <- function(a, b) {
p <- length(a)
q <- length(b)
n <- p - q + 1
r <- nextn(max(p, q), f=2)
y <- fft(fft(c(a, rep(0, r-p))) / fft(c(b, rep(0, r-q))), inverse=TRUE)/r
return(y[1:n])
}
To check :
h <- c(-8,-9,-3,-1,-6,7)
f <- c(-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1)
g <- c(24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7)
max(abs(conv(f,h) - g))
max(abs(deconv(g,f) - h))
max(abs(deconv(g,h) - f))
This solution often introduces complex numbers, with null or tiny imaginary part. If it hurts in applications, type Re(conv(f,h)) and Re(deconv(g,h)) instead, to return only the real part. It's not hard-coded in the functions, since they may be used for complex arguments as well.
R has also a function convolve,
conv(a, b) == convolve(a, rev(b), type="open")
Racket
#lang racket
(require math/matrix)
(define T matrix-transpose)
(define (convolution-matrix f m n)
(define l (matrix-num-rows f))
(for*/matrix m n ([i (in-range 0 m)] [j (in-range 0 n)])
(cond [(or (< i j) (>= i (+ j l))) 0]
[(matrix-ref f (- i j) 0)])))
(define (least-square X y)
(matrix-solve (matrix* (T X) X) (matrix* (T X) y)))
(define (deconvolve g f)
(define lg (matrix-num-rows g))
(define lf (matrix-num-rows f))
(define lh (+ (- lg lf) 1))
(least-square (convolution-matrix f lg lh) g))
Test:
(define f (col-matrix [-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1]))
(define h (col-matrix [-8 -9 -3 -1 -6 7]))
(define g (col-matrix [24 75 71 -34 3 22 -45 23 245 25 52 25 -67 -96 96 31 55 36 29 -43 -7]))
(deconvolve g f)
(deconvolve g h)
Output:
#<array '#(6 1) #[-8 -9 -3 -1 -6 7]>
#<array '#(16 1) #[-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1]>
Tcl
Works with: Tcl version 8.5
This builds a command, 1D, with two subcommands (convolve and deconvolve) for performing convolution and deconvolution of these kinds of arrays. The deconvolution code is based on a reduction to reduced row echelon form.
package require Tcl 8.5
namespace eval 1D {
namespace ensemble create; # Will be same name as namespace
namespace export convolve deconvolve
# Access core language math utility commands
namespace path {::tcl::mathfunc ::tcl::mathop}
# Utility for converting a matrix to Reduced Row Echelon Form
# From http://rosettacode.org/wiki/Reduced_row_echelon_form#Tcl
proc toRREF {m} {
set lead 0
set rows [llength $m]
set cols [llength [lindex $m 0]]
for {set r 0} {$r < $rows} {incr r} {
if {$cols <= $lead} {
break
}
set i $r
while {[lindex $m $i $lead] == 0} {
incr i
if {$rows == $i} {
set i $r
incr lead
if {$cols == $lead} {
# Tcl can't break out of nested loops
return $m
}
}
}
# swap rows i and r
foreach j [list $i $r] row [list [lindex $m $r] [lindex $m $i]] {
lset m $j $row
}
# divide row r by m(r,lead)
set val [lindex $m $r $lead]
for {set j 0} {$j < $cols} {incr j} {
lset m $r $j [/ [double [lindex $m $r $j]] $val]
}
for {set i 0} {$i < $rows} {incr i} {
if {$i != $r} {
# subtract m(i,lead) multiplied by row r from row i
set val [lindex $m $i $lead]
for {set j 0} {$j < $cols} {incr j} {
lset m $i $j \
[- [lindex $m $i $j] [* $val [lindex $m $r $j]]]
}
}
}
incr lead
}
return $m
}
# How to apply a 1D convolution of two "functions"
proc convolve {f h} {
set g [lrepeat [+ [llength $f] [llength $h] -1] 0]
set fi -1
foreach fv $f {
incr fi
set hi -1
foreach hv $h {
set gi [+ $fi [incr hi]]
lset g $gi [+ [lindex $g $gi] [* $fv $hv]]
}
}
return $g
}
# How to apply a 1D deconvolution of two "functions"
proc deconvolve {g f} {
# Compute the length of the result vector
set hlen [- [llength $g] [llength $f] -1]
# Build a matrix of equations to solve
set matrix {}
set i -1
foreach gv $g {
lappend matrix [list {*}[lrepeat $hlen 0] $gv]
set j [incr i]
foreach fv $f {
if {$j < 0} {
break
} elseif {$j < $hlen} {
lset matrix $i $j $fv
}
incr j -1
}
}
# Convert to RREF, solving the system of simultaneous equations
set reduced [toRREF $matrix]
# Extract the deconvolution from the last column of the reduced matrix
for {set i 0} {$i<$hlen} {incr i} {
lappend result [lindex $reduced $i end]
}
return $result
}
}
To use the above code, a simple demonstration driver (which solves the specific task):
# Simple pretty-printer
proc pp {name nlist} {
set sep ""
puts -nonewline "$name = \["
foreach n $nlist {
puts -nonewline [format %s%g $sep $n]
set sep ,
}
puts "\]"
}
set h {-8 -9 -3 -1 -6 7}
set f {-3 -6 -1 8 -6 3 -1 -9 -9 3 -2 5 2 -2 -7 -1}
set g {24 75 71 -34 3 22 -45 23 245 25 52 25 -67 -96 96 31 55 36 29 -43 -7}
pp "deconv(g,f) = h" [1D deconvolve $g $f]
pp "deconv(g,h) = f" [1D deconvolve $g $h]
pp " conv(f,h) = g" [1D convolve $f $h]
Output:
deconv(g,f) = h = [-8,-9,-3,-1,-6,7]
deconv(g,h) = f = [-3,-6,-1,8,-6,3,-1,-9,-9,3,-2,5,2,-2,-7,-1]
conv(f,h) = g = [24,75,71,-34,3,22,-45,23,245,25,52,25,-67,-96,96,31,55,36,29,-43,-7]
Ursala
The user-defined function band constructs the required matrix as a list of lists given the pair of sequences to be deconvolved, and the lapack..dgelsd function solves the system. Some other library functions used are zipt (zipping two unequal-length lists by truncating the longer one), zipp0 (zipping unequal-length lists by padding the shorter with zeros), and pad0 (making a list of lists all the same length by appending zeros to the short ones).
#import std
#import nat
band = pad0+ ~&rSS+ zipt^*D(~&r,^lrrSPT/~<K33tx zipt^/~&r ~&lSNyCK33+ zipp0)^/~&rx ~&B->NlNSPC ~&bt
deconv = lapack..dgelsd^\~&l ~&||0.!**+ band
test program:
h = <-8.,-9.,-3.,-1.,-6.,7.>
f = <-3.,-6.,-1.,8.,-6.,3.,-1.,-9.,-9.,3.,-2.,5.,2.,-2.,-7.,-1.>
g = <24.,75.,71.,-34.,3.,22.,-45.,23.,245.,25.,52.,25.,-67.,-96.,96.,31.,55.,36.,29.,-43.,-7.>
#cast %eLm
test =
<
'h': deconv(g,f),
'f': deconv(g,h)>
output:
<
'h': <
-8.000000e+00,
-9.000000e+00,
-3.000000e+00,
-1.000000e+00,
-6.000000e+00,
7.000000e+00>,
'f': <
-3.000000e+00,
-6.000000e+00,
-1.000000e+00,
8.000000e+00,
-6.000000e+00,
3.000000e+00,
-1.000000e+00,
-9.000000e+00,
-9.000000e+00,
3.000000e+00,
-2.000000e+00,
5.000000e+00,
2.000000e+00,
-2.000000e+00,
-7.000000e+00,
-1.000000e+00>>
Windows installer for custom elements of software stack used in field by HOTOSM
HOTOSM Installer
----------------
Windows is the most common platform in many places where HOTOSM works.
These are a set of scripts to start to pull together a unified Windows
installer for a range of software we are actively deploying in the field.
There are lots of ways to install stuff on windows. This installer takes
the shortcut approach of sticking all stuff into a unixy directory structure.
Then it runs a bat script to put onto PATH both '{program files}\HOTOSM\bin'
and '{program files}\HOTOSM\lib' while automatically putting on the
PYTHONPATH '{program files}\HOTOSM\python\2.5\site-packages'. There are likely
better ways to do this, but it's worked so far.
See the docs/ directory for more details on building and using this
installer.
Goals
=====
The idea here is a one-shot, kitchen-sink approach to make it as
easy as possible to get a variety of tools running that are
*not otherwise* available as easy-to-use windows installers.
This is to allow for tools to be ready to run as soon as they are required
by one of the tutorials at:
http://wiki.openstreetmap.org/wiki/Humanitarian_OSM_Team/HOT_Package
The tools packaged so far are mainly python stuff because Dane Springmeyer,
who deployed with HOT on mission 2, knows python and tends to solve
problems this way. But one idea behind making this installer public
is to allow others to recommend tools that might be useful if stuck
into this thingy.
Origins
=======
This started as a standalone Mapnik installer, which is the reason the
'setup.sh' script builds off of the base directory structure from the
Mapnik windows download. But the longer term goal is to abstract
this out and provide a more generic base to throw in more tools.
Why not OSGEO4W?
================
This installer is an alternative to using OSGEO4W, which is a great tool
but has certain drawbacks for rapid deployment of tools in the field:
1) OSGEO4W is non-trivial to get working fully offline (from a usb stick).
2) OSGEO4W is less than "one-click" and requires some explanation to use.
3) OSGEO4W requires a high level of skill to add new packages to.
What is included?
=================
This installer does not package everything! Common tools used by HOTOSM that
are not provided in this installer are:
* JOSM / Java runtime
* GPSBabel
* Garmin Drivers
* QGIS
* Postgres/PostGIS
* Python 2.5
This installer currently includes:
* Mapnik 0.7.1 (python 2.5/2.6)
- installer *only* sets up py2.5 bindings
* osm2pgsql (and libs) revision 0.69-21289
* proj4 nad files
- this ensures mapnik/osm2pgsql can create projections with +init=epsg=<code>
* PIL - python imaging (python 2.5/2.6)
* lxml (python 2.5/2.6)
* Cascadenik
* cssutils
* nik2img
* TileLite
* osmosis 1.37
* mkgmap r1625
* bzip2
* wget
* See the TODO.txt for further tools we plan to consider packaging
getColumn method
int getColumn(
1. int offset,
2. {int? line}
)
Gets the 0-based column corresponding to offset.
If line is passed, it's assumed to be the line containing offset and is used to more efficiently compute the column.
Implementation
int getColumn(int offset, {int? line}) {
if (offset < 0) {
throw RangeError('Offset may not be negative, was $offset.');
} else if (offset > length) {
throw RangeError('Offset $offset must not be greater than the '
'number of characters in the file, $length.');
}
if (line == null) {
line = getLine(offset);
} else if (line < 0) {
throw RangeError('Line may not be negative, was $line.');
} else if (line >= lines) {
throw RangeError('Line $line must be less than the number of '
'lines in the file, $lines.');
}
final lineStart = _lineStarts[line];
if (lineStart > offset) {
throw RangeError('Line $line comes after offset $offset.');
}
return offset - lineStart;
}
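The offset-to-column computation above is not Dart-specific. Here is a hedged JavaScript sketch of the same idea, where `lineStarts` is an assumed precomputed, sorted array of the offsets at which each line begins (the names are illustrative):

```javascript
// Column = offset minus the start offset of the line containing it.
// lineStarts must be sorted ascending; a binary search finds the
// last line start that is <= offset.
function getColumn(lineStarts, offset) {
  let lo = 0, hi = lineStarts.length - 1;
  while (lo < hi) {
    const mid = (lo + hi + 1) >> 1;
    if (lineStarts[mid] <= offset) lo = mid; else hi = mid - 1;
  }
  return offset - lineStarts[lo];
}

const text = 'ab\ncde\nf';
const starts = [0, 3, 7];          // offsets where each line begins
console.log(getColumn(starts, 1)); // 1 ('b', column 1 of line 0)
console.log(getColumn(starts, 4)); // 1 ('d', column 1 of line 1)
console.log(getColumn(starts, 3)); // 0 ('c', start of line 1)
```

Passing the line in explicitly, as the Dart method allows, simply skips the binary search.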
Making Densified Biomass Fuels
By Jack Huang Nov 22, 2013
There are three basic methods used to manufacture densified fuels from biomass materials: extrusion, mechanical compression and hydraulic compression. All the methods rely on the same basic technology of permanently reducing the air space between the particles of the biomass, transforming loose particles into a dense, solid block.[1]
Densified biomass briquetting fuel
Densified Biomass Fuel
Extrusion is the forcing of the biomass through a narrow passage (a die). This method of densification produces pellets, fuel logs and briquettes. Mechanical compression is the low-pressure confinement of the biomass into a shape, the reduction of the volume of the biomass by forcing it into a progressively narrower space (medium pressure), or the reduction of volume by means of a dynamic impact and extremely high pressure on small amounts of biomass. The third method is hydraulic compression, which will be discussed at length in the following paragraphs.
Hydraulic compression is the confinement, by means of pressure, of a large amount of biomass into a small space (also called a die). Hydraulically operated briquetting machines are available in different shapes and sizes, with varying output, usually lower than 1 ton per hour. Compression pressures range from 700 to 1,750 atm. There are two categories of hydraulic machines: a) heavy-duty, industrial-type machines used to manufacture fuel briquettes for the consumer market, for the generation of space heat or power; b) small, light-duty machines mostly used to manufacture fuel briquettes for their own use in small companies that generate biomass waste.
The compression process for both these types is relatively slow, with a transient from a fast initial reduction of volume at low pressure to a longer compression phase during which the pressure reaches its peak.
Each compression cycle takes between 10 and 25 seconds, depending on the amount of materials loaded at each cycle and the required density of the finished briquette. A low amount of material, combined with a long cycle time and the highest pressure will produce the highest density, and therefore the best quality briquette. However, since the cycle time does not change a lot with the amount of materials loaded at each cycle, the manufacture of a high quality briquette penalises the output capacity of the system.[2]
With technology innovation, the GC-HBP125 hydraulic briquetting press gains new features. The advantages of hydraulic briquette presses are low noise, easy operation, small space occupation, relatively light weight, an impeccable overload protection system, and a long service lifetime of the molding part (about 1500-2000 h). Furthermore, a pressure display panel is installed on the hydraulic briquette machine, which shows the working status of the machine exactly, making it very convenient for the operator to observe and operate the machine.
Then comes the electrical control system, which is built from a Siemens PLC system and Schneider Electric components. The machine is controlled from the touch screen, where the key parameters are set and operation orders are issued according to the programs. Meanwhile, the touch screen can run a synchronous simulation that displays the current working conditions.
This makes the production of high-quality briquettes with the hydraulic briquette press easy, efficient and cost-effective. The key parameters of the hydraulic briquette press are shown below for your reference.
Specification of GC-HBP125 Hydraulic Briquette Press
Max Capacity (kg/h): 125
Power (kw): 7.7+1.5
Diameter of Briquette (mm): 70
Volume of Hopper (m3): 1.5
Weight (kg): 1200
Dimension (mm): 3150*1270*1790
For more information: [1] and [2] were written by Girodanno Checchi, CEO of Sunomi, exclusive distributor of Di Piu briquetting systems in North America, from Bioenergy briquettes.
JavaScript Date and Time – Implementation of JavaScript Date Methods
I was just scrolling through a website in the morning and suddenly a good-morning message popped up. This puzzled me for a few minutes, and I wondered how a website gets to know whether it is morning or night. I was curious if JavaScript could do that for me, and guess what? It can. JavaScript is fully capable of accessing and manipulating date and time as per my needs. Now, let's understand how.
This tutorial explains all you need to know about JavaScript date and time. You can use the current date and time or you can select the time frame you want. The choice is all yours and JavaScript helps you to produce the desired output easily. We will discuss the Date object in this tutorial, along with the numerous methods associated with it.
Before you move forward in this tutorial, make sure you are clear on the concepts of JavaScript Numbers.
JavaScript Date and Time
One of the features of JavaScript that makes it so popular is its ability to use the local date and time, within the script in the user's browser. This capability comes from the JavaScript Date object. A Date object contains a Number that represents milliseconds since 1 January 1970 UTC. The value in the object changes dynamically as the local date and time change. You can create a Date object in various ways.
new Date();
new Date(value);
new Date(dateString);
new Date(year, monthIndex [, dayIndex [, hours [, minutes [, seconds [, milliseconds]]]]]);
If you call Date() without the new keyword, it returns the date as a string. The program below implements both these approaches.
Note: You cannot alter the syntax or the sequence of the Date parameters. JavaScript will either invalidate the format (last statement of the program) or you get a jumbled date.
date1 = new Date()
// Sat Jul 27 2019 10:02:29 GMT+0530 (India Standard Time)
date1 = Date()
// "Sat Jul 27 2019 10:02:36 GMT+0530 (India Standard Time)"
date2 = new Date('July 27, 2019 10:40:00')
// Sat Jul 27 2019 10:40:00 GMT+0530 (India Standard Time)
date3 = new Date('2019-07-27T10:40:00')
// Sat Jul 27 2019 10:40:00 GMT+0530 (India Standard Time)
date4 = new Date(1995, 11, 17, 3, 24, 0)
// Sun Dec 17 1995 03:24:00 GMT+0530 (India Standard Time)
date4 = new Date(1995, 11, 17)
// Sun Dec 17 1995 00:00:00 GMT+0530 (India Standard Time)
date1 = new Date("2019-07-27 T 10:40:00")
// Invalid Date
Individual Date and Time Component Values
We saw numerous parameters associated with our Date object above. We understand many of them by name, but JavaScript doesn’t always understand all the date formats that we do. So it’s crucial that you learn to give the script the correct description of the date you want, with the syntax that JavaScript understands. Let’s go through them, so you don’t get confused with the parameter’s values. Don’t forget that missing fields are given the lowest possible value (1 for the day and 0 for the rest of the components).
Component Description
year
It represents the year as an integer value. Values from 0 to 99 map to the years 1900 to 1999; all other values are treated as the actual year.
monthIndex
It is an integer value representing the month, with 0 for January to 11 for December.
dayIndex
It is an integer value representing the day of the month (1 to 31), with 1 as the default.
hours
It is an integer value representing the hour of the day, with the default as 0 (midnight).
minutes
This integer value represents the minute segment of time, with the default as 0 minutes past the hour.
seconds
This integer value represents the second segment of time. The default is 0 seconds past the minute.
milliseconds
This integer value depicts the millisecond segment of time. The default is 0 milliseconds past the second.
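The defaults for missing fields can be verified directly. A small sketch (the date chosen is arbitrary):

```javascript
// Missing constructor fields take the lowest possible value:
// 1 for the day of the month, 0 for every time component.
const d = new Date(2019, 6); // only year and monthIndex given

console.log(d.getFullYear()); // 2019
console.log(d.getMonth());    // 6 (July)
console.log(d.getDate());     // 1 (default day of the month)
console.log(d.getHours());    // 0 (default: midnight)
console.log(d.getMinutes());  // 0
```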
JavaScript Date Methods
The Date object has the following types of methods for accessing and manipulating date and time:
1. Getter
These methods retrieve the specified parameter from the Date object. JavaScript doesn’t always return the same format as you want, but you can use these methods to convert them into user-understandable date format. The following table lists all the major methods you need to be aware of to be able to access dates.
Method Description
getDate()
This method returns the day of the month (1-31) for the specified date as per the local time.
getDay()
It returns the day of the week (0-6, from Sunday to Saturday) for the specified date according to local time.
getMonth()
It returns the month (0-11, from January to December) in the specified date according to local time.
getFullYear()
It returns the year (as a 4-digit number) of the specified date according to local time.
getHours()
This method returns the hour (0-23) for the specific date as per the local time.
getMinutes()
This method returns the minutes (0-59) for the specified date according to local time.
getSeconds()
It returns the seconds (0-59) for the specified date as per the local time.
getMilliseconds()
This returns the milliseconds (0-999) in the specified date as per the local time.
getTime()
This method returns the numeric value of the specified date: the number of milliseconds since January 1, 1970, 00:00:00 UTC (negative for the time before that).
getTimezoneOffset()
It returns the time-zone offset in minutes for the current locale.
All the above methods work with local time. If you want to use Coordinated Universal Time (UTC), prefer the methods listed below. These perform the same tasks as the ones we discussed above, but with UTC.
getUTCDate(), getUTCDay(), getUTCMonth(), getUTCFullYear(), getUTCHours(), getUTCMinutes(), getUTCSeconds(), getUTCMilliseconds()
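The difference between the local and the UTC getters is easiest to see on a fixed instant. A hedged sketch, built with Date.UTC so the epoch value does not depend on the machine's time zone:

```javascript
// A fixed instant built from UTC components: 2 Jan 1970, 00:00 UTC.
const d = new Date(Date.UTC(1970, 0, 2));

console.log(d.getTime());     // 86400000 (one day in milliseconds)
console.log(d.getUTCDate());  // 2
console.log(d.getUTCHours()); // 0

// The local getters can differ from the UTC ones by exactly
// getTimezoneOffset() minutes, which depends on the locale.
console.log(d.getTimezoneOffset());
```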
Let's use the local time and the JavaScript Date methods to print the current date and time in the following format in the browser window:
Current Date: Tuesday, April 25, 2017
Current Time: 04: 10 PM
Code:
<html>
<body>
<p id="date"></p>
<p id="time"></p>
<script>
var days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']; // array of day names
var months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']; // array of month names
var date = new Date(); // creating the Date object
var currentDay = days[date.getDay()]; // determining the day using the array and method
var currentMonth = months[date.getMonth()]; // determining the month using the array and method
var currentDate = date.getDate(); // current date
var currentYear = date.getFullYear(); // current year
document.getElementById('date').innerHTML = "Current Date: " + currentDay + ", " + currentMonth + " " + currentDate + ", " + currentYear;
var hrs = date.getHours(), min = date.getMinutes(); // current time (hours and minutes)
var suffix = hrs >= 12 ? 'PM' : 'AM';
hrs = hrs % 12;
if (hrs === 0) {
hrs = 12; // midnight and noon display as 12, not 0
}
if (hrs < 10) {
hrs = "0" + hrs;
}
if (min < 10) {
min = "0" + min;
}
document.getElementById('time').innerHTML = "Current Time: " + hrs + ":" + min + " " + suffix;
</script>
</body>
</html>
(Screenshot: the current date and time rendered in the browser window.)
Don't worry, this is not monster code; it is easy to follow. Just keep track of the variables and use consistent identifiers. You can use the individual getter methods to retrieve individual components of the Date object.
2. Setter
These JavaScript methods manipulate the different parameters of the Date object: each one sets a part of the date and lets us alter specific values. The methods listed below work with local time.
setDate(): Sets the day of the month for a specified date.
setMonth(): Sets the month for a specified date.
setFullYear(): Sets the full year (as a 4-digit number) for a specified date.
setHours(): Sets the hours for a specified date.
setMinutes(): Sets the minutes for a specified date.
setSeconds(): Sets the seconds for a specified date.
setMilliseconds(): Sets the milliseconds for a specified date.
setTime(): Sets the Date object to the time represented by the number of milliseconds since January 1, 1970, 00:00:00 UTC (negative numbers for earlier times).
JavaScript methods to work with UTC are as follows: setUTCDate(), setUTCMonth(), setUTCFullYear(), setUTCHours(), setUTCMinutes(), setUTCSeconds(), setUTCMilliseconds().
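As a quick, minimal sketch of the setters in action (plain JavaScript, runnable in Node or a browser console; the date values below are arbitrary):

```javascript
// Build a fixed date, then adjust its parts with the setter methods.
// Remember that months are 0-indexed: 3 means April, 11 means December.
var launch = new Date(2017, 3, 25, 16, 10, 0); // April 25, 2017, 4:10 PM

launch.setFullYear(2020); // move the year forward
launch.setMonth(11);      // December
launch.setDate(31);       // last day of December
launch.setHours(23);
launch.setMinutes(59);

console.log(launch.getFullYear()); // 2020
console.log(launch.getDate());     // 31
```

Order can matter: setting the day to 31 while the month is still, say, February would roll the date over into March, so set the month before the day when both change.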
3. Conversion Getter
These methods get the results we want after conversion. This means that they first convert the Date object to a String object and then return the value. The list of these methods is as follows:
toDateString(): Returns a string containing the "date" portion of the Date object in a human-readable format.
toTimeString(): Returns a string containing the "time" portion of the Date object in a human-readable format.
toUTCString(): Returns a string containing the date in the UTC timezone.
valueOf(): Returns the primitive value of the Date object.
toString(): Returns a string representation of the specified Date object.
You can use these methods when you want to use the standard date and time formats that JavaScript uses. Just remember, you cannot alter the format of the value returned. Also, these methods return String objects rather than Date objects. So you need to be careful when and where you use these methods. Let’s run the following statements in the browser console:
var date = new Date(); // created on Sat Jul 27 2019 at 12:12:06 IST in this session
date.toDateString()
// "Sat Jul 27 2019"
date.toTimeString()
// "12:12:06 GMT+0530 (India Standard Time)"
date.toUTCString()
// "Sat, 27 Jul 2019 06:42:06 GMT"
date.valueOf()
// 1564209726170
date.toString()
// "Sat Jul 27 2019 12:12:06 GMT+0530 (India Standard Time)"
Wow! We got the same result as the above code with a single line. We didn’t get the same format, but I think this is cool, don’t you? Experiment with these methods, see what happens. Notice the difference between different methods and their outputs. These methods are very beneficial if you know how to use them in your program.
Summary
Here we conclude our tutorial on JavaScript Date and Time. Dates and times in JavaScript are fascinating to work with: you can do almost anything you want with dates, from accessing to manipulating them, and it isn't that difficult. All you need to do is get the hang of the methods we discussed in this tutorial. Clear up your concepts on this topic and you won't face any problems with dates in the future.
Next, go through our article on JavaScript Arrays.
Hope you liked our article. Share your feedback and queries through the comment section below.
How do I extract the value of a property in a PropertyCollection?
If I drill down on 'Properties' in the line below in Visual Studio I can see the value, but how do I read it?
foreach (string propertyName in result.Properties.PropertyNames)
{
MessageBox.Show(ProperyNames[0].Value.ToString()); // <-- wrong!
}
What is the type of 'result'? Which property in Properties do you want the value of?
– Jay Bazuzi
Oct 28 '08 at 15:27
Using a few hints from above I managed to get what I needed using the code below:
ResultPropertyValueCollection values = result.Properties[propertyName];
if (propertyName == "abctest")
{
MessageBox.Show(values[0].ToString());
}
Thanks to all.
Try this:
foreach (string propertyName in result.Properties.PropertyNames)
{
MessageBox.Show(result.Properties[propertyName].ToString());
}
Or this:
foreach (object prop in result.Properties)
{
MessageBox.Show(prop.ToString());
}
Also: there are a couple of different PropertyCollection classes in the framework. These examples are based on the System.Data class, but you might also be using the System.DirectoryServices class. However, neither of those classes is really "reflection". Reflection refers to something different, namely the System.Reflection namespace plus a couple of special operators.
• I needed an index after the proeprtyname, ie result.Properties[propertyName][0].ToString()
– SteveCav
Apr 23 '15 at 23:46
Is that propertyNames meant to be upper case within the function?
Reading again, I have to admit to being a little confused about exactly what you're after with all these properties. Is it the class property value or an instance you're after?
Vb.NET
For Each prop As String In result.Properties.PropertyNames
    MessageBox.Show(result.Properties(prop).Item(0))
Next
I think C# looks like this...
foreach (string property in result.Properties.PropertyNames)
{
    MessageBox.Show(result.Properties[property][0].ToString());
}
As noted above, there are a few different property collections in the framework.
I'm not certain what you're asking for, but I think the problem is that you're seeing the property names instead of their values?
If so, the reason is that you're enumerating through the PropertyCollection.PropertyNames collection and not the PropertyCollection.Values collection. Try something like this instead:
foreach (object value in result.Properties.Values)
{
MessageBox.Show(value.ToString());
}
I was assuming that this question referred to the System.DirectoryServices.PropertyCollection class and not System.Data.PropertyCollection because of the reference to PropertyNames, but now I'm not so sure. If the question is about the System.Data version then disregard this answer.
If you put the value collection inside your "if", you would only retrieve it when you actually need it rather than every time through the loop. Just a suggestion... :)
PropertyNames is not in uppercase elsewhere; the code below works and shows the name of each property, but I want to read the value. 'propertyName' is just a string.
foreach (string propertyName in result.Properties.PropertyNames)
{
    MessageBox.Show(propertyName);
}
Try:
foreach (string propertyName in result.Properties.PropertyNames)
{
    MessageBox.Show(propertyName.ToString());
}
ISI article No. 29145 (published 2012; 15-page PDF; approx. 8,010 words)
English title
Risk analysis during tunnel construction using Bayesian Networks: Porto Metro case study
Source
Publisher : Elsevier - Science Direct (الزویر - ساینس دایرکت)
Journal : Tunnelling and Underground Space Technology, Volume 27, Issue 1, January 2012, Pages 86–100
Keywords
Risk; Tunneling; Bayesian Networks
Abstract
This paper presents a methodology to systematically assess and manage the risks associated with tunnel construction. The methodology consists of combining a geologic prediction model that allows one to predict geology ahead of the tunnel construction, with a construction strategy decision model that allows one to choose amongst different construction strategies the one that leads to minimum risk. This model used tunnel boring machine performance data to relate to and predict geology. Both models are based on Bayesian Networks because of their ability to combine domain knowledge with data, encode dependencies among variables, and their ability to learn causal relationships. The combined geologic prediction–construction strategy decision model was applied to a case, the Porto Metro, in Portugal. The results of the geologic prediction model were in good agreement with the observed geology, and the results of the construction strategy decision support model were in good agreement with the construction methods used. Very significant is the ability of the model to predict changes in geology and consequently required changes in construction strategy. This risk assessment methodology provides a powerful tool with which planners and engineers can systematically assess and mitigate the inherent risks associated with tunnel construction.
Introduction
There is an intrinsic risk associated with tunnel construction because of the limited a priori knowledge of the existing subsurface conditions. Although the majority of tunnel construction projects have been completed safely, there have been several incidents in various tunneling projects that have resulted in delays, cost overruns, and in a few cases more significant consequences such as injury and loss of life. It is therefore important to systematically assess and manage the risks associated with tunnel construction. A detailed database of accidents that occurred during tunnel construction was created by Sousa (2010). The database contains 204 cases from around the world, with different construction methods and different types of accidents. The accident cases were obtained from the technical literature, newspapers and correspondence with experts in the tunneling domain. Knowledge representation systems (or knowledge based systems) and decision analysis techniques were both developed to facilitate and improve the decision making process. Knowledge representation systems use various computational techniques of AI (artificial intelligence) for representation of human knowledge and inference. Decision analysis uses decision theory principles supplemented by judgment psychology (Henrion, 1991). Both emerged from research done in the 1940s regarding development of techniques for problem solving and decision making. John von Neumann and Oskar Morgenstern, who introduced game theory in "Theory of Games and Economic Behavior" (1944), had a tremendous impact on research in decision theory. Although the two fields have common roots, since then they have taken different paths.
More recently there has been a resurgence of interest by many AI researchers in the application of probability theory, decision theory and analysis to several problems in AI, resulting in the development of Bayesian Networks and influence diagrams, an extension of Bayesian Networks designed to include decision variables and utilities. The 1960s saw the emergence of decision analysis with the use of subjective expected utility and Bayesian statistics. Howard Raiffa, Robert Schlaifer, and John Pratt at Harvard, and Ronald Howard at Stanford emerged as leaders in these areas. For instance, Raiffa and Schlaifer's Applied Statistical Decision Theory (1961) provided a detailed mathematical treatment of decision analysis focusing primarily on Bayesian statistical models, and Pratt et al. (1964) developed basic decision analysis, while Eskesen et al. (2004) and Hartford and Baecher (2004) provide good summaries of the different techniques (fault trees, decision trees, etc.) that can be used to assess and manage risk in tunneling. Various commercial and research software packages for risk analysis during tunnel construction have been developed over the years, the most important of which is the DAT (Decision Aids for Tunneling), developed at MIT in collaboration with EPFL (Ecole Polytechnique Fédérale de Lausanne). The DAT are based on an interactive program that uses probabilistic modeling of the construction process to analyze the effects of geotechnical uncertainties and construction uncertainties on construction costs and time (Dudt et al., 2000; Einstein, 2002). However, the majority of existing risk analysis systems, including the DAT, deal only with the effects of random ("common") geological and construction uncertainties on time and cost of construction.
There are other sources of risks, not considered in these systems, which are related to specific geotechnical scenarios that can have substantial consequences on the tunnel process, even if their probability of occurrence is low. This paper attempts to address the issue of specific geotechnical risk by first developing a methodology that allows one to identify major sources of geotechnical risks, even those with low probability, in the context of a particular project and then performing a quantitative risk analysis to identify the “optimal” construction strategies, where “optimal” refers to minimum risk. For that purpose a decision support system framework for determining the “optimal” (minimum risk) construction method for a given tunnel alignment was developed. The decision support system consists of two models: a geologic prediction model, and a construction strategy decision model. Both models are based on the Bayesian Network technique, and when combined allow one to determine the ‘optimal’ tunnel construction strategies. The decision model contains an updating component, by including information from the excavated tunnel sections. This system was implemented in a real tunnel project, the Porto Metro in Portugal.
Conclusion
A decision support framework for assessing and avoiding risks in tunnel construction was developed and successfully applied in a case study. The decision support framework consists of the geology prediction model and the construction strategy decision model, both of which are based on Bayesian Networks. TBM performance data are used to predict geology, which is then used to help decide on the construction method involving the lowest risk. The presented risk model contains two models, a geological prediction model and a decision model. The geological model was trained (or calibrated) with the data from a specific project (the Porto Metro). Afterwards, the models were applied, i.e. tested, on another section of the Porto Metro; the data used to test the model were not those used to train it. The predictions of geology on the part of the tunnel that was not used for training show that the model can predict changes in geology. The application to the Porto Metro tunnel case, in which several accidents occurred, shows that the decision support framework fulfills its objectives. Specifically, the results show that the model can predict changes in geology and that it suggests changes in construction strategy. This is most visible in the zone of accidents 2 and 3, where the model accurately predicts the change in geology and the occurrence of soil. The "optimal" construction strategy determined by the combined risk assessment model is EPBM in closed mode, i.e. with a fully pressurized face, in the areas where accidents 2 and 3 occurred, and not what was actually used during construction, EPBM in open/semi-closed mode. This difference is due to the fact that during the actual construction there was no effective system to predict changes in geology and therefore adapt the construction strategy. Clearly the question arises how the proposed methodology would work in other cases.
If the geological prediction model were applied elsewhere (in another type of geology) it would need calibration. There is also the issue of not having data to calibrate the model at the beginning of construction. A way to use and calibrate this type of prediction model in cases where initial data do not exist (from a nearby project or similar geology) would be to use at the beginning of the construction subjective probabilities given by the experts (e.g. What is the probability that one is in geology G1, if the penetration rate is high (i.e. greater than a certain value), etc.), and then update these probabilities as the construction progresses.
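The updating procedure sketched in the last paragraph is ordinary Bayesian conditioning. Purely as an illustration (the geology classes, priors, and likelihoods below are invented for this sketch and are not taken from the paper), a single update step from an expert prior might look like:

```python
# Hypothetical expert priors over geology classes ahead of the tunnel face
priors = {"G1": 0.5, "G2": 0.3, "soil": 0.2}

# Hypothetical expert-elicited likelihoods P(high penetration rate | geology)
likelihood_high_pen = {"G1": 0.7, "G2": 0.4, "soil": 0.1}

def bayes_update(prior, likelihood):
    """Posterior P(geology | evidence) via Bayes' rule."""
    unnormalized = {g: prior[g] * likelihood[g] for g in prior}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

posterior = bayes_update(priors, likelihood_high_pen)
# Observing a high penetration rate shifts probability mass toward G1
print(posterior)
```

As construction progresses, the posterior from one section becomes the prior for the next, which is how data-free subjective probabilities can be gradually replaced by calibrated ones.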
Python question (asked by carsten)
Plot log-scale and linear-scale functions and histograms on the same canvas
I have a probability density function of which I can only evaluate the logarithm without running into numeric issues, and a histogram that I would like to plot on the same canvas. However, for the histogram I need the option log=True to have it plotted on a log scale, whereas for the function I only have the logarithms of the values directly. How can I plot both on the same canvas?
Please look at this MWE for illustration of the problem:
import matplotlib.pyplot as plt
import random
import math
import numpy as np
sqrt2pi = math.sqrt(2*math.pi)
def gauss(l):
    return [ 1/sqrt2pi * math.exp(-x*x/2) for x in l ]  # note the x*x/2 for the standard normal density
def loggauss(l):
    return [ -math.log(sqrt2pi) - x*x/2 for x in l ]
# just fill a histogram
h = [ random.gauss(0,1) for x in range(0,1000) ]
plt.hist(h, bins=21, density=True, log=True)  # density= replaces the deprecated normed=
# this works nicely
xvals = np.arange(-4,4,0.1)
plt.plot(xvals,gauss(xvals),"-k")
# but I would like to plot this on the same canvas:
# plt.plot(xvals,loggauss(xvals),"-r")
plt.show()
Any suggestions?
Answer
If I understand correctly, you want to plot two data sets in the same figure, on the same x-axis, but one on a log y-scale and one on a linear y-scale. You can do this using twinx:
fig, ax = plt.subplots()
ax.hist(h, bins=21, density=True, log=True)
ax2 = ax.twinx()
ax2.plot(xvals, loggauss(xvals), '-r')
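The twinx trick works because the curve on the second axis carries log-values directly: loggauss is exactly the natural logarithm of gauss, so the line tracks the histogram's log scale. A quick check (written with the x*x/2 term of the standard-normal density):

```python
import math
import numpy as np

sqrt2pi = math.sqrt(2 * math.pi)

def gauss(l):
    # standard normal density
    return [1 / sqrt2pi * math.exp(-x * x / 2) for x in l]

def loggauss(l):
    # log of the density, evaluated directly (numerically safer in the tails)
    return [-math.log(sqrt2pi) - x * x / 2 for x in l]

xvals = np.arange(-4, 4, 0.1)
# the curve drawn on the twin axis is exactly the log of the linear-scale curve
assert np.allclose(np.log(gauss(xvals)), loggauss(xvals))
```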
The Minkowski inequalities via generalized proportional fractional integral operators
Abstract
Conformable integrals and derivatives have recently received increasing attention as a means of deriving various types of inequalities. One of the recent advancements in the field of fractional calculus is the generalized nonlocal proportional fractional integrals and derivatives introduced by Jarad et al. (Eur. Phys. J. Special Topics 226:3457–3471, 2017), which comprise exponential functions in their kernels. The principal aim of this paper is to establish reverse Minkowski inequalities and some other fractional integral inequalities by utilizing generalized proportional fractional integrals. Two new theorems connected with this inequality, as well as other inequalities associated with the generalized proportional fractional integrals, are also established.
Introduction
Fractional calculus, the study of integrals and derivatives of arbitrary order, is a natural outgrowth of the conventional definitions of the calculus integral and derivative. Fractional integrals have been studied comprehensively in the literature; the idea has been formalized by numerous mathematicians with slightly different formulas, for example the Riemann–Liouville, Weyl, Erdélyi–Kober, Hadamard, Liouville and Katugampola fractional integrals (see [18, 22, 23, 26, 34]). In the last few years, Khalil et al. [24] and Abdeljawad [1] established a new class of fractional derivatives and integrals called conformable fractional derivatives and integrals. Jarad et al. [21] introduced the fractional conformable integral operators. On the basis of that idea, one can obtain generalizations of the Hadamard, Hermite–Hadamard, Opial, Grüss, Ostrowski and Chebyshev inequalities, among others (see [19, 35, 37–39]).
Later on, in [6], Anderson and Ulness improved the idea of the fractional conformable derivative by introducing the idea of local derivatives. In [2, 3, 7, 9, 27], researchers introduced new fractional derivative operators by using exponential and Mittag-Leffler functions in their kernels. In [20], Jarad et al. proposed the left and right generalized nonlocal proportional fractional integral and derivative operators. Such generalizations motivate future research to present more innovative ideas to unify the fractional operators and obtain inequalities involving them. Integral inequalities and their applications play an essential role in the theory of differential equations and applied mathematics. A variety of classical integral inequalities and their generalizations have been established by utilizing classical fractional integral and derivative operators (see, e.g., [4, 12, 14,15,16,17, 25, 28,29,30, 32, 33, 36, 41, 42, 46, 47]).
The reverse Minkowski fractional integral inequalities were introduced in [13]. Anber et al. [5] obtained some fractional integral inequalities by using the Riemann–Liouville fractional integral. In [11], the authors established Minkowski inequalities and some other inequalities by employing Katugampola fractional integral operators. In [10, 45], the authors established the reverse Minkowski inequality for Hadamard fractional integral operators. In [31], Mubeen et al. recently established the reverse Minkowski inequalities and some related inequalities for generalized k-fractional conformable integrals.
This paper is organized as follows: In the second section, we present some known results and basic definitions. In the third section, the reverse Minkowski inequalities are presented. In the fourth section, some other related inequalities involving generalized nonlocal proportional fractional integrals are presented.
Preliminaries
This section is devoted to some known definitions and results associated with the classical Riemann–Liouville fractional integrals. Set et al. [40] presented Hermite–Hadamard and reverse Minkowski inequalities for Riemann–Liouville fractional integrals. In [8], Bougoffa also presented Hardy's and reverse Minkowski inequalities. The following theorems, which involve the classical Riemann integral, motivated the work performed so far.
Theorem 2.1
([40])
Let \(r\geq 1\) and let g, h be two positive functions on \([0,\infty )\). If \(0< m\leq \frac{g(\rho )}{h(\rho )} \leq M\) for \(\rho \in [a,b]\), then the following inequality holds:
$$\begin{aligned} & \biggl( \int _{a}^{b}g^{r}(\vartheta )\,d \vartheta \biggr)^{1/r}+ \biggl( \int _{a}^{b}h^{r}(\vartheta )\,d \vartheta \biggr)^{1/r} \\ &\quad \leq \frac{1+M(m+2)}{(m+1)(M+1)} \biggl( \int _{a}^{b}(g+h)^{r}(\vartheta ) \,d \vartheta \biggr)^{1/r}. \end{aligned}$$
(1)
Theorem 2.2
([40])
Let \(r\geq 1\) and let g, h be two positive functions on \([0,\infty )\). If \(0< m\leq \frac{g(\rho )}{h(\rho )} \leq M\) for \(\rho \in [a,b]\), then the following inequality holds:
$$\begin{aligned} & \biggl( \int _{a}^{b}g^{r}(\vartheta )\,d \vartheta \biggr)^{2/r}+ \biggl( \int _{a}^{b}h^{r}(\vartheta )\,d \vartheta \biggr)^{2/r} \\ &\quad \geq \biggl(\frac{(M+1)(m+1)}{M}-2 \biggr) \biggl( \int _{a}^{b}g^{r}( \vartheta )\,d \vartheta \biggr)^{1/r} \biggl( \int _{a}^{b}h^{r}(\vartheta )\,d \vartheta \biggr)^{1/r}. \end{aligned}$$
(2)
Definition 2.1
([26, 34])
The left and right Riemann–Liouville (R-L) fractional integrals of order λ are respectively defined by
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda }g \bigr) ( \vartheta )=\frac{1}{ \varGamma (\lambda )} \int _{a}^{\vartheta }(\vartheta -\rho )^{\lambda -1}g( \rho )\,d\rho ,\quad a< \vartheta \end{aligned}$$
(3)
and
$$\begin{aligned} \bigl(\mathfrak{I}_{b}^{\lambda }g \bigr) ( \vartheta )=\frac{1}{ \varGamma (\lambda )} \int _{\vartheta }^{b}(\rho -\vartheta )^{\lambda -1}g( \rho )\,d\rho ,\quad \vartheta < b, \end{aligned}$$
(4)
where \(\lambda \in \mathbb{C}\) and \(\Re (\lambda )>0\).
In [13], Dahmani introduced the following reverse Minkowski inequalities involving the R-L fractional integral operators.
Theorem 2.3
([13])
Let \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that, for all \(\vartheta >0\), \(\mathfrak{I}^{\lambda }g^{r}( \vartheta )<\infty \), \(\mathfrak{I}^{\lambda }h^{r}(\vartheta )< \infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\), \(\rho \in [a, \vartheta ]\), then the following inequality holds:
$$\begin{aligned} \bigl(\mathfrak{I}^{\lambda }g^{r}(\vartheta ) \bigr)^{1/r}+ \bigl(\mathfrak{I} ^{\lambda }h^{r}( \vartheta ) \bigr)^{1/r}\leq \frac{1+M(m+2)}{(m+1)(M+1)} \bigl( \mathfrak{I}^{\lambda }(g+h)^{r}( \vartheta ) \bigr)^{ 1/r }. \end{aligned}$$
(5)
Theorem 2.4
([13])
Let \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that, for all \(\vartheta >0\), \(\mathfrak{I}^{\lambda }g^{r}( \vartheta )<\infty \), \(\mathfrak{I}^{\lambda }h^{r}(\vartheta )< \infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\), \(\rho \in [a, \vartheta ]\), then the following inequality holds:
$$\begin{aligned} & \bigl(\mathfrak{I}^{\lambda }g^{r}(\vartheta ) \bigr)^{2/r}+ \bigl(\mathfrak{I} ^{\lambda }h^{r}( \vartheta ) \bigr)^{2/r} \\ &\quad \geq \biggl(\frac{(M+1)(m+1)}{M}-2 \biggr) \bigl(\mathfrak{I}^{\lambda }g ^{r}(\vartheta ) \bigr)^{1/r} \bigl(\mathfrak{I}^{\lambda }h^{r}( \vartheta ) \bigr)^{1/r}. \end{aligned}$$
(6)
Definition 2.2
([20])
The left and right generalized nonlocal proportional integral operators are respectively defined by
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g \bigr) ( \vartheta )=\frac{1}{ \eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{ \eta -1}{\eta }( \vartheta -\rho )\biggr] (\vartheta -\rho )^{\lambda -1}g( \rho )\,d\rho \end{aligned}$$
(7)
and
$$\begin{aligned} \bigl(\mathfrak{I}_{b}^{\lambda ,\eta }g \bigr) ( \vartheta )=\frac{1}{ \eta ^{\lambda }\varGamma (\lambda )} \int _{\vartheta }^{b}\exp \biggl[\frac{ \eta -1}{\eta }( \rho -\vartheta )\biggr](\rho -\vartheta )^{\lambda -1}g( \rho )\,d\rho , \end{aligned}$$
(8)
where \(\eta \in (0,1]\) and \(\lambda \in \mathbb{C}\) and \(\Re (\lambda )>0\).
Remark 2.1
If we take \(\eta =1\) in (7) and (8), then we recover the left and right Riemann–Liouville fractional integrals (3) and (4), respectively.
Reverse Minkowski inequalities via generalized proportional fractional integral operator
In this section, we use the generalized nonlocal proportional fractional integral operator to develop reverse Minkowski integral inequalities. The reverse Minkowski fractional integral inequality is presented in the following theorem.
Theorem 3.1
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that, for all \(\vartheta >0\), \({}_{a}\mathfrak{I}^{\lambda , \eta }g^{r}(\vartheta )<\infty \), \({}_{a}\mathfrak{I}^{\lambda , \eta }h^{r}(\vartheta )<\infty \). If \(0< m\leq \frac{g(\rho )}{h( \rho )}\leq M\), \(\rho \in [a,\vartheta ]\), then the following inequality holds:
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}+ \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr) ^{1/r} \leq \frac{1+M(m+2)}{(m+1)(M+1)} \bigl({}_{a}\mathfrak{I}^{ \lambda ,\eta }(g+h)^{r}( \vartheta ) \bigr)^{1/r}. \end{aligned}$$
(9)
Proof
Under the conditions stated in Theorem 3.1, \(\frac{g(\rho )}{h(\rho )}\leq M\), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$ (M+1)^{r} g^{r}(\rho )\leq M^{r} (g+h )^{r}(\rho ). $$
(10)
Consider a function
$$\begin{aligned} \mathfrak{F}(\vartheta ,\rho ) &=\frac{1}{\eta ^{\lambda }\varGamma ( \lambda )}\exp \biggl[\frac{\eta -1}{\eta }(\vartheta -\rho )\biggr] (\vartheta - \rho )^{\lambda -1} \\ &= \frac{1}{\eta ^{\lambda }\varGamma (\lambda )}(\vartheta -\rho )^{ \lambda -1} \biggl[1+ \frac{\eta -1}{\eta }(\vartheta -\rho )+\frac{ (\frac{\eta -1}{\eta }(\vartheta -\rho ) )^{2}}{2}+\cdots \biggr]. \end{aligned}$$
(11)
We observe that the function \(\mathfrak{F}(\vartheta ,\rho )\) remains positive for all \(\rho \in (a,\vartheta )\), \(a<\vartheta \leq b\), since each term of the above expansion is positive in view of the conditions stated in Theorem 3.1.
Multiplying both sides of (10) by \(\mathfrak{F}(\vartheta , \rho )\) and integrating the resultant inequality with respect to ρ from a to ϑ, we have
$$\begin{aligned} &\frac{(M+1)^{r} }{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr] (\vartheta -\rho )^{ \lambda -1}g^{r}(\rho )\,d\rho \\ &\quad \leq \frac{M^{r} }{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{ \vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr] (\vartheta -\rho )^{\lambda -1} (g+h )^{r}(\rho )\,d\rho , \end{aligned}$$
which can be written as
$$ {}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}(\vartheta ) \leq \frac{M^{r}}{(M+1)^{r}}{}_{a}\mathfrak{I}^{\lambda ,\eta } (g+h ) ^{r}(\vartheta ). $$
Hence, it follows that
$$ \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}\leq \frac{M}{(M+1)} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } (g+h )^{r}( \vartheta ) \bigr) ^{1/r}. $$
(12)
Now, using the condition \(m h(\rho )\leq g(\rho )\), we have
$$ \biggl(1+\frac{1}{m} \biggr)h(\rho )\leq \frac{1}{m} \bigl(g(\rho )+h( \rho ) \bigr), $$
it follows that
$$ \biggl(1+\frac{1}{m} \biggr)^{r} h^{r}(\rho )\leq \biggl(\frac{1}{m}\biggr)^{r} \bigl(g(\rho )+h(\rho ) \bigr)^{r}. $$
(13)
Multiplying both sides of (13) by \(\mathfrak{F}(\vartheta , \rho )\) and integrating the resultant inequality with respect to ρ from a to ϑ, we have
$$ \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r}( \vartheta ) \bigr) ^{1/r}\leq \frac{1}{(m+1)} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } (g+h )^{r}( \vartheta ) \bigr) ^{1/r}. $$
(14)
Thus adding inequalities (12) and (14) yields the desired inequality. □
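As an informal numerical sanity check (not part of the paper), inequality (9) can be evaluated for sample functions by discretizing the integral of Definition 2.2 with a trapezoid rule. The functions, interval, and parameter values below are chosen only for illustration, and \(\lambda = 1\) is used to avoid the kernel singularity at \(\rho = \vartheta\):

```python
import math
import numpy as np

def prop_frac_integral(f, a, theta, lam, eta, n=20001):
    """Left generalized proportional fractional integral of f (Definition 2.2),
    approximated with the trapezoid rule on [a, theta]."""
    rho = np.linspace(a, theta, n)
    kernel = np.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1)
    y = kernel * f(rho)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(rho)) / (eta ** lam * math.gamma(lam))

eta, lam, r = 0.9, 1.0, 2.0
a, theta = 0.0, 2.0
g = lambda x: 1 + 0.2 * np.sin(x)   # positive on [a, theta]
h = lambda x: np.ones_like(x)       # positive on [a, theta]
m, M = 0.8, 1.2                     # 0 < m <= g/h <= M on [a, theta]

lhs = (prop_frac_integral(lambda x: g(x) ** r, a, theta, lam, eta) ** (1 / r)
       + prop_frac_integral(lambda x: h(x) ** r, a, theta, lam, eta) ** (1 / r))
coeff = (1 + M * (m + 2)) / ((m + 1) * (M + 1))
rhs = coeff * prop_frac_integral(lambda x: (g(x) + h(x)) ** r, a, theta, lam, eta) ** (1 / r)
assert lhs <= rhs  # inequality (9) holds for this sample
```

Note that with m = M (constant ratio g/h) the coefficient reduces to 1 and (9) holds with equality, which is a quick way to see the inequality is sharp.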
Theorem 3.2
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that, for all \(\vartheta >0\), \({}_{a}\mathfrak{I}^{\lambda , \eta }g^{r}(\vartheta )<\infty \), \({}_{a}\mathfrak{I}^{\lambda , \eta }h^{r}(\vartheta )<\infty \). If \(0< m\leq \frac{g(\rho )}{h( \rho )}\leq M\), \(\rho \in [a,\vartheta ]\), then the following inequality holds:
$$\begin{aligned} & \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{2/r}+ \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr) ^{2/r} \\ &\quad \geq \biggl(\frac{(M+1)(m+1)}{M}-2 \biggr) \bigl({}_{a} \mathfrak{I}^{ \lambda ,\eta }g^{r}(\vartheta ) \bigr)^{1/r} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr)^{1/r}. \end{aligned}$$
(15)
Proof
The multiplication of inequalities (12) and (14) yields
$$\begin{aligned} \biggl(\frac{(M+1)(m+1)}{M} \biggr) \bigl({}_{a} \mathfrak{I}^{\lambda , \eta }g^{r}(\vartheta ) \bigr)^{1/r} \bigl({}_{a}\mathfrak{I}^{ \lambda ,\eta }h^{r}(\vartheta ) \bigr)^{1/r}\leq \bigl[ \bigl({} _{a} \mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+ h(\vartheta ) \bigr) ^{r} \bigr)^{1/r} \bigr]^{2}. \end{aligned}$$
(16)
Now, applying the Minkowski inequality to the right-hand side of (16), we obtain
$$\begin{aligned} & \bigl[ \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+ h(\vartheta ) \bigr)^{r} \bigr)^{1/r} \bigr]^{2} \\ &\quad \leq \bigl[ \bigl({} _{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr)^{1/r}+ \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r}( \vartheta ) \bigr) ^{1/r} \bigr]^{2} \\ &\quad \leq \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{2/r}+ \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr)^{2/r}+2 \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}(\vartheta ) \bigr)^{1/r} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r}( \vartheta ) \bigr)^{1/r}. \end{aligned}$$
(17)
Thus, from inequalities (16) and (17), we get the desired inequality (15). □
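The same discretization gives a quick numerical check of (15). As before, the parameter choices (\(\lambda =1\), \(\eta =0.7\)), the test functions, and the midpoint quadrature are illustrative assumptions of ours, not from the paper.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta); lam = 1 keeps the kernel bounded
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta, r = 0.0, 2.0, 3.0
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)

A = prop_frac_int(lambda x: g(x) ** r, a, theta) ** (1 / r)
B = prop_frac_int(lambda x: h(x) ** r, a, theta) ** (1 / r)
c = (M + 1) * (m + 1) / M - 2
assert A ** 2 + B ** 2 >= c * A * B   # inequality (15)
```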
Certain related inequalities via generalized proportional fractional integral operator
This section is devoted to deriving certain related inequalities involving a generalized proportional fractional integral operator.
Theorem 4.1
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r>1\), \(1/r+1/s =1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g(\vartheta )]<\infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h(\vartheta )]< \infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M<\infty \), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g( \vartheta ) \bigr)^{1/r} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h( \vartheta ) \bigr)^{1/s} \leq \biggl(\frac{M}{m} \biggr)^{1/rs} \bigl({}_{a}\mathfrak{I}^{ \lambda ,\eta } \bigl[g(\vartheta )\bigr]^{1/r}\bigl[h(\vartheta )\bigr]^{1/s} \bigr). \end{aligned}$$
(18)
Proof
Since \(\frac{g(\rho )}{h(\rho )}\leq M<\infty \), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$\begin{aligned} \bigl[g(\rho )\bigr]^{1/s}\leq M^{1/s}\bigl[h(\rho ) \bigr]^{1/s}. \end{aligned}$$
(19)
It follows that
$$\begin{aligned} \bigl[g(\rho )\bigr]^{1/r}\bigl[h(\rho ) \bigr]^{1/s} &\geq M^{-1/s}\bigl[g(\rho ) \bigr]^{1/r}\bigl[g( \rho )\bigr]^{1/s} \\ &= M^{-1/s}\bigl[g(\rho )\bigr]^{{1/r}+{1/s}} \\ &= M^{-1/s}\bigl[g(\rho )\bigr]. \end{aligned}$$
(20)
Multiplying both sides of (20) by \(\mathfrak{F}(\vartheta , \rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resultant inequality with respect to ρ from a to ϑ, we have
$$\begin{aligned} &\frac{ 1}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta } \exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}\bigl[g(\rho ) \bigr]^{1/r}\bigl[h(\rho )\bigr]^{1/s}\,d\rho \\ &\quad \geq \frac{M^{-1/s} }{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{ \vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta - \rho )^{\lambda -1}g(\rho )\,d\rho . \end{aligned}$$
(21)
It follows that
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl[\bigl[g( \vartheta )\bigr]^{1/r}\bigl[h( \vartheta )\bigr]^{1/s} \bigr]\geq M^{-1/s} \bigl[{}_{a} \mathfrak{I}^{\lambda ,\eta }g( \vartheta ) \bigr]. \end{aligned}$$
(22)
Consequently, we have
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl[ \bigl[g(\vartheta )\bigr]^{1/r}\bigl[h( \vartheta )\bigr]^{1/s} \bigr] \bigr)^{1/r}\geq M^{\frac{-1}{rs}} \bigl[{}_{a} \mathfrak{I}^{\lambda ,\eta }g(\vartheta ) \bigr]^{1/r}. \end{aligned}$$
(23)
On the other hand, since \(m h(\rho )\leq g(\rho )\), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$\begin{aligned} \bigl[g(\rho )\bigr]^{1/r}\geq m^{1/r} \bigl[h(\rho )\bigr]^{1/r}. \end{aligned}$$
(24)
It follows that
$$\begin{aligned} \bigl[g(\rho )\bigr]^{1/r}\bigl[h(\rho ) \bigr]^{1/s} &\geq m^{1/r}\bigl[h(\rho )\bigr]^{1/r} \bigl[h( \rho )\bigr]^{1/s} \\ &= m^{1/r}\bigl[h(\rho )\bigr]^{{1/r}+{1/s}} \\ &= m^{1/r}\bigl[h(\rho )\bigr]. \end{aligned}$$
(25)
Again, multiplying both sides of (25) by \(\mathfrak{F}( \vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resultant inequality with respect to ρ from a to ϑ, we have
$$\begin{aligned} &\frac{ 1}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta } \exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}\bigl[g(\rho ) \bigr]^{1/r}\bigl[h(\rho )\bigr]^{1/s}\,d\rho \\ & \quad \geq \frac{m^{1/r}}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{ \vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta - \rho )^{\lambda -1}h(\rho )\,d\rho . \end{aligned}$$
(26)
Hence, we can write
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl[ \bigl[g(\vartheta )\bigr]^{1/r}\bigl[h( \vartheta )\bigr]^{1/s} \bigr] \bigr)^{1/s}\geq m^{\frac{1}{rs}} \bigl[{}_{a} \mathfrak{I}^{\lambda ,\eta }h(\vartheta ) \bigr]^{1/s}. \end{aligned}$$
(27)
Multiplying inequalities (23) and (27) and using \(1/r+1/s=1\), we get the desired inequality (18). □
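Theorem 4.1 can also be checked numerically with the same midpoint discretization. The setup below (with the conjugate pair \(r=s=2\), \(\lambda =1\), \(\eta =0.7\), and test functions of our choosing) is a hedged sketch, not the paper's computation.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta = 0.0, 2.0
r = s = 2.0                                   # conjugate exponents, 1/r + 1/s = 1
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)

lhs = prop_frac_int(g, a, theta) ** (1 / r) * prop_frac_int(h, a, theta) ** (1 / s)
rhs = (M / m) ** (1 / (r * s)) * prop_frac_int(
    lambda x: g(x) ** (1 / r) * h(x) ** (1 / s), a, theta)
assert lhs <= rhs   # inequality (18)
```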
Theorem 4.2
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r>1\), \(\frac{1}{r}+{1/s}=1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g^{r}( \vartheta )]<\infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h^{s}( \vartheta )]<\infty \). If \(0< m\leq \frac{g(\rho )^{r}}{h(\rho )^{s}} \leq M<\infty \), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{s}( \vartheta ) \bigr) ^{1/s}\leq \biggl(\frac{M}{m} \biggr)^{\frac{1}{rs}} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl[g(\vartheta )h(\vartheta )\bigr] \bigr). \end{aligned}$$
(28)
Proof
Replacing \(g(\vartheta )\) and \(h(\vartheta )\) by \(g^{r}(\vartheta )\) and \(h^{s}(\vartheta )\), \(a<\vartheta \leq b\), in Theorem 4.1, we get the desired inequality (28). □
Theorem 4.3
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r>1\), \(\frac{1}{r}+\frac{1}{s}=1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g^{r}( \vartheta )]<\infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h^{s}( \vartheta )]<\infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )} \leq M<\infty \), where \(m, M\in \mathbb{R}\), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), then the following inequality for the left generalized proportional fractional integral holds:
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }\bigl[g(\vartheta )h(\vartheta )\bigr]\leq \frac{2^{r-1}M ^{r}}{r(M+1)^{r}}{}_{a} \mathfrak{I}^{\lambda ,\eta }\bigl[g^{r}+h^{r}\bigr]( \vartheta )+ \frac{2^{s-1}}{s(m+1)^{s}}{}_{a}\mathfrak{I}^{\lambda , \eta } \bigl[g^{s}+h^{s}\bigr](\vartheta ). \end{aligned}$$
(29)
Proof
By the given hypothesis \(\frac{g(\rho )}{h(\rho )}\leq M\), we have
$$\begin{aligned} (M+1)^{r}g^{r}(\rho )\leq M^{r}[g+h]^{r}(\rho ). \end{aligned}$$
(30)
Multiplying both sides of inequality (30) by \(\mathfrak{F}( \vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), we get
$$\begin{aligned} &\frac{(M+1)^{r}}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}g^{r}(\rho )\,d\rho \\ & \quad \leq \frac{M^{r}}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{ \vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta - \rho )^{\lambda -1}[g+h]^{r}( \rho )\,d\rho . \end{aligned}$$
(31)
It follows that
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta )\leq \frac{M^{r}}{(M+1)^{r}}{}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{r}( \vartheta ). \end{aligned}$$
(32)
On the other hand, using \(m\leq \frac{g(\rho )}{h(\rho )}\), \(a< \rho <\vartheta \), we have
$$\begin{aligned} (m+1)^{s} h^{s}(\rho )\leq [g+h]^{s}(\rho ). \end{aligned}$$
(33)
Again, multiplying both sides of inequality (33) by \(\mathfrak{F}(\vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta , \rho )\) is defined by (11) and integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), we get
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }h^{s}( \vartheta )\leq \frac{1}{(m+1)^{s}}{}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{s}( \vartheta ). \end{aligned}$$
(34)
Now, using Young’s inequality, we have
$$\begin{aligned} g(\rho )h(\rho )\leq \frac{g^{r}(\rho )}{r}+ \frac{h^{s}(\rho )}{s}. \end{aligned}$$
(35)
Multiplying both sides of inequality (35) by \(\mathfrak{F}( \vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), we get
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }g(\vartheta )h( \vartheta )\leq \frac{1}{r} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }g^{r}(\vartheta ) \bigr)+ \frac{1}{s} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } h^{s}( \vartheta ) \bigr). \end{aligned}$$
(36)
With the aid of (32) and (34), (36) can be written as
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }g(\vartheta )h( \vartheta ) &\leq \frac{1}{r} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }g^{r}(\vartheta ) \bigr)+ \frac{1}{s} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } h^{s}( \vartheta ) \bigr) \\ & \leq \frac{M^{r}}{r(M+1)^{r}}{}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{r}( \vartheta )+\frac{1}{s(m+1)^{s}}{}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{s}( \vartheta ). \end{aligned}$$
(37)
Now, using the inequality \((\rho +\omega )^{r}\leq 2^{r-1}(\rho ^{r}+ \omega ^{r})\), \(r>1\), \(\rho , \omega >0\), one can obtain
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{r}( \vartheta )\leq 2^{r-1} {}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl[g^{r}+h^{r}\bigr](\vartheta ) \end{aligned}$$
(38)
and
$$\begin{aligned} {}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{s}( \vartheta )\leq 2^{s-1} {}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl[g^{s}+h^{s}\bigr](\vartheta ). \end{aligned}$$
(39)
Hence the proof of (29) can follow from (37), (38), and (39). □
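A numerical spot check of the Young-type bound in Theorem 4.3 can be sketched with the same assumed discretization (\(\lambda =1\), \(\eta =0.7\), midpoint rule, test functions of our choosing); here \(r=3\) with conjugate \(s=3/2\).

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta = 0.0, 2.0
r = 3.0
s = r / (r - 1)                               # conjugate exponent, 1/r + 1/s = 1
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)

I = lambda f: prop_frac_int(f, a, theta)
lhs = I(lambda x: g(x) * h(x))
rhs = (2 ** (r - 1) * M ** r / (r * (M + 1) ** r) * I(lambda x: g(x) ** r + h(x) ** r)
       + 2 ** (s - 1) / (s * (m + 1) ** s) * I(lambda x: g(x) ** s + h(x) ** s))
assert lhs <= rhs   # bound of Theorem 4.3
```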
Theorem 4.4
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g^{r}(\vartheta )]< \infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h^{r}(\vartheta )]< \infty \). If \(0< k< m\leq \frac{g(\rho )}{h(\rho )}\leq M<\infty \), where \(m, M\in \mathbb{R}\), \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), then the following inequality for left generalized proportional fractional integral holds:
$$\begin{aligned} \frac{M+1}{M-k} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }\bigl(g(\vartheta )-kh(\vartheta )\bigr)^{r} \bigr)^{1/r} &\leq \bigl({}_{a}\mathfrak{I}^{\lambda , \eta }g^{r}(\vartheta ) \bigr)^{1/r}+ \bigl({}_{a}\mathfrak{I}^{ \lambda ,\eta }h^{r}( \vartheta ) \bigr)^{1/r} \\ &\leq \frac{m+1}{m-k} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl(g( \vartheta )-kh(\vartheta )\bigr)^{r} \bigr)^{1/r}. \end{aligned}$$
(40)
Proof
Under the given hypothesis \(0< k< m\leq \frac{g(\rho )}{h( \rho )}\leq M<\infty \), we have
$$\begin{aligned} mk\leq Mk\quad &\Rightarrow\quad mk+m\leq mk+M\leq Mk+M \\ &\Rightarrow\quad (M+1) (m-k)\leq (m+1) (M-k). \end{aligned}$$
It can be written as
$$\begin{aligned} \frac{ (M+1)}{(M-k)}\leq \frac{(m+1)}{(m-k)}. \end{aligned}$$
Also, we have
$$\begin{aligned} m-k\leq \frac{g(\rho )-kh(\rho )}{h(\rho )}\leq M-k. \end{aligned}$$
It follows that
$$\begin{aligned} \frac{ (g(\rho )-kh(\rho ) )^{r}}{(M-k)^{r}}\leq h^{r}( \rho )\leq \frac{ (g(\rho )-kh(\rho ) )^{r}}{(m-k)^{r}}. \end{aligned}$$
(41)
Also, we have
$$\begin{aligned} \frac{1}{M}\leq \frac{h(\rho )}{g(\rho )}\leq \frac{1}{m}\quad \Rightarrow\quad \frac{m-k}{km}\leq \frac{g(\rho )-kh(\rho )}{kg(\rho )}\leq \frac{M-k}{kM}. \end{aligned}$$
It follows that
$$\begin{aligned} \biggl(\frac{M}{M-k} \biggr)^{r} \bigl(g(\rho )-kh(\rho ) \bigr) ^{r}\leq g^{r}(\rho )\leq \biggl(\frac{m}{m-k} \biggr)^{r} \bigl(g( \rho )-kh(\rho ) \bigr)^{r}. \end{aligned}$$
(42)
Multiplying both sides of inequality (41) by \(\mathfrak{F}( \vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), we get
$$\begin{aligned} &\frac{1}{(M-k)^{r} \eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1} \bigl(g(\rho )-kh(\rho ) \bigr)^{r} \,d\rho \\ & \quad \leq \frac{1}{ \eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}h^{r}(\rho )\,d\rho \\ & \quad \leq \frac{1}{(m-k)^{r} \eta ^{\lambda }\varGamma (\lambda )} \int _{a} ^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{\lambda -1} \bigl(g(\rho )-kh(\rho ) \bigr)^{r}\,d\rho . \end{aligned}$$
It follows that
$$\begin{aligned} \frac{1}{(M-k)} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl(g( \vartheta )-kh(\vartheta ) \bigr)^{r} \bigr)^{1/r} &\leq \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr)^{1/r} \\ & \leq \frac{1}{(m-k)} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl(g( \vartheta )-kh(\vartheta ) \bigr)^{r} \bigr)^{1/r}. \end{aligned}$$
(43)
Again, multiplying both sides of inequality (42) by \(\mathfrak{F}(\vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta , \rho )\) is defined by (11), integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), and then taking the rth root, we get
$$\begin{aligned} \biggl(\frac{M}{M-k} \biggr) \bigl({}_{a} \mathfrak{I}^{\lambda , \eta } \bigl(g(\vartheta )-kh(\vartheta ) \bigr)^{r} \bigr)^{1/r} & \leq \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }g^{r}(\vartheta ) \bigr) ^{1/r} \\ & \leq \biggl(\frac{m}{m-k} \biggr) \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )-kh(\vartheta ) \bigr)^{r} \bigr)^{1/r}. \end{aligned}$$
(44)
Hence, by adding inequalities (43) and (44), we get the desired inequality (40). □
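The two-sided bound of Theorem 4.4 is again a pointwise consequence, so it survives discretization exactly when m and M are read off the quadrature nodes. The following sketch uses our assumed setup (\(\lambda =1\), \(\eta =0.7\), \(r=2\)) and a value \(k=0.5\) chosen below the observed m.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta, r, k = 0.0, 2.0, 2.0, 0.5
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)
assert k < m   # hypothesis 0 < k < m must hold for this choice of k

Ig = prop_frac_int(lambda x: g(x) ** r, a, theta) ** (1 / r)
Ih = prop_frac_int(lambda x: h(x) ** r, a, theta) ** (1 / r)
X = prop_frac_int(lambda x: (g(x) - k * h(x)) ** r, a, theta) ** (1 / r)

assert (M + 1) / (M - k) * X <= Ig + Ih   # left half of the bound
assert Ig + Ih <= (m + 1) / (m - k) * X   # right half of the bound
```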
Theorem 4.5
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g^{r}(\vartheta )]< \infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h^{r}(\vartheta )]< \infty \). If \(0\leq \alpha \leq g(\rho )\leq \mathcal{A}\) and \(0< \sigma \leq h(\rho )\leq \mathcal{B}\) for all \(\rho \in [a, \vartheta ]\), \(\vartheta >a\), then the following inequality for the left generalized proportional fractional integral holds:
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}+ \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr) ^{1/r} \leq \frac{\mathcal{A}(\alpha +\mathcal{B})+\mathcal{B}(\sigma +\mathcal{A})}{(\mathcal{A}+\sigma )(\mathcal{B}+\alpha )} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }[g+h]^{r}( \vartheta ) \bigr)^{1/r}. \end{aligned}$$
(45)
Proof
Under the given hypothesis, we have
$$\begin{aligned} \frac{1}{\mathcal{B}}\leq \frac{1}{h(\rho )}\leq \frac{1}{\sigma }. \end{aligned}$$
(46)
The product of inequality (46) with \(0\leq \alpha \leq g( \rho )\leq \mathcal{A}\) yields
$$\begin{aligned} \frac{\alpha }{\mathcal{B}}\leq \frac{g(\rho )}{h(\rho )}\leq \frac{ \mathcal{A}}{\sigma }. \end{aligned}$$
(47)
From (47), we obtain
$$\begin{aligned} h^{r}(\rho )\leq \biggl(\frac{\mathcal{B}}{\alpha +\mathcal{B}} \biggr) ^{r} \bigl(g(\rho )+h(\rho ) \bigr)^{r} \end{aligned}$$
(48)
and
$$\begin{aligned} g^{r}(\rho )\leq \biggl(\frac{\mathcal{A}}{\sigma +\mathcal{A}} \biggr) ^{r} \bigl(g(\rho )+h(\rho ) \bigr)^{r}. \end{aligned}$$
(49)
Now, multiplying both sides of inequalities (48) and (49) respectively by \(\mathfrak{F}(\vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11), integrating the resulting inequalities with respect to ρ over \((a,\vartheta )\), and then taking the rth root, we obtain
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r}( \vartheta ) \bigr) ^{1/r}\leq \biggl(\frac{\mathcal{B}}{\alpha +\mathcal{B}} \biggr) \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+h( \vartheta ) \bigr)^{r} \bigr)^{1/r} \end{aligned}$$
(50)
and
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}\leq \biggl(\frac{\mathcal{A}}{\sigma +\mathcal{A}} \biggr) \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+h( \vartheta ) \bigr)^{r} \bigr)^{1/r}. \end{aligned}$$
(51)
Hence, by adding (50) and (51), we get the desired proof. □
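Theorem 4.5 depends only on uniform bounds for g and h, which makes it straightforward to spot-check. In the sketch below (our assumptions: \(\lambda =1\), \(\eta =0.7\), \(r=2\), illustrative test functions), the bounds α, 𝒜, σ, ℬ are taken as extremes over the quadrature nodes.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta, r = 0.0, 2.0, 2.0
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
alpha, A = min(map(g, nodes)), max(map(g, nodes))   # alpha <= g <= A
sigma, B = min(map(h, nodes)), max(map(h, nodes))   # sigma <= h <= B

Ig = prop_frac_int(lambda x: g(x) ** r, a, theta) ** (1 / r)
Ih = prop_frac_int(lambda x: h(x) ** r, a, theta) ** (1 / r)
Igh = prop_frac_int(lambda x: (g(x) + h(x)) ** r, a, theta) ** (1 / r)

coef = (A * (alpha + B) + B * (sigma + A)) / ((A + sigma) * (B + alpha))
assert Ig + Ih <= coef * Igh   # inequality (45)
```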
Theorem 4.6
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g(\vartheta )]<\infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h(\vartheta )]<\infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\) where \(m, M\in \mathbb{R}\) for all \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), then the following inequality for the left generalized proportional fractional integral holds:
$$\begin{aligned} \frac{1}{M} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }g(\vartheta )h( \vartheta ) \bigr) &\leq \frac{1}{(m+1)(M+1)} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+h(\vartheta ) \bigr) ^{2} \bigr) \\ &\quad \leq \frac{1}{m} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g( \vartheta )h(\vartheta ) \bigr). \end{aligned}$$
(52)
Proof
Under the given hypothesis, \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\), we have
$$\begin{aligned} h(\rho ) (m+1)\leq h(\rho )+g(\rho )\leq h(\rho ) (M+1). \end{aligned}$$
(53)
Also, we have \(\frac{1}{M}\leq \frac{h(\rho )}{g(\rho )}\leq \frac{1}{m}\), which gives
$$\begin{aligned} g(\rho ) \biggl(\frac{M+1}{M} \biggr)\leq g(\rho )+h( \rho )\leq g( \rho ) \biggl(\frac{m+1}{m} \biggr). \end{aligned}$$
(54)
The multiplication of (53) and (54) yields
$$\begin{aligned} \frac{g(\rho )h(\rho )}{M}\leq \frac{ (g(\rho )+h(\rho ) ) ^{2}}{(m+1)(M+1)}\leq \frac{g(\rho )h(\rho )}{m}. \end{aligned}$$
(55)
Now, multiplying both sides of inequality (55) by \(\mathfrak{F}(\vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resulting inequality with respect to ρ over \((a,\vartheta )\), we have
$$\begin{aligned} &\frac{1}{M\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta } \exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}g(\rho )h(\rho )\,d\rho \\ &\quad \leq \frac{1}{(m+1)(M+1)\eta ^{\lambda }\varGamma (\lambda )} \int _{a} ^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{\lambda -1} \bigl(g(\rho )+h(\rho ) \bigr)^{2}\,d\rho \\ &\quad \leq \frac{1}{m\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta }\exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}g(\rho )h(\rho )\,d \rho. \end{aligned}$$
(56)
It follows that
$$\begin{aligned} \frac{1}{M} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g( \vartheta )h( \vartheta ) \bigr) &\leq \frac{1}{(m+1)(M+1)} \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } \bigl(g(\vartheta )+h(\vartheta ) \bigr) ^{2} \bigr) \\ &\leq \frac{1}{m} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g( \vartheta )h(\vartheta ) \bigr), \end{aligned}$$
which completes the desired proof. □
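Since (52) is a direct integration of the pointwise chain (55), a numerical check is immediate. As throughout, the discretization and parameter choices below are our own illustrative assumptions.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta = 0.0, 2.0
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)

P = prop_frac_int(lambda x: g(x) * h(x), a, theta)          # {}_a I [g h]
Q = prop_frac_int(lambda x: (g(x) + h(x)) ** 2, a, theta)   # {}_a I (g+h)^2

assert P / M <= Q / ((m + 1) * (M + 1)) <= P / m   # inequality (52)
```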
Theorem 4.7
Let \(\eta \in (0,1]\), \(\lambda \in \mathbb{C}\), \(\Re (\lambda )>0\), \(r\geq 1\), and let g, h be two positive functions on \([0,\infty )\) such that \({}_{a}\mathfrak{I}^{\lambda ,\eta }[g(\vartheta )]<\infty \), \({}_{a}\mathfrak{I}^{\lambda ,\eta }[h(\vartheta )]<\infty \). If \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\), where \(m, M\in \mathbb{R}\) for all \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), then the following inequality for the left generalized proportional fractional integral holds:
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}+ \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r}(\vartheta ) \bigr) ^{1/r} \leq 2 \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r} \bigl(g( \vartheta ),h(\vartheta ) \bigr) \bigr)^{1/r}, \end{aligned}$$
(57)
where \(h (g(\vartheta ),h(\vartheta ) )=\max \{M [ (\frac{M}{m}+1 )g(\vartheta )-Mh(\vartheta ) ],\frac{(m+M)h( \vartheta )-g(\vartheta )}{m} \}\).
Proof
Under the given hypothesis \(0< m\leq \frac{g(\rho )}{h(\rho )}\leq M\), where \(\rho \in [a,\vartheta ]\), \(\vartheta >a\), we have
$$\begin{aligned} 0< m\leq M+m-\frac{g(\rho )}{h(\rho )} \end{aligned}$$
(58)
and
$$\begin{aligned} M+m-\frac{g(\rho )}{h(\rho )}\leq M. \end{aligned}$$
(59)
From (58) and (59), we have
$$\begin{aligned} h(\rho )\leq \frac{(M+m)h(\rho )-g(\rho )}{m}\leq h \bigl(g(\rho ),h( \rho ) \bigr), \end{aligned}$$
(60)
where \(h (g(\rho ),h(\rho ) )=\max \{M [ (\frac{M}{m}+1 )g(\rho )-Mh(\rho ) ],\frac{(m+M)h( \rho )-g(\rho )}{m} \}\). Also, from the given hypothesis \(0<\frac{1}{M}\leq \frac{h(\rho )}{g(\rho )}\leq \frac{1}{m}\), we have
$$\begin{aligned} \frac{1}{M}\leq \frac{1}{M}+ \frac{1}{m}-\frac{h(\rho )}{g(\rho )} \end{aligned}$$
(61)
and
$$\begin{aligned} \frac{1}{M}+\frac{1}{m}- \frac{h(\rho )}{g(\rho )}\leq \frac{1}{m}. \end{aligned}$$
(62)
From (61) and (62), we obtain
$$\begin{aligned} \frac{1}{M}\leq \frac{ (\frac{1}{M}+\frac{1}{m} )g(\rho )-h( \rho )}{g(\rho )}\leq \frac{1}{m}. \end{aligned}$$
(63)
It follows that
$$\begin{aligned} g(\rho ) &\leq M \biggl(\frac{1}{M}+\frac{1}{m} \biggr)g(\rho )-Mh(\rho ) \\ &= \frac{M (M+m )g(\rho )-M^{2}mh(\rho )}{mM} \\ &= \biggl(\frac{M}{m}+1 \biggr)g(\rho )-Mh(\rho ) \\ &\leq M \biggl[ \biggl(\frac{M}{m}+1 \biggr)g(\rho )-Mh(\rho ) \biggr] \\ & \leq h \bigl(g(\rho ), h(\rho ) \bigr). \end{aligned}$$
(64)
From (60) and (64), we can write
$$\begin{aligned} g^{r}(\rho )\leq h^{r} \bigl(g(\rho ), h(\rho ) \bigr) \end{aligned}$$
(65)
and
$$\begin{aligned} h^{r}(\rho )\leq h^{r} \bigl(g(\rho ),h(\rho ) \bigr). \end{aligned}$$
(66)
Now, multiplying both sides of inequalities (65) and (66) respectively by \(\mathfrak{F}(\vartheta ,\rho )\) where \(\mathfrak{F}(\vartheta ,\rho )\) is defined by (11) and integrating the resulting inequalities with respect to ρ over \((a,\vartheta )\), we get
$$\begin{aligned} &\frac{1}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta } \exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1}g^{r}(\rho )\,d\rho \\ &\quad \leq \frac{1}{\eta ^{\lambda }\varGamma (\lambda )} \int _{a}^{\vartheta } \exp \biggl[\frac{\eta -1}{\eta }( \vartheta -\rho )\biggr](\vartheta -\rho )^{ \lambda -1} h^{r} \bigl(g(\rho ), h( \rho ) \bigr)\,d\rho . \end{aligned}$$
(67)
It follows that
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }g^{r}( \vartheta ) \bigr) ^{1/r}\leq \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta } h^{r} \bigl(g( \vartheta ), h(\vartheta ) \bigr) \bigr)^{1/r}. \end{aligned}$$
(68)
Similarly, from (66), we obtain
$$\begin{aligned} \bigl({}_{a}\mathfrak{I}^{\lambda ,\eta }h^{r}( \vartheta ) \bigr) ^{1/r}\leq \bigl({}_{a} \mathfrak{I}^{\lambda ,\eta }h^{r} \bigl(g( \vartheta ),h(\vartheta ) \bigr) \bigr)^{1/r}. \end{aligned}$$
(69)
Hence, by adding (68) and (69), we get the desired proof. □
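The max-function bound of Theorem 4.7 can be exercised numerically as well. The sketch below is our own construction under the usual assumed setup (\(\lambda =1\), \(\eta =0.7\), \(r=2\)); note that for the particular test functions chosen here \(M>1\), which keeps the first component of the max above \(g\) pointwise.

```python
import math

def prop_frac_int(f, a, theta, lam=1.0, eta=0.7, n=2000):
    # midpoint rule for {}_a I^{lam,eta} f(theta)
    dt = (theta - a) / n
    acc = 0.0
    for i in range(n):
        rho = a + (i + 0.5) * dt
        acc += math.exp((eta - 1) / eta * (theta - rho)) * (theta - rho) ** (lam - 1) * f(rho)
    return acc * dt / (eta ** lam * math.gamma(lam))

a, theta, r = 0.0, 2.0, 2.0
g = lambda x: 2.0 + math.sin(x)
h = lambda x: 1.5 + 0.5 * math.cos(x)
nodes = [a + (i + 0.5) * (theta - a) / 2000 for i in range(2000)]
m = min(g(x) / h(x) for x in nodes)
M = max(g(x) / h(x) for x in nodes)

# the max function from the theorem statement
H = lambda x: max(M * ((M / m + 1) * g(x) - M * h(x)),
                  ((m + M) * h(x) - g(x)) / m)

Ig = prop_frac_int(lambda x: g(x) ** r, a, theta) ** (1 / r)
Ih = prop_frac_int(lambda x: h(x) ** r, a, theta) ** (1 / r)
IH = prop_frac_int(lambda x: H(x) ** r, a, theta) ** (1 / r)

assert Ig + Ih <= 2 * IH   # bound of Theorem 4.7
```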
Concluding remarks
In this paper, we presented Minkowski-type inequalities and some other related inequalities via generalized nonlocal proportional fractional integral operators. The results exhibited in Sect. 3 generalize the earlier work of Dahmani [13] for the Riemann–Liouville fractional integral operator, and special cases of these results can be found in [40]. The inequalities established in Sect. 4 generalize those obtained earlier by Sulaiman [44], and they also reduce to classical results found in the work of Sroysang [43].
References
1. Abdeljawad, T.: On conformable fractional calculus. J. Comput. Appl. Math. 279, 57–66 (2015). https://doi.org/10.1016/j.cam.2014.10.016
2. Abdeljawad, T., Baleanu, D.: Monotonicity results for fractional difference operators with discrete exponential kernels. Adv. Differ. Equ. 2017, 78 (2017). https://doi.org/10.1186/s13662-017-1126-1
3. Abdeljawad, T., Baleanu, D.: On fractional derivatives with exponential kernel and their discrete versions. Rep. Math. Phys. 80, 11–27 (2017). https://doi.org/10.1016/S0034-4877(17)30059-9
4. Alzabut, J., Abdeljawad, T., Jarad, F., Sudsutad, W.: A Gronwall inequality via the generalized proportional fractional derivative with applications. J. Inequal. Appl. 2019, 101 (2019)
5. Anber, A., Dahmani, Z., Bendoukha, B.: New integral inequalities of Feng Qi type via Riemann-Liouville fractional integration. Facta Univ., Ser. Math. Inform. 27(2), 13–22 (2012)
6. Anderson, D.R., Ulness, D.J.: Newly defined conformable derivatives. Adv. Dyn. Syst. Appl. 10(2), 109–137 (2015)
7. Atangana, A., Baleanu, D.: New fractional derivatives with nonlocal and non-singular kernel: theory and application to heat transfer model. Therm. Sci. 20, 763–769 (2016). https://doi.org/10.2298/TSCI160111018A
8. Bougoffa, L.: On Minkowski and Hardy integral inequalities. J. Inequal. Pure Appl. Math. 7(2), Article ID 60 (2006)
9. Caputo, M., Fabrizio, M.: A new definition of fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 1(2), 73–85 (2015)
10. Chinchane, V.L., Pachpatte, D.B.: New fractional inequalities via Hadamard fractional integral. Int. J. Funct. Anal. Oper. Theory Appl. 5, 165–176 (2013)
11. da Vanterler, J., Sousa, C., Capelas de Oliveira, E.: The Minkowski's inequality by means of a generalized fractional integral. AIMS Ser. Appl. Math. 3, 131–147 (2018). https://doi.org/10.3934/Math.2018.1.131
12. da Vanterler, J., Sousa, C., Oliveira, D.S., Capelas de Oliveira, E.: Grüss-type inequalities by means of generalized fractional integrals. Bull. Braz. Math. Soc. (2019). https://doi.org/10.1007/s00574-019-00138-z
13. Dahmani, Z.: On Minkowski and Hermite-Hadamard integral inequalities via fractional integral. Ann. Funct. Anal. 1, 51–58 (2010)
14. Dahmani, Z.: New inequalities in fractional integrals. Int. J. Nonlinear Sci. 9(4), 493–497 (2010)
15. Dahmani, Z., Tabharit, L.: On weighted Gruss type inequalities via fractional integration. J. Adv. Res. Pure Math. 2, 31–38 (2010)
16. Dragomir, S.S.: A generalization of Gruss's inequality in inner product spaces and applications. J. Math. Anal. Appl. 237(1), 74–82 (1999)
17. Dragomir, S.S.: Some integral inequalities of Gruss type. Indian J. Pure Appl. Math. 31(4), 397–415 (2002)
18. Herrmann, R.: Fractional Calculus: An Introduction for Physicists. World Scientific, Singapore (2011)
19. Huang, C.J., Rahman, G., Nisar, K.S., Ghaffar, A., Qi, F.: Some inequalities of Hermite-Hadamard type for k-fractional conformable integrals. Aust. J. Math. Anal. Appl. 16(1), 1–9 (2019)
20. Jarad, F., Abdeljawad, T., Alzabut, J.: Generalized fractional derivatives generated by a class of local proportional derivatives. Eur. Phys. J. Spec. Top. 226, 3457–3471 (2017). https://doi.org/10.1140/epjst/e2018-00021-7
21. Jarad, F., Ugrlu, E., Abdeljawad, T., Baleanu, D.: On a new class of fractional operators. Adv. Differ. Equ. 2017(1), 247 (2017). https://doi.org/10.1186/s13662-017-1306-z
22. Katugampola, U.N.: A new approach to generalized fractional derivatives. Bull. Math. Anal. Appl. 6, 1–15 (2014)
23. Katugampola, U.N.: New fractional integral unifying six existing fractional integrals (2016). arXiv:1612.08596
24. Khalil, R., Al Horani, M., Yousef, A., Sababheh, M.: A new definition of fractional derivative. J. Comput. Appl. Math. 264, 65–70 (2014)
25. Khan, H., Abdeljawad, T., Tunç, C., Alkhazzan, A., Khan, A.: Minkowski's inequality for the AB-fractional integral operator. J. Inequal. Appl. 2019, 96 (2019)
26. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies, vol. 207. Elsevier, Amsterdam (2006)
27. Losada, J., Nieto, J.J.: Properties of a new fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 1(2), 87–92 (2015)
28. McD Mercer, A.: An improvement of the Gruss inequality. JIPAM. J. Inequal. Pure Appl. Math. 10(4), Article ID 93 (2005)
29. McD Mercer, A., Mercer, P.: New proofs of the Gruss inequality. Aust. J. Math. Anal. Appl. 1(2), Article ID 12 (2004)
30. Mitrinovic, D.S., Pecaric, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Kluwer Academic Publishers, Dordrecht (1993)
31. Mubeen, S., Habib, S., Naeem, M.N.: The Minkowski inequality involving generalized k-fractional conformable integral. J. Inequal. Appl. 2019, 81 (2019). https://doi.org/10.1186/s13660-019-2040-8
32. Nisar, K.S., Qi, F., Rahman, G., Mubeen, S., Arshad, M.: Some inequalities involving the extended gamma function and the Kummer confluent hypergeometric k-function. J. Inequal. Appl. 2018, 135 (2018)
33. Nisar, K.S., Rahman, G., Choi, J., Mubeen, S., Arshad, M.: Certain Gronwall type inequalities associated with Riemann-Liouville k- and Hadamard k-fractional derivatives and their applications. East Asian Math. J. 34(3), 249–263 (2018)
34. Podlubny, I.: Fractional Differential Equations. Mathematics in Science and Engineering, vol. 198. Academic Press, San Diego (1999)
35. Qi, F., Rahman, G., Hussain, S.M., Du, W.S., Nisar, K.S.: Some inequalities of Čebyšev type for conformable k-fractional integral operators. Symmetry 10, 614 (2018). https://doi.org/10.3390/sym10110614
36. Rahman, G., Nisar, K.S., Mubeen, S., Choi, J.: Certain inequalities involving the \((k,\rho )\)-fractional integral operator. Far East J. Math. Sci.: FJMS 103(11), 1879–1888 (2018)
37. Rahman, G., Nisar, K.S., Qi, F.: Some new inequalities of the Gruss type for conformable fractional integrals. AIMS Ser. Appl. Math. 3(4), 575–583 (2018)
38. Rahman, G., Ullah, Z., Khan, A., Set, E., Nisar, K.S.: Certain Chebyshev type inequalities involving fractional conformable integral operators. Mathematics 7, 364 (2019). https://doi.org/10.3390/math7040364
39. Set, E., Mumcu, İ., Demirbaş, S.: Conformable fractional integral inequalities of Chebyshev type. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 113(3), 2253–2259 (2019). https://doi.org/10.1007/s13398-018-0614-9
40. Set, E., Özdemir, M., Dragomir, S.: On the Hermite-Hadamard inequality and other integral inequalities involving two functions. J. Inequal. Appl. 2010, 148102 (2010)
41. Set, E., Tomar, M., Sarikaya, M.Z.: On generalized Grüss type inequalities for k-fractional integrals. Appl. Math. Comput. 269, 29–34 (2015)
42. Sousa, J., Capelas de Oliveira, E.: The Minkowski's inequality by means of a generalized fractional integral. AIMS Ser. Appl. Math. 3(1), 131–147 (2018)
43. Sroysang, B.: More on reverses of Minkowski's integral inequality. Math. Æterna 3, 597–600 (2013)
44. Sulaiman, W.T.: Reverses of Minkowski's, Hölder's, and Hardy's integral inequalities. Int. J. Mod. Math. Sci. 1, 14–24 (2012)
45. Taf, S., Brahim, K.: Some new results using Hadamard fractional integral. Int. J. Nonlinear Anal. Appl. 7, 103–109 (2015)
46. Usta, F., Budak, H., Ertuǧral, F., Sarıkaya, M.Z.: The Minkowski's inequalities utilizing newly defined generalized fractional integral operators. Commun. Fac. Sci. Univ. Ank. Sér. A1 Math. Stat. 68(1), 686–701 (2019)
47. Vanterlerda, J., Sousa, C., Capelas de Oliveira, E.: On the Ψ-fractional integral and applications. Comput. Appl. Math. 38, 4 (2019). https://doi.org/10.1007/s40314-019-0774-z
Acknowledgements
Further, we are thankful to the anonymous referee for useful suggestions.
Availability of data and materials
Not applicable.
Funding
The third author would like to thank Prince Sultan University for funding this work through research group Nonlinear Analysis Methods in Applied Mathematics (NAMAM) group number RG-DES-2017-01-17.
Author information
All authors contributed equally to this article. All authors read and approved the final manuscript.
Correspondence to Thabet Abdeljawad.
Ethics declarations
Competing interests
The authors declare that they have no competing interests regarding this article.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
MSC
• 26D10
• 26A33
• 05A30
Keywords
• Minkowski inequalities
• Generalized proportional fractional integral operator
• Inequality
ArrayIterator::seek
(PHP 5 >= 5.0.0, PHP 7)
ArrayIterator::seek - Seek to position
Description
public void ArrayIterator::seek ( int $position )
Warning
This function is currently not documented; only the argument list is available.
Parameters
position
The position to seek to.
Return Values
No value is returned.
User Contributed Notes
jon at ngsthings dot com (8 years ago)
<?php
// didn't see any code demos...here's one from an app I'm working on
$array = array('1' => 'one',
               '2' => 'two',
               '3' => 'three');
$arrayobject = new ArrayObject($array);
$iterator = $arrayobject->getIterator();
if ($iterator->valid()) {
    $iterator->seek(1); // expected: two, output: two
    echo $iterator->current(); // two
}
?>
foalford at gmail dot com (1 year ago)
<?php
// seek() alters the iterator's internal position (index), not the value that key() returns.
// This is a big trap when combined with the uasort()/natsort() functions.
$a = new ArrayObject([4, 3, 2, 1]);
$it = $a->getIterator();
$it->natsort(); // the iterator preserves the keys while sorting the array
$it->rewind();
$first = $it->key(); // the first element is 1 and its key is 3
echo $first . PHP_EOL; // 3
$it->next();
$second = $it->key();
echo $second . PHP_EOL; // 2
$it->next();
$it->seek($first); // was intended to seek to element 1 (key 3, index 0)
echo $it->key() . PHP_EOL; // ends up 0 because seek() takes its parameter as an index, not a key:
                           // it seeks to index 3, which holds element 4 with key 0.
var_dump($it);
/* Output:
3
2
0
object(ArrayIterator)#2 (1) {
  ["storage":"ArrayIterator":private]=>
  object(ArrayObject)#1 (1) {
    ["storage":"ArrayObject":private]=>
    array(4) {
      [3]=>
      int(1)
      [2]=>
      int(2)
      [1]=>
      int(3)
      [0]=>
      int(4)
    }
  }
}
*/
Is there an association between mild cognitive impairment and dietary pattern in Chinese elderly? Results from a cross-sectional population study
Abstract
Background
Diet has been shown to affect cognitive function in most prior studies, but its association with Mild Cognitive Impairment (MCI) in Chinese nonagenarians and centenarians has not been explored.
Methods
870 elderly Dujiangyan residents aged 90 years or more, identified in the 2005 census, were investigated at community halls or at home. They underwent the Mini-Mental State Examination (MMSE) for assessment of cognitive function and completed our questionnaire, which comprised 12 food items and other risk factors. MCI was defined in two steps: first, subjects with post-stroke disease, Alzheimer's disease or Parkinson's disease, or with MMSE < 18, were excluded; then subjects were categorized as MCI (MMSE scores between 19 and 24) or normal (MMSE scores between 25 and 30). Logistic regression models were used to analyze the association between diet and the prevalence of MCI. The model was adjusted for gender, age, systolic blood pressure, diastolic blood pressure, body mass index, fasting plasma glucose, total cholesterol, triglycerides, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, smoking habits, alcohol and tea consumption, educational level and exercise in the baseline dietary assessment.
Results
364 elderly participants were finally included; 279 of them were classified as MCI, comprising 108 (38.71%) men and 171 (61.29%) women. A significant association between MCI status and legume intake was observed (OR, 0.84; 95% CI, 0.72-0.97), and also for animal oil (any oil obtained from animal substances) (OR, 0.93; 95% CI, 0.88-0.98). There was no statistically significant difference in the other food items between the normal and MCI groups.
Conclusions
Among Chinese nonagenarians and centenarians, we found significant associations between inadequate intake of legume and animal oil and the prevalence of MCI. No significant correlation between the other food items and the prevalence of MCI was demonstrated in this study.
Background
Mild cognitive impairment (MCI) is a subjective complaint of memory impairment, with objective memory impairment adjusted for age and education, in the absence of dementia [1]. It represents a transitional stage of cognitive impairment between normal aging and dementia [2], and may be an early sign of Alzheimer's disease (AD). In 2001, at the "Current Concepts in MCI Conference," a definition of MCI was proposed that more broadly encompassed the clinical heterogeneity of MCI patients beyond memory impairment. Three subsets of MCI were proposed: 1) amnestic MCI; 2) multiple-domains, slightly-impaired MCI; and 3) single, non-memory-domain MCI. Patients with MCI may show evidence of vascular disease, movement disorders without diagnosed Parkinson disease, or neuropsychiatric disorders. In accordance with the clinical categorization of dementia, MCI subtypes based on etiology have been proposed, but standardized diagnostic criteria for MCI are still lacking [3]. Dementia is the advanced stage of untreated or mistreated mild cognitive impairment, and early diagnosis of and intervention in mild cognitive impairment could postpone or prevent the onset of subsequent dementia [4]. As a curative treatment is currently impossible, current research focuses on behavioural or pharmacological preventive interventions. Among behavioural approaches, diet may play an important role in the causation and prevention of AD. However, epidemiological data on diet and AD have been conflicting [5, 6]. Moreover, there is a paucity of research regarding the effect of dietary factors on the prevalence of MCI in Chinese old people (especially in nonagenarians and centenarians).
Recently, a study demonstrated that higher adherence to the MeDi (a diet characterized by high intake of fish, vegetables, legumes, fruits, cereals and unsaturated fatty acids (mostly in the form of olive oil), low intake of dairy products, meat and saturated fatty acids, and a regular but moderate amount of ethanol) [7] is associated with a trend toward reduced risk for developing MCI and with reduced risk of MCI conversion to AD [8]. Another study reported that the AD factor may be labelled as the low-vegetable, high-fat-and-sugar dietary pattern [9]. Nevertheless, potential associations between the Chinese diet and MCI (among nonagenarians and centenarians) have not been explored. The primary aim of this paper is to investigate the association between the Chinese dietary pattern and MCI, using data from the Project of Longevity and Aging in Dujiangyan (PLAD).
Methods
Participants and Study Design
The data were collected by the Project of Longevity and Aging in Dujiangyan (PLAD). The PLAD was initiated in April 2005 (and ended in September 2009) and aimed at investigating the relationship between environment, life-style, genetics, longevity and age-related diseases. Dujiangyan (located in Sichuan province, outside Chengdu city) has 2,311,709 inhabitants, of whom 870 were very elderly persons aged 90 years or more. The study was questionnaire-based and cross-sectional, conducted in April 2005. The results of the questionnaire and health examination were recorded on a standard form. Overall, 21 men and 26 women were not eligible for the study because they had already died or moved away from the area. 262 men and 561 women were interviewed, and we excluded subjects who suffered from post-stroke disease, Alzheimer's disease or Parkinson's disease (23 men and 31 women), did not complete the MMSE test (8 men and 15 women), or had an MMSE score ≤18 (71 men and 311 women). Finally, 364 participants (160 men and 204 women) were included in our study and their data were analyzed. The PLAD was approved by the Research Ethics Committee of Sichuan University, and written informed consent was obtained from all participants (as well as from their legal proxies).
Data Collection
Data were obtained by trained personnel, who interviewed all participants face to face. Participants underwent a standardized physical examination, anthropometric measurements and a 12-lead electrocardiogram, based on the prepared questionnaire for the medical record. The following information was collected: age, gender, education level, MMSE score, dietary habits, weight (kilograms), height (centimetres), tea drinking, alcohol consumption, smoking history and other items. Body mass index (BMI) was calculated as body weight in kilograms divided by height in metres squared. Venous blood samples were collected after an overnight fast (at least 8 h) for measurement of plasma glucose, plasma lipids and other biochemistry indicators. With participants in a sitting or recumbent position, right-arm blood pressure (BP) was measured twice to the nearest 2 mm Hg using a standard mercury sphygmomanometer (phases I and V of Korotkoff) by trained nurses or physicians. The mean value of the two measurements was used to calculate systolic BP (SBP) and diastolic BP (DBP); in exceptional subjects, SBP and DBP were calculated as the mean of right- and left-arm values.
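As a quick illustration of the BMI formula used above (the numbers are hypothetical, not study data):

```java
public class BmiDemo {
    // BMI = weight (kg) / height (m)^2
    static double bmi(double weightKg, double heightM) {
        return weightKg / (heightM * heightM);
    }

    public static void main(String[] args) {
        // e.g. 65 kg at 1.70 m
        System.out.printf("%.1f%n", bmi(65.0, 1.70)); // 22.5
    }
}
```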
Assessment of Cognitive Function
Cognitive function was assessed by the Mini-Mental State Examination (MMSE), a 30-item test covering orientation, attention, calculation, language and recall. The conventional cut-off score for cognitive impairment was defined as a score below 24 on the MMSE (sensitivity 80-90%, specificity 80-100%) [10]. But more and more studies pay close attention to other factors (such as education and age) affecting the MMSE score. In women at age 75, the MMSE score ranged from 21 (10th percentile) to 29 (90th percentile); at age 95, the range was 10 (10th percentile) to 27 (90th percentile) [11]. For old-old Chinese individuals (age 75 and older) with 0 to 6 years of education, the cut-off was 18/19 (sensitivity 94%, specificity 92%); for old-old Chinese individuals with more than 6 years of education, the cut-off was 22/23 (sensitivity 100%, specificity 88%) [12]. A previous study categorized subjects as follows: cognitive impairment (scores between 0 and 18), mild cognitive impairment (scores between 19 and 24) and normal (scores between 25 and 30) [10].
To decrease methodological bias and assure methodological reliability, the diagnosis was made as follows: mild cognitive impairment (scores between 19 and 24) and normal (scores between 25 and 30). Subjects were divided into these 2 groups, which were compared with each other on baseline characteristics and dietary patterns.
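The categorization described above is a simple threshold rule and can be sketched as follows; the class and method names are illustrative, not part of the study's analysis code:

```java
public class MmseCategory {
    // Categorize an MMSE score following the study's cut-offs:
    // 0-18 -> excluded (cognitive impairment), 19-24 -> MCI, 25-30 -> normal.
    static String categorize(int mmse) {
        if (mmse < 0 || mmse > 30) throw new IllegalArgumentException("MMSE range is 0-30");
        if (mmse <= 18) return "excluded";
        if (mmse <= 24) return "MCI";
        return "normal";
    }

    public static void main(String[] args) {
        System.out.println(categorize(22)); // MCI
        System.out.println(categorize(27)); // normal
    }
}
```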
Dietary Pattern Study Design
All participants filled out a questionnaire on the frequency of each food item and the frequency unit (day, week, month, year or never), based on 12 food categories: grain (or cereals), vegetables, fruit, poultry, meat (pork, beef and mutton), eggs, fish and shrimp, milk and milk products, legumes, animal oil, plant oil and nuts. The 12 foods were based on the Chinese Food Guide Pagoda, which is extensively used in dietary guidelines for Chinese residents. For ease of computation, each food category frequency was converted into times per week (times/week).
Statistical Methods
SPSS 16.0 was used in our analysis. All clinical continuous variables are presented as means ± standard deviation (M ± s.d.). Gender, educational level, current smoking, alcohol consumption, smoking history and so on were presented as categorical variables for each group. Significance testing of the difference between the two groups was performed using the Independent-Samples t test for continuous variables and the Chi-square or Fisher's exact test for categorical variables. Binary logistic analyses were performed to evaluate the association between potential dietary pattern risk factors and MCI. We calculated the 95% confidence interval (CI) for each odds ratio. A P-value < 0.05 was considered statistically significant, and all P-values are two-sided.
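As a sketch of the effect measure reported here, an odds ratio and its Wald 95% confidence interval can be computed from a 2x2 exposure-by-outcome table; the counts below are invented for illustration and are not the study's data:

```java
public class OddsRatioDemo {
    // Odds ratio for a 2x2 table:
    //             outcome+  outcome-
    // exposed        a         b
    // unexposed      c         d
    // OR = (a*d) / (b*c)
    static double oddsRatio(double a, double b, double c, double d) {
        return (a * d) / (b * c);
    }

    // Wald 95% CI: exp(ln(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d))
    static double[] waldCi95(double a, double b, double c, double d) {
        double or = oddsRatio(a, b, c, d);
        double se = Math.sqrt(1 / a + 1 / b + 1 / c + 1 / d);
        return new double[] { Math.exp(Math.log(or) - 1.96 * se),
                              Math.exp(Math.log(or) + 1.96 * se) };
    }

    public static void main(String[] args) {
        double[] ci = waldCi95(20, 80, 10, 90); // hypothetical counts
        System.out.printf("OR=%.2f, 95%% CI %.2f-%.2f%n",
                          oddsRatio(20, 80, 10, 90), ci[0], ci[1]);
    }
}
```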
Results
Baseline Characteristics and Prevalence of Dietary Pattern and MCI
Among the 364 volunteers, the mean age was 93.02 years (s.d. 3.01 years, range 90 to 105 years) and 204 (56.00%) were women, including 15 centenarians. 90% of subjects lived in the countryside. The mean cognitive function score for this old population was 22.52 (SD 2.75, range 19-30). In this oldest group, the total prevalence rate of mild cognitive impairment was 76.6%; the prevalence rate among males was 67.5% and among females 83.8%. Women had a significantly higher prevalence of MCI than men (171/204 vs. 108/160, p < 0.001). Subjects without MCI were significantly younger and had higher MMSE scores than those with MCI (92.25 ± 2.55 vs. 93.26 ± 3.10, P < 0.01, and 26.45 ± 1.45 vs. 21.32 ± 1.74, P < 0.001, respectively). Subjects without MCI more often had former exercise habits than those with MCI (37/84 vs. 82/265, P < 0.05). Educational level also differed significantly between the MCI and normal groups (illiteracy, 179/213 vs. 34/213; primary school, 89/134 vs. 45/134; secondary school, 7/11 vs. 4/11; college and postgraduate, 4/6 vs. 2/6; P < 0.01). No food items or other factors differed statistically between the normal and MCI groups. Demographic and clinical characteristics of the 364 subjects, grouped by MCI and normal, are shown in Table 1.
Table 1 Baseline characteristics according to MCI (n = 364)
Dietary Frequencies and Risk of MCI
We assessed whether the 12 dietary items were associated with an increased risk of mild cognitive impairment (Table 2) by comparing the food frequencies per week. After adjustment for gender, age, systolic blood pressure, diastolic blood pressure, body mass index, fasting plasma glucose, total cholesterol, triglycerides, high-density lipoprotein cholesterol and low-density lipoprotein cholesterol, smoking habits, alcohol and tea consumption and exercise, significant differences were revealed between the MCI and normal groups concerning animal oil (OR, 0.93; 95% CI, 0.88-0.98; P < 0.01) and legume (OR, 0.84; 95% CI, 0.72-0.97; P < 0.05). For the other 10 food items, no significant difference was detected in either the unadjusted or the adjusted models.
Table 2 Odds Ratios (OR) for Subjects with MCI vs. Normal Controls by 12 Food Frequencies per Week (Continuous).
12 Food Category Frequencies Distributed among MCI and Normal
The mean frequencies per week of the 12 food categories are shown in Figure 1. Subjects in both the MCI and normal groups had a high intake of grain or cereals, vegetables and plant oil, and a low intake of animal oil, pork or beef or mutton, eggs, legume, fruits, milk or milk products, nuts, poultry and fish or shrimp. The percentages, maxima and minima of the 12 food category frequencies per week are shown in Figure 2. Not much difference was observed between the MCI and normal groups for the food items except animal oil (the median was lower than 5 times/week in MCI and higher than 5 in normal controls; the maximum was lower than 15 and higher than 20, respectively) and milk or milk products (the maximum in the normal group was nearly 5 times per week, but in MCI nearly 0). The frequencies of nuts, poultry and fish or shrimp were scant, close to 0 in both MCI and normal controls.
Figure 1. 12 food category frequencies among MCI and normal groups.
Figure 2. Box-and-bar plots of the 12 food category frequencies per week in subjects with mild cognitive impairment (MCI) and normal cognition.
Discussion
In this cross-sectional observation of community-dwelling Chinese nonagenarians and centenarians, there was a high prevalence of mild cognitive impairment, and compared with men, women had a higher prevalence of mild cognitive impairment. Based on this observation of community-dwelling oldest-old persons, we found significant associations between the prevalence of mild cognitive impairment and low intake of animal oil and legume in Chinese nonagenarians and centenarians. However, these associations were affected by other factors, such as gender, age, former exercise, education, FBG, blood pressure and so on. The result for animal oil differs somewhat from previous cognitive function studies [8, 9, 13, 14], but accords with an Asian study [15]. Animal oil is any oil obtained from animal substances. It is not only abundant in saturated fatty acids and cholesterol, but also in unsaturated fatty acids (such as fish oil), which are very important for the absorption of the fat-soluble vitamins A, D, E and K. In addition, the cholesterol in animal oil is an important component of human tissue cells and an important raw material for synthesising bile and some hormones. Figures 1 and 2 show intuitively that both the MCI and the cognitively normal participants were mostly vegetarians, and Table 1 shows that the MCI subjects were malnourished compared with the normal ones. To sum up, these findings may suggest that both too-low and too-high levels of animal oil intake can lead to a decline in cognitive function or increase the prevalence of MCI. There are many reports of legume protecting cognitive function in the elderly [8, 9, 15], and in our study low intake of legume was a risk factor for MCI, in accord with former studies.
This survey showed a higher prevalence rate of mild cognitive impairment in Chinese nonagenarians and centenarians than other large-sample surveys [4, 16]. The reasons may be as follows. Firstly, although the populations in those surveys were mostly more than 65 years old, the percentage of subjects over 90 years old was very low. Secondly, our participants mostly lived in the countryside and their education levels were lower (illiteracy 213, non-illiteracy 151, including primary school 134, secondary school 11, college and postgraduate 6), whereas participants in those surveys had a higher level of education, with far fewer illiterates than ours (213/364). So there are objective differences between the population in our study and those studied previously.
There were other interesting findings among nonagenarians and centenarians in the present study. Firstly, compared with men, women had a higher prevalence rate of mild cognitive impairment. Secondly, former exercise was related to a low prevalence of MCI. Thirdly, educational level was associated with the prevalence of MCI, showing a tendency for the prevalence of MCI to decrease as the education level of participants increases (illiterates, 84.04%; primary school, 66.42%; secondary school, 63.64%; college and postgraduate, 66.67%).
Our study has some limitations that deserve mention. Firstly, since it was part of the PLAD, there might be a survival bias; however, this is inherent in all studies of individuals in this age group. Secondly, because of too much missing data, and to prevent the bias it would cause, we did not consider the presence of depressive symptoms or the number of drugs used. In addition, the conclusion that depression was not directly correlated with cognitive impairment in Chinese nonagenarians and centenarians has been described elsewhere [17]. Thirdly, we did not adjust for other potential confounding factors, such as APOE genotype, socio-economic status and family history of cognitive impairment. Most (90%) participants in the present study lived in the countryside, and some subjects had been working on a farm every day, so physical activity may be a potential confounder. Thus, this population might not represent the urban population.
Conclusions
In conclusion, we found significant associations between animal oil and legume intake and the prevalence of MCI among Chinese nonagenarians and centenarians, but this needs to be confirmed by larger studies. No significant associations were detected between the other food items and MCI in this study.
References
1.
Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E: Mild cognitive impairment: clinical characterization and outcome. Archives of neurology. 1999, 56: 303-308. 10.1001/archneur.56.3.303.
2.
Kelley BJ, Petersen RC: Alzheimer's disease and mild cognitive impairment. Neurologic clinics. 2007, 25: 577-609. 10.1016/j.ncl.2007.03.008.
3.
Panza F, D'Introno A, Colacicco AM, Capurso C, Del Parigi A, Caselli RJ, Pilotto A, Argentieri G, Scapicchio PL, Scafato E, et al: Current epidemiology of mild cognitive impairment and other predementia syndromes. Am J Geriatr Psychiatry. 2005, 13: 633-644.
4.
Gauthier S, Reisberg B, Zaudig M, Petersen RC, Ritchie K, Broich K, Belleville S, Brodaty H, Bennett D, Chertkow H, et al: Mild cognitive impairment. Lancet. 2006, 367: 1262-1270. 10.1016/S0140-6736(06)68542-5.
5.
Luchsinger JA, Noble JM, Scarmeas N: Diet and Alzheimer's disease. Curr Neurol Neurosci Rep. 2007, 7: 366-372. 10.1007/s11910-007-0057-8.
6.
Luchsinger JA, Mayeux R: Dietary factors and Alzheimer's disease. Lancet Neurol. 2004, 3: 579-587. 10.1016/S1474-4422(04)00878-6.
7.
Trichopoulou A, Costacou T, Bamia C, Trichopoulos D: Adherence to a Mediterranean diet and survival in a Greek population. The New England journal of medicine. 2003, 348: 2599-2608. 10.1056/NEJMoa025039.
8.
Solfrizzi V, Frisardi V, Capurso C, D'Introno A, Colacicco AM, Vendemiale G, Capurso A, Panza F: Mediterranean dietary pattern, mild cognitive impairment, and progression to dementia. Archives of neurology. 2009, 66: 912-913. 10.1001/archneurol.2009.128. author reply 913-914
9.
Gustaw-Rothenberg K: Dietary patterns associated with Alzheimer's disease: population based study. International journal of environmental research and public health. 2009, 6: 1335-1340. 10.3390/ijerph6041335.
10.
Huang CQ, Dong BR, Wu HM, Zhang YL, Wu JH, Lu ZC, Flaherty JH: Association of cognitive impairment with serum lipid/lipoprotein among Chinese nonagenarians and centenarians. Dementia and geriatric cognitive disorders. 2009, 27: 111-116. 10.1159/000194660.
11.
Dufouil C, Clayton D, Brayne C, Chi LY, Dening TR, Paykel ES, O'Connor DW, Ahmed A, McGee MA, Huppert FA: Population norms for the MMSE in the very old: estimates based on longitudinal data. Mini-Mental State Examination. Neurology. 2000, 55: 1609-1613.
12.
Steis MR, Schrauf RW: A review of translations and adaptations of the Mini-Mental State Examination in languages other than English and Spanish. Research in gerontological nursing. 2009, 2: 214-224. 10.3928/19404921-20090421-06.
13.
Eskelinen MH, Ngandu T, Helkala EL, Tuomilehto J, Nissinen A, Soininen H, Kivipelto M: Fat intake at midlife and cognitive impairment later in life: a population-based CAIDE study. International journal of geriatric psychiatry. 2008, 23: 741-747. 10.1002/gps.1969.
14.
Engelhart MJ, Geerlings MI, Ruitenberg A, Van Swieten JC, Hofman A, Witteman JC, Breteler MM: Diet and risk of dementia: Does fat matter?: The Rotterdam Study. Neurology. 2002, 59: 1915-1921. 10.1001/archneur.59.12.1915.
15.
Lee L, Kang SA, Lee HO, Lee BH, Park JS, Kim JH, Jung IK, Park YJ, Lee JE: Relationships between dietary intake and cognitive function level in Korean elderly people. Public health. 2001, 115: 133-138. 10.1016/S0033-3506(01)00432-2.
16.
Frisoni GB, Fratiglioni L, Fastbom J, Guo Z, Viitanen M, Winblad B: Mild cognitive impairment in the population and physical health: data on 1,435 individuals aged 75 to 95. The journals of gerontology. 2000, 55: M322-328.
17.
Ji-Rong Y, Bi-Rong D, Chang-Quang H, Hong-Mei W, Yan-Ling Z, Qing-Xiu L, Jue-Ling D, Bing-You W, Qi-Yuan Y: Cognitive impairment and depression among Chinese nonagenarians/centenarians. Am J Geriatr Psychiatry. 2010, 18: 297-304. 10.1097/JGP.0b013e3181d143bc.
Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/10/595/prepub
Acknowledgements
This work was supported by the Discipline Construction Foundation of Sichuan University and by grants from the Project of Science and Technology Bureau of Sichuan Province (2006Z09-006-4) and the Construction Fund for Subjects of West China Hospital of Sichuan University (XK05001). The authors thank the staff of the Department of Geriatrics Medicine, West China Hospital, the Dujiangyan Government and Dujiangyan People's Hospital, and all participants (as well as their legal proxies) for their great contribution.
Author information
Correspondence to Birong Dong.
Additional information
Competing interests
Prof. Dong serves as the director of Department of Geriatrics West China Hospital, West China School of Medicine, Sichuan University, China. She sponsored the Project of Longevity and Aging in Dujiangyan (PLAD) and was supported by the Discipline Construction Foundation of Sichuan University and by grants from the Project of Science and Technology Bureau of Sichuan Province (2006Z09-006-4), and the Construction Fund for Subjects of West China Hospital of Sichuan University (XK05001).
Authors' contributions
Ziqi Wang (the first author) drafted the manuscript and performed the statistical analysis. Birong Dong sponsored the Project of Longevity and Aging in Dujiangyan (PLAD) and conceived of the study. Guo Zeng participated in sponsoring the PLAD. Jun Li helped to draft the manuscript. Wenlei Wang, Binyou Wang and Qiyuan Quan participated in the questionnaire design and information collection. All authors read and approved the final manuscript.
Cite this article
Wang, Z., Dong, B., Zeng, G. et al. Is there an association between mild cognitive impairment and dietary pattern in Chinese elderly? Results from a cross-sectional population study. BMC Public Health 10, 595 (2010). https://doi.org/10.1186/1471-2458-10-595
Keywords
• Food Item
• Mild Cognitive Impairment
• Dietary Pattern
• MMSE Score
• Mild Cognitive Impairment Patient
problem 1: Write down all the applications of Wrapper Classes.
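As a starting point for problem 1, here is a minimal sketch of common wrapper-class uses (parsing, boxing/unboxing, and constants); the class name is illustrative:

```java
public class WrapperDemo {
    public static void main(String[] args) {
        // Parsing: convert strings to primitives
        int n = Integer.parseInt("42");

        // Boxing/unboxing: wrap primitives as objects (e.g. for collections)
        Integer boxed = Integer.valueOf(n); // explicit boxing
        int unboxed = boxed;                // auto-unboxing

        // Useful constants and conversions
        System.out.println(Integer.MAX_VALUE);         // 2147483647
        System.out.println(Integer.toBinaryString(n)); // 101010
        System.out.println(unboxed + 1);               // 43
    }
}
```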
problem 2: Which of the given statements is correct?
a) You should initialize System with a path and filename if you want to use the System.err output.
b) The System.arraycopy method works only with the array of primitives.
c) The System.exit method takes an int primitive parameter.
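A short check of these statements suggests (c) is the correct one: System.err needs no initialization, System.arraycopy also works on object arrays, and System.exit takes an int status code. A sketch:

```java
import java.util.Arrays;

public class SystemFacts {
    public static void main(String[] args) {
        // System.err is ready to use without any initialization (statement a is false)
        System.err.println("no setup needed");

        // System.arraycopy works on object arrays too, not just primitives (statement b is false)
        String[] src = { "a", "b", "c" };
        String[] dst = new String[3];
        System.arraycopy(src, 0, dst, 0, 3);
        System.out.println(Arrays.toString(dst)); // [a, b, c]

        // System.exit(int status) takes an int primitive (statement c is true);
        // left commented out so the demo keeps running:
        // System.exit(0);
    }
}
```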
problem 3: How does the class loader load the applet?
problem 4: Illustrate the use of the java.lang.ref package.
problem 5: Describe the benefits of Thread Grouping?
problem 6: Write a program to input CustNo (String), ItemNo (String), Qty (int), Rate (Float class) and DiscountRate (Float class). Compute the net value and discount amount for each customer by calling a function. At the end of the program, display the net collection and the total discount amount given to all customers.
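One plausible reading of problem 6's computation, assuming discount = Qty x Rate x DiscountRate and net value = gross minus discount (the class, method and field names below are my own choices, not given by the problem):

```java
public class BillingDemo {
    // Hypothetical helpers: discount = qty * rate * discountRate
    static float discountAmount(int qty, float rate, float discountRate) {
        return qty * rate * discountRate;
    }

    // net = gross - discount
    static float netValue(int qty, float rate, float discountRate) {
        return qty * rate - discountAmount(qty, rate, discountRate);
    }

    public static void main(String[] args) {
        // One customer: 10 units at 5.0 with a 10% discount
        float discount = discountAmount(10, 5.0f, 0.10f);
        float net = netValue(10, 5.0f, 0.10f);
        System.out.println("Discount: " + discount + ", Net: " + net); // Discount: 5.0, Net: 45.0
    }
}
```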
problem 7: Write a program to invoke the Paintbrush program from within a Java application.
problem 8: Write a program that does the following:
a) Displays the memory available to the program.
b) Displays the free memory available to the Java runtime.
c) Initiates the garbage collector.
d) Displays the free memory after initiating the garbage collector.
e) Creates an int array of 500 elements.
f) Displays the free memory after the allocation.
g) Removes 200 elements by assigning a null reference to those elements.
h) Calls the garbage collector.
i) Displays the free memory after the garbage collection.
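Problem 8 centres on java.lang.Runtime. Below is a condensed sketch of the requested steps; an Integer[] is used instead of an int[] so that elements can actually be set to null, and the array size and output format are illustrative:

```java
public class MemoryDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // a) + b) memory available to the program and current free memory
        System.out.println("Total memory: " + rt.totalMemory());
        System.out.println("Free memory:  " + rt.freeMemory());

        // c) + d) request garbage collection, then re-check free memory
        rt.gc();
        System.out.println("Free after GC: " + rt.freeMemory());

        // e) + f) allocate an array of 500 elements and re-check
        Integer[] arr = new Integer[500];
        for (int i = 0; i < arr.length; i++) arr[i] = i;
        System.out.println("Free after allocation: " + rt.freeMemory());

        // g) drop 200 elements by nulling their references
        for (int i = 0; i < 200; i++) arr[i] = null;

        // h) + i) collect garbage and report free memory again
        rt.gc();
        System.out.println("Free after second GC: " + rt.freeMemory());
    }
}
```

Note that Runtime.gc() is only a request; the JVM may collect at its own discretion, so the printed free-memory values will vary between runs.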
Java, Programming
• Category:- Java
• Reference No.:- M98743
Have any Question?
Related Questions in Java
Write java program with eclipsinstructionswrite a program
Write java program with Eclips INSTRUCTIONS Write a program that will help a student practice basic math (addition, subtraction, multiplication, and division). Display a menu the student can select from. The student will ...
Importantuse jgrasp for editingobjectives - at the
IMPORTANT: use JGRASP for editing. Objectives - At the conclusion of this assignment students will have demonstrated that they can: Validate input data from a keyboard. Use loops to repeat actions in a program Use a Rand ...
Casegreentek is a software solution company for smartphone
Case Greentek is a software solution company for smartphone and tablet devices. Current headquarter (HQ) locates in Sydney, 30 sale team members and 50 software engineers base in Singapore and Manila respective. The cycl ...
Java programing essaywrite a paper of 700-word response to
JAVA programing essay Write a paper of 700-word response to the following: In your opinion, what are the three biggest challenges in planning and designing a solution for a programming problem? What can you do to overcom ...
Program 1objectivethis program assignment is provided to
Program 1 Objective: This program assignment is provided to let the students know how to handle threads and enhance system availability on a multiprocessor or multicore environment. A single process is supposed to create ...
Jva programmingmodify the given java application attached
JAVA PROGRAMMING Modify the given Java application (attached) using NetBeans IDE to meet these additional and changed business requirements: • The application will now compare the total annual compensation of at least tw ...
Java programming using ide netbeansdetailed question must
Java programming using IDE NetBeans Detailed Question: Must use file operations, exception handling, recursive programming (to calculate averages), and encapsulation (or inheritance) in the program Must have four java fi ...
Design your own java class that includes at least 3 data
Design your own Java Class that includes at least 3 data fields, 2 constructors and 4 methods. When designing your class, pick an object that you are familiar with and make it your own, realistic, yet simple design with ...
Assignmentpersonasas has been outlined there are three
Assignment Personas As has been outlined there are three specific groups who would be considered the core demographic for the users of this site. Sue Smith Age: 35 years old Gender: Female Location: Vancouver, BC Educati ...
Java application product and inventoryreadingchapter 9 of
Java Application Product and Inventory Reading Chapter 9 of the text Moodle: Class Coding Style Java Classes Java Application Product and Inventory Product.java Create a class to encapsulate the data and behavior of a pr ...
What is Synovial membrane?
Synovial membrane (also known as synovium or stratum synoviale) is the soft tissue found between the articular capsule (joint capsule) and the joint cavity of synovial joints.
The word "synovium" is related to the word "synovia" (synovial fluid), which is the clear, viscous, lubricating fluid secreted by synovial membranes. The word "synovia" or "sinovia" was coined by Paracelsus, and may have been derived from the Greek word "syn" ("with") and the Latin word "ovum" ("egg"), because the synovial fluid in joints that have a cavity between the bearing surfaces is similar to egg white.
Structure
The synovium is quite variable in structure but often has two layers:
The outer layer, or subintima, can be of almost any type: fibrous, fatty or loosely "areolar".
The inner layer, or intima, consists of a sheet of cells thinner than a piece of paper.
Where the underlying subintima is loose, the intima sits on a pliable membrane, giving rise to the term synovial membrane.
This membrane, together with the cells of the intima, provides something like an inner tube, sealing the synovial fluid from the surrounding tissue (effectively stopping the joints from being squeezed dry when subject to impact, such as running).
The surface of synovium may be flat or may be covered with finger-like projections or villi, which, it is presumed, help to allow the soft tissue to change shape as the joint surfaces move one on another.
Just beneath the intima, most synovium has a dense net of small blood vessels that provide nutrients not only for synovium but also for the avascular cartilage.
In any one position, much of the cartilage is close enough to get nutrition direct from synovium.
Some areas of cartilage have to obtain nutrients indirectly and may do so either from diffusion through cartilage or possibly by 'stirring' of synovial fluid.
The intimal cells are of two types, fibroblasts and macrophages, both of which are different in certain respects from similar cells in other tissues.
The fibroblasts manufacture a long-chain sugar polymer called hyaluronan, which makes the synovial fluid "ropy", like egg white, together with a molecule called lubricin, which lubricates the joint surfaces. The water of synovial fluid is not secreted as such but is effectively trapped in the joint space by the hyaluronan.
The macrophages are responsible for the removal of undesirable substances from the synovial fluid.
I am having trouble inserting logged-in user data, using the 'user_register' hook. The following code inserts a row in the database, but only inserts the 'email' column.
function nuevoPostulante()
{
global $wpdb;
$tablaPostulante = $wpdb->prefix . 'postulante';
$current_user = wp_get_current_user();
$wpdb->insert(
$tablaPostulante,
array(
'dni' => $current_user->user_login,
'nombre' => $current_user->display_name,
'email' => '[email protected]',
)
);
}
add_action('user_register', 'nuevoPostulante');
Columns with values taken from $current_user are empty; the insert does not seem to take data from the array.
I think it's a scope problem, but I don't understand how to fix it. Could somebody help me? Thank you!
2 Answers
The user_register action allows you to access data for a new user immediately after they are added to the database. The user ID is passed to the hook as an argument.
So if you wish to insert a record when a user registers, you can modify your code as follows:
add_action( 'user_register', 'nuevoPostulante', 10, 1 );
function nuevoPostulante( $user_id ) {
global $wpdb;
$tablaPostulante = $wpdb->prefix . 'postulante';
$current_user = get_user_by( 'ID', $user_id ); // get_userdata( $user_id ) works equally well here instead of get_user_by().
$wpdb->insert(
$tablaPostulante,
array(
'dni' => $current_user->user_login,
'nombre' => $current_user->display_name,
'email' => '[email protected]',
)
);
}
It's probably because when a user registers for a site, no user is logged in yet, so the object returned from wp_get_current_user() is empty.
Try checking and using values from the $_POST array instead.
About
The majestic spiral horns of the male bighorn sheep are among the most easily identifiable in the animal world.
Size Matters!
As members of the bovid family, both male and female sheep have horns, but the male's horns are much bigger than the female's, and size often determines who "gets the girl." Bovid horns are a sheath made of keratin (the same substance that human hair and fingernails are made of) that grows from a bony core attached to the skull.
Age and horn size determine male dominance, although head-butting clashes between males are used as a way to prove dominance and gain access to a particular female during mating season. Younger males engage in these fights more frequently, and older males, with their bigger and stronger horns, will win the match very quickly. Head-butting clashes have been known to last for hours and sometimes days.
Status
Although IUCN lists the bighorn sheep as a species of least concern, populations of desert bighorn sheep are less stable.
Habitat
Bighorn sheep is one species with three living subspecies: the Rocky Mountain bighorn sheep (Ovis canadensis canadensis), the Sierra Nevada bighorn sheep (Ovis canadensis sierrae), and the desert bighorn sheep (Ovis canadensis nelsoni). Desert bighorn are found on the desert slopes of the Peninsular Ranges, in the western-most portion of the Sonoran Desert.
Diet
Bighorn sheep are grazers. They eat all types of grasses and brush and will eat twigs and leaves when necessary. In the desert, their diet includes all kinds of cacti, yucca, and fruits. They get most of their moisture from the desert plants, but still need to visit water holes every few days during the summer months.
Physical Characteristics
The most recognizable characteristics of the bighorn sheep are the male's massive, spiraled horns and their majestic faces. Females, or ewes, are smaller than the males and have smaller, shorter horns that also curve into a spiral shape, never exceeding half a curl. They are brownish in color with white rumps and a short tail. The eyes are set on the sides of the head; the ears are short and pointed. They have very acute eyesight and hearing, which helps them navigate their rocky terrain and avoid predators. Their faces are narrow and pointed.
HBase flusher source-code analysis (the full flush code path)
Before diving into the HBase flush source code, let's first outline the overall logic, which will make the code easier to follow. The flush flow has three phases:
1. Phase 1: the prepare phase, which takes a snapshot of the current memstore's in-memory structure. HBase's in-memory write structures (the memstore and its snapshot) are skip lists, backed by the JDK's own ConcurrentSkipListMap. This step essentially assigns the memstore to the snapshot and constructs a new, empty memstore.
2. Phase 2: the flushcache phase, which flushes the snapshot produced in phase 1 to disk. Note that it writes to a temp file; the generated HFile is not yet moved into the store's actual column-family directory — that move happens in phase 3.
3. Phase 3: the commit phase, which moves the HFile produced in phase 2 to its final, correct location.
That is the logical flow of an HBase flush. A flush happens at region level and involves many classes; below we walk through the flush-related operations.
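The three phases above can be sketched with a toy, non-HBase example: prepare swaps the active map into a snapshot, flushCache writes the snapshot to a temporary location, and commit moves the output to its final place. All class and field names here are illustrative assumptions, not HBase code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal sketch of the three-phase flush: prepare (snapshot), flushCache (tmp), commit (move).
public class ThreePhaseFlush {
    private ConcurrentSkipListMap<String, String> memstore = new ConcurrentSkipListMap<>();
    private ConcurrentSkipListMap<String, String> snapshot = new ConcurrentSkipListMap<>();
    private final List<String> tempFile = new ArrayList<>();   // stands in for the tmp HFile
    private final List<String> storeFiles = new ArrayList<>(); // stands in for the CF directory

    public void put(String k, String v) { memstore.put(k, v); }

    // Phase 1: snapshot — just a reference swap (done under the update lock in HBase)
    public void prepare() {
        snapshot = memstore;
        memstore = new ConcurrentSkipListMap<>();
    }

    // Phase 2: write the snapshot to a temporary location
    public void flushCache() {
        for (Map.Entry<String, String> e : snapshot.entrySet()) {
            tempFile.add(e.getKey() + "=" + e.getValue());
        }
    }

    // Phase 3: move the temp output into the store and drop the snapshot
    public void commit() {
        storeFiles.addAll(tempFile);
        tempFile.clear();
        snapshot = new ConcurrentSkipListMap<>();
    }

    public int memstoreSize() { return memstore.size(); }
    public int storeFileEntries() { return storeFiles.size(); }
}
```

Note how a write arriving between prepare() and commit() lands in the fresh memstore and is untouched by the in-flight flush — the same property the region's update lock protects in HBase.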
Starting the flush threads
• When a regionserver starts, it calls the startServiceThreads method to launch a number of service threads, among them:
// Cache flushing
protected MemStoreFlusher cacheFlusher;
// ... omitted ...
private void startServiceThreads() throws IOException {
  // ... other code omitted ...
  this.cacheFlusher.start(uncaughtExceptionHandler);
}
• cacheFlusher is an instance of MemStoreFlusher. Before walking through the logic above, let's first look at two MemStoreFlusher fields:
• // This field is a BlockingQueue<FlushQueueEntry>.
// It mainly holds FlushRegionEntry flush requests, plus WakeupFlushThread wake-up tokens.
private final BlockingQueue<FlushQueueEntry> flushQueue =
new DelayQueue<FlushQueueEntry>();
// Requests added to flushQueue are also tracked in regionsInQueue.
private final Map<HRegion, FlushRegionEntry> regionsInQueue =
new HashMap<HRegion, FlushRegionEntry>();
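Since flushQueue is a java.util.concurrent.DelayQueue, an entry only becomes visible to poll() once its delay expires — this is how a requeued flush request "sleeps" before being retried. A minimal sketch of such an entry follows; DelayedFlushEntry is an illustrative name, not an HBase class.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A delay-queue element: invisible to poll() until its delay expires.
public class DelayedFlushEntry implements Delayed {
    private final String regionName;
    private final long readyAtNanos; // absolute time at which the entry becomes available

    public DelayedFlushEntry(String regionName, long delayMs) {
        this.regionName = regionName;
        this.readyAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
    }

    public String getRegionName() { return regionName; }

    @Override
    public long getDelay(TimeUnit unit) {
        // remaining delay; <= 0 means the entry is ready to be taken
        return unit.convert(readyAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
    }
}
```

A non-blocking poll() returns null while the delay is pending, while a timed poll blocks until the entry matures — the same behavior the FlushHandler loop below relies on.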
• MemStoreFlusher's start method is as follows:
synchronized void start(UncaughtExceptionHandler eh) {
ThreadFactory flusherThreadFactory = Threads.newDaemonThreadFactory(
server.getServerName().toShortString() + "-MemStoreFlusher", eh);
for (int i = 0; i < flushHandlers.length; i++) {
flushHandlers[i] = new FlushHandler("MemStoreFlusher." + i);
flusherThreadFactory.newThread(flushHandlers[i]);
flushHandlers[i].start();
}
}
It creates as many FlushHandler threads as configured by flusher.handler.count, then calls start on each of them. Let's look at FlushHandler next.
private class FlushHandler extends HasThread {
private FlushHandler(String name) {
super(name);
}
@Override
public void run() {
// loop while the server has not stopped
while (!server.isStopped()) {
FlushQueueEntry fqe = null;
try {
wakeupPending.set(false); // allow someone to wake us up again
// poll on the blocking queue; blocks here (up to the timeout) when the queue is empty
fqe = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
if (fqe == null || fqe instanceof WakeupFlushThread) {
// No flush request, or the request is a global wake-up token.
if (isAboveLowWaterMark()) {
// Check whether total memstore usage exceeds max_heap * hbase.regionserver.global.memstore.lowerLimit (default 0.35).
// If above this low-water mark, flush the region with the largest memstore.
LOG.debug("Flush thread woke up because memory above low water="
+ TraditionalBinaryPrefix.long2String(globalMemStoreLimitLowMark, "", 1));
if (!flushOneForGlobalPressure()) {
// Wasn't able to flush any region, but we're above low water mark
// This is unlikely to happen, but might happen when closing the
// entire server - another thread is flushing regions. We'll just
// sleep a little bit to avoid spinning, and then pretend that
// we flushed one, so anyone blocked will check again
Thread.sleep(1000);
wakeUpIfBlocking();
}
// Enqueue another one of these tokens so we'll wake up again
wakeupFlushThread();
}
// also continue after a poll timeout
continue;
}
// A normal flush request: a single region's memstore exceeded
// hbase.hregion.memstore.flush.size (default 128M), so perform the flush.
FlushRegionEntry fre = (FlushRegionEntry) fqe;
if (!flushRegion(fre)) {
break;
}
} catch (InterruptedException ex) {
continue;
} catch (ConcurrentModificationException ex) {
continue;
} catch (Exception ex) {
LOG.error("Cache flusher failed for entry " + fqe, ex);
if (!server.checkFileSystem()) {
break;
}
}
}
// Cleanup when the MemStoreFlusher thread ends (usually regionserver stop); this is outside the while loop.
synchronized (regionsInQueue) {
regionsInQueue.clear();
flushQueue.clear();
}
// Signal anyone waiting, so they see the close flag
wakeUpIfBlocking();
LOG.info(getName() + " exiting");
}
Let's now summarize the logic of FlushHandler's run method:
1. As long as the regionserver is alive, keep looping and checking for flush requests.
2. Block on flushQueue.poll; since flushQueue is a blocking queue, the call blocks while the queue is empty, until the timeout.
3. If the queue is not empty, take a request and call MemStoreFlusher.flushRegion(fre).
The flush flow
As we saw, the flush is performed via MemStoreFlusher.flushRegion; let's follow flushRegion to see what it does.
private boolean flushRegion(final FlushRegionEntry fqe) {
// extract the region from the FlushQueueEntry
HRegion region = fqe.region;
// If the region is not a meta region and has too many store files, block with a randomized delay.
// The "too many store files" threshold comes from hbase.hstore.blockingStoreFiles, default 7.
if (!region.getRegionInfo().isMetaRegion() &&
isTooManyStoreFiles(region)) {
// check whether we have already waited the configured time
if (fqe.isMaximumWait(this.blockingWaitTime)) {
LOG.info("Waited " + (EnvironmentEdgeManager.currentTime() - fqe.createTime) +
"ms on a compaction to clean up 'too many store files'; waited " +
"long enough... proceeding with flush of " +
region.getRegionNameAsString());
} else {
// If this is first time we've been put off, then emit a log message.
// if this flush request is entering the flush queue for the first time
if (fqe.getRequeueCount() <= 0) {
// Note: We don't impose blockingStoreFiles constraint on meta regions
LOG.warn("Region " + region.getRegionNameAsString() + " has too many " +
"store files; delaying flush up to " + this.blockingWaitTime + "ms");
// before flushing, check whether the region should split; if not, request a compaction, since there are too many store files
if (!this.server.compactSplitThread.requestSplit(region)) {
try {
this.server.compactSplitThread.requestSystemCompaction(
region, Thread.currentThread().getName());
} catch (IOException e) {
LOG.error(
"Cache flush failed for region " + Bytes.toStringBinary(region.getRegionName()),
RemoteExceptionHandler.checkIOException(e));
}
}
}
// Put back on the queue. Have it come back out of the queue
// after a delay of this.blockingWaitTime / 100 ms.
// requeue the too-many-files region on flushQueue with a delay of blockingWaitTime / 100 ms so a handler retries it later
this.flushQueue.add(fqe.requeue(this.blockingWaitTime / 100));
// Tell a lie, it's not flushed but it's ok
return true;
}
}
// the normal flush path
return flushRegion(region, false, fqe.isForceFlushAllStores());
}
This method checks whether the region to be flushed has too many HFiles; if so, it waits a bounded, randomized amount of time, then requeues the entry on flushQueue to wake a handler. In the normal case it ends up calling the overload flushRegion(region, false, fqe.isForceFlushAllStores()), so let's follow that overload.
private boolean flushRegion(final HRegion region, final boolean emergencyFlush,
boolean forceFlushAllStores) {
long startTime = 0;
// take the lock
synchronized (this.regionsInQueue) {
// remove the region from regionsInQueue
FlushRegionEntry fqe = this.regionsInQueue.remove(region);
// Use the start time of the FlushRegionEntry if available
if (fqe != null) {
startTime = fqe.createTime;
}
if (fqe != null && emergencyFlush) {
// Need to remove from region from delay queue. When NOT an
// emergencyFlush, then item was removed via a flushQueue.poll.
flushQueue.remove(fqe);
}
}
if (startTime == 0) {
// Avoid getting the system time unless we don't have a FlushRegionEntry;
// shame we can't capture the time also spent in the above synchronized
// block
startTime = EnvironmentEdgeManager.currentTime();
}
lock.readLock().lock();
try {
notifyFlushRequest(region, emergencyFlush);
// ultimately delegates to the region's flushcache
HRegion.FlushResult flushResult = region.flushcache(forceFlushAllStores);
boolean shouldCompact = flushResult.isCompactionNeeded();
// We just want to check the size
boolean shouldSplit = region.checkSplit() != null;
if (shouldSplit) {
this.server.compactSplitThread.requestSplit(region);
} else if (shouldCompact) {
server.compactSplitThread.requestSystemCompaction(
region, Thread.currentThread().getName());
}
if (flushResult.isFlushSucceeded()) {
long endTime = EnvironmentEdgeManager.currentTime();
server.metricsRegionServer.updateFlushTime(endTime - startTime);
}
} catch (DroppedSnapshotException ex) {
// Cache flush can fail in a few places. If it fails in a critical
// section, we get a DroppedSnapshotException and a replay of wal
// is required. Currently the only way to do this is a restart of
// the server. Abort because hdfs is probably bad (HBASE-644 is a case
// where hdfs was bad but passed the hdfs check).
server.abort("Replay of WAL required. Forcing server shutdown", ex);
return false;
} catch (IOException ex) {
LOG.error("Cache flush failed" +
(region != null ? (" for region " + Bytes.toStringBinary(region.getRegionName())) : ""),
RemoteExceptionHandler.checkIOException(ex));
if (!server.checkFileSystem()) {
return false;
}
} finally {
lock.readLock().unlock();
wakeUpIfBlocking();
}
return true;
}
Skipping the incidental code, the core is the highlighted call: region.flushcache(forceFlushAllStores) — so a flush really is region-level. After the flush completes, the method checks whether the region should split and, if not, whether it should compact. Let's look inside flushcache.
// flush the cache; the parameter says whether all stores must be flushed
public FlushResult flushcache(boolean forceFlushAllStores) throws IOException {
// fail-fast instead of waiting on the lock
// bail out if the region is in the closing state
if (this.closing.get()) {
String msg = "Skipping flush on " + this + " because closing";
LOG.debug(msg);
return new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg);
}
MonitoredTask status = TaskMonitor.get().createStatus("Flushing " + this);
status.setStatus("Acquiring readlock on region");
// block waiting for the lock for flushing cache
// the read lock is taken here
lock.readLock().lock();
try {
if (this.closed.get()) {
String msg = "Skipping flush on " + this + " because closed";
LOG.debug(msg);
status.abort(msg);
return new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg);
}
if (coprocessorHost != null) {
status.setStatus("Running coprocessor pre-flush hooks");
coprocessorHost.preFlush();
}
// TODO: this should be managed within memstore with the snapshot, updated only after flush
// successful
if (numMutationsWithoutWAL.get() > 0) {
numMutationsWithoutWAL.set(0);
dataInMemoryWithoutWAL.set(0);
}
synchronized (writestate) {
// the region was not already flushing before this flush, and writes are still enabled
if (!writestate.flushing && writestate.writesEnabled) {
this.writestate.flushing = true;
} else { // otherwise the region is already flushing or is not writable; abort the flush
if (LOG.isDebugEnabled()) {
LOG.debug("NOT flushing memstore for region " + this
+ ", flushing=" + writestate.flushing + ", writesEnabled="
+ writestate.writesEnabled);
}
String msg = "Not flushing since "
+ (writestate.flushing ? "already flushing"
: "writes not enabled");
status.abort(msg);
return new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg);
}
}
try {
        // Decide from forceFlushAllStores whether every store is flushed; otherwise pick stores
        // via the flush policy. Selective policy: a store is chosen if its memstore exceeds
        // hbase.hregion.percolumnfamilyflush.size.lower.bound (default 16M), or if its memstore
        // is old enough.
        Collection<Store> specificStoresToFlush = forceFlushAllStores ? stores.values()
            : flushPolicy.selectStoresToFlush();
        // delegate to internalFlushcache to do the flush
        FlushResult fs = internalFlushcache(specificStoresToFlush, status);
        if (coprocessorHost != null) {
          status.setStatus("Running post-flush coprocessor hooks");
          coprocessorHost.postFlush();
        }
        status.markComplete("Flush successful");
        return fs;
      } finally {
        synchronized (writestate) {
          writestate.flushing = false;
          this.writestate.flushRequested = false;
          writestate.notifyAll();
        }
      }
    } finally {
      lock.readLock().unlock();
      status.cleanup();
    }
  }
The core logic is FlushResult fs = internalFlushcache(specificStoresToFlush, status), which covers the three phases described earlier: phase 1 (prepare) is done via region.internalPrepareFlushCache(), and phases 2 and 3 (flush and commit) via internalFlushCacheAndCommit(). Let's look at the internalFlushcache method:
protected FlushResult internalFlushcache(final WAL wal, final long myseqid,
final Collection<Store> storesToFlush, MonitoredTask status) throws IOException {
// internalPrepareFlushCache takes the snapshot
PrepareFlushResult result
= internalPrepareFlushCache(wal, myseqid, storesToFlush, status, false);
// result.result is null on success, so internalFlushCacheAndCommit then runs phases 2 and 3
if (result.result == null) {
return internalFlushCacheAndCommit(wal, status, result, storesToFlush);
} else {
return result.result; // early exit due to failure from prepare stage
}
}
Now let's look at phase 1: internalPrepareFlushCache. It takes a region-level update lock. There is quite a bit of code here; the unimportant parts can be skimmed.
// this method performs the prepare phase of the flush
protected PrepareFlushResult internalPrepareFlushCache(
final WAL wal, final long myseqid, final Collection<Store> storesToFlush,
MonitoredTask status, boolean isReplay)
throws IOException {
if (this.rsServices != null && this.rsServices.isAborted()) {
// Don't flush when server aborting, it's unsafe
throw new IOException("Aborting flush because server is aborted...");
}
// record the start time so the flush duration can be measured
final long startTime = EnvironmentEdgeManager.currentTime();
// If nothing to flush, return, but we need to safely update the region sequence id
// if the memstore is empty, skip the flush but still update the sequence id
if (this.memstoreSize.get() <= 0) {
// Take an update lock because am about to change the sequence id and we want the sequence id
// to be at the border of the empty memstore.
MultiVersionConsistencyControl.WriteEntry w = null;
this.updatesLock.writeLock().lock();
try {
if (this.memstoreSize.get() <= 0) {
// Presume that if there are still no edits in the memstore, then there are no edits for
// this region out in the WAL subsystem so no need to do any trickery clearing out
// edits in the WAL system. Up the sequence number so the resulting flush id is for
// sure just beyond the last appended region edit (useful as a marker when bulk loading,
// etc.)
// wal can be null replaying edits.
if (wal != null) {
w = mvcc.beginMemstoreInsert();
long flushSeqId = getNextSequenceId(wal);
FlushResult flushResult = new FlushResult(
FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY, flushSeqId, "Nothing to flush");
w.setWriteNumber(flushSeqId);
mvcc.waitForPreviousTransactionsComplete(w);
w = null;
return new PrepareFlushResult(flushResult, myseqid);
} else {
return new PrepareFlushResult(
new FlushResult(FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY, "Nothing to flush"),
myseqid);
}
}
} finally {
this.updatesLock.writeLock().unlock();
if (w != null) {
mvcc.advanceMemstore(w);
}
}
}
if (LOG.isInfoEnabled()) {
LOG.info("Started memstore flush for " + this + ", current region memstore size "
+ StringUtils.byteDesc(this.memstoreSize.get()) + ", and " + storesToFlush.size() + "/"
+ stores.size() + " column families' memstores are being flushed."
+ ((wal != null) ? "" : "; wal is null, using passed sequenceid=" + myseqid));
// only log when we are not flushing all stores.
// log only when not all stores are being flushed
if (this.stores.size() > storesToFlush.size()) {
for (Store store : storesToFlush) {
LOG.info("Flushing Column Family: " + store.getColumnFamilyName()
+ " which was occupying "
+ StringUtils.byteDesc(store.getMemStoreSize()) + " of memstore.");
}
}
}
// Stop updates while we snapshot the memstore of all of these regions' stores. We only have
// to do this for a moment. It is quick. We also set the memstore size to zero here before we
// allow updates again so its value will represent the size of the updates received
// during flush
// block writes until the memstore snapshot completes
MultiVersionConsistencyControl.WriteEntry w = null;
// We have to take an update lock during snapshot, or else a write could end up in both snapshot
// and memstore (makes it difficult to do atomic rows then)
status.setStatus("Obtaining lock to block concurrent updates");
// block waiting for the lock for internal flush
// take the update write lock
this.updatesLock.writeLock().lock();
status.setStatus("Preparing to flush by snapshotting stores in " +
getRegionInfo().getEncodedName());
// total memstore size across all stores being flushed
long totalFlushableSizeOfFlushableStores = 0;
// record the CF names of all stores being flushed
Set<byte[]> flushedFamilyNames = new HashSet<byte[]>();
for (Store store : storesToFlush) {
flushedFamilyNames.add(store.getFamily().getName());
}
    // Of storeFlushCtxs, committedFiles, and storeFlushableSize, the important ones are
    // storeFlushCtxs and committedFiles. Both are TreeMaps keyed by CF, holding respectively
    // the per-CF flush executor (StoreFlusherImpl) and the HFiles finally written.
    // StoreFlushContext's implementation StoreFlusherImpl contains the core flush operations:
    // prepare, flushCache, commit, abort, etc.
    // So storeFlushCtxs keeps one flush instance per store; the flush below is driven
    // through these StoreFlushContexts.
    TreeMap<byte[], StoreFlushContext> storeFlushCtxs
        = new TreeMap<byte[], StoreFlushContext>(Bytes.BYTES_COMPARATOR);
    // maps each store to the HDFS paths of its committed files
    TreeMap<byte[], List<Path>> committedFiles = new TreeMap<byte[], List<Path>>(
        Bytes.BYTES_COMPARATOR);
    // The sequence id of this flush operation which is used to log FlushMarker and pass to
    // createFlushContext to use as the store file's sequence id.
    long flushOpSeqId = HConstants.NO_SEQNUM;
    // The max flushed sequence id after this flush operation. Used as completeSequenceId which is
    // passed to HMaster.
    long flushedSeqId = HConstants.NO_SEQNUM;
    byte[] encodedRegionName = getRegionInfo().getEncodedNameAsBytes();
    long trxId = 0;
    try {
      try {
        w = mvcc.beginMemstoreInsert();
        if (wal != null) {
          if (!wal.startCacheFlush(encodedRegionName, flushedFamilyNames)) {
            // This should never happen.
            String msg = "Flush will not be started for ["
                + this.getRegionInfo().getEncodedName() + "] - because the WAL is closing.";
            status.setStatus(msg);
            return new PrepareFlushResult(
                new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg), myseqid);
          }
          flushOpSeqId = getNextSequenceId(wal);
          long oldestUnflushedSeqId = wal.getEarliestMemstoreSeqNum(encodedRegionName);
          // no oldestUnflushedSeqId means we flushed all stores.
          // or the unflushed stores are all empty.
          flushedSeqId = (oldestUnflushedSeqId == HConstants.NO_SEQNUM) ? flushOpSeqId
              : oldestUnflushedSeqId - 1;
        } else {
          // use the provided sequence Id as WAL is not being used for this flush.
          flushedSeqId = flushOpSeqId = myseqid;
        }
        // Iterate over the region's stores, creating a StoreFlusherImpl for each.
        // Taking the memstore snapshot means calling each StoreFlusherImpl's prepare method;
        // the flush and commit in internalFlushCacheAndCommit likewise go through each store's
        // flushCache and commit interfaces.
        for (Store s : storesToFlush) {
          // sum of the memstore sizes of all stores being flushed (not the snapshot's getCellsCount())
          totalFlushableSizeOfFlushableStores += s.getFlushableSize();
          // create each store's own StoreFlusherImpl
          storeFlushCtxs.put(s.getFamily().getName(), s.createFlushContext(flushOpSeqId));
          // no flushed HFile paths exist yet
          committedFiles.put(s.getFamily().getName(), null); // for writing stores to WAL
        }
        // write the snapshot start to WAL
        if (wal != null && !writestate.readOnly) {
          FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.START_FLUSH,
              getRegionInfo(), flushOpSeqId, committedFiles);
          // no sync. Sync is below where we do not hold the updates lock
          // only the begin-flush marker is written to the WAL here; the real sync happens later,
          // because the update write lock is held here and expensive work must stay outside it
          trxId = WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(),
              desc, sequenceId, false);
        }
        // Prepare flush (take a snapshot); StoreFlushContext here is StoreFlusherImpl
        for (StoreFlushContext flush : storeFlushCtxs.values()) {
          // for each store in the region, move the memstore's kvset into the memstore snapshot,
          // clear the kvset, and expose the memstore snapshot through the HStore snapshot
          flush.prepare(); // prepare calls the store's snapshot method to take the snapshot
        }
      } catch (IOException ex) {
        if (wal != null) {
          if (trxId > 0) { // check whether we have already written START_FLUSH to WAL
            try {
              FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.ABORT_FLUSH,
                  getRegionInfo(), flushOpSeqId, committedFiles);
              WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(),
                  desc, sequenceId, false);
            } catch (Throwable t) {
              LOG.warn("Received unexpected exception trying to write ABORT_FLUSH marker to WAL:"
                  + StringUtils.stringifyException(t));
              // ignore this since we will be aborting the RS with DSE.
            }
          }
          // we have called wal.startCacheFlush(), now we have to abort it
          wal.abortCacheFlush(this.getRegionInfo().getEncodedNameAsBytes());
          throw ex; // let upper layers deal with it.
        }
      } finally {
        // release the lock once the snapshot is done; reads and writes are no longer blocked
        this.updatesLock.writeLock().unlock();
      }
      String s = "Finished memstore snapshotting " + this +
          ", syncing WAL and waiting on mvcc, flushsize=" + totalFlushableSizeOfFlushableStores;
      status.setStatus(s);
      if (LOG.isTraceEnabled()) LOG.trace(s);
      // sync unflushed WAL changes
      // see HBASE-8208 for details
      if (wal != null) {
        try {
          wal.sync(); // ensure that flush marker is sync'ed
        } catch (IOException ioe) {
          LOG.warn("Unexpected exception while wal.sync(), ignoring. Exception: "
              + StringUtils.stringifyException(ioe));
        }
      }
      // wait for all in-progress transactions to commit to WAL before
      // we can start the flush. This prevents
      // uncommitted transactions from being written into HFiles.
      // We have to block before we start the flush, otherwise keys that
      // were removed via a rollbackMemstore could be written to Hfiles.
      w.setWriteNumber(flushOpSeqId);
      mvcc.waitForPreviousTransactionsComplete(w);
      // set w to null to prevent mvcc.advanceMemstore from being called again inside finally block
      w = null;
    } finally {
      if (w != null) {
        // in case of failure just mark current w as complete
        mvcc.advanceMemstore(w);
      }
    }
    return new PrepareFlushResult(storeFlushCtxs, committedFiles, startTime, flushOpSeqId,
        flushedSeqId, totalFlushableSizeOfFlushableStores);
  }
Before looking at StoreFlushContext.prepare() in detail, here is the StoreFlushContext interface; as mentioned above, StoreFlusherImpl is an inner class of HStore that implements StoreFlushContext.
interface StoreFlushContext {
void prepare();
void flushCache(MonitoredTask status) throws IOException;
boolean commit(MonitoredTask status) throws IOException;
void replayFlush(List<String> fileNames, boolean dropMemstoreSnapshot) throws IOException;
void abort() throws IOException;
List<Path> getCommittedFiles();
}
Now let's return to the highlighted flush.prepare() call in internalPrepareFlushCache:
public void prepare() {
// When the region calls StoreFlusherImpl.prepare(), we are (as noted earlier) inside the region's update write lock, so any expensive work here stalls in-flight reads and writes.
// snapshot() only assigns the memstore's skip list to the snapshot's skip list; when building the returned MemStoreSnapshot it calls snapshot.size()
this.snapshot = memstore.snapshot();
// getCellsCount() returns the snapshot.size() value that was passed into the MemStoreSnapshot taken in memstore.snapshot(); size() has O(n) time complexity
this.cacheFlushCount = snapshot.getCellsCount();
this.cacheFlushSize = snapshot.getSize();
committedFiles = new ArrayList<Path>(1);
}
Let's look at the memstore's snapshot method:
public MemStoreSnapshot snapshot() {
// If snapshot currently has entries, then flusher failed or didn't call
// cleanup. Log a warning.
if (!this.snapshot.isEmpty()) {
LOG.warn("Snapshot called again without clearing previous. " +
"Doing nothing. Another ongoing flush or did we fail last attempt?");
} else {
this.snapshotId = EnvironmentEdgeManager.currentTime();
      // memory used by the memstore
      this.snapshotSize = keySize();
      if (!this.cellSet.isEmpty()) {
        // cellSet holds the memstore's in-memory data
        this.snapshot = this.cellSet;
        // construct a new cellSet to receive new writes
        this.cellSet = new CellSkipListSet(this.comparator);
        this.snapshotTimeRangeTracker = this.timeRangeTracker;
        this.timeRangeTracker = new TimeRangeTracker();
        // Reset heap to not include any keys
        this.size.set(DEEP_OVERHEAD);
        this.snapshotAllocator = this.allocator;
        // Reset allocator so we get a fresh buffer for the new memstore
        if (allocator != null) {
          String className = conf.get(MSLAB_CLASS_NAME, HeapMemStoreLAB.class.getName());
          this.allocator = ReflectionUtils.instantiateWithCustomCtor(className,
              new Class[] { Configuration.class }, new Object[] { conf });
        }
        timeOfOldestEdit = Long.MAX_VALUE;
      }
    }
A note on snapshot.getCellsCount() in prepare: HBase stores in-memory writes in a skip list, implemented with the JDK's ConcurrentSkipListMap. HBase's MemStore implementation (DefaultMemStore by default) has two ConcurrentSkipListMap fields, cellSet and snapshot: cellSet holds the data written to the memstore, and during phase 1 of the flush it is assigned to snapshot. getCellsCount() therefore ultimately calls ConcurrentSkipListMap.size(). ConcurrentSkipListMap does not maintain an atomic size counter — for the sake of concurrency, and because the operation is rarely needed — so size() traverses the entire skip list to compute the size.
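The reference-swap-plus-O(n)-size() behavior described above can be demonstrated directly with the JDK class; this is a standalone sketch, not HBase code.

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Demonstrates the memstore "snapshot": a pure reference swap, after which
// size() on the snapshot still walks the skip list (there is no cached counter).
public class SkipListSnapshot {
    // Returns { snapshot.size(), size of the fresh cellSet } after the swap.
    public static int[] snapshotSizes(int n) {
        ConcurrentSkipListMap<Integer, String> cellSet = new ConcurrentSkipListMap<>();
        for (int i = 0; i < n; i++) {
            cellSet.put(i, "v" + i);
        }
        // "snapshot": just reassign the reference...
        ConcurrentSkipListMap<Integer, String> snapshot = cellSet;
        // ...and start a fresh map for subsequent writes
        cellSet = new ConcurrentSkipListMap<>();
        // size() traverses the entries, so this call is O(n)
        return new int[] { snapshot.size(), cellSet.size() };
    }
}
```

This is why calling getCellsCount() while holding the update write lock is worth noticing: the O(n) traversal happens inside the critical section.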
Back in internalPrepareFlushCache: after prepare has been called on every store, the update lock is unlocked and a PrepareFlushResult is returned. Moving back up to internalFlushcache: once internalPrepareFlushCache finishes, internalFlushCacheAndCommit runs. Let's follow it:
protected FlushResult internalFlushCacheAndCommit(
final WAL wal, MonitoredTask status, final PrepareFlushResult prepareResult,
final Collection<Store> storesToFlush)
throws IOException {
// prepare flush context is carried via PrepareFlushResult
// CF -> StoreFlusherImpl map for the stores being flushed
TreeMap<byte[], StoreFlushContext> storeFlushCtxs = prepareResult.storeFlushCtxs;
// paths of the HFiles produced by the flush; the CF keys exist, but each List<Path> is still null, as initialized in internalPrepareFlushCache
TreeMap<byte[], List<Path>> committedFiles = prepareResult.committedFiles;
long startTime = prepareResult.startTime;
long flushOpSeqId = prepareResult.flushOpSeqId;
long flushedSeqId = prepareResult.flushedSeqId;
long totalFlushableSizeOfFlushableStores = prepareResult.totalFlushableSize;
String s = "Flushing stores of " + this;
status.setStatus(s);
if (LOG.isTraceEnabled()) LOG.trace(s);
// Any failure from here on out will be catastrophic requiring server
// restart so wal content can be replayed and put back into the memstore.
// Otherwise, the snapshot content while backed up in the wal, it will not
// be part of the current running servers state.
boolean compactionRequested = false;
try {
// A. Flush memstore to all the HStores.
// Keep running vector of all store files that includes both old and the
// just-made new flush store file. The new flushed file is still in the
// tmp directory.
//Iterate over every store in the region and call its StoreFlusherImpl.flushCache, flushing the store's snapshot data to an HFile (a temp file for now); commit later moves it to its final path
for (StoreFlushContext flush : storeFlushCtxs.values()) {
flush.flushCache(status);
}
// Switch snapshot (in memstore) -> new hfile (thus causing
// all the store scanners to reset/reseek).
Iterator<Store> it = storesToFlush.iterator();
// stores.values() and storeFlushCtxs have same order
for (StoreFlushContext flush : storeFlushCtxs.values()) {
boolean needsCompaction = flush.commit(status);
if (needsCompaction) {
compactionRequested = true;
}
committedFiles.put(it.next().getFamily().getName(), flush.getCommittedFiles());
}
storeFlushCtxs.clear();
// Set down the memstore size by amount of flush.
this.addAndGetGlobalMemstoreSize(-totalFlushableSizeOfFlushableStores);
if (wal != null) {
// write flush marker to WAL. If fail, we should throw DroppedSnapshotException
FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.COMMIT_FLUSH,
getRegionInfo(), flushOpSeqId, committedFiles);
WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(),
desc, sequenceId, true);
}
} catch (Throwable t) {
// An exception here means that the snapshot was not persisted.
// The wal needs to be replayed so its content is restored to memstore.
// Currently, only a server restart will do this.
// We used to only catch IOEs but its possible that we'd get other
// exceptions -- e.g. HBASE-659 was about an NPE -- so now we catch
// all and sundry.
if (wal != null) {
try {
FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.ABORT_FLUSH,
getRegionInfo(), flushOpSeqId, committedFiles);
WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(),
desc, sequenceId, false);
} catch (Throwable ex) {
LOG.warn("Received unexpected exception trying to write ABORT_FLUSH marker to WAL:" +
StringUtils.stringifyException(ex));
// ignore this since we will be aborting the RS with DSE.
}
wal.abortCacheFlush(this.getRegionInfo().getEncodedNameAsBytes());
}
DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
Bytes.toStringBinary(getRegionName()));
dse.initCause(t);
status.abort("Flush failed: " + StringUtils.stringifyException(t));
throw dse;
}
// If we get to here, the HStores have been written.
if (wal != null) {
wal.completeCacheFlush(this.getRegionInfo().getEncodedNameAsBytes());
}
// Record latest flush time
for (Store store : storesToFlush) {
this.lastStoreFlushTimeMap.put(store, startTime);
}
// Update the oldest unflushed sequence id for region.
this.maxFlushedSeqId = flushedSeqId;
// C. Finally notify anyone waiting on memstore to clear:
// e.g. checkResources().
synchronized (this) {
notifyAll(); // FindBugs NN_NAKED_NOTIFY
}
long time = EnvironmentEdgeManager.currentTime() - startTime;
long memstoresize = this.memstoreSize.get();
String msg = "Finished memstore flush of ~"
+ StringUtils.byteDesc(totalFlushableSizeOfFlushableStores) + "/"
+ totalFlushableSizeOfFlushableStores + ", currentsize="
+ StringUtils.byteDesc(memstoresize) + "/" + memstoresize
+ " for region " + this + " in " + time + "ms, sequenceid="
+ flushOpSeqId + ", compaction requested=" + compactionRequested
+ ((wal == null) ? "; wal=null" : "");
LOG.info(msg);
status.setStatus(msg);
return new FlushResult(compactionRequested ? FlushResult.Result.FLUSHED_COMPACTION_NEEDED :
FlushResult.Result.FLUSHED_NO_COMPACTION_NEEDED, flushOpSeqId);
}
We will look at just two of the calls above: flush.flushCache and flush.commit, where flush is HStore's StoreFlusherImpl. flushCache performs the second phase of the flush; commit performs the third.
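The three-phase shape described in this post (swap the active set into a snapshot under a lock, write the snapshot to a temp file, commit by moving the file) can be sketched outside HBase. Everything below is illustrative: the class, method, and path names are invented for the sketch and are not HBase APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal sketch of a snapshot/flush/commit cycle (illustrative names only).
public class FlushSketch {
    private ConcurrentSkipListMap<String, String> cellSet = new ConcurrentSkipListMap<>();
    private ConcurrentSkipListMap<String, String> snapshot = new ConcurrentSkipListMap<>();
    private final List<String> committedFiles = new ArrayList<>();

    // Phase 1: swap the active set into the snapshot (done under a lock in HBase).
    synchronized void prepare() {
        snapshot = cellSet;
        cellSet = new ConcurrentSkipListMap<>();
    }

    // Phase 2: write the snapshot out; returns the "temp file" it produced.
    String flushCache() {
        return "tmp/flush-" + snapshot.size(); // stand-in for the real file I/O
    }

    // Phase 3: move the temp file to its final location and drop the snapshot.
    void commit(String tempFile) {
        committedFiles.add(tempFile.replace("tmp/", "data/"));
        snapshot = new ConcurrentSkipListMap<>();
    }

    public static void main(String[] args) {
        FlushSketch m = new FlushSketch();
        m.cellSet.put("r1", "v1");
        m.cellSet.put("r2", "v2");
        m.prepare();
        String tmp = m.flushCache();
        m.commit(tmp);
        System.out.println("committed=" + m.committedFiles);
    }
}
```

New writes land in the fresh cellSet while phases 2 and 3 run, which is why only phase 1 needs the lock.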
public void flushCache(MonitoredTask status) throws IOException {
//tempFiles holds the paths of the temp files the snapshot was flushed to; commit will later move them to their final, correct location
tempFiles = HStore.this.flushCache(cacheFlushSeqNum, snapshot, status);
}
Moving on to the store's flushCache method:
protected List<Path> flushCache(final long logCacheFlushId, MemStoreSnapshot snapshot,
MonitoredTask status) throws IOException {
// If an exception happens flushing, we let it out without clearing
// the memstore snapshot. The old snapshot will be returned when we say
// 'snapshot', the next time flush comes around.
// Retry after catching exception when flushing, otherwise server will abort
// itself
StoreFlusher flusher = storeEngine.getStoreFlusher();
IOException lastException = null;
for (int i = 0; i < flushRetriesNumber; i++) {
try {
//Call StoreFlusher.flushSnapshot to flush the snapshot to temp files
List<Path> pathNames = flusher.flushSnapshot(snapshot, logCacheFlushId, status);
Path lastPathName = null;
try {
for (Path pathName : pathNames) {
lastPathName = pathName;
validateStoreFile(pathName);
}
return pathNames;
} catch (Exception e) {
LOG.warn("Failed validating store file " + lastPathName + ", retrying num=" + i, e);
if (e instanceof IOException) {
lastException = (IOException) e;
} else {
lastException = new IOException(e);
}
}
} catch (IOException e) {
LOG.warn("Failed flushing store file, retrying num=" + i, e);
lastException = e;
}
if (lastException != null && i < (flushRetriesNumber - 1)) {
try {
Thread.sleep(pauseTime);
} catch (InterruptedException e) {
IOException iie = new InterruptedIOException();
iie.initCause(e);
throw iie;
}
}
}
throw lastException;
}
The key logic is the part highlighted (in red, in the original post). First, storeEngine.getStoreFlusher() obtains the flusher instance, which encapsulates the writer that appends data and syncs it to disk; we won't expand on that here. The important call is flusher.flushSnapshot inside the for loop, which involves the member variable this post keeps coming back to: cellsCount.
public List<Path> flushSnapshot(MemStoreSnapshot snapshot, long cacheFlushId,
MonitoredTask status) throws IOException {
ArrayList<Path> result = new ArrayList<Path>();
//snapshot.getCellsCount() is called here; we single this method out because it is actually one of the more time-consuming parts of the prepare phase
int cellsCount = snapshot.getCellsCount();
if (cellsCount == 0) return result; // don't flush if there are no entries
// Use a store scanner to find which rows to flush.
long smallestReadPoint = store.getSmallestReadPoint();
InternalScanner scanner = createScanner(snapshot.getScanner(), smallestReadPoint);
if (scanner == null) {
return result; // NULL scanner returned from coprocessor hooks means skip normal processing
}
StoreFile.Writer writer;
try {
// TODO: We can fail in the below block before we complete adding this flush to
// list of store files. Add cleanup of anything put on filesystem if we fail.
synchronized (flushLock) {
status.setStatus("Flushing " + store + ": creating writer");
// Write the map out to the disk
//The cellsCount passed in here is not actually used; perhaps a pre-provisioned variable?
writer = store.createWriterInTmp(
cellsCount, store.getFamily().getCompression(), false, true, true);
writer.setTimeRangeTracker(snapshot.getTimeRangeTracker());
IOException e = null;
try {
//This is where the snapshot is actually written to the temp file
performFlush(scanner, writer, smallestReadPoint);
} catch (IOException ioe) {
e = ioe;
// throw the exception out
throw ioe;
} finally {
if (e != null) {
writer.close();
} else {
finalizeWriter(writer, cacheFlushId, status);
}
}
}
} finally {
scanner.close();
}
LOG.info("Flushed, sequenceid=" + cacheFlushId +", memsize="
+ StringUtils.humanReadableInt(snapshot.getSize()) +
", hasBloomFilter=" + writer.hasGeneralBloom() +
", into tmp file " + writer.getPath());
result.add(writer.getPath());
return result;
}
We can see that the variable is passed into store.createWriterInTmp; following it further:
public StoreFile.Writer createWriterInTmp(long maxKeyCount, Compression.Algorithm compression,
boolean isCompaction, boolean includeMVCCReadpoint, boolean includesTag)
throws IOException {
// ... unimportant logic omitted ...
//The maxKeyCount passed in here is not used
StoreFile.Writer w = new StoreFile.WriterBuilder(conf, writerCacheConf, this.getFileSystem())
.withFilePath(fs.createTempName())
.withComparator(comparator)
.withBloomType(family.getBloomFilterType())
.withMaxKeyCount(maxKeyCount)
.withFavoredNodes(favoredNodes)
.withFileContext(hFileContext)
.build();
return w;
}
So cellsCount is handed to the writer as a parameter. performFlush is then executed; it iterates with the scanner and persists the data to disk through the HFile writer. Let's see what the Writer actually does with cellsCount. In the entire Writer it is used in only two places:
generalBloomFilterWriter = BloomFilterFactory.createGeneralBloomAtWrite(
conf, cacheConf, bloomType,
(int) Math.min(maxKeys, Integer.MAX_VALUE), writer);
this.deleteFamilyBloomFilterWriter = BloomFilterFactory
.createDeleteBloomAtWrite(conf, cacheConf,
(int) Math.min(maxKeys, Integer.MAX_VALUE), writer);
Following both of these calls:
public static BloomFilterWriter createDeleteBloomAtWrite(Configuration conf,
CacheConfig cacheConf, int maxKeys, HFile.Writer writer) {
if (!isDeleteFamilyBloomEnabled(conf)) {
LOG.info("Delete Bloom filters are disabled by configuration for "
+ writer.getPath()
+ (conf == null ? " (configuration is null)" : ""));
return null;
}
float err = getErrorRate(conf);
int maxFold = getMaxFold(conf);
// In case of compound Bloom filters we ignore the maxKeys hint.
CompoundBloomFilterWriter bloomWriter = new CompoundBloomFilterWriter(getBloomBlockSize(conf),
err, Hash.getHashType(conf), maxFold, cacheConf.shouldCacheBloomsOnWrite(),
KeyValue.RAW_COMPARATOR);
writer.addInlineBlockWriter(bloomWriter);
return bloomWriter;
}
As you can see, maxKeys is not used here, and the same holds for the other factory method, so the cellsCount variable is not used in the second phase of the flush.
Having established that cellsCount is unused in the second phase, let's follow it into the third: back in internalFlushCacheAndCommit, flush.commit(status):
public boolean commit(MonitoredTask status) throws IOException {
if (this.tempFiles == null || this.tempFiles.isEmpty()) {
return false;
}
List<StoreFile> storeFiles = new ArrayList<StoreFile>(this.tempFiles.size());
for (Path storeFilePath : tempFiles) {
try {
storeFiles.add(HStore.this.commitFile(storeFilePath, cacheFlushSeqNum, status));
} catch (IOException ex) {
LOG.error("Failed to commit store file " + storeFilePath, ex);
// Try to delete the files we have committed before.
for (StoreFile sf : storeFiles) {
Path pathToDelete = sf.getPath();
try {
sf.deleteReader();
} catch (IOException deleteEx) {
LOG.fatal("Failed to delete store file we committed, halting " + pathToDelete, ex);
Runtime.getRuntime().halt(1);
}
}
throw new IOException("Failed to commit the flush", ex);
}
}
for (StoreFile sf : storeFiles) {
if (HStore.this.getCoprocessorHost() != null) {
HStore.this.getCoprocessorHost().postFlush(HStore.this, sf);
}
committedFiles.add(sf.getPath());
}
HStore.this.flushedCellsCount += cacheFlushCount;
HStore.this.flushedCellsSize += cacheFlushSize;
// Add new file to store files. Clear snapshot too while we have the Store write lock.
return HStore.this.updateStorefiles(storeFiles, snapshot.getId());
}
The third phase is fairly simple: it moves the flushed files to their correct HDFS paths. Note that cellsCount is finally used here: it is added into the store's flushedCellsCount, which exists mainly so that the flushedCellsCount/flushedCellsSize metrics can be collected. In my experience this metric is negligible and I have never relied on it.
Summary
The reason cellsCount keeps coming up is that producing its value calls ConcurrentSkipListMap.size(), the most time-consuming step in the first phase of the flush, and it runs while the HBase region-level updates lock is held. Since, as traced above, the value is of little real use, it could be removed; otherwise it causes latency spikes and a high p99. A patch for this already exists, but it applies to the 2.x versions.
That concludes the whole flush flow. If anything here is wrong, corrections are welcome.
posted @ 2019-10-14 11:28 Evil_XJZ
More than seventeen million people in the United States and 264 million people worldwide live with depression1. Making it one of the most common mental health concerns, depression may be a misdiagnosis for a more specific kind of depression, called Bipolar Disorder. The disorder is among the most misunderstood, feared, and stigmatized mental health diagnoses. Many of us may know someone affected by BD and have seen firsthand how the symptoms of this illness can impact a loved one’s life.
While normal moods fluctuate regularly, some suffer from a form of extreme swings in mood, highs, and lows, called Bipolar Disorder (or BD), formerly known as Manic/Depressive disorder. About 1% of the US population or 2.3 million people suffer from BD. Often those suffering from Bipolar Disorder also simultaneously suffer from an Anxiety Disorder as well, a phenomenon called comorbidity.
There are two main types of Bipolar Disorders, Type I and Type II.
Type I Bipolar Disorder
Type I is characterized by the occurrence of at least one manic episode lasting, at minimum, one week. A manic episode is characterized by “abnormally and persistently elevated, expansive, or irritable mood and abnormally and persistently increased goal-directed activity or energy.”2 There must be an impairment socially, professionally, or an instance of hospitalization to keep the patient or others safe from harm.
What may commonly be considered the corresponding depressive episode is not necessary for the diagnosis of Type I. This type is generally considered the more severe of the two, although the opinions of experts vary.
Type II Bipolar Disorder
The diagnostic parameters of Bipolar II include the occurrence of a hypomanic episode (a manic episode lasting at least four days) and a major depressive episode2. The major depressive episode must last two weeks, and there must be a severe impairment of social, professional, and other necessary modes of functioning.
The mania experienced may not be as severe as in Type I. Yet, the depressive episodes may be more severe5.
A mental illness associated with BD is Cyclothymic disorder. In this case, there have been two years (one year in young people) in which many periods of hypomania, as well as periods of depression, have occurred. However, these periods are not as long as in either BD type18.
What Does Bipolar Disorder Look Like?
Symptoms of Bipolar disorder may appear as feeling super happy for long periods of time, talking fast, extreme restlessness or impulsiveness, overconfidence in abilities, and engaging in risky behavior.3 This behavior may be gambling away large sums of money, impulsive sex, or making large, extravagant purchases.
Those who experience BD report benefits during mania: higher levels of creativity, energy, and confidence. The experience can produce such high levels of confidence that a person may believe in one moment that they are literally the best in the world at a task, and in the next that they are not worthy of life.
It may be difficult to identify bouts of mania in those closest to us. It may simply seem like extreme happiness4. Those who suffer from the disorder report trouble trusting their feelings6.
Outcomes
There is no cure for Bipolar Disorder. Most patients deal with symptoms for most of their lives7. To illustrate its severity, consider the following: sufferers of BD have a 10-25 year shorter life expectancy than most9. The reasons for this are many, including reckless behavior, medication side effects, and suicide. Treatment outcomes strongly hinge on the attitude, hopefulness, and self-esteem of the sufferer, and so a strong support network is important to a positive outcome. With proper treatment, it is possible to live a normal, healthy life10.
If a member of your family has BD, you are 4-6 times more likely to experience the disorder11. To add insult to injury, more than 60% of those suffering from BD experience a substance abuse disorder12.
Many of those who suffer from Bipolar Disorder are disheartened because most relationships fail due to the illness13. Even more so than other mental illnesses. However, on a hopeful note, with proper relational and support tools14, successful relationships can be had.
Biology of the Disorder
There is no certainty around what causes BD. Recent research has discovered a link between calcium imbalance and the severity of BD. Vitamin D may also play a crucial role8.
A strong genetic component influences BD’s prevalence. There are structural and neurochemical differences in those who suffer from BD. The hormone Noradrenaline and the neurotransmitter serotonin seem to play a crucial role. Too much of either is related to the mania associated with the disorder. Conversely, too little seems to be causal to the depressive episodes that follow11.
Additionally, patients with BD have cortical thinning15, the thinning of the grey matter in parts of the brain16. The areas in which thinning occurs are those associated with impulse control, informational processing, and motor control. Also, BD seems to correlate with a smaller hippocampus, the brain structure responsible for emotional responses and memory processing17.
Externally, traumas such as abuse, extreme loss, or mental stress may be causal to the disorder18.
Experimental Treatment
The FDA has recently stated that psilocybin is a “breakthrough therapy” for those suffering from depression18. MDMA (methylenedioxy-methamphetamine), known as “ecstasy” or “molly”, has been shown to reduce symptoms of PTSD. Currently, Ketamine (a drug used as an anesthetic) is in trials to treat BD. When properly used in a clinical setting, ketamine creates new neural pathways in the brain around trauma, pain, and depression.
While these treatments are controversial, they hold the promise of effectiveness in helping BD sufferers lead more normal lives, free from symptoms.
Mental Health
The stigma around mental health continues in our culture. Yet, as I see it, there is progress. This is evidenced in Armchair Expert, Dax Shepard's podcast, in which the topic is often mental health. I see it in Oprah's new project, The Me You Can't See, where the focus is 100% on mental health and its prevalence. Yet, we boots on the ground should do more. This begins with how we react to and treat those in our proximity with mental illness. It begins with how we talk to ourselves about it.
BD symptoms are beyond the sufferer's control. At some point in our evolutionary history, perhaps there was a function for the symptoms of BD. For example, a reaction to a perceived threat may have developed (in certain genetic lines), causing the manic reaction indicative of BD. The manic belief that one is extraordinary may simply have been what was needed to survive; the cost of this experience, the low, is triggered by the spent hormones and neurochemicals.
Truthfully, there is no way we can know for certain. But without open discussion, the problem perpetuates.
This is where you come in. Listen to those in your bubble, your network. Often, talking and discussing the problem with others can help more than pharmacological intervention can.
Bipolar Disorder is a significant Mental Health concern. Without proper treatment, BD may have a severe outcome. Researchers estimate between 25%-60% of BD sufferers will attempt suicide during their lifetimes, and 4%-19% actually will19.
Warning signs of suicide include talking about suicide, talking or thinking about death often, expressing hopelessness, helplessness, or worthlessness, losing interest in things one used to care about, or putting affairs in order, tying up loose ends20.
*If you or someone you know is going to hurt themselves, dial 911.
Be open to talking about mental health, rethinking your positions, and asking questions. This is the path to destigmatizing mental health.
Resources
For further help and information click here.
References
1. https://adaa.org/understanding-anxiety/depression
2. https://media.mycme.com/documents/168/dsm-5_bipolar_and_related_diso_41789.pdf
3. https://www.healthline.com/health/could-it-be-bipolar-signs-to-look-for#7-signs-of-mania
4. https://www.healthline.com/health/bipolar-disorder/what-bipolar-feels-like#Takeaway
5. https://psychcentral.com/bipolar/psy-65033#bipolar-ii-defined
6. https://psychcentral.com/bipolar/psy-65033#lived-experience-signs-symptoms
7. https://www.medicalnewstoday.com/articles/324349#is-bipolar-curable
8. https://www.medscape.com/viewarticle/950877
9. https://www.healthgrades.com/right-care/bipolar-disorder/bipolar-disorder-prognosis-and-life-expectancy
10. https://health.usnews.com/health-care/conditions/articles/can-people-recover-from-bipolar-disorder
11. https://greatist.com/health/what-causes-bipolar-disorder#causes
12. https://dualdiagnosis.org/bipolar-disorder-and-addiction/
13. https://www.bphope.com/caregivers/partners-for-life/
14. https://psychcentral.com/bipolar/living-with-bipolar-disorder#overcoming-challenges
15. https://www.nature.com/articles/mp201773
16. https://www.sciencedaily.com/releases/2014/03/140304141734.htm
17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5524625/
18. https://www.mayoclinic.org/diseases-conditions/bipolar-disorder/symptoms-causes/syc-20355955
19. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4536929/
20. https://www.webmd.com/bipolar-disorder/guide/bipolar-disorder-suicide
BA in Psychology and MBA from Kent State. ENTJ Myers/Briggs and my love language is acts of service. However, I don’t think any of those things should provoke you to read my blog. Hmmm. I want to talk about things we all think about but, can’t freely talk about.
Must Know!
Signs and symptoms caused by cancer will vary depending on what part of the body is affected.
Some general signs and symptoms associated with, but not specific to, cancer include:
- Fatigue
- Lump or area of thickening that can be felt under the skin
- Weight changes, including unintended loss or gain
- Skin changes, such as yellowing, darkening or redness of the skin, sores that won't heal, or changes to existing moles
- Changes in bowel or bladder habits
- Persistent cough or trouble breathing
- Difficulty swallowing
- Hoarseness
- Persistent indigestion or discomfort after eating
- Persistent, unexplained muscle or joint pain
- Persistent, unexplained fevers or night sweats
- Unexplained bleeding or bruising
When to see a doctor
Make an appointment with your doctor if you have any persistent signs or symptoms that concern you.
If you don’t have any signs or symptoms, but are worried about your risk of cancer, discuss your concerns with your doctor. Ask about which cancer screening tests and procedures are appropriate for you.
Host list for hostsSetting getKey is empty – How to solve this Elasticsearch exception
Opster Team
August-23, Version: 7.5-8.9
Briefly, this error occurs when Elasticsearch is unable to find any hosts in the specified host list setting. This could be due to a misconfiguration in the Elasticsearch settings or the host list being empty. To resolve this issue, you can check the Elasticsearch configuration file and ensure that the host list is correctly defined. Alternatively, you may need to add hosts to the list if it’s empty. Also, ensure that the hosts in the list are accessible and running.
This guide will help you check for common problems that cause the log ” host list for [” + hostsSetting.getKey() + “] is empty ” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, hosts.
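As a sketch of what a populated host list can look like in elasticsearch.yml: the exporter name (`cloud_monitoring`) and endpoint below are placeholders, not values taken from the log.

```yaml
xpack.monitoring.exporters:
  cloud_monitoring:
    type: http
    # host must be a non-empty list, or the SettingsException above is thrown
    host: ["http://monitoring-host:9200"]
```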
Log Context
Log “host list for [” + hostsSetting.getKey() + “] is empty” class name is Exporter.java. We extracted the following from Elasticsearch source code for those seeking an in-depth context :
final String namespace = TYPE_SETTING.getNamespace(TYPE_SETTING.getConcreteSetting(key));
final Setting<List<String>> hostsSetting = HttpExporter.HOST_SETTING.getConcreteSettingForNamespace(namespace);
@SuppressWarnings("unchecked")
final List<String> hosts = (List<String>) settings.get(hostsSetting);
if (hosts.isEmpty()) {
throw new SettingsException("host list for [" + hostsSetting.getKey() + "] is empty");
}
break;
case "local":
break;
default:
[C++] sqrt(x) vs pow(x,0.5): comparing the speed of square-root computation
In C++, square roots are generally computed with the math functions in the <cmath> header.
Two approaches are possible: the dedicated function std::sqrt(x), or the power function std::pow(x, 0.5e0).
This article compares the speed of computing square roots with std::sqrt versus std::pow.
Which should you use to compute a square root, std::sqrt or std::pow?
Use the dedicated function, std::sqrt!
(See the companion article for the same comparison applied to cube roots.)
Benchmark
Results
Without optimization (-O0)
std::sqrt was about 11x faster than std::pow.
With optimization (-O2)
With optimization enabled, std::sqrt was about 23x faster than std::pow.
Benchmark code
Google Benchmark was used for the speed measurements.
Google Benchmark is covered in a separate article; see that article for details.
The code used for the benchmark is shown below.
#include <benchmark/benchmark.h>
#include <cmath>
#include <iostream>
#include <random>
static void plus(benchmark::State& state)
{
std::random_device seed;
std::mt19937 rng(seed());
std::uniform_real_distribution<> distr(-M_PI_2, M_PI_2);
double sum = 0.0e0;
for (auto _ : state)
{
sum += distr(rng);
}
// prevent ignore code by optimization
std::cout << sum << std::endl;
}
BENCHMARK(plus);
static void sqrt(benchmark::State& state)
{
std::random_device seed;
std::mt19937 rng(seed());
std::uniform_real_distribution<> distr(0.0e0, 1000.0e0);
double sum = 0.0e0;
for (auto _ : state)
{
sum += std::sqrt(distr(rng));
}
// prevent ignore code by optimization
std::cout << sum << std::endl;
}
BENCHMARK(sqrt);
static void pow_sqrt(benchmark::State& state)
{
std::random_device seed;
std::mt19937 rng(seed());
std::uniform_real_distribution<> distr(0.0e0, 1000.0e0);
double sum = 0.0e0;
for (auto _ : state)
{
sum += std::pow(distr(rng), 0.5e0);
}
// prevent ignore code by optimization
std::cout << sum << std::endl;
}
BENCHMARK(pow_sqrt);
BENCHMARK_MAIN();
The time spent on random-number generation and addition (the plus benchmark) was measured separately and subtracted from the results.
Conclusion
There was a significant difference between std::sqrt and std::pow in square-root computation speed.
The dedicated square-root function std::sqrt was shown to be faster.
In general, you should use std::sqrt.
(See the companion article for the same comparison applied to cube roots.)
Ehlers-Danlos Syndrome
Ehlers-Danlos syndrome is a whole group of genetically determined congenital diseases associated with connective tissue disorders. They occur in about one person in 5,000.
Causes
The condition is caused by mutation in a gene that is important for the proper function of the connective tissue. We distinguish particular subtypes of the syndrome according to the affected gene, but this is not important for general understanding.
Symptoms
Due to the connective tissue disorder, the ligaments are too loose and are unable to stabilize the joints. As a result, the fingers and other joints of both the upper and lower extremities are excessively flexible. The joints are unstable and prone to dislocation. Because the production of connective tissue influences wound healing, wounds heal more slowly or heal with abnormal keloid scars.
People with Ehlers-Danlos syndrome frequently suffer from valvular heart disease; their heart valves tend to be insufficient. The walls of the blood vessels are less strong and more susceptible to the effects of blood pressure, which can cause aneurysms, typically aortic aneurysms. Rupture of such an aneurysm is usually a life-threatening condition. Women with Ehlers-Danlos syndrome have risky pregnancies, as there is an increased possibility of bleeding or rupture of the enlarged uterus.
Diagnosis
The condition can be confirmed only by genetic testing which finds mutation of a certain gene.
Prevention and treatment
Ehlers-Danlos syndrome is a congenital disease and therefore there is no prevention and no cure. Joint disorders are treated by orthopedists (reduction of dislocated joints, prescription of devices to stabilize joints, rehabilitation, etc.). The stability of blood vessel walls can be increased by regular, adequate intake of vitamin C, but its effect is far from miraculous. Heart and vascular disorders are treated by cardiologists, angiologists, and surgeons.
To prepare antifebrin using phenylammonium chloride (C6H5NH3Cl) and ethanoic anhydride ((CH3CO)2O).
Introduction
Preparation of the Pharmaceutical Antifebrin. Antifebrin is an example of an important pharmaceutical. Chemically, antifebrin is the amide phenylethanamide, CH3CONHC6H5. Aim: To prepare antifebrin using phenylammonium chloride (C6H5NH3Cl) and ethanoic anhydride ((CH3CO)2O). Background knowledge: Physical state: white flakes, odourless. Specific gravity: 1.219. Solubility in water: soluble in hot water. Vapour density: 4.65. Auto-ignition: 545 oC. Stability: stable under ordinary conditions. Applications: Acetanilide is used as an inhibitor in hydrogen peroxide and a stabiliser for cellulose ester varnishes. It is used as an intermediate in the synthesis of rubber accelerators, dyes and dye intermediates, and camphor. It is used as a precursor in penicillin synthesis and in other pharmaceuticals and their intermediates. In this report, the assessment requires the use of manipulative skills, including the use of the Büchner funnel and the recrystallisation method. My experience of these skills was demonstrated to my teacher while performing the experiment. Fair testing: In order to obtain valid data and ensure accurate and reliable results, I will take the following points into consideration: * When preparing the experiment, place all the equipment in line, in a coherent and organised manner.
Middle
Glass funnel * Sodium ethanoate, 6.0 g * Rubber bung * Filter paper. Diagram: diagram showing the set-up of apparatus. Method: 1. Dissolve 1.0 g of phenylammonium chloride in 30 cm3 of water in a conical flask. 2. Prepare a solution of 6.0 g of sodium ethanoate in 25 cm3 of water in a conical flask. 3. Carefully add 2 cm3 of ethanoic anhydride to the solution of phenylammonium chloride and stir vigorously until all of the ethanoic anhydride has dissolved. Now add the sodium ethanoate solution and continue to stir for a further three minutes. 4. The solid that has collected is a crude sample of the antifebrin. This should be collected by filtering under reduced pressure. It should then be washed with a little cold water. 5. Recrystallise the whole of your product from the minimum volume of hot water. Allow the mixture to cool and, when crystallisation is complete, filter off the pure product under reduced pressure. 6. Dry the bulk of your product in air, and a small portion between filter paper.
Conclusion
The precision of collecting data could be improved by: * repeating the experiment under laboratory-controlled conditions and in an airtight environment; * repeating the experiment as many times as possible to gain the best possible yield without losing too much of the product. A perfect reaction would convert all of the starting material to the desired product, but very few do. My reaction gave a yield of only 67.7%; reasons for this could be as follows: * there may have been some side reactions producing by-products instead of the desired chemical; * some of the product is lost during transfer of the reaction mixture from one piece of equipment to another, and when the product is purified and separated; * there may be impurities in the product; * recovery of all the product from the reaction mixture is usually impossible. Improving any experiment is ultimately about reducing the errors within the method. The experiment could be repeated as many times as possible to gain an average set of results. Chemistry - Preparation of Antifebrin, Thursday 13th Feb, Deepan Patel
Category:
What Would Happen if the Polar Ice Caps Melted?
A map of the polar ice caps.
Many island nations in the South Pacific would be covered if the polar ice caps melted.
Low-lying countries could experience considerable flooding and devastation with further ice cap melting.
Article Details
• Originally Written By: Michael Anissimov
• Revised By: A. Joseph
• Edited By: Niki Foster
• Images By: Lunar And Planetary Institute, Dmitry Pichugin, Didoi
• Last Modified Date: 05 August 2014
• Copyright Protected:
2003-2014
Conjecture Corporation
There has been much worry about the possibility that global warming will cause the polar ice caps to melt and flood many coastal cities. Coastal flooding could be catastrophic because virtually all of the world's metropolitan areas that have more than 10 million people are located on or near coasts. In short, if both polar ice caps melted, sea level would indeed rise enough to flood many coastal areas and change the world's coastlines. Most scientists, however, believe that the process would take thousands of years.
Most of the world's ice — almost 90 percent — is in Antarctica. The continent is covered by an ice sheet that is about 7,000 feet (2,133 m) thick. Depending on the time of year, there is about 800 to 1,000 times as much ice covering Antarctica as there is in the Arctic Circle, where the ice cap floats on the ocean rather than resting on land.
The effects of melting Arctic ice alone would be relatively small. Most of the rise would come from water of the Antarctic ice cap running into the ocean: if both polar ice caps melted, the world's oceans would rise by about 200 feet (61 m). The average temperature in Antarctica is minus 35° Fahrenheit (minus 37° Celsius), well below the temperature at which water freezes, so any significant melting of the Antarctic ice cap is considered very unlikely. It is considered more likely that only a portion of the ice will melt, even over a long period of time, and that sea levels will rise by no more than a few feet or meters.
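The roughly 200-foot (61 m) figure can be sanity-checked with a back-of-envelope calculation. A minimal sketch, assuming commonly cited approximate values (about 26.5 million km³ of Antarctic ice, a global ocean area of about 361 million km², and ice at roughly 91.7% the density of water) — none of these numbers appear in the article itself:

```python
ICE_VOLUME_KM3 = 26.5e6       # assumed Antarctic ice sheet volume
RHO_ICE_OVER_WATER = 0.917    # ice density relative to fresh water
OCEAN_AREA_KM2 = 361e6        # assumed global ocean surface area

# Convert the ice to an equivalent volume of liquid water, then spread it
# evenly over the ocean surface (ignoring coastline changes, thermal
# expansion, and isostatic effects).
water_equivalent_km3 = ICE_VOLUME_KM3 * RHO_ICE_OVER_WATER
rise_m = water_equivalent_km3 / OCEAN_AREA_KM2 * 1000.0
print(round(rise_m, 1))  # roughly 67 m, the same order as the article's 61 m
```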
If the polar ice caps melted enough to cause the ocean levels to rise several feet or a few meters, however, the results would be significant. The streets of many current coastal cities would be underwater. Low-lying areas, such as many of the islands of Indonesia, could become almost entirely submerged. Flooding also could cover much farmland and affect the world's food supply. Farmers in the flooded areas would need to move to more elevated, likely rockier land, which might be less suited to growing crops.
What is not possible is that all the world's land would be covered if the polar ice caps melted. There simply is not enough ice on Earth for this to happen. Even in a severe flood, only a small percentage of the world's land would be lost.
Discuss this Article
anon948825
Post 65
I do not think this will happen anytime soon, so don't worry. If it were to happen, it would be about a thousand years away, or two thousand, or more, so there you go. Peace out.
anon354077
Post 63
You aren't taking into consideration the 'density' of salt water in comparison with fresh water. If the Arctic ice melts, there will be a rise in ocean levels.
anon326493
Post 61
This is not true because most of Antarctica's ice is above sea level on land. If it melts, it will flood into the sea and raise the sea level. The ice cube in a glass of water only applies to the north pole where the ice is actually floating. The South Pole is on land and thousands of feet thick.
anon316455
Post 58
Question: Is anyone looking at the internal geological events and their potential impact on global warming? I have followed the ongoing discussions on man's impact but I have not seen any publicity concerning the discovery of thousands of thermal vents that have been reported over the past decade, nor of the volcanic activity under the Arctic Ocean, with NASA and NOAA both having articles tucked away as to such activity having occurred in the 1990's.
The potential for warming strikes me as just as probable from internal sources the atmospheric ones, possibly more. Given the fact that the magnetic north has moved more in the past 20 years than it has in the prior 80, one might think that the huge molten iron core is on the move and its impact could be much more profound than fossil fuels. Given the laws of thermodynamics heating seawater from thermal vents(500deg. F plus) and warmer ocean floors beats the liquid/gas interface on the oceans surface. Oh -- and then there is something about the increased solar impact.
anon312223
Post 56
If only sea ice were to melt (the condition which, for the most part or in total, exists within the Arctic Circle), then the sea level would drop. If only land ice were to melt (Antarctic and other glaciers), then the sea level would rise (Archimedes' principle of relative density of solid vs. liquid H2O; if you don't understand, I would recommend taking a basic course in physics and chemistry).
Salt water freezes at a lower temperature, thereby (all else being equal) tending to keep sea ice from reforming once melted in the face of higher average sea and air temps. However, if landlocked ice doesn't melt in an equivalent liquid volume, then the sea level would drop. That's a lot of ifs. There are many interacting parts in this very complex system, which in addition to both natural, earth bound forces (including man), are also, possibly to a large extent, influenced by forces outside of earth, our solar system and possibly our galaxy, which could be a source of (up to this point) the relatively poor predictive value of the computer models.
If the world's developing countries, most prominently China and India, continue to expand their fossil fuel consumption, and this will probably occur, then any drop in the USA's CO2 output will contribute very little to diminishing the total atmospheric concentration of this indispensable gas.
anon312089
Post 55
If the entire Earth was under water at one time, where did all the water go? Water doesn't just disappear; it turns to gas and ice, but the size of the polar ice caps doesn't add up to all the Earth being under water, and the water vapor doesn't add up either. So where did the 200,000,000,000 gallons go? Give and take! (And on that note, ice is expanded water, when frozen.)
anon303146
Post 53
If the sea rise is inevitable, then a logical response is to create places for it. Terraform in Africa, for instance, and create large, deep saltwater lakes that will hold some of the water. The subsahara needs more water anyhow. Death Valley in Calif would also be a natural spot.
Before you cry about environmental damage, remember that these areas would likely be swamped anyhow. Yes, it would require a massive effort, but not as much effort as relocating New York. And anyhow, the massive construction equipment we have these days can do almost anything.
anon301765
Post 52
Ice in glass melting isn't the same as the polar ice caps because a lot of the ice is above the surface of the water on the polar ice caps. Ice in the water usually doesn't float above the surface.
Fill the cup full of ice, fill the cup 3/4 and let it melt. The level of water then will rise.
anon301762
Post 51
So if it takes 1,000 years for it to melt completely, wouldn't man slowly adjust to the rising water levels? Wouldn't the ocean and its life slowly over time adjust? So what is the harm? Yes, some species will vanish, but others will take their place.
Haven't all species on earth been downsizing since the ice age? Think about it: the small species survived the dinosaur age and Ice age, while the larger ones have gone. Shouldn't polar bears (the largest bears) seals, blue whales, etc., die off in favor of small species that can better change with the environment?
anon273935
Post 48
@anon225927: Ice is not heavier. Ice cubes float in your drink. Ice has greater volume than liquid water; that's the catch (Ice is less dense).
anon270905
Post 47
I am glad that there may be some land left. I was worried that all the land would be gone. At least humans have a small chance of surviving on land.
anon262964
Post 46
Volume = mass
Mass doesn't change with density; it stays same. A lot of people here don't really understand how ice melts and what scientists mean by "floating." Most ice is, in fact, residing on the ground mass. the "floating" ice is only in about a few feet (maybe even less) of water.
Yes, when you freeze water, its volume expands and it takes more space for the same mass (granted, in Archimedes' principle, it floats when in a similar mass of liquid).
When ice from the North Pole melts, it's basically huge chunks of ice where it's mostly under water but where large parts are outside of the water. This mass floats south and melts there (it doesn't melt in the North Pole!)
When we say global warming is affecting polar ice, does it mean that ice caps are so huge that you have several (hundreds?) of meters of ice bearing on a highly compressed tiny amount of water? Weight and pressure melts the bottom ice rendering it less inert, thus putting pressure on it to crack. This then results in icebergs.
Add that to milder temperatures and this creates less top ice in the winter (less condensation) and so you get "melting" ice crusts.
anon240361
Post 43
All this is well and good, but water levels are rising, the ambient temperature of the earth is heating up. One way to solve this is a canal built east to west of North africa. The substantial vapor would cover more of the atmosphere in the form of clouds, thus acting as a shield against the sun's UV. Also as a byproduct, once sea water finally turns into fresh water, it would add to the ecology of North Africa. hopefully stemming the growth of the Sahara. Building aqueducts off the canal, would in turn enhance the overall ecology. In time creating a growth of plant life, in a sustainable environment.
anon236189
Post 40
If both the ice caps melted, would the surrounding seas become colder or warmer?
anon227296
Post 39
If the ice caps did melt into the ocean, would all the fresh water from them upset the balance of salt water? Wouldn't that kill a large part of the ocean life? Isn't the ocean whats keeping us alive? I mean, in theory, didn't we come from the ocean? and doesn't the ocean have a huge impact on us, for living I mean?
I look at it like this: if there is a God, he will do what he wants to do, and we can't stop that. If it is just us, we have trashed this beautiful planet and I don't know if it will bounce back. Does anybody? Are we completely messed up?
anon225927
Post 38
I'm sorry to say, but are you kidding? I'm sitting here losing my mind over these comments.
"Simple logic, simply proven, but you won't find these facts anywhere because the Libs want your money". What?
"If both caps melted, the water level would rise 61 meters". Which planet was that?
"My mom said it will take a 100 million years from now on for all the ice to melt and cause flooding". Noah's ark, anyone?
"If you take a glass of water and put ice in it, the level of water will rise since ice is heavier than water, but when the ice melts, the water level will go down. So, melting ice will not raise ocean levels at any time". Enough said.
"With this in mind, the two hot and cold climates will disrupt the equilibrium of the equator and will combine to make catastrophic climate conditions and catastrophic natural disasters". So ahh whats it in equilibrium with?
anon182686
Post 37
I agree entirely the fact is evidence is all around us. Not will we only suffer but so will the wildlife. For example, the polar bear population has gone down due to ice melting and the polar bears are left to drown. If we do not act with all haste the Earth will be destroyed.
anon168517
Post 36
When ice acts like an insulator for the sun in a factor 1C, and seawater acts like 6 times the opposite, to store sunshine, then a melting insulator is not only disappearing, but it is adding to the storage capacity, which will increase the meltdown in a rapidly more affecting fashion.
I'll give the details in a moment, but it comes down to an estimated time before all ice is melted off, no more then give or take a little bit: 13 years! Why it must be 13 again, I don't know.
If we are wrong about the complexity of this issue, we had better start to try and delay it.
Estimated time before all ice has gone = M
Storage capacity of all seawater = S
Sn = the new value of all seawater
melting ice = m = Cn
meltwater= m*1,1 ( Ice floats on water, remember )
Reflective surface of the Icecap = I
In = the new value of the Icecap
The working formula being 6000/400 = 15 as a starting value of Sn/In = Cn
expected years from now (2011) before all the ice has gone M = 100 - Cn = 85 ( so until 2096 )
Cn+1= Sn+1 + Cn*6,6 / In+1 - Cn
Cn+2= Sn+2 + Cn+1*6,6 / In+2 - Cn+1
Taking you through the development according to this formula in four steps:
In six years' time, this results in 6794/278= 22,71
M = 100 - 23 = 77 years from now (2011 + 6 = 2017)
this would mean a loss of two years, so until 2094
In 9 years time this results in 7293/203= 31,63
M = 100 - 32 = 68 years from now (2011 + 9 = 2020)
this would mean a total of 77 years, so until 2088
In 13 years time ( why o why is this 13 again ) this results in 8440/29 = 94,95
M = 100 - 95 = 5 years from now ( 2011 + 13 = 2024)this would mean a total of only 18 years until 2029.
But in that next year the crossover point has been reached, and in a few months all ice will be gone.
Being a mathematician and not a physicist, the C may be slightly off, or missing factors, but it shows the effect of melting not 1 C less ice into 1 C more sea, but an increasingly growing factor of acceleration.
I have come up with an idea of putting huge sheets of gold- or silver-coloured reflective foil over the area of the icy seas. This way the equation would not be M= 100+1 - Cn+1= Sn+1 + Cn*6,6 / In+1 - Cn
but M= 100+1 - Cn+1= Sn+1 + Cn*1.1 / In+1 - Cn
because the loss of reflective power is compensated by the foil. We might even be able to reverse the process by adding so much foil that the seas are cooling down a little. Just to give us some time to find out what the actual complexity of the process is and if necessary find a proper solution in time.
anon159046
Post 35
I'm sorry, but most of the people writing comments here are talking about facts and theories with very little basis in actual science. Please refrain from talking about "fact" and "truth" when it is in fact totally bogus and based more in superstition than actual science.
anon153878
Post 34
Let's pretend that the old star betelgeuse goes super on 2012. Would that raise the earths temp having two suns? could that melt the caps? (not religious) but Christ's second coming all will see and I'm sure nostradamus said something about doomsday and all witnessing the event. seems funny that the day after tomorrow and 2012 films seem to be leading the way. 11-11 is a strange phenomenon which people are signing up to.
Something is coming, call me daft, but life is not solely on science bricks and fact. We have knowledge of something. We just need to put all the pieces together. i Hope i have you thinking as someone needs to.
anon148949
Post 33
There have been a few erroneous comments about whether the water level would indeed rise should the polar ice caps melt; here are some elementary facts.
Normally when something freezes, the result is a more densely packed molecular structure: density increases and volume decreases. Water, however, has the peculiar property that freezing increases its volume. One obvious side effect is a decrease in density, since the same amount of water now occupies a larger volume; this is why ice floats on liquid water.
The second phenomenon is commonly known as Archimedes' principle: for an object to float in water, it must displace a weight of water equal to its own weight.
Combining these two basic physics principles, we arrive at the conclusion that the melting of Arctic ice (or its re-freezing, for that matter) will have no impact on the sea levels of the world. On the other hand, the melting of the Antarctic ice cap will raise the sea level, because that ice sits on top of a giant landmass.
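The two principles in the comment above can be checked numerically. A minimal sketch, assuming fresh water and approximate densities (917 and 1000 kg/m³):

```python
RHO_ICE = 917.0     # kg/m^3, approximate density of ice
RHO_WATER = 1000.0  # kg/m^3, fresh water for simplicity

m = 1000.0  # one tonne of ice

ice_volume = m / RHO_ICE          # volume while frozen (~1.09 m^3)
meltwater_volume = m / RHO_WATER  # volume after melting (1.0 m^3)
displaced_volume = m / RHO_WATER  # Archimedes: floating ice displaces its own weight of water

# Freezing expands water: the frozen block occupies more space than its melt.
assert ice_volume > meltwater_volume

# Floating (sea) ice: the meltwater exactly refills the displaced volume,
# so melting sea ice does not change the level.
assert abs(displaced_volume - meltwater_volume) < 1e-12

# Land ice displaces nothing while frozen, so its melt is all new ocean volume.
print(meltwater_volume)  # 1.0 m^3 added per tonne of land-based ice
```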
anon148928
Post 32
Lots of misconceptions there:
The approximate alignment of the planets in 2012 will have absolutely no bearing on the Earth, because of the large distance away each planet is.
Only the Earth's moon has an effect on the Earth, and again that is because it is relatively close to the Earth, which gives it gravitational pull.
The "fear" about 2012 is nothing more than an internet hoax about a phantom "planet" that is supposedly going to collide with the Earth. That was originally predicted to happen in May 2003. When that didn't occur, it was conveniently moved to the winter solstice of 2012 to coincide with the Mayan calendar. However, the phantom planet does not exist.
As far as global warming and magnetic north, those are two totally unrelated items. Magnetic North IS constantly changing and is currently moving towards Russia at a rate of about 40 miles/year. In fact about every 4,000 centuries Magnetic North and Magnetic South actually change places with each other.
Global warming is partially naturally-occurring, but the rate of increase in global warming is due to the effects of human beings, specifically CO2 and methane emissions.
anon140581
Post 31
magnetic north hasn't always being where it is today. Global warming is a natural occurrence, and it cannot be stopped. It has happened before -- multiple times.
In about another 7000 to 12000 years, magnetic north will be about 32 degrees off where it is now.
The thing about 2012 (the end of the world) that worries me is that a lot of the planets in our solar system will align. What will this alignment do? Will it do anything? Will it pull earth's axis drastically? Kind of like moving the clock ahead 9000 years in one earth day?
anon136891
Post 30
I have a common knowledge question based on this. The northern polar ice cap is freezing over whilst the southern polar ice cap is melting. I know this due to New Zealand's and Australia's summers getting about 1-2 degrees celsius hotter and more humid over the past couple of years.
Australia was in the midst of a severe drought, and up until a couple of weeks ago Australia suffered the harshest floods in years. The melting of the southern polar ice cap will cause temperatures to rise in the south and in the north the temperature will be getting colder. Reason due to the southern ice cap melting is the hole in the ozone layer. Therefore, New Zealand and Australia are rising in temperature and humidity.
With this in mind, the two hot and cold climates will disrupt the equilibrium of the equator and will combine to make catastrophic climate conditions and catastrophic natural disasters.
We have commonplace here with Mother Nature but we still haven't seen her unleash her fury! We have a lot more natural disasters happening now, more than ever before! So with this theory in mind, this is a 100 percent true fact and a common knowledge answer!
anon123162
Post 27
nonsense? Not. Land based ice melts will raise the ocean levels and yes, this, in turn, will flood the amazon/st.lawrence river/arctic and coastal rivers flowing into canada and out of canada. The lake ontario region will be flooded. Then montreal, kingston, ontario and toronto will be flooded. PEI, Nova Scotia, Bay of Fundy -- all flooded.
Political scientists know this, too. It's coming, folks. It's coming. It's a scientific fact and not scare mongering at all. It's physics, plain and simple. Man made? Not. It's the sun that's doing it. It's called sun cycles, folks. Kiss the known world goodbye. Humanity is stupid, but are masters at hindsight.
anon122899
Post 26
Will this melting of polar ice cause any change in the earth's radius?
anon120784
Post 25
has anyone thought about the fact that, if the ice melts and the eventual rise in temp at the bottom of the ocean matches the temp at or not too far below the surface, the entire ocean cycle will cease to exist? stopping this cycle has a huge effect on weather patterns around the globe. the rise in ocean temps result in bigger, more fierce and damaging storms.
The ice caps keep the ocean at a cooler temp
and I imagine also contributes to the health of the ocean due to the "recycling" caused by the colder water flowing below the surface.
This will also have an effect on agriculture due to the rise in air temp also caused by the death of ocean currents, causing droughts and huge changes in weather patterns and activity.
In my opinion, it will also contribute to new diseases. Who knows what pathogens or air borne illnesses lie within the four thousand year old ice, and is it possible they could be released. I think that should be kept in mind at least.
Not only that, but the rise in air temp will spread new diseases, most likely across the globe.
I know you guys are on the "drowning" argument, but there's more to the ice melting than having to buy boats.
When that cycle stops, things are going to get a whole lot different.
anon111353
Post 24
my mom said it will take a 100 million years from now on for all the ice to melt and cause flooding.
anon106313
Post 23
sorry anon46092, you're wrong. if both caps melted, the water level would rise 61 meters, the north pole ice cap is over a mile thick and most of it is floating on the surface of the sea.
your experiment is wrong. take half a glass of water and freeze it, then take a second glass, half-filled. let the frozen water melt then add it to the first glass. The molecules in frozen water are more tightly packed together. when it melts, the total volume increases. basic science, dude.
anon104578
Post 22
Only the ice that is above the current ocean level would contribute to a sea level rise, and that only partially since ice is less dense than melted water. The reports of of the ocean level rising is in the same category as reports that the sky is falling.
anon94129
Post 21
if 2012 actually happens and the poles do, in fact, move to new areas, it could cause extreme problems. the ice would melt in weeks or days, not years. natural disasters have happened before and they still can't be explained. anything is possible.
anon91251
Post 20
Anon 82750: "Ice is heavier than water." Wow! Am I the only one who caught that?
anon88968
Post 19
Greenland and Antarctica are not giant ice cubes floating on water. They are on land, several meters above the sea level.
anon82750
Post 17
Everyone should try this at home. If you take a glass of water and put ice in it, the level of water will rise since ice is heavier than water, but when the ice melts, the water level will go down. So, melting ice will not raise ocean levels at any time.
The only way oceans could rise is if a part of land rises up from the sea bed, since two objects cannot occupy the same place and this will happen sooner than you think.
anon73904
Post 16
The ice on the poles has not increased the last few years, it has gone down a great deal, and big ice on land has also. Ice in greenland is melting more each year.
anon73706
Post 15
hey anon65542, ice melts at 32 degrees f, not water. water is already the melted form of ice.
anon71238
Post 14
yes, water is most dense at four degrees celsius, but the fact remains that 90 percent of the earth's ice is in antarctica, and therefore, on land.
unless i missed something, ice that is on land cannot displace any water. so if antarctic ice melted, the sea level actually would rise.
sorry anon46092, you got it wrong. but don't worry, you can still inform the "Libs" that it won't happen. The average annual temperature in antarctica is still negative 37 degrees celsius- far below the melting point of water.
anon65542
Post 13
Water melts at 32 degrees F or above. It never gets that warm at either pole. The average temperature of the interior region of the Arctic mass is -37 degrees. It is not going to melt. The Arctic ice has increased for the last few years.
anon63226
Post 12
Your ice in a cup experiment is flawed. A lot of ice is on land, not floating. An accurate ice in a cup experiment would be putting a platform above the water level in the cup and placing the ice on the platform. The platform represents the land that the ice cap is lying on, such as Antarctica, or Greenland.
anon61685
Post 11
i have tried to express these same theories to all those conspiracy nuts but they all refuse to believe. They all want to believe that everything is going to cause the end of the world so they do not have to feel responsible for foolish beliefs. I say believe what you can see, test the theory for yourself watch the water fall when the ice melts. Also, when you place ice in a glass of water most of the ice is under the water leaving only a fraction above the surface. Why do people refuse to believe their own eyes?
anon56690
Post 9
If the North Pole melted and the sea were not salty, the net rise in sea level would be zero at 0 degrees Celsius. That’s because the weight of water that is displaced by ice is equal to the weight of the ice.
As the temperature increases to four degrees Celsius, the volume will decrease slightly since water is most dense at that temperature. Warming further, the volume increases well past the volume at zero degrees. The warming of the seas devoid of ice is the source of significant sea level increase. Since the sea is salty, and more dense than fresh water, the level will increase, depending on the local salinity, as the ice melts and the level will continue to increase as before, above 4 degrees C.
So, with all of the money you’re saving by living in your neo-con dream world, maybe I could interest you in some nice beach front property east of Sacramento!
anon55904
Post 8
But if the ice caps have more water than we have land, then the fact stays that the water would flood the land, and without any land, where would we be then?
anon55022
Post 7
anon46092: Most of the southern polar ice rests on a massive body of land called the Antarctica continent and thus does not displace ocean water as you would think.
anon54865
Post 6
But if ice floats and most of it is not in water how does that make it true?
anon52434
Post 5
The water displacement only happens if the ice cap on the North pole melts, not both.
anon46092
Post 3
It is a scientific fact that if BOTH ice caps melted completely, the ocean levels would actually decrease. A simple experiment you can do at home will prove this fact. Fill a glass of water half way, then fill the rest of it with ice. Measure the water level in the glass then allow all the ice to melt. The water level goes down. The reason? The amount of water displaced by the ice is actually greater than the amount of water after the ice melts. The amount of ice at both poles is enormous, displacing ocean water. Even if all the land ice melted as well, the runoff would return the ocean to the same level it was before *both* ice caps melted. Simple logic, simply proven, but you won't find these facts anywhere because the Libs want your money.
Inflammation of the Tongue and Stomach, Muscle Cramps, and Other Signs of Pernicious Anemia
Photo: the spleen in pernicious anemia
Pernicious anemia (Addison-Biermer anemia, megaloblastic anemia) is a chronic disease observed in old age.
Clinical picture of pernicious anemia
Pernicious anemia is characterized by involvement of three systems: the digestive system, the nervous system, and the blood. On the digestive side, the early stages show inflammation of the papillae of the tongue (Hunter's glossitis), followed later by their atrophy ("polished" tongue), and histamine-resistant achylia, which results from a deep atrophic process in the mucosa of the fundus and body of the stomach that also halts production of the intrinsic factor of Castle (gastromucoprotein). The spleen and liver may be enlarged.
On the nervous side, funicular myelosis (degeneration of the posterior and lateral columns of the spinal cord) is observed: paresthesias of the forearms and hands, shins and feet, muscle cramps, weakness in the legs, sometimes bladder dysfunction, and reduced Achilles, knee, and other reflexes.
Diagnosis of pernicious anemia
The blood shows hyperchromic anemia (color index 1.2-1.5). Poikilocytosis with a tendency toward ovalocytosis, anisocytosis (macrocytosis with megalocytes and microcytes), anisochromia, and hyperchromia of the erythrocytes are pronounced. Megaloblasts, macroblasts, and normoblasts may be found, as well as erythrocytes with Howell-Jolly bodies, Cabot rings, and azurophilic granulation.
The white blood count reveals moderate leukopenia with neutropenia and hypersegmented neutrophils (a right shift of the formula); giant neutrophils are rare.
The platelet count is reduced, and large forms occur.
Serum bilirubin and iron levels are elevated. The bone marrow shows megaloblastic hematopoiesis and giant forms of metamyelocytes; normoblasts with basophilic granulation may be present.
Treatment of pernicious anemia
Vitamin B12 and folic acid are prescribed.
I. I. Goncharik
Japan Aerospace Exploration Agency:JAXA
Experiment
Do frog cells form “domes” in space, too?
Control of cell differentiation and morphogenesis of amphibian culture cells
Dome Gene
Background
Figure 1. Dome formation
A: When separate cells are put together, they begin to communicate with each other and play their own roles to shape certain forms.
Xenopus laevis kidney-derived cells (A6 cells) form dome-shaped structures.
B: Micrograph of A6 cells
Arrows indicate dome formation.
C: Micrograph of A6 cells cultured under a simulated microgravity condition
Dome formation is inhibited.
Xenopus laevis is an amphibian that lives in water as a tadpole before it starts to live on land. The effective gravity in water, where buoyancy supports the body, differs significantly from that on land, and the difference should affect the organs of the frog's body. This experiment focuses on the kidney of Xenopus laevis.
When cells from the kidney (kidney-derived cells) of Xenopus laevis are cultured, they form particular shapes. They are raised like domes and called “dome structures” (Figures 1A and 1B). No cells from other organs form such structures when cultured.
Initially, the cells are separate and all share the same characteristics. As they proliferate, they come into contact with each other, forming clusters. They then begin to function differently from when they were separated: they take on their own roles within the clusters and form dome-shaped structures. It looks as if they were communicating with each other (Figure 1A).
During the development of an organism, as a fertilized egg gradually divides, what initially are aggregated undifferentiated cells with unknown identity gradually become systematically differentiated into organs with various functions. In the process of culturing kidney-derived cells, no fertilized eggs divide, but there may be phenomena related to cell differentiation and development because cells in clusters play their own roles and form structures in the process of morphogenesis.
The dome is similar in structure to Bowman's capsule, a structure in the kidney of animals that acts as a filter to remove waste products from the blood. The cells may be forming a very primitive version of the most important structure in the kidney.
Objective
In the 1G environment on Earth, kidney-derived cells begin to form domes in about 10 days. What will happen in space, where gravity is very low? Will domes be formed as on Earth?
An experiment with simulated microgravity on Earth found that the time for dome formation was different. It was discovered that the genes that act on dome formation on Earth acted differently under the simulated microgravity condition (Figure 1C).
In this experiment, kidney-derived cells will be brought to space to see whether, when, and in what shape they will form domes and to study the genes involved in dome formation. Cells from the liver (liver-derived cells) of Xenopus laevis will also be brought to space for evaluation. Unlike kidney-derived cells, liver-derived cells do not form domes; they will be brought to space for comparison.
Outline of the Experiment
Figure 2. Clean bench in Kibo
Cultured cells can be observed through the built-in microscope. Microscope images are shown on the display at the top of the device.
Cells from the kidney and liver of Xenopus laevis in culture will be launched. They will be cultured at 22°C under microgravity or about 1G artificial gravity.
The kidney-derived cells will be observed for the time of dome formation and the dome shape. Comparisons will be made between the microgravity condition and the artificial gravity condition. Once domes are formed, astronauts will observe them under a microscope.
A device called a clean bench will be used for observation (Figure 2). It has a built-in microscope, which allows immediate observation of samples such as cultured cells. Astronauts observe cells on the LCD display placed at the top of the device. Images can be sent to Earth and researchers can see them.
The cells before and after dome formation will be treated with a chemical and frozen for preservation. They will be recovered on Earth. The entire experiment will last for about 10 days.
After recovery on Earth, the cells will be evaluated for differences in the genes involved in dome formation, using the world's highest quality DNA microarray.
This is the Point!
Makoto Asashima has long been specializing in developmental biology. He has discovered that a protein called activin plays a very important role in determining which cells become which organs in the process where fertilized eggs divide repeatedly to gradually form various body parts and organs. He has identified the action of activin.
In this space experiment, the expression level of the gene for activin production may change. Yet unknown genes may be expressed and alter dome formation.
Asashima has conducted many embryological experiments using frog and newt eggs. During early development, humans and frogs are similar in the orders of organ formation and gene expression. This experiment will provide important clues to the understanding of whether normal development, differentiation and morphogenesis will take place in space and how they may be affected by weightlessness. Ultimately, the findings may pave the way for the days when people will live in space and alternation of generations will take place there.
How are genes evaluated?
Figure 3. Luminescence pattern from DNA microarray
The expression “analyzed at a genetic level” is used in almost all space experiments these days. How are genes evaluated exactly?
Since genetic DNA is a blueprint, it does not increase or decrease. Usually, in protein production, a messenger RNA (mRNA) reads information contained in DNA and produces a kind of template. It is then carried to a ribosome, or a protein factory, where amino acids are brought in according to the template and assembled to synthesize protein.
Therefore, when someone says that genes have increased or decreased, it generally means that the quantities of the materials involved in the process at and later than the RNA phase have changed. When a gene starts to provide mRNAs, it means that the gene has become activated. This is a phenomenon called “gene expression.” Measuring gene expression means measuring the level of mRNAs.
To measure gene expression, a technique called DNA microarray is used. Thousands to tens of thousands of genes to be evaluated are arranged over a chip. Genes extracted from cells recovered on Earth after a space experiment are placed over the chip. The genes that have been expressed in space develop a color and glow. The quantities of the genes can also be measured. This technique reveals which genes have been expressed in space and which have not.
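As a rough sketch of the arithmetic behind such a comparison: the per-gene signal intensities from the two conditions are turned into log2 fold changes, and genes beyond some cutoff are flagged as differentially expressed. The gene names, intensity values, and the 2-fold cutoff below are invented for illustration; a real microarray analysis adds normalization and replicate statistics.

```python
import math

# Toy intensity readouts per gene (invented numbers, no normalization).
space_signal = {"geneA": 1200.0, "geneB": 150.0, "geneC": 40.0}
ground_signal = {"geneA": 300.0, "geneB": 160.0, "geneC": 35.0}

def log2_fold_change(space, ground):
    """Expression ratio of space vs. Earth samples, on a log2 scale."""
    return math.log2(space / ground)

for gene in space_signal:
    fc = log2_fold_change(space_signal[gene], ground_signal[gene])
    # A common (arbitrary) screen: |log2 FC| >= 1 means at least a 2-fold change.
    status = "changed" if abs(fc) >= 1 else "unchanged"
    print(f"{gene}: log2 FC = {fc:+.2f} ({status})")
```

In this toy run only geneA would be flagged, since its signal quadrupled in the space sample.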
Principal Investigator
Makoto Asashima
Vice president and board member, University of Tokyo
Copyright 2007 Japan Aerospace Exploration Agency
Are All Golf Cart Keys the Same
No, golf cart keys are not all the same. Each golf cart has its unique key.
In most cases, the key is specific to a particular brand and model of the golf cart. This means you cannot use the key from one golf cart to start another. Golf cart keys prevent unauthorized use and ensure only the rightful owner can operate the vehicle.
Therefore, keeping track of your golf cart key and keeping it secure is essential.
Different Types of Golf Cart Keys
Golf cart keys come in various types, each serving a specific purpose. Master keys are versatile and can operate multiple carts. Ignition keys are used to start and power the cart’s engine. Clubhouse keys grant access to golf cart storage areas.
Restrictor keys limit the cart’s speed for safety reasons.
Remote keys enable convenient locking and unlocking of the cart. It is important to note that not all golf cart keys are identical, as they are designed to meet different needs and functionalities.
Understanding the different types of golf cart keys ensures that you have the correct key for the proper purpose, maximizing convenience and efficiency.
Key Features and Functions
Golf cart keys are not all the same, as they have different features and functions. The design and shape of the key can vary, with unique notches and grooves.
Some keys can be programmed to perform specific functions. In addition, keyless entry options are available for added convenience.
Moreover, specific keys also come with security features to prevent unauthorized access. So, it is crucial to understand the key features when choosing a golf cart key.
Compatibility Considerations
Golf cart keys are not all the same; compatibility with different brands should be considered.
Different key brands may or may not work across various golf cart models, so it’s essential to ensure your key is compatible with your specific golf cart.
Additionally, key compatibility extends to different golf cart systems as well. Some keys may work with one system but not with another, so choosing the correct key for your specific system is crucial.
Furthermore, compatibility with aftermarket accessories is another aspect to consider.
Some keys may work seamlessly with aftermarket accessories added to your golf cart, while others may not. Therefore, it is always advisable to check compatibility before purchasing a key, to ensure that it works effectively with your golf cart and any additional accessories you may have.
Key Replacement and Duplication
Not all golf cart keys are the same, so it's important to understand key replacement and duplication. Various options are available for key replacement, including key-cutting services and key cloning processes.
When considering replacement keys, it’s essential to evaluate the cost.
Key-cutting services involve creating a new key based on the original code. On the other hand, key cloning processes copy the original key’s code onto a blank key.
The cost of replacement keys may vary depending on the provider and the complexity of the key.
Therefore, exploring different options and choosing the one that best suits your needs and budget is essential. Keeping a spare key is always a good idea to avoid any inconveniences in case of loss or damage.
Having the correct key for your golf cart ensures a hassle-free and enjoyable experience.
Key Security and Theft Prevention
Golf cart keys vary in how secure they are, making theft prevention and safety essential. A secure key system is of utmost importance.
Preventive measures include standard methods such as anti-theft devices, key lock upgrades, and key storage solutions.
Upgrading the key lock adds an extra layer of protection against theft. Anti-theft devices deter potential thieves and safeguard golf carts. Proper key storage and carrying solutions enhance security during transportation and storage.
Prioritizing key security reduces the risk of unauthorized access and protects valuable assets.
Investing in a secure key system and implementing preventive measures to protect golf carts from theft is crucial.
Caring for Your Golf Cart Keys
Golf cart keys may seem interchangeable, but proper maintenance is essential. Clean and lubricate your keys regularly to keep them functioning smoothly. Handle and store them responsibly to avoid damage or loss.
Additionally, inspect and test your keys regularly to ensure they are in good working condition.
Neglecting key maintenance can lead to inconveniences and potential security risks. Regular care and attention will help extend the lifespan of your golf cart keys and prevent unnecessary frustration.
So, give your keys the TLC they deserve for a smooth and reliable golf cart experience.
Frequently Asked Questions
Are All Golf Cart Keys the Same?
No, not all golf cart keys are the same. Different brands and models have unique keys.
How Do You Know Which Golf Cart Key to Use?
To know which golf cart key to use, refer to the owner’s manual or consult a professional.
Can You Use Any Key to Start a Golf Cart?
No, you cannot use any key to start a golf cart. Only the specific key designed for that cart will work.
What Happens If You Use the Wrong Key?
Using the wrong key can damage the ignition system of the golf cart and prevent it from starting.
Can a Locksmith Make a Key for a Golf Cart?
Yes, a locksmith can make a key for a golf cart, but they will need the necessary information and possibly the original key.
Conclusion
Not all golf cart keys are the same. Different golf cart manufacturers use various key designs to ensure the security and exclusivity of their vehicles.
Golf cart owners need to understand the specific key requirements of their carts to avoid any inconvenience or security issues.
Whether it is a traditional key, a transponder key, or a key fob, each type serves its purpose.
Many golf carts also have features such as programmable keys or keyless ignition systems to enhance convenience and security.
Ultimately, golf cart owners should consult their cart’s manual or contact the manufacturer to determine the appropriate replacement or spare keys.
By being informed about their golf cart keys, owners can ensure smooth operations and peace of mind while enjoying their rounds of golf or other recreational activities.
Muktadir Risan is a passionate golfer and the driving force behind Surprise Golf. With a deep love for the game, Muktadir combines expertise in golf equipment and techniques to share valuable insights with fellow enthusiasts. As the founder and lead writer, he strives to make Surprise Golf a go-to hub for golfers seeking guidance, inspiration, and a stronger connection to the world of golf. |
Re: more on the serial ports
>Do we know for sure that setserial actually writes settings to the ports?
No, I don't know that, and I don't think it does, since the ports don't
work after trying it anyway.
>If it does work OK for writing we could use starting setserial multiple times
>to fix the serial port initialization problem?
>
Udo, I think you're missing it here. No, it doesn't work, no matter how
many times you try. It doesn't work at boot, and actually, I don't even
think it runs, as a matter of fact.
Trying it by running setserial from a console doesn't work either. I
mentioned it the other day... after having tried 'setserial /dev/ttyS0
spd_normal', which did nothing. Ie, I still had to do it in minicom
anyway to get the port to work.
> wo> The kernel does see both ports at boot.
>
>With correct I/O and IRQ, I presume?
>
Can't say, since it doesn't print that at all. It just acknowledges that
they are there... doesn't report anything at all. If you try to use
setserial to display the serial port info, that's when it reports
everything as zeroes.
Paul
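For anyone else hitting this: the usual way to inspect and force the port settings by hand is with setserial. The 0x3f8 / IRQ 4 values below are only the conventional COM1 defaults for ttyS0 and may not match your hardware:

```shell
# Ask the driver what it currently thinks the ports are set to
# (in the situation described above this prints everything as zeroes).
setserial -g /dev/ttyS0 /dev/ttyS1

# Force the conventional COM1 resources by hand; 0x3f8 / IRQ 4 are only
# the usual defaults for ttyS0 -- check your hardware first.
setserial /dev/ttyS0 port 0x3f8 irq 4 autoconfig

# Reset the baud-rate divisor that otherwise has to be fixed in minicom.
setserial /dev/ttyS0 spd_normal
```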
March 31, 2012
Linux Script: Setting Proxy, DNS, and Other Environment Variables
Image source: http://bookmoving.com/book/the-linux-command-line-starch-press-_42278.html
Setting, unsetting, and printing Linux environment variables, using the proxy as an example:
# export sets, unset removes, and echo prints an environment variable
# env lists all environment variables (and their values) in the current shell
# Proxy variable types include http_proxy, https_proxy, ftp_proxy, etc.
export http_proxy=http://proxy.my.company:port/
echo $http_proxy
env
unset http_proxy
If http_proxy is not set, wget and curl can still be pointed at a proxy with:
wget --proxy-user=USERNAME --proxy-password=PASSWORD http://url.com/
curl --proxy-user user:password http://url.com/
To configure DNS, edit /etc/resolv.conf as follows (see any roundup of major Taiwan DNS servers for candidate addresses):
[root@www ~]# vi /etc/resolv.conf
nameserver 168.95.1.1
nameserver 8.8.8.8
AB0476 Serum Levels of Proliferation-Inducing Ligand, B-Cell Activating Factor, Antinuclear Autoantibodies and Markers of Inflammation in Patients with Primary Sjögren's Syndrome Compared To Patients with Sicca Symptoms without Diagnosis of pSS and with Healthy Subjects
1. M. Maslinska1,
2. E. Kontny2
1. 1Early Arthritis Clinic
2. 2Department of Pathophysiology and Immunology, National Institute of Geriatrics, Rheumatology and Rehabilitation, Warsaw, Poland
Abstract
Background The proliferation-inducing ligand (APRIL) and B-cell activating factor (BAFF) are associated with primary Sjögren's syndrome (pSS) because they affect B-cell activity, a possible key factor in the pSS pathogenesis.
Objectives The study aimed at determining the correlation between serum concentrations of APRIL /BAFF and markers of pSS and of systemic inflammation.
Methods We evaluated 3 groups: (i) 41 patients with an established pSS diagnosis (according to the AECG criteria and the ACR 2012 criteria) [mean age 51±15.3; 82.9% female (F), 17.1% male (M)]; (ii) 32 subjects with symptoms of dryness, in whom other connective tissue diseases were excluded [mean age 56±12.46; 30 (93.75%) F, 2 (6.25%) M]; and (iii) 24 healthy volunteers [mean age 43±12.4; 19 (79.2%) F and 5 (20.8%) M]. Laboratory tests were performed, including inflammatory parameters: erythrocyte sedimentation rate (ESR), serum concentration of C-reactive protein (CRP) (range 0–10 mg/l), and concentrations of gamma globulins (g/dl). ANA titers and patterns were determined by an indirect immunofluorescence method on HEp-2 slides. Anti-SSA and anti-SSB antibodies were detected by a dot-blot method allowing semi-quantitative evaluation. Serum concentrations of APRIL and BAFF were measured using an enzyme-linked immunosorbent assay. Minor salivary gland biopsies were performed to evaluate the number of inflammatory infiltrate foci [focus score (FS); a focus is an aggregate of more than 50 mononuclear cells]. The study was approved by the Ethics Committee of NIGRR. Differences between groups were analyzed using the Mann-Whitney U test (continuous variables). Correlations between variables were assessed with the Spearman correlation coefficient. Statistical significance was set at p<0.05.
Results pSS patients had significantly higher serum concentrations of BAFF and ANA (Mann-Whitney test, p<0.015 and p<0.000, respectively), but not of APRIL, compared with the healthy control (HC) group. There were no differences in serum concentrations of BAFF and APRIL between the pSS and sicca (S) groups. A positive correlation between BAFF and APRIL serum concentrations was found in the HC group (r=0.469; p<0.05), but not in the pSS and S groups. In the S group, BAFF correlated positively with CRP (r=0.446; p<0.05), and ANA correlated positively with the presence of anti-SS-A and anti-SS-B antibodies and with the gamma globulin level. In the pSS group there was a statistically significant positive correlation between the concentration of APRIL and: (i) ANA (r=0.569), (ii) anti-SS-A (r=0.313), (iii) anti-SS-B (r=0.424), and (iv) the gamma globulin level (r=0.408). In the pSS group, ANA correlated positively with FS (r=0.455).
Conclusions 1. BAFF correlates more closely with an inflammatory reaction, while APRIL correlates with a prolonged autoimmune response that favours the production of autoantibodies. 2. The lack of statistically significant differences in the concentrations of BAFF and APRIL between the pSS and S groups may be explained, based on other studies, by a local BAFF overexpression in the salivary glands in the pSS group or by an influence of EBV infection or reactivation in both groups (data not shown). 3. The positive correlation between FS and ANA may suggest a role of ANA in local inflammation.
Acknowledgement Financing of the work – research grant NCN 2012/05/N/NZ5/02838.
Disclosure of Interest None declared
Yoga’s Role in Helping with Depression and Anxiety
Yoga is the answer to perfecting the body, easing the mind, and cultivating the wisdom of the spirit. A traditional yoga practice utilizes meditation and breathing exercises known as Pranayama, together with Asanas, the physical postures which, I am sure, you have seen hundreds of times on a yoga DVD or in an online yoga video. Yoga promotes abstention from harmful behaviors, such as smoking and overindulgence in alcohol, encourages healthy dietary changes, and promotes mental clarity through meditation and relaxation. Yoga offers many health benefits, physical as well as spiritual. Here we shall discuss the effects of yoga on the human mind.
Many studies have shown that exercise can help with anxiety and depression. It was never completely certain, however, which types of exercise gave the most benefit. Yoga was taken into consideration, along with other types of exercise, in a study conducted at the Boston University School of Medicine, which compared the effects of walking versus yoga on anxiety and depression to assess the health benefits of yoga. The results showed that the practice of yoga improved emotions and moods more effectively than walking alone, by enhancing the production of GABA, a neurotransmitter in the brain. GABA (gamma-aminobutyric acid) reduces anxiety and enhances relaxation. It is the main inhibitory, or calming, neurotransmitter in the nervous system.
The study was based on 19 people practicing yoga for 60 minutes, three times weekly, for 12 weeks. Similarly, 15 participants did a walking program for the same period of time. The moods of the participants and levels of GABA in the brain were assessed at the beginning and end of the trial period. At the end of 12 weeks the findings were not just interesting, but quite amazing, confirming the health benefits. The people who had engaged in yoga had a much greater improvement in their moods and less anxiety than the walking group. They also had higher levels of GABA in the brain as measured on an MRI scan.
In another study from Boston University in 2007, it showed that a one-hour yoga session resulted in elevated GABA levels in the brain and that GABA levels increased by 27% in the yoga group compared with absolutely no change in the other group. This was the first study of its kind to show that a modality such as yoga contributed to elevated GABA levels in the brain.
Prescription Medications for Depression
Many prescription medications work by increasing GABA levels in the brain; benzodiazepines are a major class among them. Valium, Ativan, Restoril and Ambien are just a few of the many drugs in this category. They reduce anxiety and induce sleep. But the problem is that these drugs are habit-forming. Once taken on a regular basis they are difficult to stop, not to mention the other side effects that these drugs may cause.
By practicing yoga, you will diminish anxiety and reduce the need for prescription and over-the-counter habit forming drugs. Your body will get many health benefits of yoga that will be felt in countless ways, including flexibility, balance, muscle strengthening, better concentration and a natural relaxation response. Once a regular yoga practice is formulated, it can benefit numerous medical conditions that may eventually no longer require medication.
Yoga has been proven to help not only with strengthening the body and increasing flexibility, but also with lowering high blood pressure, reversing heart disease, alleviating asthma, and reducing the pain of arthritis, chronic back pain and fibromyalgia, just to name a few. The practice of yoga requires no special equipment or gym membership; the only things you need are a yoga DVD by a certified instructor and a willingness to learn some of the basics. Once you begin a yoga practice, you will feel the results and just may become a regular practitioner!
Using SESSION without Cookies: the Problem of PHP SESSION Values Not Passing Between Pages
Modified: 2010/08/19 08:06 | Views: 542 | Posted by: Qiyuan (起缘)
Using sessions without relying on cookies
A solution to the problem of PHP SESSION values not passing between pages
Anyone who has used SESSION in PHP may have run into this problem: session variables cannot be passed across pages. This troubled me for quite a few days, until I finally solved it by reading up and thinking it through. In my view, the problem has the following causes:
1. The client has disabled cookies.
2. The browser has a problem and temporarily cannot store cookies.
3. session.use_trans_sid = 0 in php.ini, or PHP was compiled without the --enable-trans-sid option.
Why is this so? Let me explain:
Sessions are stored on the server side (by default as files). The server uses the session id supplied by the client to find the user's file and read the variable values. The session id can be delivered to the server through a client cookie or through the Query_String of HTTP/1.1 (the part of the URL after the "?"), after which the server reads the session directory. In other words, the session id is the ID card for retrieving the session variables stored on the server. When session_start(); runs, a session file is created on the server together with a unique corresponding session id, and the defined session variables are stored in that file in a certain format. With the session id, the defined variables can be retrieved. After crossing to another page, you must call session_start(); again in order to use the session; this creates another session file with its own corresponding session id, and with that session id you cannot retrieve the variables in the first session file, because it is not the "key" that opens that file. If you add session_id($session_id); before session_start();, no new session file is created, and the session file corresponding to that id is read directly.
By default, PHP sessions use the client's cookie to store the session id, so when the client's cookies break, the session is affected. Note that a session does not necessarily have to depend on cookies; this is where sessions outdo cookies. When the client's cookies are disabled or broken, PHP automatically appends the session id to the URL, so session variables can again be used across pages via the session id. But this appending happens only under a certain condition, namely session.use_trans_sid = 1 in php.ini, or PHP compiled with the --enable-trans-sid option.
With the above understood, let's now use sessions without cookies. There are three main approaches:
1. Set session.use_trans_sid = 1 in php.ini (or compile with the --enable-trans-sid option) so that PHP passes the session id across pages automatically.
2. Pass the session id manually via a URL parameter or a hidden form field.
3. Save the session id in a file, a database, or similar, and load it manually on the next page.
Let's illustrate with examples:
s1.php
<?php
session_start();
$_SESSION['var1']="People's Republic of China";
$url="<a href=\"s2.php\">next page</a>";
echo $url;
?>
s2.php
<?php
session_start();
echo "The value of the passed session variable var1 is: ".$_SESSION['var1'];
?>
Run the code above. With cookies working normally on the client, you should get the result "People's Republic of China".
Now disable cookies on the client manually and run it again; you will probably no longer get the result. If so, set session.use_trans_sid = 1 in php.ini (or compile with the --enable-trans-sid option), and you will get "People's Republic of China" again.
That is approach 1 described above.
Now for approach 2:
The modified code is as follows:
s1.php
<?php
session_start();
$_SESSION['var1']="People's Republic of China";
$sn = session_id();
$url="<a href=\"s2.php?s=".$sn."\">next page</a>";
echo $url;
?>
s2.php
<?php
session_id($_GET['s']);
session_start();
echo "The value of the passed session variable var1 is: ".$_SESSION['var1'];
?>
Approach 3 is again best shown with an example:
login.html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Login</title>
<meta http-equiv="Content-Type" content="text/html; charset=??????">
</head>
<body>
Please log in:
<form name="login" method="post" action="mylogin1.php">
Username: <input type="text" name="name"><br>
Password: <input type="password" name="pass"><br>
<input type="submit" value="Log in">
</form>
</body>
</html>
mylogin1.php
<?php
$name=$_POST['name'];
$pass=$_POST['pass'];
if(!$name || !$pass) {
echo "Username or password is empty, please <a href=\"login.html\">log in again</a>";
die();
}
if (!($name=="laogong" && $pass=="123")) {
echo "Username or password is incorrect, please <a href=\"login.html\">log in again</a>";
die();
}
// register the user
ob_start();
session_start();
$_SESSION['user']= $name;
$psid=session_id();
$fp=fopen("e:\\tmp\\phpsid.txt","w+");
fwrite($fp,$psid);
fclose($fp);
// authentication succeeded, proceed with the relevant operations
echo "Logged in<br>";
echo "<a href=\"mylogin2.php\">next page</a>";
?>
mylogin2.php
<?php
$fp=fopen("e:\\tmp\\phpsid.txt","r");
$sid=fread($fp,1024);
fclose($fp);
session_id($sid);
session_start();
if(isset($_SESSION['user']) && $_SESSION['user']=="laogong" ) {
// successfully logged in, proceed with the relevant operations
echo "Logged in!";
}
else {
echo "Not logged in, no access";
echo "Please <a href=\"login.html\">log in</a> first";
die();
}
?>
Again, please test with cookies disabled. Username: laogong, password: 123. This version saves the session id in a file, e:\tmp\phpsid.txt; choose a file name and path to suit your own system.
As for the database method, I will not give an example; it is analogous to the file method.
To sum up, the methods above share one idea: obtain the session id on the previous page, find some way to pass it to the next page, and on the next page add session_id(<the passed session id>); before the session_start(); call.
Note: my test environment was Win2K Server, Apache 1.3.31, PHP 4.3.4.
In addition, lidm has also tested this successfully on Unix-like systems.
Collected and organized by phpfans.net
Weight of Ambulance
When you are sick or injured, the last thing you want to worry about is the weight of the ambulance. But, believe it or not, the weight of an ambulance can have a big impact on your care. Here’s what you need to know about the weight of an ambulance and how it can affect your care.
An ambulance is a vehicle used to transport sick or injured people to medical facilities. In most cases, an ambulance is driven by a trained medical professional, such as a paramedic, and carries basic life support equipment, including oxygen and first aid supplies. The weight of an ambulance can vary depending on the size and type of vehicle, but typically ranges from 3,000 to 5,000 pounds.
Image credit: www.alfredfire.org
How Much Does a Diesel Ambulance Weigh?
When it comes to emergency vehicles, weight is an important factor. After all, these vehicles need to be able to respond quickly and with enough power to get through traffic and other obstacles. So, how much does a diesel ambulance weigh?
Interestingly, the answer can vary quite a bit depending on the specific model of ambulance. For example, a Ford E-350 Super Duty Ambulance has a curb weight of 8,600 pounds (3,909 kg). On the other hand, a Freightliner M2 106 Business Class Ambulance has a curb weight of 14,500 pounds (6,577 kg).
Of course, these are just two examples and there are many other makes and models of ambulances out there. However, what this shows is that the weight of a diesel ambulance can range quite significantly. So why does this matter?
Well, for one thing, it’s important to know the weight of an ambulance when determining how many patients it can safely transport. For another thing, knowing the weight can also be helpful in terms of deciding which type of vehicle is best suited for a particular area or terrain. In short then, there is no definitive answer when it comes to how much does a diesel ambulance weigh.
It really depends on the make and model in question. However, what we do know is that these vehicles can range in weight from around 8600 pounds up to 14500 pounds or more.
How Much Does an F550 Ambulance Weigh?
The Ford F550 ambulance has a curb weight of 8,550 pounds. The gross vehicle weight rating is 14,000 pounds. This means that the maximum recommended weight of the vehicle and its contents is 14,000 pounds.
How Much Does a Transit Ambulance Weigh?
A transit ambulance typically weighs between 10,000 and 12,000 pounds. The weight of the ambulance will vary depending on the size and model of the vehicle. Transit ambulances are larger than traditional ambulances and are equipped with additional patient care equipment, which contributes to their weight.
How Much Does an Ambulance Box Weight?
An ambulance box, also known as an emergency medical service (EMS) box, typically weighs between 150 and 200 pounds. The weight of the box depends on the size and type of equipment inside. For example, a smaller EMS box used for basic life support may only weigh 150 pounds, while a larger one used for advanced life support could weigh 200 pounds or more.
How Heavy is an Ambulance in Tons
An ambulance typically weighs between 3 and 4 tons. The weight of an ambulance can vary depending on the size and model of the vehicle. The average weight of a fully-loaded ambulance is approximately 14,000 pounds, or 7 tons.
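The pound/ton/kilogram figures quoted in this article are easy to check with the standard conversion factors (a US short ton is 2,000 lb; one pound is 0.45359237 kg). A quick sketch:

```python
LB_PER_SHORT_TON = 2000.0     # US short ton
KG_PER_LB = 0.45359237        # exact by definition

def lb_to_short_tons(lb):
    """Convert pounds to US short tons."""
    return lb / LB_PER_SHORT_TON

def lb_to_kg(lb):
    """Convert pounds to kilograms."""
    return lb * KG_PER_LB

# The fully loaded ambulance figure quoted above:
print(lb_to_short_tons(14000))   # -> 7.0 short tons
print(round(lb_to_kg(14000)))    # -> 6350 kg
```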
Ambulance Weight in Tons
There are many factors to consider when it comes to the weight of an ambulance. The size and type of vehicle, the equipment inside, and the number of patients all play a role in how much an ambulance weighs. On average, an empty ambulance weighs between 4 and 6 tons.
When you factor in the weight of the patients and equipment, the weight can increase to 10 or 12 tons. The heaviest ambulances can weigh up to 18 tons when fully loaded. The weight of an ambulance is important because it affects how the vehicle handles on the road.
Heavier vehicles have more inertia and take longer to stop. They also require more power to accelerate and may have difficulty making tight turns. All of these factors must be considered when choosing an ambulance for your community.
Weight of a Truck in Kg
A truck's weight is its most important characteristic, because it determines how much cargo the truck can carry and how much fuel it will consume. The maximum legal gross weight of a truck in the United States is about 80,000 pounds (36,000 kg).
The maximum permitted weight of a European Union truck is 44 tonnes (43 long tons; 48 short tons).
Truck Weight in Tons
Most people don’t think about the weight of a truck when they see one on the road. But did you know that trucks can weigh up to 80,000 pounds? That’s 40 tons!
And while it may not seem like it, that weight can have a big impact on how the truck drives and handles. For example, a fully loaded truck can have trouble stopping quickly. That’s why it’s important for drivers to give themselves extra time and space to stop when they’re behind a truck.
Trucks also have blind spots, which means there are areas around the truck where the driver might not be able to see you. So if you’re driving next to a truck, make sure you signal before you move into one of those blind spots. And finally, because of their size and weight, trucks can cause serious damage if they’re involved in an accident with another vehicle.
So always give yourself plenty of space between your car and a truck on the road.
Type 1 Ambulance Weight
There are many types of ambulances, each with their own specific weight. Type 1 ambulances are typically the heaviest, due to their large size and robust construction. This weight can range from 14,000 pounds (6,350 kg) to 16,000 pounds (7,260 kg).
The extra weight is necessary to accommodate all of the medical equipment and supplies that are carried on board. Type 1 ambulances are usually built on a heavy-duty truck chassis and have four-wheel drive for increased traction in all weather conditions. They also have a higher ground clearance than other types of ambulances, which is helpful when transporting patients over rough terrain.
Many type 1 ambulances are equipped with a hydraulic lift system that can be used to load patients into the vehicle. Due to their size and weight, type 1 ambulances require more fuel than other types of vehicles. They also have increased maintenance costs, as more frequent tune-ups and tire rotations are required.
However, the extra cost is often worth it for communities that need the highest level of emergency medical care possible.
Type 2 Ambulance Weight
Type 2 ambulances are the most common type of ambulance in the United States. They are built on a van chassis and are generally smaller and lighter than Type 1 and Type 3 ambulances.
They usually have a wheelbase of 150 to 170 inches and an overall length of approximately 24 feet. The interior height of a Type 2 ambulance is typically between 74 and 80 inches. Most Type 2 ambulances weigh between 10,000 and 14,000 pounds empty.
When they are fully loaded with patients, equipment, and supplies, they can weigh up to 18,000 pounds. Because they are so heavy, they require a powerful engine to get them up to speed quickly. Many Type 2 ambulances have V8 engines that produce more than 300 horsepower.
How Wide is an Ambulance
An ambulance is typically between 7 and 8 feet wide. The width of an ambulance is important because it needs to be able to fit through narrow roads and spaces.
Ambulance Specifications
When it comes to ambulances, there are certain specifications that must be met in order to ensure that the vehicle is up to par. This includes everything from the size of the vehicle to the type of equipment that is inside. Let’s take a look at some of the most important ambulance specifications:
Size: The size of an ambulance is extremely important as it needs to be large enough to accommodate patients and all of their medical equipment. Most ambulances are between 10 and 14 feet long and 6-7 feet wide. Weight: An ambulance must also be heavy enough to support all of the medical equipment inside, as well as any patient weight.
The average weight for an ambulance is between 8,000 and 10,000 pounds. Type of Equipment: All ambulances must be equipped with basic life support supplies such as oxygen tanks, defibrillators, and first aid kits. Many also have advanced life support equipment such as ventilators and IV pumps.
Conclusion
An ambulance is a vehicle used to transport sick or injured people to medical facilities. The weight of an ambulance can vary depending on the size and features of the vehicle. Most ambulances weigh between 10,000 and 14,000 pounds (4,536 to 6,350 kg).
The average weight of an ambulance is 12,500 pounds (5,670 kg). |
Publication number: US 5875301 A
Publication type: Grant
Application number: US 08/889,814
Publication date: Feb 23, 1999
Filing date: Jul 8, 1997
Priority date: Dec 19, 1994
Fee status: Paid
Also published as: US5935208, US6199119, US6314461, US6954787, US20010001151, US20010024423
Inventors: William S. Duckwall, Michael D. Teener
Original Assignee: Apple Computer, Inc.
Method and apparatus for the addition and removal of nodes from a common interconnect
US 5875301 A
Abstract
An electronic system interconnect. The interconnect includes a first node and a second node coupled to the first node. The interconnect is initially configured to include the first and second nodes. A third node is added to the interconnect after the interconnect is initially configured, and the first node responds to the addition of the third node by initiating a new connect handshake with the third node. The first node begins by transmitting a first signal to the third node. The first node signals that the third node has been added to the interconnect if the third node responds to the first signal by transmitting a second signal. The first node causes the interconnect to be reconfigured if the third node transmits a third signal in response to receiving the first signal.
Claims(7)
What is claimed is:
1. In an electronic system, an interconnect comprising:
a first node;
a second node coupled to the first node, wherein the interconnect is initially configured to include the first and second nodes;
a third node that is coupled to the interconnect after the interconnect is initially configured, wherein the first node initiates a new connect handshake with the third node by transmitting a first signal to the third node, the first node signaling that the third node has been added to the interconnect if the third node responds to the first signal by transmitting a second signal, the first node causing the interconnect to be reconfigured if the third node transmits a third signal in response to receiving the first signal, the third node turning off the second signal and assuming a temporary address upon reconfiguration; and
a fourth node, wherein the interconnect is further initially configured to include the first, second, and fourth nodes, the fourth node responding to the first node signaling that the third node has been added to the interconnect by determining the temporary address of the third node and assigning a bus address to the third node; and further wherein the interconnect is reconfigured with the added third node without requiring a reset operation.
2. The interconnect of claim 1, wherein the third signal is equivalent to the first signal.
3. The interconnect of claim 1, wherein the interconnect is a serial bus.
4. An electronic system comprising:
a first component;
a second component; and
an interconnect coupled to the first and second components, the interconnect comprising:
a first node coupled to the first component;
a second node coupled to the first node and the second component, wherein the interconnect is initially configured to include the first and second nodes;
a third node that is coupled to the interconnect after the interconnect is initially configured, wherein the first node initiates a new connect handshake with the third node by transmitting a first signal to the third node, the first node signaling that the third node has been added to the interconnect if the third node responds to the first signal by transmitting a second signal, the first node causing the interconnect to be reconfigured if the third node transmits a third signal in response to receiving the first signal, the third node turning off the second signal and assuming a temporary address upon reconfiguration; and
a fourth node wherein the interconnect is further initially configured to include the first, second, and fourth nodes, the fourth node responding to the first node signaling that the third node has been added to the interconnect by determining the temporary address of the third node and assigning a bus address to the third node; and further wherein the interconnect is reconfigured with the third node without requiring a reset operation.
5. The electronic system of claim 4, wherein the third signal is equivalent to the first signal.
6. The electronic system of claim 4, wherein the interconnect is a serial bus.
7. In an electronic system, an interconnect comprising:
a first node;
a second node coupled to the first node, wherein the interconnect is initially configured to include the first and second nodes; and
a third node that is coupled to the interconnect after the interconnect is initially configured, wherein the first node initiates a new connect handshake with the third node by transmitting a first signal to the third node, the first node signaling that the third node has been added to the interconnect if the third node responds to the first signal by transmitting a second signal, the first node causing a reconfiguration of the interconnect if the third node transmits a third signal in response to receiving the first signal, and wherein the reconfiguration caused by the first node does not require a reset signal to be propagated across the interconnect; and wherein the third node assumes a temporary bus address after being added to the interconnect, the temporary bus address used by a bus topology manager to assign a bus address to the third node.
Description
This is a continuation of application Ser. No. 08/359,294, filed Dec. 19, 1994, now abandoned.
FIELD OF THE INVENTION
The present invention relates generally to data communications and more particularly to the addition and removal of nodes from a common interconnect.
BACKGROUND OF THE INVENTION
Digital electronic systems such as computer systems often use a common interconnect to share information between components of the digital electronic system. For computer systems, the interconnect is typically the computer bus.
One type of system interconnect is described by IEEE Standards document P1394, Draft 7.1v1, entitled IEEE Standard for a High Performance Serial Bus (hereafter the "P1394 serial bus standard"). A typical serial bus having the P1394 standard architecture is comprised of a multiplicity of nodes that are interconnected via point-to-point links such as cables that each connect a single node of the serial bus to another node of the serial bus. Data packets are propagated throughout the serial bus using a number of point-to-point transactions, wherein a node that receives a packet from another node via a first point-to-point link retransmits the received packet via other point-to-point links. A tree network configuration and associated packet handling protocol ensures that each node receives every packet once.
The P1394 serial bus standard provides for an arbitrary bus topology wherein the hierarchical relationship between nodes of the serial bus is determined by the manner in which the nodes are connected to one another. A P1394 serial bus is configured in three phases: bus initialization, tree identification, and self identification. During bus initialization, the general topology information of the serial bus is identified according to a tree metaphor. For example, each node is identified as being either a "branch" having more than one directly connected neighbor node or a "leaf" having only one neighbor node. During tree identification, hierarchical relationships are established between the nodes. For example, one node is designated a "root" node, and the hierarchy of the remaining nodes is established with respect to the relative nearness of a node to the root node. Given two nodes that are connected to one another, the node connected closer to the root is the "parent" node, and the node connected farther from the root is the "child." Nodes connected to the root are children of the root. During self identification, each node is assigned a bus address and a topology map may be built for the serial bus.
According to the P1394 serial bus standard, reconfiguration of a serial bus is required when either 1) a new node is joined to the serial bus, or 2) an identified node of the serial bus is removed from the serial bus. Reconfiguration is required to better ensure that all nodes of the serial bus are notified of the newly connected or disconnected node and that each node has a unique bus address. Typically, the node of the serial bus that detects a new connection or disconnection forces the three-phase configuration to be performed by asserting a bus reset signal. The three-phase configuration process typically requires several hundred microseconds to perform, during which time communication of data between nodes is halted. Such long periods of interruption may significantly affect the operation of the system for some uses of the serial bus. Therefore, it would be desirable to provide a mechanism that allows the connection and disconnection of nodes from the serial bus such that interruptions to serial bus traffic are reduced.
SUMMARY OF THE INVENTION
An electronic system interconnect is described that comprises a first node and a second node coupled to the first node and that allows for the addition of nodes to the interconnect after the interconnect is initially configured. The interconnect is initially configured to include the first and second nodes. A third node is added to the interconnect after the interconnect is initially configured, and the first node responds to the addition of the third node by initiating a new connect handshake with the third node. The first node begins by transmitting a first signal to the third node. The first node signals that the third node has been added to the interconnect if the third node responds to the first signal by transmitting a second signal. The first node causes the interconnect to be reconfigured if the third node transmits a third signal in response to receiving the first signal. According to one embodiment, the electronic system interconnect is a serial bus, and the first node signals the addition of the third node after arbitrating for the serial bus. The use of normal bus arbitration to signal the addition of nodes to the serial bus reduces interruptions of bus traffic.
A method for building a topology map of a serial bus without requiring a bus reset is also disclosed. A bus topology manager node of the serial bus transmits a SEND_SELF_ID packet to a first node. The first node receives the SEND_SELF_ID packet and responds by transmitting a SELF_ID packet of the first node to the bus topology manager node. A parent node of the first node responds to the SELF_ID packet of the first node by transmitting its own SELF_ID packet. The bus topology manager node is thus able to build a bus topology map without requiring a bus reset.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
FIG. 1 shows a serial bus according to one embodiment.
FIG. 2 shows a pair of differential signal lines for a cable.
FIG. 3 shows the addition of a node to a serial bus.
FIG. 4 begins an example of a process for adding a node to a serial bus wherein the new node is a JCSN.
FIG. 5 continues the example begun in FIG. 4.
FIG. 6 continues the example begun in FIG. 4.
FIG. 7 continues the example begun in FIG. 4.
FIG. 8 continues the example begun in FIG. 4.
FIG. 9 completes the example begun in FIG. 4.
FIG. 10 begins an example of a process for adding a node to a serial bus wherein the new node is a JCNN.
FIG. 11 continues the example begun in FIG. 10.
FIG. 12 completes the example begun in FIG. 10.
FIG. 13 shows the addition of multiple nodes to the serial bus.
FIG. 14 shows the addition of multiple nodes to the serial bus, wherein multiple nodes are coupled to the same JCNN of the serial bus.
FIG. 15 begins an example of a process for subtracting a node from the serial bus.
FIG. 16 continues the example begun in FIG. 15.
FIG. 17 completes the example begun in FIG. 15.
FIG. 18A shows the first quadlet of a PHY configuration packet according to the P1394 serial bus standard.
FIG. 18B shows a NODE_ADDED_ALERT packet.
FIG. 18C shows a SET_ADDRESS packet.
FIG. 18D shows a NODE_DETACHED_ALERT packet.
FIG. 18E shows a SEND_SELF_ID packet.
FIG. 19 shows a JCNN new connect state machine.
FIG. 20 shows a JCNN node addition process detection state machine.
FIG. 21 shows a JCSN new connect state machine.
FIG. 22 shows a JCNN new disconnect state machine.
FIG. 23 begins an example of a process for building a topology map without requiring a bus reset.
FIG. 24 continues the example of FIG. 23.
FIG. 25 completes the example of FIG. 23.
FIG. 26 shows a modified protocol state machine that enables the polling of nodes and sending of self_id packets after the serial bus is configured.
DETAILED DESCRIPTION
As described herein, nodes may be connected to or disconnected from an existing, configured serial bus in an incremental manner without undertaking the three-phase bus configuration process required by the P1394 serial bus standard. Thus, nodes may be added to a serial bus such that interruptions of bus traffic are reduced. Also described herein is a mechanism whereby the bus topology manager of a serial bus may build a topology map for a previously configured serial bus without undertaking the three-phase bus configuration process. Although the embodiments of the claimed inventions are described with reference to a serial bus, the claimed inventions may find application in any interconnect architecture having an arbitrary topology wherein the hierarchical relationship between nodes of the interconnect is determined by the manner in which the nodes are connected to form the interconnect.
FIG. 1 shows a configured serial bus wherein at least one of the nodes of the serial bus includes circuitry for adding and removing nodes from the serial bus without requiring a bus reset. Serial bus 100 includes nodes 110-140 and cables 145-170, wherein cable 145 couples node 140 to node 130, cable 150 couples node 140 to node 135, cable 155 couples node 130 to node 110, cable 160 couples node 130 to node 115, cable 165 couples node 135 to node 125, and cable 170 couples node 135 to node 120.
Each of the nodes 110-140 is typically associated with a "local host," which is a component of the electronic system for which the serial bus acts as a primary or complementary interconnect. The serial bus 100 may generally operate as specified by the P1394 serial bus standard, wherein each of the nodes 110-140 includes at least one port, and each of the cables 145-170 includes two pairs of differential signal lines and a pair of power lines. Each connected port is implied by the connection between a cable and a node.
During the initialization phase of the bus configuration process, nodes 110-125 are identified as leaf nodes, and nodes 130-140 are identified as branch nodes. Each node identifies itself as either a leaf or a branch in response to the number of connected ports that are detected. During the tree identification phase of the bus configuration process, node 140 is identified as the root node. During the self-identification phase of the configuration process, each node is assigned a bus address, wherein the root node is typically assigned the largest valid bus address. For example, node 110 is assigned address 0, node 115 is assigned address 1, node 130 is assigned address 2, node 125 is assigned address 3, node 120 is assigned address 4, node 135 is assigned address 5, and root node 140 is assigned address 6. Bus arbitration may begin once a serial bus has successfully completed the bus configuration process.
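The address assignment in this example can be modeled as a post-order walk of the configured tree, numbering every child before its parent so that the root receives the highest address. The sketch below is an illustrative model of the ordering only, not the standard's self-ID signaling procedure; the traversal function is an assumption introduced here.

```python
def assign_self_ids(tree, root):
    """Return {node: bus_address} by numbering children before parents."""
    addresses = {}
    counter = 0

    def walk(node):
        nonlocal counter
        # visit each child subtree first, then number this node
        for child in tree.get(node, []):
            walk(child)
        addresses[node] = counter
        counter += 1

    walk(root)
    return addresses

# FIG. 1 topology: root 140 has children 130 and 135; node 130 has
# leaves 110 and 115; node 135 has leaves 125 and 120.
tree = {140: [130, 135], 130: [110, 115], 135: [125, 120]}
addrs = assign_self_ids(tree, 140)
print(addrs)  # {110: 0, 115: 1, 130: 2, 125: 3, 120: 4, 135: 5, 140: 6}
```

The result reproduces the addresses of the FIG. 1 example, with root node 140 holding the largest valid bus address.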
A serial bus may be provided with a bus topology manager ("BTM") that undertakes various bus management tasks. Typically, any node of the serial bus may be the BTM, and node 130 is identified as the BTM for serial bus 100. Among other tasks, BTM node 130 is responsible for maintaining a topology map for the serial bus 100 that identifies the relationships between each of the nodes and the bus address for each node.
FIG. 2 illustrates the mechanism whereby a node of the serial bus is able to detect whether a particular port of that node is connected to another node of the serial bus. This mechanism is used by a node during the bus initialization phase of the bus configuration process to determine whether that node is a leaf node or a branch node.
As shown, a parent node 205 is coupled to a child node 210 via cable 215, which is shown as including a pair of differential signal lines. Again, the interconnected ports of the parent and child nodes are implied by the connection of the cable 215 to the respective nodes. As shown, each of the ports for each node is divided into an "A" side and a "B" side, wherein each of the A and B sides of each port includes a transceiver that is connected to one of the pair of signal lines.
When connected, the signal lines of the cable 215 may be in a "1" state, a "0" state, and a "Z" state, wherein the 1 state dominates the 0 state, which dominates the Z state. During arbitration, both nodes may drive the signal lines simultaneously, and each node interprets the state of a signal line by comparing the value that node is driving on the signal line to the value that node is receiving on the signal line. Table 1 shows the decoding for arbitration signal lines according to the P1394 serial bus standard.
TABLE 1: Arbitration Signal Decoding Rules

    Received Value    Transmitted Value    Interpreted Value
    Z                 Z                    Z
    0                 Z                    0
    1                 Z                    1
    Z                 0                    1
    0                 0                    0
    Z                 1                    1
    1                 1                    1
As shown in FIG. 2, cable 215 provides a connection between the ports of the parent and child nodes such that the A side of the parent port is coupled to the B side of the child port, and the B side of the parent port is coupled to the A side of the child port. For each port, the A side transceiver provides a common mode biasing voltage to a connected differential signal line, and the B side transceiver includes circuitry for detecting the common mode biasing voltage. If the B side transceiver of a port detects a biasing voltage, the node identifies that port as being connected to another node. If the B side transceiver of a port does not detect a biasing voltage, the node identifies that port as not being connected to another node. If the state of a port changes from connected to disconnected, or vice versa, after the configuration process has completed, the P1394 serial bus standard requires a bus reset.
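The connection-detection scheme above can be sketched in a few lines: a port counts as connected when its B-side transceiver sees the peer's bias voltage, and the number of connected ports determines whether a node reports itself as a leaf or a branch. The voltage threshold and function names below are illustrative assumptions, not values from the patent or the standard.

```python
BIAS_THRESHOLD_V = 0.6  # illustrative detection threshold (assumed value)

def port_connected(b_side_voltage):
    """A port reports connected iff bias voltage is seen on its B side."""
    return b_side_voltage >= BIAS_THRESHOLD_V

def classify_node(port_voltages):
    """Leaf if exactly one connected port, branch if more than one."""
    n = sum(port_connected(v) for v in port_voltages)
    if n == 0:
        return "unconnected"
    return "leaf" if n == 1 else "branch"

print(classify_node([1.2]))             # leaf
print(classify_node([1.2, 0.0, 0.9]))   # branch
```

Note that this model also captures the power-down case discussed below: a powered-off peer stops driving the bias voltage, so the port reads as disconnected even while the cable remains attached.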
For some cases, the connection or disconnection of a node may be detected without a node being physically connected to or disconnected from the serial bus. For example, it may be desirable for the nodes of the serial bus to draw power from a source other than the cables of the serial bus, such as a node's local host. However, one reason for specifying that the nodes of the serial bus draw power from the cables is that a node may remain powered up even when its associated local host is powered down. Supplying the biasing voltage requires power, and if the local host of a node that draws power from that local host is switched off, the biasing voltage is removed from each of the node's connected ports, and a "disconnection" is detected even though the node remains physically connected to the serial bus. Similarly, a new "connection" may be detected when a node that draws power from its local host is powered on.
Therefore, a new connection may be detected when a new node is initially physically connected to the serial bus, or when an already connected node that is not powered by the cable is switched on. A disconnection may be similarly detected. As described above, the P1394 serial bus standard specifies that if a node detects a new connection or disconnection after the bus configuration process, that node will force a bus reset. The incremental addition process described herein accounts for both types of connections and disconnections, providing for the addition and removal of nodes without requiring a bus reset.
The general structure of a P1394 serial bus is such that a single new connection between a new node and an identified node of an existing serial bus can result in the addition of multiple nodes to the serial bus. For example, a first new node having a first port coupled to a second new node may be connected to an identified node of an existing serial bus via a second port. It is further possible that the multiple new nodes may themselves form an existing configured serial bus such that the new connection results in the connection of two existing and configured serial buses.
Due to the deterministic nature of the P1394 bus configuration process, it is highly likely that the first serial bus will have one or more nodes that are assigned the same bus addresses as the nodes of the second serial bus, and a bus reset is highly desirable for such a case. Therefore, the incremental addition process described herein distinguishes between two types of nodes: Just Connected Single Nodes (JCSNs); and Just Connected Network Nodes (JCNNs). A JCSN is a new node having a single connected port, and the addition of a JCSN may be equated to the addition of a leaf to the tree structure of the serial bus. A JCSN may be added incrementally. A JCNN is a node having multiple connected ports. By definition, the identified node of the existing serial bus to which the new node is connected is a JCNN.
FIG. 3 shows the serial bus 100 wherein a new node 305 is "connected" to the serial bus. Cable 310 couples node 305 to node 125, which identifies itself as a JCNN upon the connection event. New node 305 may be either a JCSN or a JCNN. New node 305 identifies itself as a JCSN if it has only one connected port, and new node 305 identifies itself as a JCNN if it has more than one connected port. The process of the new node identifying itself as either a JCSN or a JCNN is similar to the process of a node identifying itself as either a leaf node or a branch node during the bus initialization phase of the configuration process.
To provide the incremental addition of nodes to a serial bus, a new connect handshake protocol is defined between the newly connected node and the previously identified node of the serial bus. The result of the handshake between the new node and the identified node is determined by whether the new node is a JCSN or a JCNN. If the new node is a JCSN, the incremental addition process is initiated. If the new node is a JCNN, a bus reset is forced by the new node.
The new connect handshake protocol may be implemented in several different ways. However, according to one embodiment, the new connect handshake uses previously defined arbitration signals of the P1394 serial bus standard to discriminate between the addition of a JCSN and the addition of a JCNN. Using previously defined signals allows the incremental addition process to be implemented at a lower cost because less new circuitry is required. As the specific examples are discussed with reference to a serial bus that supports the P1394 signaling protocols, the signals of the new connect handshake may be referred to as being asserted "concurrently." For other interconnect architectures that operate according to different signaling protocols, signals of the new connect handshake may be asserted contemporaneously or sequentially, and no temporal overlap between the signals may occur.
A node that identifies itself as a JCNN initiates the new connect handshake by asserting a YOU ARE MY CHILD ("YAMC") signal on the port where a new connection is detected. According to one embodiment, the YAMC signal transmitted by the JCNN is a signal identified in the P1394 serial bus standard as the tx_child_notify line state, which results in the JCNN transmitting a value of AB=1Z.
FIG. 3 shows JCNN 125 as initiating the new connect handshake by asserting the YAMC signal. JCNN 125 may concurrently provide a speed signal indicating the speed capability of JCNN 125. If new node 305 is a JCNN, it also asserts the YAMC signal, which, as discussed below with respect to FIGS. 10-12, results in a bus reset. FIGS. 4-9 discuss the case wherein new node 305 is a JCSN.
When initially powered up, a JCSN is in an idle state wherein it transmits an idle signal on the signal lines having a value of AB=ZZ. Upon detecting the new connection, a JCSN begins an internal timer wherein the JCSN will force a bus reset if a predetermined amount of time passes without receiving a YAMC signal, which may occur, for example, when a JCSN is coupled to a standard P1394 node that does not support incremental addition.
As shown in FIG. 4, if new node 305 is a JCSN, JCSN 305 detects the YAMC signal and latches the speed signal of JCNN 125, if the speed signal is provided. Because of the cross-coupling of the A and B sides of the cable between nodes and because of the arbitration decoding rules described by Table 1, a JCSN detects the YAMC signal as AB=Z1. JCSN 305 responds to the YAMC signal by asserting a YOU ARE MY PARENT ("YAMP") signal. JCSN 305 may concurrently provide a speed signal indicating the speed capability of JCSN 305.
According to one embodiment, the YAMP signal is a signal identified by the P1394 serial bus standard as the tx_parent_notify line state, and has a transmitted value of AB=0Z. The JCNN 125 continues to assert the YAMC signal while receiving the YAMP signal. JCNN 125 therefore detects a value of AB=10. The new connect handshake is successfully completed. The new connect handshake reverses the order of the parent-child handshake that is specified by the P1394 serial bus standard for use during the tree identification phase of the configuration process.
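The line states exchanged during the handshake follow from two rules stated earlier: the A side of each port shares a line with the peer's B side, and each line settles to the dominant of the two driven values (1 over 0 over Z). A minimal sketch of that model (function and constant names are illustrative, not from the patent):

```python
DOMINANCE = {"1": 2, "0": 1, "Z": 0}  # 1 dominates 0 dominates Z

def dominant(x, y):
    return x if DOMINANCE[x] >= DOMINANCE[y] else y

def observed(own_ab, peer_ab):
    """What a node sees on its (A, B) sides given both nodes' drives."""
    own_a, own_b = own_ab
    peer_a, peer_b = peer_ab
    # own A shares a line with peer B, and own B with peer A
    return dominant(own_a, peer_b) + dominant(own_b, peer_a)

YAMC = "1Z"  # tx_child_notify, driven by a JCNN
YAMP = "0Z"  # tx_parent_notify, driven by a JCSN
IDLE = "ZZ"

print(observed(IDLE, YAMC))  # "Z1": an idle JCSN sees the YAMC signal
print(observed(YAMC, YAMP))  # "10": the JCNN sees YAMP while driving YAMC
print(observed(YAMC, YAMC))  # "11": two JCNNs collide, read as a bus reset
```

The three printed cases reproduce the AB values given in the text for the JCSN detecting YAMC, the JCNN detecting YAMP, and the JCNN-to-JCNN collision that forces a bus reset.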
Upon receiving the YAMP signal from JCSN 305, JCNN 125 begins to arbitrate for control of the bus, as shown in FIG. 5. JCNN 125 arbitrates for the bus by using fair arbitration, and JCNN 125 sends a request signal REQ to the root node 140 via its parent node 135 to request control of the bus. JCSN 305 and JCNN 125 continue to assert the YAMP and YAMC signals via their newly connected ports. The root node 140 arbitrates the request of JCNN 125, and replies with either a DENY signal or a GRANT signal. If the root node 140 denies the request of JCNN 125, JCNN 125 continues to arbitrate until it is granted bus access. JCSN 305 may be provided with an internal timeout mechanism for forcing a bus reset if a predetermined amount of time passes after completing the new connect handshake without receiving a GRANT signal. For one embodiment, JCNN 125 discontinues arbitration if it detects that another JCNN is currently adding another JCSN to the serial bus. JCNN 125 restarts arbitration and the timeout mechanism when it detects that the other JCSN has been successfully added to the serial bus.
In FIG. 6, JCNN 125 receives the GRANT signal. JCNN 125 deasserts the YAMC signal in response to receiving the GRANT signal, and JCSN 305 deasserts the YAMP signal in response to detecting the YAMC signal being deasserted such that the newly connected port enters an idle state. As shown in FIG. 7, once the newly connected port of JCNN 125 goes idle, JCNN 125 transmits a NODE_ADDED_ALERT (or "N_A_ALERT") broadcast packet to all of the nodes of serial bus 100. The NODE_ADDED_ALERT packet is discussed below with respect to FIGS. 18A-18E. The new connect handshake is complete, and JCSN 305 is assigned bus address 63, which allows JCSN 305 to receive but not transmit bus packets. According to the P1394 serial bus standard, bus address 63 is an invalid address. JCSN 305 is provided with an internal time-out mechanism wherein JCSN 305 forces a bus reset if JCSN 305 is not assigned a valid address within a specified amount of time. This is discussed in more detail below.
The NODE_ADDED_ALERT packet propagates throughout the serial bus such that it is received by BTM node 130. For the case wherein other nodes are concurrently arbitrating to add a new JCSN to the serial bus, the detection of a NODE_ADDED_ALERT packet may be used by waiting nodes as a mechanism for queuing the addition of their associated JCSNs to the serial bus. For example, when a NODE_ADDED_ALERT packet is detected, a waiting node discontinues arbitration.
As shown in FIG. 8, BTM node 130 responds to the NODE_ADDED_ALERT packet by sending an ADDRESS_SET packet (or "ASP") that causes a node having a bus address of 63 to set its address to the address specified in the ADDRESS_SET packet. Because multiple requests to add JCSNs are queued, typically only one JCSN will be allowed to receive the ADDRESS_SET packet. The ADDRESS_SET packet propagates throughout the serial bus 100 such that JCSN 305 receives the ADDRESS_SET packet. The ADDRESS_SET packet may be used by waiting nodes to indicate when arbitration to add a JCSN may begin again. The ADDRESS_SET packet is described in more detail below. If no bus topology manager is provided, an ADDRESS_SET packet will not be sent within the specified amount of time, and JCSN 305 will force a bus reset.
As described above, the BTM node 130 maintains a topology map for serial bus 100 such that the BTM node 130 is aware of the currently assigned bus addresses. BTM node 130 may thus ensure that a unique address is assigned to JCSN 305. As shown in FIG. 9, JCSN 305 may be assigned the next highest available address as a matter of convenience. Thus, JCSN 305 is assigned bus address 7. As will be shown, the assigned addresses for a serial bus that provides for incremental configuration may differ significantly from a serial bus that requires reconfiguration when a new node is added to the serial bus.
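The address bookkeeping just described can be sketched as follows. The bitmask representation and function name are illustrative assumptions (the text does not specify how the BTM stores its topology map), and "next highest available address" is interpreted here as the lowest address not yet assigned, which yields address 7 in the example of FIG. 9:

```c
#include <stdint.h>

/* Illustrative sketch only: a 64-bit mask tracks which of the 64 possible
 * P1394 bus addresses (0-63) are currently assigned. Address 63 is the
 * invalid "unaddressed" value a new JCSN holds until an ADDRESS_SET
 * packet arrives, so it is never handed out. */
#define INVALID_ADDR 63

/* Return the lowest unassigned address below 63, or INVALID_ADDR if the
 * bus is full. */
static int next_available_addr(uint64_t assigned_mask)
{
    for (int a = 0; a < INVALID_ADDR; a++)
        if (!(assigned_mask & ((uint64_t)1 << a)))
            return a;
    return INVALID_ADDR;
}
```

With nodes 0 through 6 already assigned (mask 0x7F), the function returns 7, matching the address given to JCSN 305 in FIG. 9.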
FIGS. 10-12 show the outcome of the new connect handshake when new node 305 is a JCNN. Upon power up, node 305 identifies itself as a JCNN. As described above, a JCNN initiates the new connect handshake upon detection of the connection event by asserting the YAMC signal. JCNN 125 behaves similarly such that JCNN 125 and JCNN 305 concurrently assert YAMC signals, wherein the transmitted value for each JCNN is AB=1Z. This is shown in FIG. 10. JCNNs 125 and 305 both detect AB=11, which is the bus-- reset signal specified by the P1394 serial bus standard. As shown in FIG. 11, both nodes respond by sending out the RESET signal on all connected ports such that a full bus reset occurs.
FIG. 12 shows the result of the bus reconfiguration. During the initialization phase of the bus configuration process, nodes 110-120 and 305 are identified as leaf nodes, and nodes 135-140 are identified as branch nodes. During the tree identification phase of the bus configuration process, node 140 may again be identified as the root node. During the self-identification phase of the configuration process, node 110 is assigned address 0, node 115 is assigned address 1, node 130 is assigned address 2, node 305 is assigned address 3, node 125 is assigned address 4, node 120 is assigned address 5, node 135 is assigned address 6, and root node 140 is assigned address 7.
A comparison of the assigned bus addresses shown in FIGS. 9 and 12 shows that the node addresses resulting from incremental addition of nodes may be different than the node addresses for the normal bus configuration process. Occasionally, it may be necessary for the bus topology manager of the serial bus to rebuild the topology map for the serial bus, and, according to the prior art, the bus topology manager may force a bus reset to rebuild the topology map. When incremental addition of nodes is not allowed, forcing a bus reset to rebuild the topology map typically results in the identical topology map being recovered. Given the incremental addition process described herein, the recovery of an identical topology map can no longer be ensured. Further, a bus reset interrupts bus traffic. Therefore, as will be discussed with respect to FIGS. 23-26, the bus topology manager may be provided with a mechanism for polling the nodes of serial bus such that a topology map for the serial bus may be built without requiring a bus reset.
The addition of a JCSN discussed with respect to FIGS. 3-9 is a relatively simple example. FIGS. 13 and 14 show more complex examples. FIG. 13 shows the concurrent addition of two JCSNs to serial bus 100, wherein each JCSN is coupled to a different JCNN. For example, JCSN 1305 is coupled to JCNN 120 via cable 1315, and JCSN 1310 is coupled to JCNN 125 via cable 1320. FIG. 14 shows the concurrent addition of two JCSNs to serial bus 100 via the same JCNN. For example, JCSN 1405 is coupled to JCNN 120 via cable 1415, and JCSN 1410 is coupled to JCNN 120 via cable 1420. To allow the concurrent addition of multiple JCSNs, the queuing mechanism mentioned above and described below with respect to FIGS. 19-22 may be employed.
The incremental disconnection of nodes is now discussed with respect to FIGS. 15-17. As shown in FIG. 15, node 305 is disconnected from the serial bus 100. Node 125 detects the disconnection and makes a request for control of the serial bus 100. Root node 140 grants the request in FIG. 16. In FIG. 17, node 125 sends a NODE-- DETACHED-- ALERT (or "N-- D-- ALERT") packet. The incremental disconnection process is complete. Additional details regarding disconnection are discussed with respect to FIG. 22.
The NODE-- ADDED-- ALERT packet, the SET-- ADDRESS packet, and the NODE-- DETACHED-- ALERT packet are now discussed with respect to FIGS. 18A-18E. The form that these packets may take may vary depending on the desired implementation. For example, the NODE-- ADDED-- ALERT packet may simply be a SELF-- ID packet as specified by the P1394 serial bus standard. Because SELF-- ID packets are normally sent only during the bus configuration process, the bus topology manager can use the SELF-- ID packet to determine where a new node has been added. The SET-- ADDRESS packet may also be a PHY configuration packet as specified by the P1394 serial bus standard, wherein bits of the PHY configuration packet that are normally unused are used to set the address of the new node.
Alternatively, the NODE-- ADDED-- ALERT, the SET-- ADDRESS packet, and the NODE-- DETACHED-- ALERT may take the form of a new type of PHY configuration packet as described with respect to FIGS. 18B-18D. FIG. 18A shows the first quadlet of a PHY configuration packet 1800 as defined by the P1394 serial bus standard. The second quadlet (not shown) of a PHY configuration packet is simply the logical inverse of the first quadlet. As shown, a quadlet comprises thirty-two bits of data divided into six different fields. Identifier field 1801 comprises two bits of logic 0, which identifies the packet as a PHY configuration packet. Node-- ID field 1802 comprises six bits that specify a node address. Set-- root-- control bit 1803 is set to a logic 1 if a new node is to become the root. According to the P1394 serial bus standard, the node-- ID field 1802 is ignored if the set-- root-- control bit 1803 is set to a logic 0. Set-- gap-- timer-- control bit 1804 is set to a logic 1 if a new gap count is being specified in the gap-- count field 1805, which comprises six bits. According to the P1394 serial bus standard, the gap-- count field 1805 is ignored if the set-- gap-- timer-- control bit 1804 is set to a logic 0. An undefined field 1806 follows the gap-- count field 1805 and comprises sixteen bits.
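The quadlet layout just described can be sketched as a small encoder. The text gives only the field widths and order; the MSB-first bit positions used here are an assumption of this sketch:

```c
#include <stdint.h>

/* Sketch of the first quadlet of a PHY configuration packet: 2-bit
 * identifier (00), 6-bit node_ID, 1-bit set_root_control, 1-bit
 * set_gap_timer_control, 6-bit gap_count, 16 undefined bits. The exact
 * MSB-first bit positions are an assumption of this sketch. */
static uint32_t phy_config_quadlet(uint8_t node_id, int set_root,
                                   int set_gap_timer, uint8_t gap_count)
{
    uint32_t q = 0;                           /* identifier bits 31-30 = 00 */
    q |= (uint32_t)(node_id & 0x3F) << 24;    /* node_ID, bits 29-24        */
    q |= (uint32_t)(set_root & 1) << 23;      /* set_root_control bit       */
    q |= (uint32_t)(set_gap_timer & 1) << 22; /* set_gap_timer_control bit  */
    q |= (uint32_t)(gap_count & 0x3F) << 16;  /* gap_count, bits 21-16      */
    return q;                                 /* bits 15-0 undefined, left 0 */
}

/* The second quadlet of the packet is the logical inverse of the first. */
static uint32_t phy_config_check_quadlet(uint32_t first)
{
    return ~first;
}
```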
Special PHY configuration packets may be defined by recognizing that a normal PHY configuration packet is ignored if both the set-- root-- control bit 1803 and the set-- gap-- timer-- control bit 1804 are set to logic 0's. FIG. 18B shows a NODE-- ADDED-- ALERT PHY configuration packet 1810 wherein both the set-- root-- control bit 1803 and the set-- gap-- timer-- control bit 1804 are set to logic 0's, the node-- ID field 1802 includes the bus address of the node sending the packet, and all bits but the next to least significant bit of the gap-- count field 1805 are set to a logic 0. FIG. 18C shows a SET-- ADDRESS PHY configuration packet 1815 wherein both the set-- root-- control bit 1803 and the set-- gap-- timer-- control bit 1804 are set to logic 0's, the node-- ID field 1802 is set to all logic 1's, and the gap-- count field 1805 includes the address to which the new node is to be set. FIG. 18D shows a NODE-- DETACHED-- ALERT PHY configuration packet 1820 wherein both the set-- root-- control bit 1803 and the set-- gap-- timer-- control bit 1804 are set to logic 0's, the node-- ID field 1802 includes the bus address of the node sending the packet, and all bits but the least significant bit of the gap-- count field 1805 are set to a logic 0.
FIG. 18E shows a SEND-- SELF-- ID packet 1825 wherein both the set-- root-- control bit 1803 and the set-- gap-- timer-- control bit 1804 are set to logic 0's, the node-- ID field 1802 includes the bus address of the destination node, and all bits of the gap-- count field 1805 are set to a logic 0. The SEND-- SELF-- ID packet may be used by a BTM to build a topology map without requiring a bus reset. This is discussed in more detail below.
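Taken together, the encodings of FIGS. 18B-18E can be distinguished by a small decoder. The bit positions (node_ID in bits 29-24, the two control bits in bits 23-22, gap_count in bits 21-16, MSB-first) and the function name are assumptions of this sketch:

```c
#include <stdint.h>

/* Decoder sketch for the special PHY configuration packets of FIGS.
 * 18B-18E. A packet is "special" only when both set_root_control and
 * set_gap_timer_control are 0, which ordinary P1394 nodes ignore. */
enum special_pkt { PKT_NORMAL, PKT_NODE_ADDED_ALERT, PKT_SET_ADDRESS,
                   PKT_NODE_DETACHED_ALERT, PKT_SEND_SELF_ID };

static enum special_pkt classify_phy_packet(uint32_t q)
{
    unsigned node_id   = (q >> 24) & 0x3F;
    unsigned set_root  = (q >> 23) & 1;
    unsigned set_timer = (q >> 22) & 1;
    unsigned gap_count = (q >> 16) & 0x3F;

    if (set_root || set_timer)
        return PKT_NORMAL;            /* ordinary PHY configuration packet */
    if (node_id == 0x3F)
        return PKT_SET_ADDRESS;       /* node_ID all 1's; gap_count holds
                                         the address to assign             */
    if (gap_count == 0x02)
        return PKT_NODE_ADDED_ALERT;      /* only next-to-LSB of gap_count */
    if (gap_count == 0x01)
        return PKT_NODE_DETACHED_ALERT;   /* only LSB of gap_count set     */
    if (gap_count == 0x00)
        return PKT_SEND_SELF_ID;      /* node_ID names the polled target   */
    return PKT_NORMAL;
}
```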
The special PHY configuration packets shown in FIGS. 18B-18E are advantageous in that they are simply ignored by nodes that operate strictly according to the P1394 serial bus standard, so the addition of nodes that support incremental addition may be done in a relatively unobtrusive manner. It is therefore sufficient that only new nodes, and those nodes of the serial bus to which a new node may be connected, contain circuitry for performing the incremental addition process.
FIGS. 19-22 are state diagrams for state machines used by JCSNs and JCNNs to perform the incremental addition process according to one embodiment. Specifically, FIG. 19 shows a new connect state machine for a JCNN; FIG. 20 shows a node addition process detection state machine for JCNNs so that multiple JCSNs that are connected concurrently may be added to the serial bus in a sequential manner; FIG. 21 shows a new connect state machine for a JCSN; and FIG. 22 shows a new disconnect state machine for a JCNN.
Each port of a JCNN includes a JCNN new connect state machine. FIG. 19 shows that the JCNN new connect state machine is capable of being in one of seven states: NET NODE PORT, PARENT?, ARBITRATE, READY, QUEUED, ALERT, and BUS RESET. The NET NODE PORT state is the starting state for a JCNN new connect state machine. The JCNN new connect state machine remains in the NET NODE PORT state unless a new connection is detected, at which time the JCNN new connect state machine transitions to the PARENT? state.
While in the PARENT? state, the JCNN asserts a YAMC signal and a speed signal on its newly connected port. If the newly connected node is also a JCNN, the JCNN node detects a value on the cable of AB=11, at which time the JCNN requests a bus reset, and the JCNN new connect state machine transitions to the BUS RESET state. If the newly connected node is a JCSN, the JCNN detects the value of the cable of the newly connected port as being AB=10, at which time the JCNN new connect state machine for the newly connected port of the JCNN transitions to the ARBITRATE state.
While in the ARBITRATE state, the JCNN performs fair arbitration for the serial bus 100 and turns off its speed signal to the newly connected port. The JCNN new connect state machine transitions from the ARBITRATE state to the BUS RESET state if it detects a value of AB=11 at its associated port, which indicates that the newly connected JCSN has requested a bus reset. The JCNN new connect state machine transitions from the ARBITRATE state to the READY state if the JCNN receives a bus grant signal from the root node.
The JCNN new connect state machine transitions from the ARBITRATE state to the QUEUED state if a lower numbered port of the JCNN is already in the ARBITRATE state, if another port of the JCNN is already in the READY state, if another port of the JCNN is already in the ALERT state, or if another JCNN is in the process of adding a new node, which is indicated by the output of the JCNN node addition process detection state machine.
While in the QUEUED state, the JCNN asserts the AB=0Z signal on its newly connected port. The JCNN new connect state machine transitions from the QUEUED state back to the NET NODE PORT state if the new node is subsequently disconnected. The JCNN new connect state machine transitions from the QUEUED state to the bus reset state if the newly connected port detects the value of AB=x1, which indicates that the newly connected JCSN has requested a bus reset. The JCNN new connect state machine of a newly connected port transitions from the QUEUED state back to the ARBITRATE state if the JCNN is clear to add the JCSN as indicated by the JCNN node addition process detection state machine, if no other port of the JCNN is in the READY state, if no other port of the JCNN is in the ALERT state, and if no lower numbered port is in the ARBITRATE state.
Once the JCNN new connect state machine enters the READY state, the JCNN asserts an idle signal on its newly connected port. If the newly connected port is subsequently disconnected while the JCNN new connect state machine is in the READY state, the JCNN new connect state machine transitions to the NET NODE PORT state. The JCNN new connect state machine transitions from the READY state to the BUS RESET state if it detects a value of AB=11 on its newly connected port. The JCNN new connect state machine transitions from the READY state to the ALERT state if it detects a value of AB=ZZ on its newly connected port.
While in the ALERT state, the JCNN sends a NODE-- ADDED-- ALERT packet and then clears the "New Connect" status of the newly connected port. The JCNN new connect state machine transitions back to the NET NODE PORT state once a SET-- ADDRESS packet is detected. If the JCNN detects that the value at the newly connected port is AB=11, the JCNN new connect state machine transitions from the ALERT state to the BUS RESET state.
The QUEUED state of the JCNN new connect state machine provides for the sequential addition of multiple JCSNs that are added to the serial bus at substantially the same time. Because each port includes a JCNN new connect state machine, incremental configuration is allowed even when multiple JCSNs are added to the same JCNN at substantially the same time.
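The per-port transitions described above can be condensed into a sketch. The state and event names are illustrative (the PARENT? state is rendered as JCNN_PARENT_Q), the multi-part queuing and clear-to-add conditions are collapsed into single events, and side effects such as asserting YAMC or sending packets appear only as comments:

```c
/* Sketch of the per-port JCNN new connect state machine of FIG. 19. */
enum jcnn_state { JCNN_NET_NODE_PORT, JCNN_PARENT_Q, JCNN_ARBITRATE,
                  JCNN_READY, JCNN_QUEUED, JCNN_ALERT, JCNN_BUS_RESET };

enum jcnn_event { EV_NEW_CONNECT, EV_DISCONNECT, EV_AB_11, EV_AB_10,
                  EV_AB_ZZ, EV_BUS_GRANT, EV_MUST_QUEUE, EV_CLEAR_TO_ADD,
                  EV_SET_ADDRESS_SEEN };

static enum jcnn_state jcnn_step(enum jcnn_state s, enum jcnn_event e)
{
    switch (s) {
    case JCNN_NET_NODE_PORT:
        return e == EV_NEW_CONNECT ? JCNN_PARENT_Q : s;
    case JCNN_PARENT_Q:              /* asserting YAMC + speed signal     */
        if (e == EV_AB_11) return JCNN_BUS_RESET;  /* peer is also a JCNN */
        if (e == EV_AB_10) return JCNN_ARBITRATE;  /* peer is a JCSN      */
        return s;
    case JCNN_ARBITRATE:             /* fair arbitration, speed off       */
        if (e == EV_AB_11)      return JCNN_BUS_RESET;
        if (e == EV_BUS_GRANT)  return JCNN_READY;
        if (e == EV_MUST_QUEUE) return JCNN_QUEUED; /* another add busy   */
        return s;
    case JCNN_QUEUED:                /* asserting AB=0Z to the new port   */
        if (e == EV_DISCONNECT)   return JCNN_NET_NODE_PORT;
        if (e == EV_AB_11)        return JCNN_BUS_RESET; /* AB=x1 reset   */
        if (e == EV_CLEAR_TO_ADD) return JCNN_ARBITRATE; /* queue drained */
        return s;
    case JCNN_READY:                 /* asserting idle on the new port    */
        if (e == EV_DISCONNECT) return JCNN_NET_NODE_PORT;
        if (e == EV_AB_11)      return JCNN_BUS_RESET;
        if (e == EV_AB_ZZ)      return JCNN_ALERT;
        return s;
    case JCNN_ALERT:                 /* NODE_ADDED_ALERT sent, await ASP  */
        if (e == EV_AB_11)            return JCNN_BUS_RESET;
        if (e == EV_SET_ADDRESS_SEEN) return JCNN_NET_NODE_PORT;
        return s;
    default:
        return s;                    /* BUS RESET: resolved by bus reset  */
    }
}
```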
FIG. 20 shows the JCNN node addition process detection state machine, which is used to determine when a JCNN new connect state machine should enter or leave the QUEUED state. The JCNN node addition process detection state machine has two possible states and starts in a CLEAR TO ADD state. If a JCNN receives a NODE-- ADDED-- ALERT packet, the JCNN node addition process detection state machine transitions from the CLEAR TO ADD state to the ADD IN PROGRESS state. The JCNN new connect state machines for ports that are arbitrating to send a NODE-- ADDED-- ALERT packet transition from the ARBITRATE state to the QUEUED state in response to the JCNN node addition process detection state machine being in the ADD IN PROGRESS state. Once a SET-- ADDRESS packet is received, the JCNN node addition process detection state machine transitions from the ADD IN PROGRESS state back to the CLEAR TO ADD state.
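A minimal sketch of this two-state machine, with illustrative names:

```c
/* The two-state JCNN node addition process detection machine of FIG. 20:
 * a NODE_ADDED_ALERT packet marks an addition in progress, and the
 * following SET_ADDRESS packet clears it. */
enum add_detect { CLEAR_TO_ADD, ADD_IN_PROGRESS };

static enum add_detect add_detect_step(enum add_detect s,
                                       int saw_node_added_alert,
                                       int saw_set_address)
{
    if (s == CLEAR_TO_ADD && saw_node_added_alert)
        return ADD_IN_PROGRESS;   /* queue further additions behind this one */
    if (s == ADD_IN_PROGRESS && saw_set_address)
        return CLEAR_TO_ADD;      /* the pending addition has completed      */
    return s;
}
```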
FIG. 21 shows a JCSN new connect state machine, which is shown as including the following six states: SINGLE NODE, CHILD, TREED, AIDLE, QUEUED, and BUS RESET. The starting state for the JCSN new connect state machine is the SINGLE NODE state. When the new connection is detected by the JCSN, the JCSN starts a timeout timer. The JCSN new connect state machine transitions from the SINGLE NODE state to the CHILD state when a YAMC signal is received. If, as indicated by the timeout timer, the YAMC signal is not detected within a predetermined time T1, which may be 300 milliseconds, the JCSN requests a bus reset such that the JCSN new connect state machine transitions from the SINGLE NODE state to the BUS RESET state. While in the BUS RESET state, the JCSN asserts the value AB=11 on its newly connected port, forcing a bus reset.
While in the CHILD state, the JCSN asserts a YAMP signal and its speed signal on its newly connected port and restarts the timeout timer. The detection of a value of AB=0Z at the newly connected port causes the JCSN new connect state machine to transition from the CHILD state to the TREED state. The newly connected port may have the value of AB=0Z in response to the JCNN being in the READY state. A timeout may occur such that the state machine goes from the CHILD state to the BUS RESET state if the JCSN does not transition from the CHILD to the TREED state within a predetermined time T2, which may be 100 milliseconds.
The JCSN new connect state machine transitions from the CHILD state to the QUEUED state in response to detecting a value of AB=00 on the newly connected port. The newly connected port may have the value of AB=00 in response to the JCNN being in the QUEUED state. While in the QUEUED state, the JCSN restarts the timeout timer. If the JCSN new connect state machine detects a value of AB=01 on its newly connected port, the JCSN new connect state machine transitions from the QUEUED state back to the CHILD state. If the value on the newly connected port remains at AB=00 for a predetermined time T3, which may be 600 milliseconds, the JCSN new connect state machine transitions from the QUEUED state to the BUS RESET state, and the JCSN forces a bus reset.
While in the TREED state, the JCSN asserts an idle signal having a value of AB=ZZ on its connected port, turns off the speed signal to its parent, and restarts the timeout timer. The reception of a SET-- ADDRESS packet causes a transition from the TREED state to the AIDLE state. The AIDLE state signifies that the JCSN has successfully completed the incremental addition process. If a SET-- ADDRESS packet is not received within a predetermined time T4, which may be 600 milliseconds, the JCSN new connect state machine transitions from the TREED state to the BUS RESET state, and the JCSN forces a bus reset. Disconnection of the JCSN from the serial bus while the JCSN new connect state machine is in either the QUEUED state, the CHILD state, or the TREED state causes the JCSN new connect state machine to transition to the SINGLE NODE state.
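The JCSN machine and its timeouts can be sketched in the same style. The event flags and names are illustrative, and expiry of whichever timer is running is collapsed into a single timed_out input:

```c
/* Sketch of the JCSN new connect state machine of FIG. 21. */
enum jcsn_state { JCSN_SINGLE_NODE, JCSN_CHILD, JCSN_TREED,
                  JCSN_AIDLE, JCSN_QUEUED, JCSN_BUS_RESET };

/* Example timeout values in milliseconds; the text gives these only as
 * values the timeouts "may be". */
enum { T1_MS = 300, T2_MS = 100, T3_MS = 600, T4_MS = 600 };

static enum jcsn_state jcsn_step(enum jcsn_state s, int yamc, int ab_0z,
                                 int ab_00, int ab_01, int set_addr,
                                 int timed_out, int disconnected)
{
    /* Disconnection in CHILD, TREED, or QUEUED returns to SINGLE NODE. */
    if (disconnected &&
        (s == JCSN_CHILD || s == JCSN_TREED || s == JCSN_QUEUED))
        return JCSN_SINGLE_NODE;

    switch (s) {
    case JCSN_SINGLE_NODE:           /* timer T1 runs while waiting       */
        if (yamc)      return JCSN_CHILD;
        if (timed_out) return JCSN_BUS_RESET;  /* no YAMC within T1       */
        return s;
    case JCSN_CHILD:                 /* asserting YAMP + speed; timer T2  */
        if (ab_0z)     return JCSN_TREED;      /* parent reached READY    */
        if (ab_00)     return JCSN_QUEUED;     /* parent is QUEUED        */
        if (timed_out) return JCSN_BUS_RESET;
        return s;
    case JCSN_QUEUED:                /* timer T3 runs while queued        */
        if (ab_01)     return JCSN_CHILD;
        if (timed_out) return JCSN_BUS_RESET;  /* stuck at AB=00 for T3   */
        return s;
    case JCSN_TREED:                 /* idle asserted; timer T4 runs      */
        if (set_addr)  return JCSN_AIDLE;      /* addition complete       */
        if (timed_out) return JCSN_BUS_RESET;  /* no SET_ADDRESS in T4    */
        return s;
    default:
        return s;                    /* AIDLE and BUS RESET are terminal  */
    }
}
```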
FIG. 22 shows the JCNN new disconnect state machine, which has the following three states: NET NODE, NOTIFY, and BUS RESET. The NET NODE state is the default state and no actions are taken by the JCNN while in the NET NODE state. If a disconnection is detected on a port that is coupled to a child node, the JCNN transitions from the NET NODE state to the NOTIFY state. If the disconnect is on the port that is coupled to the parent node of the JCNN, the JCNN forces a bus reset.
While in the NOTIFY state, the JCNN makes a fair arbitration request for control of the bus. Upon receiving a bus grant, the JCNN sends a NODE-- DETACHED-- ALERT packet. The JCNN new disconnect state machine transitions from the NOTIFY state to the NET NODE state once the NODE-- DETACHED-- ALERT packet is sent.
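A sketch of this three-state disconnect machine, with illustrative names:

```c
/* The three-state JCNN new disconnect state machine of FIG. 22. A child
 * disconnect is announced with a NODE_DETACHED_ALERT packet; loss of the
 * parent port forces a bus reset. */
enum jcnn_disc { DISC_NET_NODE, DISC_NOTIFY, DISC_BUS_RESET };

static enum jcnn_disc jcnn_disc_step(enum jcnn_disc s, int child_gone,
                                     int parent_gone, int alert_sent)
{
    if (s == DISC_NET_NODE && parent_gone)
        return DISC_BUS_RESET;    /* parent lost: force a bus reset       */
    if (s == DISC_NET_NODE && child_gone)
        return DISC_NOTIFY;       /* arbitrate, then send the alert       */
    if (s == DISC_NOTIFY && alert_sent)
        return DISC_NET_NODE;     /* NODE_DETACHED_ALERT has been sent    */
    return s;
}
```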
Now that illustrative embodiments of the incremental configuration process have been discussed with some particularity, a polling mechanism for allowing the bus topology manager of a serial bus to build a topology map for the serial bus without requiring a bus reset is now described. The polling mechanism is useful for most applications of the serial bus wherein it is desirable to reduce bus stalls.
The general polling process will now be discussed with respect to FIGS. 23-25. In FIG. 23, the BTM node 130 sends a SEND-- SELF-- ID broadcast packet, which may be the special PHY configuration packet shown in FIG. 18E. The SEND-- SELF-- ID packet propagates throughout the entire serial bus. For this example, the SEND-- SELF-- ID packet specifies node 305 as the target node. As shown in FIG. 24, node 305 sends a self-- id packet in response to receiving the SEND-- SELF-- ID packet. The self-- id packet may be identical to the self-- id packet specified by the P1394 serial bus standard. The self-- id packet of target node 305 is propagated throughout the serial bus such that BTM node 130 receives the self-- id packet of target node 305. To provide for complete determination of the connection topology, the parent node 125 sends its own self-- id packet after retransmitting the self-- id packet of the target node 305. This is shown in FIG. 25. The self-- id packet of the target node's parent node propagates throughout the serial bus such that it is received by the BTM node 130.
There are many ways to provide the polling mechanism shown in FIGS. 23-25. Wherein it is desirable to reduce costs associated with providing new circuitry for each node of a serial bus, the polling mechanism may be provided by modifying existing circuitry of the nodes. According to one embodiment, the protocol state machine for each node may be modified.
FIG. 26 shows a modified protocol state machine that may be included in each node of the serial bus so that the BTM node may build a topology map without forcing a bus reset. The S4:SELF ID TRANSMIT state shown in FIG. 26 is the final state of the self-identification phase of the configuration process. During the normal S4:SELF ID TRANSMIT state, a node transmits its self-- id packet preceded by a data prefix having a value of AB=01 and followed by a data suffix having a value of AB=10. After transmitting the data suffix of a self-id packet, a node that is transmitting its own self-id packet leaves the A line to its parent node high as a "terminal handshake" signal that indicates to the parent node that the self-id packet comes directly from its child. The parent and child nodes exchange speed capabilities during the handshake signal. The terminal handshake as transmitted by a child node is identified by the P1394 serial bus standard as the tx-- ident-- done line state.
According to the P1394 serial bus standard, the protocol state machine of a node transitions from a S1:SELF ID GRANT state to the S4:SELF ID TRANSMIT state when all child ports of that node are identified, and the protocol state machine of a node transitions from the S4:SELF ID TRANSMIT state to the A0:IDLE state, which is the beginning state for bus arbitration, when the self-- id packet of the node has been transmitted and either the node is the root or the node begins to receive the self-- id packet of another node. The P1394 serial bus standard does not provide for a return to the S4:SELF ID TRANSMIT state after arbitration begins other than by performing the bus configuration process in response to a bus reset.
As shown in FIG. 26, two new transitions to the S4:SELF ID TRANSMIT state are specified by the present embodiment. Both transitions begin at the A5:RECEIVE state for the protocol state machine. According to the P1394 serial bus standard, a node receives data from the serial bus while in the A5:RECEIVE state. When a node receives a SEND-- SELF-- ID packet that specifies that node as the target node, the node transitions from the A5:RECEIVE state to the S4:SELF ID TRANSMIT state and sends its self-id packet complete with subsequent handshake signal to its parent node. Speed signaling may be omitted. The parent node, upon detecting the self-id packet of its child and the terminal handshake signal, also transitions from the A5:RECEIVE state to the S4:SELF ID TRANSMIT state and sends its self-id packet without a terminal handshake signal. Again, speed signaling may be omitted. The parent node transmits a tx-- data-- prefix signal having a transmitted value of AB=10 to the target node such that the target node returns to normal arbitration. The parent node also returns to normal arbitration.
The polling mechanism described with reference to FIGS. 23-25 may find utility for other applications, as well. For example, for network management it is necessary to calculate the worst case round trip propagation delay through the serial bus so that the gap count may be set. According to the P1394 serial bus standard, all nodes have a maximum propagation delay, and all cables have a maximum length. Therefore, for a prior serial bus it is sufficient to know the number of nodes along the longest path through the serial bus and to multiply that number by the maximum propagation delays associated with the nodes and cables.
For some cases, it may be desirable to provide arbitrary length cables or nodes having arbitrary propagation delays. Because the maximum propagation delays are no longer known, calculating the worst case round trip propagation delay becomes more complicated, and a new mechanism is needed. The polling mechanism described above may be used to calculate the worst case round trip delay for the serial bus. The BTM node can calculate the actual round trip delay time between it and any other node of the serial bus by simply starting a timer after it sends the SEND-- SELF-- ID packet and stopping the timer when a SELF-- ID packet is received. Calculating the worst case round trip delay then becomes a simple matter of arithmetic.
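The two approaches to the worst-case computation can be contrasted in a brief sketch. The delay values, units, function names, and the factor of two for the out-and-back traversal are illustrative assumptions, not figures from the text:

```c
/* Prior approach: every node and cable has a known maximum delay, so the
 * worst case follows from the longest path length and those maxima. The
 * factor of two for the out-and-back traversal is an assumption of this
 * sketch. */
static unsigned prior_worst_case(unsigned hops, unsigned max_node_delay,
                                 unsigned max_cable_delay)
{
    return 2u * hops * (max_node_delay + max_cable_delay);
}

/* Polling approach: the BTM times each SEND_SELF_ID / SELF_ID exchange
 * and takes the maximum of the measured round trip delays, so arbitrary
 * cable lengths and node delays are accommodated. */
static unsigned worst_case_round_trip(const unsigned measured_rtt[], int n)
{
    unsigned worst = 0;
    for (int i = 0; i < n; i++)
        if (measured_rtt[i] > worst)
            worst = measured_rtt[i];
    return worst;
}
```

The gap count would then be chosen from the measured worst case rather than from fixed standard maxima.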
In the foregoing specification the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.
US7353322Oct 7, 2003Apr 1, 2008Apple Inc.System and method for providing dynamic configuration ROM using double image buffers
US7389371Mar 19, 2007Jun 17, 2008Apple Inc.Method and apparatus for loop breaking in a data bus
US7401173May 27, 2005Jul 15, 2008Apple Inc.Method and apparatus for automatic detection and healing of signal pair crossover on a high performance serial bus
US7415545May 31, 2005Aug 19, 2008Apple Inc.Method and apparatus for dynamic link driver configuration
US7417973Dec 31, 2002Aug 26, 2008Apple Inc.Method, apparatus and computer program product for ensuring node participation in a network bus
US7421507Sep 28, 2006Sep 2, 2008Apple Inc.Transmission of AV/C transactions over multiple transports method and apparatus
US7457302Dec 31, 2002Nov 25, 2008Apple Inc.Enhancement to loop healing for malconfigured bus prevention
US7484013Dec 22, 2005Jan 27, 2009Apple Inc.Automatic ID allocation for AV/C entities
US7490174Nov 16, 2006Feb 10, 2009Apple Inc.Method and apparatus for border node behavior on a full-duplex bus
US7502338Dec 19, 2003Mar 10, 2009Apple Inc.De-emphasis training on a point-to-point connection
US7506088Mar 12, 2007Mar 17, 2009Apple Inc.Method and apparatus for supporting and presenting multiple serial bus nodes using distinct configuration ROM images
US7574650Sep 12, 2003Aug 11, 2009Apple Inc.General purpose data container method and apparatus for implementing AV/C descriptors
US7583656May 31, 2005Sep 1, 2009Apple Inc.Method and apparatus for loop breaking on a serial bus
US7631317Nov 18, 2004Dec 8, 2009Microsoft CorporationMethod and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US7650452Dec 21, 2004Jan 19, 2010Apple Inc.Method and apparatus for arbitration and fairness on a full-duplex bus using dual phases
US7653010 *Jun 2, 2004Jan 26, 2010Casient LimitedSystem and method for wireless mesh networking
US7668099Dec 23, 2003Feb 23, 2010Apple Inc.Synthesis of vertical blanking signal
US7701966Nov 21, 2005Apr 20, 2010Apple IncMethod and apparatus for ensuring compatibility on a high performance serial bus
US7734855Dec 10, 2007Jun 8, 2010Apple Inc.Gap count analysis for the P1394a BUS
US7788567Dec 11, 2003Aug 31, 2010Apple Inc.Symbol encoding for tolerance to single byte errors
US7788676Nov 18, 2004Aug 31, 2010Microsoft CorporationMethod and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US7822898Aug 27, 2007Oct 26, 2010Apple Inc.Method and apparatus for border node behavior on a full-duplex bus
US7861025Jul 14, 2008Dec 28, 2010Apple Inc.Method and apparatus for automatic detection and healing of signal pair crossover on a high performance serial bus
US7970926Mar 28, 2008Jun 28, 2011Apple Inc.Synchronized transmission of audio and video data from a computer to a client via an interface
US7975201Aug 26, 2010Jul 5, 2011Apple Inc.Symbol encoding for tolerance to single byte error
US7987381Jun 19, 2007Jul 26, 2011Apple Inc.Cyclemaster synchronization in a distributed bridge
US7995606Dec 3, 2003Aug 9, 2011Apple Inc.Fly-by and ack-accelerated arbitration for broadcast packets
US8079038Mar 19, 2008Dec 13, 2011Microsoft CorporationMethod and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network
US8140729Jan 19, 2010Mar 20, 2012Apple Inc.Method and apparatus for arbitration on a full-duplex bus using dual phases
US8250100Aug 6, 2009Aug 21, 2012Apple Inc.General purpose data container method and apparatus for implementing AV/C descriptors
US8275910Jul 2, 2003Sep 25, 2012Apple Inc.Source packet bridge
US8295302Apr 19, 2010Oct 23, 2012Apple Inc.Methods and apparatus for ensuring compatibility on a high performance serial bus
US8321748Jul 1, 2011Nov 27, 2012Apple Inc.Symbol encoding for tolerance to single byte errors
US8392742Jul 25, 2011Mar 5, 2013Apple Inc.Cyclemaster synchronization in a distributed bridge
US8407330Feb 12, 2008Mar 26, 2013Apple Inc.Method and apparatus for the addition and removal of nodes from a common interconnect
US8407535Jun 5, 2006Mar 26, 2013Apple Inc.Method and apparatus for generating jitter test patterns on a high performance serial bus
US8473660Mar 19, 2012Jun 25, 2013Apple Inc.Method and apparatus for arbitration on a data bus
US8483108Jul 24, 2007Jul 9, 2013Apple Inc.Apparatus and methods for de-emphasis training on a point-to-point connection
US8607117Nov 26, 2012Dec 10, 2013Apple Inc.Symbol encoding for tolerance to single byte errors
US8667023Aug 20, 2012Mar 4, 2014Apple Inc.General purpose data container method and apparatus for implementing AV/C descriptors
US8762446Nov 2, 1999Jun 24, 2014Apple Inc.Bridged distributed device control over multiple transports method and apparatus
US8838825Jun 27, 2011Sep 16, 2014Apple Inc.Synchronized transmission of audio and video data from a computer to a client via an interface
EP1094638A2 *Oct 18, 2000Apr 25, 2001Sony CorporationMethod and apparatus for controlling data networks
EP1500227A1 *Apr 10, 2002Jan 26, 2005Lg Electronics Inc.Method for recognizing electronic appliance in multiple control system
WO2000057263A1 *Mar 20, 2000Sep 28, 2000Sony Electronics IncA method and system for a multi-phase net refresh on a bus bridge interconnect
WO2000057288A1 *Mar 20, 2000Sep 28, 2000Sony Electronics IncMethod and system for quarantine during bus topology configuration
WO2000057289A1 *Mar 20, 2000Sep 28, 2000Sony Electronics IncA method and system for message broadcast flow control on a bus bridge interconnect
WO2001042878A2 *Nov 29, 2000Jun 14, 2001Sony Electronics IncA method and system for a multi-phase net refresh on a bus bridge interconnect
WO2002071699A1 *Feb 15, 2002Sep 12, 2002Koninkl Philips Electronics NvSystem, method and mesuring node for determining a worst case gap-count value in a multi-station network
Classifications
U.S. Classification 710/8
International ClassificationH04L29/12, H04L12/24, H04L12/46, H04L12/64, H04L12/40
Cooperative ClassificationH04L29/12254, H04L29/12273, H04L61/2053, H04L61/6022, H04L61/6004, H04L29/12301, H04L29/12839, H04L29/1232, H04L61/2092, H04L29/12801, H04L41/0816, H04L41/12, H04L41/0806, H04L61/2076, H04L12/46, H04L12/40078, H04L61/2038
European ClassificationH04L41/12, H04L41/08A2A, H04L41/08A1, H04L61/20B, H04L61/60D11, H04L61/60A, H04L61/20G, H04L61/20D, H04L61/20I, H04L12/46, H04L29/12A3B, H04L29/12A3D, H04L29/12A3G, H04L29/12A3I, H04L12/40F4, H04L29/12A9A, H04L29/12A9D11
Legal Events
Date | Code | Event | Description
Jul 21, 2010 | FPAY | Fee payment | Year of fee payment: 12
May 29, 2007 | AS | Assignment | Owner name: APPLE INC., CALIFORNIA; Free format text: CHANGE OF NAME;ASSIGNOR:APPLE COMPUTER, INC., A CALIFORNIA CORPORATION;REEL/FRAME:019365/0742; Effective date: 20070109
Jul 28, 2006 | FPAY | Fee payment | Year of fee payment: 8
Sep 10, 2002 | REMI | Maintenance fee reminder mailed
Aug 22, 2002 | FPAY | Fee payment | Year of fee payment: 4
Switch Discs (ISO's) 'On-the-fly' / Triggered - After Initial Boot
7 replies to this topic
#1 jpwise (Newbie • Members • 12 posts • New Zealand)
Posted 12 January 2012 - 09:04 PM
Just came across your project a little while ago and looks really interesting.
A bit of a side question though is, is it possible to switch disc images on the fly? ie: emulate eject, switch image, and re-insertion of disc? - after the initial boot that is.
The reasoning behind the question is related to multi disc install sets. I work in a laptop service centre and as a matter of course we have to re-install a lot of laptops; for most units we keep physical CDs, which are prone to getting scratched/damaged over the course of time. Most of the install sets range between 2 and 4 discs. The disc change always occurs after boot, so isosel wouldn't really be feasible in this situation.
For the most part, most recovery systems will trigger an eject and prompt for the next disc to be inserted, so in theory it would be possible to honour the eject request, read a sequence list (txt file), and load the next disc in the set with an insert notification.
Ideally it would be awesome if there was an option to push button force a rotation to the next disc in the set (and loop around if you miss one/have to insert 1st disc again), but lacking a physical button, or abusing the read/write switch as a trigger, a list file would be the next best option.
edit: as an addendum a sequence/next image in list could also be useful if you implement virtual cd->iso writing functionality i've seen mentioned a few times. particularly to set the notify disc type/atip data, single layer, dual layer, ?blu-ray?, etc.
Your thoughts?
Also I see the kickstart pledges has already topped out, is there any indication of sale price yet, or if i sign up to the kickstart with a donation would i be able to get access to a unit? Thx.
#2 elegantinvention (Frequent Member • Developer • 310 posts • South Bend, Indiana, USA • United States)
Posted 13 January 2012 - 07:14 AM
A bit of a side question though is, is it possible to switch disc images on the fly? ie: emulate eject, switch image, and re-insertion of disc? - after the initial boot that is.
...
For the most part, most recovery systems will trigger an eject and prompt for the next disc to be inserted, so in theory it would be possible to honour the eject request, read a sequence list (txt file), and load the next disc in the set with an insert notification.
Great idea! I don't see any reason it can't be implemented.
At a glance the only thing I'm not sure of is how to specify the sequence file to use. I suppose it could be specified instead of an image file. It could also be done "implicitly" by naming the files for example image1.iso, image2.iso, etc, all in the same folder, then mounting any one of them, though that may be undesirable in some situations (don't want them named sequentially like that, don't want it done implicitly but DO want them named like that, ...). Just tossing some ideas out there.
Regardless, the code to actually load the next image file after an eject operation is trivial, as is parsing the sequence, whatever method is chosen. So, I've put it on the to-do list. :good:
edit: as an addendum a sequence/next image in list could also be useful if you implement virtual cd->iso writing functionality i've seen mentioned a few times. particularly to set the notify disc type/atip data, single layer, dual layer, ?blu-ray?, etc.
Good point :cheers:
Also I see the kickstart pledges has already topped out, is there any indication of sale price yet, or if i sign up to the kickstart with a donation would i be able to get access to a unit? Thx.
Once a Kickstarter project reaches its deadline, funded or not, it can't receive any more funding via Kickstarter.
No retail price / availability yet, still working on the Kickstarter units, but things are moving along. Once they're available for sale I'll be sure to post here on the forums, the blog, twitter, and isostick.com will have information on buying them (for now it just redirects to the Kickstarter).
Thanks for the interest and the suggestion, I like it!
#3 jpwise (Newbie • Members • 12 posts • New Zealand)
Posted 13 January 2012 - 11:50 PM
Glad you like the idea. :)
In terms of specifying the image file since you've already got isosel tied in, it would probably be easiest to specify the sequence list file with isosel as a starting point. Are the isosel selections persistent? as in do the selections remain through powerdowns/unplugging etc? It might not ever be an issue but if an install set requires rebooting part way through install, and then resuming the presented iso will need to remain constant, even if it's part of a set.
Also for the writing side, the option to set it to a blank disc(s) would need to be available to either the os, or still have to be set at boot time. If you're writing to a virtual disc the requirement option in burning software to verify can probably be safely turned off, but any eject/re-insert events would still need to be handled correctly.
Otherwise I've already followed your twitter, so will keep an eye on how things go. :)
#4 elegantinvention (Frequent Member • Developer • 310 posts • South Bend, Indiana, USA • United States)
Posted 14 January 2012 - 12:08 AM
Yep, isosel selections are persistent through power loss / unplugging and such.
Even if you have it set read-only -- but in that case it writes to the CPU's internal flash, instead of the microSD card, so it still honors the read-only switch. In that case isosel warns you that your selection won't show up in the config file / software, but that it will still persist.
I'll be whipping up an application for osx/win/nix which can be run within an OS to switch images and do other setup. Ultimately isostick gets all its setup information from files on the microSD card, and the software will just be a nice GUI to make modifying them trivial, so adding that to the software shouldn't be a problem :)
Thanks for the follow :cheers:
#5 TheHive (Platinum Member • .script developer • 4165 posts)
Posted 14 January 2012 - 07:42 AM
Good Ideas.
#6 steve6375 (Platinum Member • Developer • 7033 posts • UK • United Kingdom)
Interests: computers, programming (masm, vb6, C, vbs), photography, TV, films, guitars, www.easy2boot.com
Posted 14 January 2012 - 10:02 AM
If a file in the list is not found, then it should try the next file in the list. If the end of the list is reached it should go back to the beginning. If all entries in the list have been tried and failed, it should exit out of the loop. The ZalMan has a few issues when you plug another HDD into a controller that is already set to load an iso - it had to be reset. It is worth checking what happens when you place a freshly formatted or unformatted SD card into a controller that has already been set up with a previous SD card containing ISOs and a file load list...
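The lookup loop described above — try the next file in the list, wrap around at the end, and give up once every entry has been tried — can be sketched as follows. This is an illustration only (the function and parameter names are invented, not isostick firmware):

```python
def next_available_image(images, start, exists):
    """Return the index of the next present image after `start`, wrapping
    around the list; return None once every entry has been tried and failed."""
    n = len(images)
    for step in range(1, n + 1):         # at most one full lap around the list
        candidate = (start + step) % n   # wrap from the end back to the start
        if exists(images[candidate]):
            return candidate
    return None                          # all entries missing: exit the loop

# Example: disc2.iso is missing, so ejecting disc1 should skip straight to disc3.
present = {"disc1.iso", "disc3.iso"}
imgs = ["disc1.iso", "disc2.iso", "disc3.iso"]
print(next_available_image(imgs, 0, present.__contains__))  # prints 2
```

Note the loop bound of one full lap: that is what turns "all entries tried and failed" into a clean exit instead of an infinite wrap-around.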
#7 elegantinvention (Frequent Member • Developer • 310 posts • South Bend, Indiana, USA • United States)
Posted 14 January 2012 - 07:54 PM
Switching cards is no problem :)
isostick doesn't keep a list of ISOs, it rebuilds the list on the fly whenever it's needed.
Here's some details on the process:
Normally isostick reads a config file to determine which image to load and isosel writes to this config file if the stick is not read-only.
At present, the config file is /config/iso_filename.txt and it only contains the filename with full path of the chosen image.
When isosel is used and the isostick is read-only, forcing it to store data outside the microSD card, what it actually stores is:
• The location (LBA and byte offset) of the directory entry of the image chosen by isosel.
To reduce wear on the CPU's internal flash, a circular buffer is used. The LBA and offset are small and fixed-size, and hence easy to use in such a circular buffer. This is also designed to withstand partial (failed) writes, in the unlikely event that the isostick gets unplugged suddenly before the write is complete -- in such a case it will fall back to whatever image was previously chosen, as if the failed choice had never happened.
• A checksum of the filename in the chosen directory entry.
• A checksum of the config file's modification date/time and the image filename it contains, if any.
The modification date/time is included because, if you want to return to the same image named in the config file, comparing filenames alone is not enough: the previous and current filenames would be identical (the config file couldn't have been changed by isosel while the stick was read-only), so it is the modification date/time that changes the checksum. Conversely, it is unlikely but possible for the config file to be changed yet end up with the same modification date/time, so the filename checksum is included as a backup.
At power-on, after a USB reset, or any time a card change is detected, isostick checks if isosel had previously stored a choice in internal flash. If not, the config file is used as usual. Otherwise it first compares the checksum of the config file's modification date and image filename. If this does not match then we're done, it uses the config file which has been modified since the isosel choice. If they do match then the isosel choice may still be valid, so it compares the checksum of the chosen image filename to the one in the directory entry on the microSD card. If they match then it uses that image file. If they don't match then the microSD card is different or the file is gone/moved, either way the isosel choice isn't valid so it gives up and falls back to the config file.
In any event if the config file is not present it defaults to an empty optical drive.
No worries about the speed of rebuilding the image list, either. It's been thoroughly tested with a 7-level deep directory structure full of several dozen image files and the time required to populate the list is not noticeable. Currently the only time the list is needed is when entering isosel, so the isostick can pass the list on to isosel. Any other time the image name in the config file is used and no list is necessary.
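The selection procedure described in this post can be condensed into a small sketch. The structure and field names below are invented for illustration (the real firmware's data layout isn't shown here), but the fallback order matches the description: isosel's stored choice first, then the config file, then an empty drive:

```python
import zlib

def checksum(value):
    # Stand-in for the firmware's real checksum routine.
    return zlib.crc32(repr(value).encode())

def choose_image(flash_choice, config, card):
    """`flash_choice` holds the checksums isosel saved in internal flash (or
    None); `config` is the parsed config file (or None); `card` maps
    directory-entry locations to current filename checksums on the microSD."""
    if flash_choice is None:
        return config["image"] if config else None   # no isosel choice: use config

    # 1. Config file modified since the isosel choice? Then the choice is stale.
    cfg_sum = checksum((config["mtime"], config["image"])) if config else None
    if cfg_sum != flash_choice["config_checksum"]:
        return config["image"] if config else None

    # 2. Same card / same file? Compare the stored filename checksum with the
    #    one found at the remembered directory-entry location.
    if card.get(flash_choice["dirent_location"]) == flash_choice["filename_checksum"]:
        return flash_choice["filename"]              # isosel choice still valid

    # 3. Card changed or file gone/moved: fall back to the config file.
    return config["image"] if config else None
```

Returning None here corresponds to "defaults to an empty optical drive" in the post above.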
#8 jlee928 (Members • 3 posts • United States)
Posted 13 September 2012 - 08:29 PM
Great idea! I don't see any reason it can't be implemented. At a glance the only thing I'm not sure of is how to specify the sequence file to use. I suppose it could be specified instead of an image file. It could also be done "implicitly" by naming the files for example image1.iso, image2.iso, etc, all in the same folder, then mounting any one of them, though that may be undesirable in some situations (don't want them named sequentially like that, don't want it done implicitly but DO want them named like that, ...). Just tossing some ideas out there. Regardless, the code to actually load the next image file after an eject operation is trivial, as is parsing the sequence, whatever method is chosen. So, I've put it on the to-do list.
Maybe you can set it up with a list file, namely with an extension of *.lst or simply *.list or anything ending with *.list.txt.
To load the next disc, flip the switch to unlocked to eject and back to locked to load the next disc image? Actually, could this be a built-in feature? To mount any disc, does the isostick have to be in a locked state?
NAME
Catalyst::Authentication::Realm::Progressive - Authenticate against multiple realms
SYNOPSIS
This realm allows an application to use a single authenticate() call during which multiple realms are tried incrementally until one of them authenticates successfully.
A simple use case is a Temporary Password that looks and acts exactly like a regular password. Without changing the authentication code, you can authenticate against multiple realms.
Another use might be to support a legacy website authentication system, trying the current auth system first, and upon failure, attempting authentication against the legacy system.
EXAMPLE
If your application has multiple realms to authenticate, such as a temporary password realm and a normal realm, you can configure the progressive realm as the default, and configure it to iteratively call the temporary realm and then the normal realm.
__PACKAGE__->config(
'Plugin::Authentication' => {
default_realm => 'progressive',
realms => {
progressive => {
class => 'Progressive',
realms => [ 'temp', 'normal' ],
# Modify the authinfo passed into authenticate by merging
# these hashes into the realm's authenticate call:
authinfo_munge => {
normal => { 'type' => 'normal' },
temp => { 'type' => 'temporary' },
}
},
normal => {
credential => {
class => 'Password',
password_field => 'secret',
password_type => 'hashed',
password_hash_type => 'SHA-1',
},
store => {
class => 'DBIx::Class',
user_model => 'Schema::Person::Identity',
id_field => 'id',
}
},
temp => {
credential => {
class => 'Password',
password_field => 'secret',
password_type => 'hashed',
password_hash_type => 'SHA-1',
},
store => {
class => 'DBIx::Class',
user_model => 'Schema::Person::Identity',
id_field => 'id',
}
},
}
}
);
Then, in your controller code, to attempt authentication against both realms you just have to do a simple authenticate call:
if ( $c->authenticate({ id => $username, password => $password }) ) {
if ( $c->user->type eq 'temporary' ) {
# Force user to change password
}
}
CONFIGURATION
realms
An array reference consisting of each realm to attempt authentication against, in the order listed. If the realm does not exist, calling authenticate will die.
authinfo_munge
A hash reference keyed by realm names, with values being hash references to merge into the authinfo call that is subsequently passed into the realm's authenticate method. This is useful if your store uses the same class for each realm, separated by some other token (in the EXAMPLE's authinfo_munge section, the 'type' key maps to a column on Schema::Person::Identity that will be either 'temporary' or 'normal', ensuring the query to fetch the user finds the right Identity record for that realm).
METHODS
new ($realmname, $config, $app)
Constructs an instance of this realm.
authenticate
This method iteratively calls each realm listed in the realms configuration key, and returns as soon as one authentication attempt succeeds.
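The control flow of this pattern — try each realm in order, merging any per-realm authinfo_munge overrides into the credentials, and stop at the first success — can be illustrated language-neutrally. The following is a Python sketch of the idea, not the module's Perl source; all names and the toy realm stores are invented:

```python
def progressive_authenticate(realms, authinfo, munge=None):
    """Try each (name, realm) pair in order; return (name, user) on first success."""
    munge = munge or {}
    for name, realm in realms:
        # Merge any per-realm overrides into a copy of the caller's authinfo.
        info = {**authinfo, **munge.get(name, {})}
        user = realm(info)           # each realm is a callable returning a user or None
        if user is not None:
            return name, user        # stop at the first successful realm
    return None, None                # every realm failed

# Toy realms: a temporary-password store and a normal store.
temp_users = {("alice", "s3cret", "temporary")}
normal_users = {("bob", "hunter2", "normal")}

def make_realm(store):
    def auth(info):
        key = (info.get("id"), info.get("password"), info.get("type"))
        return info["id"] if key in store else None
    return auth

realms = [("temp", make_realm(temp_users)), ("normal", make_realm(normal_users))]
munge = {"temp": {"type": "temporary"}, "normal": {"type": "normal"}}

print(progressive_authenticate(realms, {"id": "bob", "password": "hunter2"}, munge))
# prints ('normal', 'bob')
```

The per-realm merge is what lets the same store class serve both realms, just as the authinfo_munge configuration does above.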
AUTHORS
J. Shirley <jshirley@cpan.org>
Jay Kuri <jayk@cpan.org>
COPYRIGHT & LICENSE
Copyright (c) 2008 the aforementioned authors. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Usage FAQ
Last updated: 2024-04-13 13:42:18
uncommitted xmin 1783478473 from before xid cutoff 1848062627 needs to be frozen
Solution
This error occurs during a data freeze operation. Using the details in the error message, locate the affected table and run the following on the corresponding DN node:
set xc_maintenance_mode = on;
update ud.dr_gprs_731_1_20210426 set dr_type=(select distinct dr_type from ud.dr_gprs_731_1_20210426 where xmin=1783478473) where xmin= 1783478473;
Cause
This error can be caused by distributed transaction IDs not being synchronized in time during the freeze operation (VACUUM FREEZE), or by abnormal, missing, or corrupted files under the adb_xact data directory on the affected DN node.
ERROR: attempted to local committed but global uncommitted transaction, which version is 1749254057
wait session last xid commit time out, which version is 1963201898
Solution
This error occurs when executing SQL on a CN node.
On the failing CN node, query the adb_snap_state() extension view:
select * from adb_snap_state();
-- If it does not exist, create the extension in that database: create extension if not exists adb_snap_state;
Check whether the transaction ID from the error message appears in the xid_assign: [] set of the query result. If it does, perform the following on that CN node:
Find the PID of the "postgres: snapshot receiver process" under the CN instance and send it kill -15 (note: it must be kill -15, never -9 or anything else). That completes the fix.
Normally, triggering a manual transaction sync via kill -15 resolves the problem. If it does not, continue as follows:
Log in to the GTMC node and find the gxid sender database process (say its PID is 30099).
Terminal 1:
Run the following in gdb:
gdb -p 30099
handle SIGUSR1 nostop noprint   -- type the command and press Enter
b gxidsender.c:140              -- type the command and press Enter
command 1                       -- type the command and press Enter
p GxidSender->xcnt=0            -- type the command and press Enter
c                               -- type the command and press Enter
end                             -- type the command and press Enter
Terminal 2:
Open a new terminal window and run kill -15 30099 (the gxid sender PID), then switch back to Terminal 1 and run the c command.
Terminal 1:
c                               -- type the command and press Enter
This forcibly resolves the transaction-state mismatch between the GTMC and the CN.
Cause
This error can be caused by transaction IDs not being synchronized in time between the CN node and the GC node; they can be synchronized manually.
ERROR could not import the requested snapshot
Solution
This error occurs when executing SQL on a CN node.
On the failing CN node, query the adb_snap_state() extension view:
select * from adb_snap_state();
-- If it does not exist, create the extension in that database: create extension if not exists adb_snap_state;
Check whether the gap between global_xmin and local oldest_xmin / local global_xmin in the query result is large. A large gap means transaction synchronization between this CN and the GTMC is abnormal; trigger a manual sync as follows:
Find the PID of the "postgres: snapshot receiver process" under the CN instance and send it kill -15 (note: it must be kill -15, never -9 or anything else). That completes the fix.
Cause
This error can be caused by transaction IDs not being synchronized in time between the CN node and the GC node; a sync can be triggered manually.
ERROR: attempted to local committed but global uncommitted transaction, which version is xxxx
The error message includes the following hint: you can modfiy guc parameter "waitglobaltransaction" on coordinators to wait the global transaction id committed on agtm
On each DN primary, find the snapshot-receiving process ("postgres: snapshot receiver process"), use ps to get its PID, and kill -15 it.
On the failing CN, create a table and write some data to verify the problem is fixed.
Cause
Some nodes' transaction state is inconsistent with the GTM's. Within a node, you can check a transaction's state with select adb_xact_status(xid);. Killing the process above makes the node resynchronize transaction state from the GTM.
adb:/unibss/dmp/hqy/gprs4/DR_GPRS_201812_A_P1_1.sql:18895: invalid command
Solution
Because adb bulk imports refresh output too quickly, this message is not the original error. Add the -v ON_ERROR_STOP=1 option to see the original error message.
adb -p xxx -d xxx -f xxx.sql -v ON_ERROR_STOP=1
Cause
There are many possible causes, e.g. the table being imported was never created, or a column relies on an auto-increment sequence that was never created. Rerun the adb import with the option above, confirm the original error message, and fix accordingly.
ERROR: adb_basebackup: could not receive data from WAL stream: server closed the connection unexpectedly
adb_log error: terminating walsender process due to replication timeout
Solution
1. Test whether the standby can ssh to the primary:
ssh datanode_master_ip -p ssh_port
If it still fails even though ssh works, go to step 2.
2. Test whether the standby can connect to the primary with adb:
adb -p datanode_master_port -h master_ip -d replication
If it still fails even though adb connects, go to step 3.
3. Adjust wal_sender_timeout:
Change wal_sender_timeout from the default 60s to 0 (0 means no time limit).
About wal_sender_timeout: the server terminates replication connections that stay inactive longer than this setting. This helps the sending server detect a standby crash or a network outage. Setting it to 0 disables the timeout. The parameter can only be set in postgresql.conf or on the server command line. The default is 60 seconds.
4. Other possibly relevant settings:
-- Increase wal_keep_segments from 128 to 1024
wal_keep_segments = 1024
-- Enable archive mode
archive_mode = "on"
archive_command = "rsync -a %p /data2/antdb/data/arch/dn1/%f"
Cause
There are many possible causes; work through the steps above in order.
ERROR: cannot execute INSERT in a read-only transaction
Solution
AntDB datanode nodes are read-only by default; only coordinators have read-write access. Here adb connected to a datanode rather than a coordinator; specify the coordinator port with adb's -p option. A PGPORT environment variable may also be set, in which case adb connects by default to the port it points to.
adb -p xxx -d xxx -f xxx.sql -v ON_ERROR_STOP=1
Cause
Work through the checks above in order.
LOG: checkpoints are occurring too frequently
Solution
When the database is busy, XLOG segments can be recycled and overwritten with new data before they have been applied. Before AntDB 7.2 this means checkpoint_segments is set too small; from AntDB 7.2 on, max_wal_size is set too small.
Before AntDB 7.2: increase checkpoint_segments (>= 128).
AntDB 7.2 and later: increase max_wal_size (>= 4GB).
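As a sketch, the corresponding postgresql.conf entries might look like this (illustrative values only; tune to your workload and AntDB version):

```
# AntDB 7.2 and later
max_wal_size = 4GB

# Before AntDB 7.2, use instead:
# checkpoint_segments = 128
```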
Cause
WAL segments are recycled before they have been replayed because the checkpoint-related limits above are too small for the write load.
LOG: archive command failed with exit code (X)
Solution
The disk is out of space, the archive path does not exist, the user lacks write permission, or the user's ssh/scp/rsync command failed.
Cause
Work through the checks above in order.
LOG: number of page slots needed (X) exceeds max_fsm_pages (Y)
Solution
The free space map is too small (max_fsm_pages). Increase max_fsm_pages and run VACUUM FULL at the same time.
Cause
The free space map (max_fsm_pages) is too small.
ERROR: current transaction is aborted, commands ignored until end of transaction block
Solution
Catch the exception in application code and issue a rollback manually, or drop the connection and reconnect. An example:
antdb=# begin;
BEGIN
antdb=# select * from sy01;
id
--------------------------------------
adc8775e-4539-4861-9454-ceae45c568f7
(1 row)
antdb=# select * from sy011;
ERROR: relation "sy011" does not exist
LINE 1: select * from sy011;
^
antdb=# select * from sy011;
ERROR: current transaction is aborted, commands ignored until end of transaction block
antdb=# rollback ;
ROLLBACK
antdb=# begin;
BEGIN
antdb=# select * from sy01;
id
--------------------------------------
adc8775e-4539-4861-9454-ceae45c568f7
(1 row)
antdb=# commit;
COMMIT
Cause
By design, and unlike Oracle, AntDB does not roll back automatically after an error; the user must issue one rollback manually. After the manual rollback, the connection can be reused without further errors.
ERROR: operator does not exist: character = integer
Solution
PostgreSQL 8.3 removed implicit casts between many data types, so compared values must have matching types. AntDB supports two grammar modes: the default postgres mode and an oracle-compatible mode. In oracle mode, AntDB has implemented implicit casts for many scenarios, including this one; in postgres mode the error still occurs. An example:
antdb=# \d sy02
Table "public.sy02"
Column | Type | Modifiers
--------+-----------------------+-----------
id | character varying(10) |
antdb=# set grammar TO postgres;
SET
antdb=# select count(*) from sy02 where id=123;
ERROR: operator does not exist: character varying = integer
LINE 1: select count(*) from sy02 where id=123;
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
antdb=# set grammar TO oracle;
SET
antdb=# select count(*) from sy02 where id=123;
count
-------
0
(1 row)
Cause
For Oracle compatibility, AntDB implements a large share of Oracle's implicit type-cast scenarios. Prefer trying the oracle grammar mode first.
canceling statement due to lock timeout
Solution
A long-running transaction is still holding a lock while a new transaction requests a lock on the same object; once the lock_timeout limit is reached, this error is raised. Clients should commit or roll back promptly; long transactions consume a great deal of database resources and should be avoided.
-- Check for blocked locks
select locktype,relation::regclass as relation,virtualxid as vxid,transactionid as xid,virtualtransaction vxid2,pid,mode,granted from adb_locks where granted = 'f';
-- Find long transactions running for more than 5 minutes
select datname,pid,usename,client_addr,query,backend_start,xact_start,now()-xact_start xact_duration,query_start,now()-query_start query_duration,state from adb_stat_activity where state<>$$idle$$ and now()-xact_start > interval $$5 min$$ order by xact_start;
-- Kill the long transaction. Two methods (PID is the pid value returned by the query above):
Method 1:
SELECT adb_cancel_backend(PID);
This only cancels SELECT queries; it has no effect on UPDATE, DELETE, or other DML.
Method 2:
SELECT adb_terminate_backend(PID);
This can terminate any kind of operation (select, update, delete, drop, etc.).
If adb_locks shows no lock information for the table, check each datanode for incomplete two-phase transactions left hanging there, using the view: select * from adb_prepared_xacts;. Judge from the prepared column's timestamp whether a transaction is abnormal, meaning both of the following hold: * the time shown in the prepared column is far in the past, e.g. longer ago than the expected execution time of a single statement; * the same transactions keep showing up in every query and never go away.
Generally speaking, such transactions are abnormal. Check the transaction's state on each node: select adb_xact_status(50996670);, where the argument is the gid value from adb_prepared_xacts with the leading T removed.
• If the transaction has already committed on the GTMCOORD, commit it on this node: commit prepared 'T784168121';
• If the transaction has not committed on the GTMCOORD, roll it back on this node: rollback prepared 'T784168121';
These operations must be run in the database the transaction belongs to, as given by the database column of adb_prepared_xacts.
The following statement can generate the batch operations:
select 'rollback prepared '''||gid||''';'
from adb_prepared_xacts
where to_char(prepared,'yyyy-mm-dd hh24:mi') ='2020-01-01 14:30'
and database = 'db1';
Cause
INSERT has more target columns than expressions
Solution
The target column list does not match the table's columns.
Cause
The target columns in the statement do not match the table definition (too many or too few); check carefully.
ERROR: No Datanode defined in cluster
Solution
Log in to a coordinator and run select * from pgxc_node to check whether any entries with node_type = 'D' exist. Run select pgxc_pool_reload() to reload the pgxc_node information, then rerun the query above. If there are still no node_type = 'D' entries, the cluster must be re-initialized. Alternatively, if monitor all in adbmgr shows every node as running, the pgxc_node table can also be initialized by hand, though that is more cumbersome.
Steps to re-initialize the cluster (run in adbmgr):
stop all mode fast;
clean all;
init all;
Steps to add the pgxc_node initialization entries by hand (run on every coordinator):
-- Create a coordinator node entry
create node ${node_name} with (type=coordinator, host='${node_ip}', port=${node_port}, primary=false);
-- Create the first datanode master entry (datanode slaves need no initialization)
create node ${node_name} with (type=datanode, host='${node_ip}', port=${node_port}, primary=true);
-- Create the remaining datanode master entries (datanode slaves need no initialization)
create node ${node_name} with (type=datanode, host='${node_ip}', port=${node_port}, primary=false);
**Note: this approach is crude and not recommended.**
Cause
During init all, the agtm did not initialize properly, so each node failed to obtain transaction IDs from the agtm while initializing pgxc_node, leaving that table in an abnormal state.
ERROR: Cannot create index whose evaluation cannot be enforced to remote nodes
Solution
Currently a primary key or unique index cannot be created on columns that do not include the distribution key. If a primary key is required, include the distribution key in it. An example:
antdb=# create table sy01(id int,name text) distribute by hash(name);
CREATE TABLE
antdb=# ALTER TABLE sy01 add constraint pk_sy01_1 primary key (id);
ERROR: Cannot create index whose evaluation cannot be enforced to remote nodes
antdb=# ALTER TABLE sy01 add constraint pk_sy01_1 primary key (id,name);
ALTER TABLE
antdb=# \d+ sy01
Table "public.sy01"
Column | Type | Modifiers | Storage | Stats target | Description
--------+---------+-----------+----------+--------------+-------------
id | integer | not null | plain | |
name | text | not null | extended | |
Indexes:
"pk_sy01_1" PRIMARY KEY, btree (id, name)
Distribute By: HASH(name)
Location Nodes: ALL DATANODES
Cause
cannot create foreign key whose evaluation cannot be enforced to Remote nodes
Solution
Currently foreign keys cannot be created on non-distribution-key columns. Workarounds: - Change the child table's distribution key to the foreign-key column, then create the foreign key. - If the parent table holds very little data, change it to a replicated table.
SQL to reproduce:
antdb=# create table t_parent (id int primary key,name varchar(30));
CREATE TABLE
antdb=# create table t_child (id int,name varchar(30)) distribute by hash(name);
CREATE TABLE
antdb=#
antdb=# alter table t_child
antdb-# add constraint fkey_t_child
antdb-# foreign key (id)
antdb-# references t_parent (id);
ERROR: Cannot create foreign key whose evaluation cannot be enforced to remote nodes
antdb=#
antdb=# alter table t_child distribute by hash (id);
ALTER TABLE
antdb=# alter table t_child
add constraint fkey_t_child
foreign key (id)
references t_parent (id);
ALTER TABLE
antdb=# drop table t_child;
DROP TABLE
antdb=# create table t_child (id int,name varchar(30)) distribute by hash(name);
CREATE TABLE
antdb=# alter table t_parent distribute by replication;
ALTER TABLE
antdb=# alter table t_child
antdb-# add constraint fkey_t_child
antdb-# foreign key (id)
antdb-# references t_parent (id);
ALTER TABLE
antdb=#
fe_sendauth: no password supplied
Possible error message:
WARNING: on coordinator execute "set FORCE_PARALLEL_MODE = off; SELECT adb_PAUSE_CLUSTER();" fail ERROR: error message from poolmgr:reconnect three thimes , fe_sendauth: no password supplied
Handling: check the hba settings of the coordinators in the cluster for md5 authentication entries that cover intra-cluster host IPs.
Run show hba nodename in adbmgr to view a node's hba information.
FATAL: invalid value for parameter "TimeZone": "Asia/Shanghai"
Possible error messages:
FATAL: invalid value for parameter "TimeZone": "Asia/Shanghai"
FATAL: invalid value for parameter "TimeZone": "asia/shanghai"
FATAL: invalid value for parameter "TimeZone": "utc"
Handling: 1. Check the JDBC JAVA_OPTS for the user.timezone parameter. If it is set, its value must exactly match the case of a time zone name supported by the database.
The time zones supported by the database can be listed with the SQL below; note the case of the zone names.
select * from adb_catalog.adb_timezone_names;
If JDBC does not set this parameter, continue with step 2.
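As an illustration, a JAVA_OPTS entry of this kind might look like the sketch below; the variable layout and the zone value are assumptions, and the zone string must match an entry of adb_catalog.adb_timezone_names exactly, including case:

```shell
# Hypothetical JAVA_OPTS fragment; "Asia/Shanghai" must match the
# case of a zone name returned by adb_catalog.adb_timezone_names.
JAVA_OPTS="${JAVA_OPTS:-} -Duser.timezone=Asia/Shanghai"
echo "$JAVA_OPTS"
```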
2. Check the share directory under the AntDB binary directory and confirm that the files under timezone are complete. If they are missing or incomplete, re-deploy the required files from an intact node.
ll $ADBHOME/share/postgresql/timezone
total 232
drwxr-xr-x 2 antdb antdb 4096 Apr 16 15:59 Africa
drwxr-xr-x 6 antdb antdb 4096 Apr 16 15:59 America
drwxr-xr-x 2 antdb antdb 4096 Apr 16 15:59 Antarctica
drwxr-xr-x 2 antdb antdb 25 Apr 16 15:59 Arctic
drwxr-xr-x 2 antdb antdb 4096 Apr 16 15:59 Asia
......
drwxr-xr-x 2 antdb antdb 4096 Apr 16 15:59 US
-rwxr-xr-x 1 antdb antdb 114 Apr 16 15:48 UTC
-rwxr-xr-x 1 antdb antdb 1905 Apr 16 15:48 WET
-rwxr-xr-x 1 antdb antdb 1535 Apr 16 15:48 W-SU
-rwxr-xr-x 1 antdb antdb 114 Apr 16 15:48 Zulu
cannot find the datanode master which oid is "xxxx" in pgxc_node of coordinator
Solution
Confirm the current master/slave status of the datanode, then fix the datanode information in pgxc_node accordingly:
• If the datanode information in pgxc_node on the gtm master & cn master is correct:
update pgxc_class set nodeoids='xxx yyy zzz' where nodeoids='aaa bbb ccc';
• If the datanode information in pgxc_node is incorrect:
update pgxc_node set node_name='xxxx', node_host='' where oid=xxxx;
Cause
After a successful datanode master/slave switchover, the corresponding changes to pgxc_node may have been missed.
switchover datanode slave fails
Solution
Confirm the actual switchover state of the datanode:
• The datanode has not actually switched: fix the problem according to the error message in mgr, then retry the switchover.
• The datanode has actually switched: perform the following operations.
• On the mgr node:
set command_mode = sql;
select oid,* from adb_catalog.mgr_node;
update adb_catalog.mgr_node set nodetype='xxxx', nodesync='xxxx',nodemasternameoid='xxxx' where oid=xxxx; --both the master and the slave of the datanode must be updated here
• On the gtm master & cn master:
select oid,* from pgxc_node;
update pgxc_node set node_name='xxxx', node_host='' where oid=xxxx;
Cause
After a successful datanode switchover, a subsequent validation step may fail; the switchover cannot then be retried, and the metadata must be corrected manually.
Feedback |
bizdayse {bizdays}R Documentation
Business days and current days equivalence
Description
bizdayse stands for "business days equivalent"; it returns the number of business days equivalent to a given number of current days.
Usage
bizdayse(dates, curd, cal)
Arguments
dates
the reference dates
curd
the amount of current days
cal
the calendar's name
Details
Let us suppose I have a reference date dates and I offset that date by curd current days. bizdayse returns the number of business days between the reference date and the new date offset by curd current days.
This is equivalent to
refdate <- Sys.Date()
curd <- 10
newdate <- refdate + 10 # offset refdate by 10 days
# this equals bizdayse(refdate, 10)
bizdays(refdate, newdate)
Value
An integer representing an amount of business days.
Date types accepted
The argument dates accepts Date objects and any object that returns a valid Date object when passed through as.Date, which includes all POSIX* classes and character objects with ISO formatted dates.
Recycle rule
These arguments follow the recycle rule: a vector of dates and a vector of numbers can be provided, and if the vectors differ in length the recycle rule is applied.
Examples
bizdayse("2013-01-02", 3, "Brazil/ANBIMA")
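For comparison, the same idea can be sketched outside R in Python on a plain Mon-Fri calendar (no holidays, so counts will differ from the Brazil/ANBIMA calendar; the half-open counting convention here is an assumption):

```python
from datetime import date, timedelta

def bizdayse(refdate: str, curd: int) -> int:
    """Business days (Mon-Fri, no holidays) covered when refdate is
    offset by curd current days, counting [refdate, refdate + curd)."""
    d = date.fromisoformat(refdate)
    count = 0
    for _ in range(curd):
        if d.weekday() < 5:   # Mon=0 .. Fri=4 are business days
            count += 1
        d += timedelta(days=1)
    return count

print(bizdayse("2013-01-02", 3))  # Wed, Thu, Fri -> 3
```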
[Package bizdays version 1.0.13 Index] |
FAQ
HOW MANY SESSIONS IS IT RECOMMENDED I SIGN UP FOR?
3-6 sessions are recommended for new moms, although there is no limit to how many sessions you can sign up for. Coaching can continue as long as you like or feel is useful.
WHAT HAPPENS IF I SCHEDULE AN APPOINTMENT AND MY BABY NEEDS ME?
We are mothers too, so we know the importance of you needing to be available to your child. If you need to tend to your child while participating in a session, we understand. Nursing during a session is acceptable.
WHAT IS THE DIFFERENCE BETWEEN THERAPY AND COACHING?
WHAT IF IT TURNS OUT I NEED THERAPY AND NOT COACHING?
Psychotherapy is a treatment modality used for individuals who have a diagnosed psychiatric condition and struggle with day to day functioning. The intent of therapy is to assess and heal medical illness. Coaching stems from positive psychology, and uses psychological theory and techniques to help our clients reach their full potential. Coaching operates from the perspective that clients are already functioning, but can use mentorship and support to reach higher levels of achievement and life satisfaction. At Smiling Mom we don't believe that just because you are having some difficulties with motherhood that you have mental illness.
If you need therapy, our experts will recommend other service options to you. You can choose to terminate coaching at that time or keep your coach in addition to your therapist.
WHEN IS THE BEST TIME TO SIGN UP?
You can sign up any time, but it is highly recommended that you sign up during your third trimester. This way, signing up for sessions will be one less thing for you to do after birth (and we know there is a ton). Another reason is that you will have a stronger bond with your coach if you start the relationship prior to your birth experience, as your coach will know you better and have a greater sense of how you are developing into your new role.
SMILINGMOM.COM
©2017 BY SMILING MOM |
Primo ramdisk test
Windows ALREADY caches a great deal, which is why it's such a memory hog. For example, Windows caches directories once they've been accessed; no need to read the filenames and sizes, etc., again when you go back to them. Files being written to are cached and only updated when the buffer gets full, to prevent having to access the same parts of the disk over and over (which would lead to latency and "drive chattering"; this has been somewhat moved to hardware by NCQ (Native Command Queueing), moving the buffer over to the hard drive, but Windows still does it.)
Windows has been doing this since the MS-DOS days - look up "SmartDrv."
The Free Edition (limited to 32-bit Windows 2000 / XP / 2003) is able to use 'invisible' RAM in the 3-4 GB 'gap' (if your motherboard has an i945 or later chipset) and is also capable of 'saving to hard disk on power down' (so, in theory, it allows you to use the RAM disk for the Windows XP swap file and survive a 'Hibernate'). While the free edition allows multiple RAM disk drives to be set up, the total of all drives is limited to 4096 MB. The current version, VSuite Ramdisk II, has been rebranded as 'Primo Ramdisk', all versions of which are chargeable. [23]
pre-oxidation
Depending on temperature and distance, pre-oxidation is carried out either at the water intake or at the treatment plant site.
aeration
This treatment step is designed to compensate for the raw water’s lack of oxygen or to remove undesirable gases (such as H2S or CO2).
When this stage is carried out in the open air, the water’s increased oxygen content will be in equilibrium with the carbon dioxide removed. In the case of waters having average or high mineral content, this aspect must be taken into consideration, because any loss of “balancing” CO2 can result in scaling conditions. This may necessitate the use of pressurised aeration, which allows an increase in O2 concentration without any corresponding change in the CO2 concentration.
chemical oxidation
see also section oxidation and reduction and chapter oxidation-disinfection
pre-oxidation using chlorine
As discussed, in the presence of organic matter, pre-chlorination will be accompanied by the formation of unwanted compounds; therefore, as a rule, it is better to place the chlorination point as far down the treatment line as possible, ideally following the maximum possible elimination of organic precursors present in the water.
Pre-chlorination can only be maintained when the water does not contain precursors in large quantities; pre-chlorination is used when there is a danger of algae developing in the clarification structures, or for the elimination of NH4+ ions or to oxidise ferrous iron to ferric iron.
Pre-chlorination can also be applied at an intermediate stage (e.g. to the clarified water) to prevent biological fouling (bacteria, algae, zooplankton…) on filters.
pre-oxidation using chloramines
When the raw water does not contain any ammonia, consideration can be given to the use of chloramines that have been previously produced through the reaction of chlorine with ammonia or ammonium sulphate.
pre-oxidation using chlorine dioxide
This technique was developed as an attempt to replace chlorine during pre-oxidation. In effect, chlorine dioxide, although it will not oxidise ammonium, will not result in the formation of THM either. However, when it reacts with NOM, it releases CℓO2 (chlorite) ions that subsequently have to be eliminated. As the result of new standards (decree 2003-461), the use of chlorine dioxide in pre-oxidation applications has tended to disappear.
pre-oxidation using potassium permanganate
This oxidant is mainly used for the treatment of raw waters containing manganese, which it precipitates according to the following reaction:
3 Mn2+ + 2 MnO4– + 2 H2O → 5 MnO2 + 4 H+
This reaction is promoted by a high pH, accelerating the reaction kinetics, hence the need to control the pH (> 7.0) and to allow for a sufficient contact time (5-10 minutes).
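As a rough illustration, assuming the classical stoichiometry 3 Mn2+ + 2 MnO4– + 2 H2O → 5 MnO2 + 4 H+, the theoretical permanganate demand can be computed as follows (a sketch that ignores any permanganate demand exerted by organic matter):

```python
# Stoichiometric KMnO4 demand for Mn(II) removal, assuming
# 3 Mn2+ + 2 MnO4- + 2 H2O -> 5 MnO2 + 4 H+ and ignoring any
# permanganate consumed by organic matter.
M_KMNO4 = 158.03  # molar mass of KMnO4, g/mol
M_MN = 54.94      # molar mass of Mn, g/mol

def kmno4_dose(mn_mg_per_l: float) -> float:
    """mg/L of KMnO4 per the 2 mol KMnO4 : 3 mol Mn stoichiometry."""
    return mn_mg_per_l * (2 * M_KMNO4) / (3 * M_MN)

print(round(kmno4_dose(1.0), 2))  # about 1.92 mg KMnO4 per mg Mn(II)
```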
In the case of waters having low mineral content and containing high levels of dissolved organic matter, where a very low coagulation pH (5.5 to 6) is required, preference should be given to locating the KMnO4 injection point between the settling tank and the filter, after adjusting the pH to a favourable level (7.0 to 7.5).
Potassium permanganate is also occasionally used for the partial oxidation of some OM, to remove unpleasant tastes or to combat the development of algae within the clarification equipment.
Using KMnO4 for pre-oxidation requires that the amount injected is meticulously controlled; as any excess dosage can impart a pink coloration to the treated water, caused by the presence of soluble Mn(VII).
pre-oxidation using ozone
When used for the pre-treatment of raw surface water, ozone, like CℓO2, prevents the formation of THM and other chlorinated derivatives. It will not oxidise ammonium but does create conditions that promote subsequent nitrification. This helps to explain why, even if ozone is less effective than chlorine in this application, it is in fact the pre-oxidant that is the most widely used in clarification systems because it has a number of beneficial effects:
• improves clarification effectiveness (turbidity, colour, residual micro-algae, OM, THM precursors…);
• in some cases, reduces the coagulant demand;
• prepares the water for subsequent biological treatment.
However, there is a fairly strict optimum dosage (approximately 1 mg · L–1) and contact time (approximately 1 minute); beyond these levels, the floc will "redisperse".
The use of pre-ozonation under these conditions requires two comments:
• pre-ozonation will not guarantee clean structures due to the lack of residual oxidant, thereby requiring clarifiers and filters to be covered;
• with polluted water, a main oxidation step is appropriate (producing residual O3) as part of the subsequent treatment in order to ensure complete oxidation of any compounds formed during the pre-ozonation stage.
The benefits of pre-ozonation will be illustrated later on with examples of treatment using coagulation through a filter (see end of clarification) or a complete clarification + polishing treatment (see section polishing: removal of organic matter and micro-pollutants).
What is Computer Network | Computer Network Information!
What is a Computer Network? All of you must have used a computer, but have you ever wondered how computers are connected in a network, what a computer network is, or what types of computer networks exist?
If questions like these trouble you, there is no need to worry, because in this article (What is Computer Network) we are going to learn about it. If you want to know more about networks in general, you can read What is Network in Hindi. You will also learn what a computer network is and what its different types are.
Computer networks may sound a bit complex, but they actually are not. Most people do not properly understand computer networks and their types, which is why they find them difficult. So I thought I would share some information about networking with you so that you can get to know it too. Let's start without delay.
Computer Network Information
What is a Computer Network?
A computer network is a group of computers that are linked with each other; these links help them communicate and interconnect so that they can share resources, data, and applications.
Computer networking is the practice of interfacing two or more computing devices with each other for sharing data. Computer networks are built with a combination of hardware and software.
Definition of Computer Network
Computer Network Information: the definition of a computer network is easy to understand; it is also often described as a data network. In a data network, devices must be interconnected in order to share data with each other. In the same way, computing devices in a computer network are connected with each other so that they can transmit and receive data.
Network devices use many types of protocols and algorithms to perform this communication and to specify how endpoints transmit and receive their data.
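As a minimal sketch of two endpoints transmitting and receiving data, the following Python program runs a one-shot TCP echo server and a client on the local machine; the port is chosen by the OS, and the "echo:" reply format is just an illustrative assumption:

```python
import socket, threading

def echo_roundtrip(message: bytes) -> str:
    """Start a one-shot echo server on a free local port, connect to it
    as a client, transmit `message`, and return the server's reply."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)          # receive from the peer
            conn.sendall(b"echo: " + data)  # transmit a reply
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(message)
        return cli.recv(1024).decode()

print(echo_roundtrip(b"hello"))  # -> echo: hello
```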
Types of Computer Network
Computer networks are categorized according to their size and geographic scope. A computer network is mainly divided into four types:
• LAN (Local Area Network)
• PAN (Personal Area Network)
• MAN (Metropolitan Area Network)
• WAN (Wide Area Network)
Let us now look at each type of computer network.
LAN (Local Area Network)
• A Local Area Network is a group of computers connected with each other within a small area such as a building or office.
• Most LANs connect two or more personal computers through a communication medium such as twisted-pair or coaxial cable.
• LANs are inexpensive because they use low-cost hardware such as hubs, network adapters, and Ethernet cables.
• In a Local Area Network, data transfer is much faster than in the other network types.
• A Local Area Network also gives you more security.
Personal Area Network
In a Personal Area Network, the network is arranged around an individual person, with a typical range of about 10 meters.
As the name suggests, a Personal Area Network is used for personal work.
It was Thomas Zimmerman who, through his research, first brought the idea of the Personal Area Network to the world.
A Personal Area Network covers an area of about 30 feet.
Personal devices used to build a personal area network include laptops, mobile phones, media players, and play stations.
Types of Personal Area Network:
There are mainly two types of PAN or Personal Area Network.
• Wired Personal Area Network
• Wireless Personal Area Network
Wireless Personal Area Network
Computer Network Information: a Wireless Personal Area Network is built simply by using wireless technologies such as Wi-Fi and Bluetooth. It is a very short-range network.
Wired Personal Area Network:
USB is used to create a Wired Personal Area Network.
Let us now consider some examples of Personal Area Network:
Body Area Network: a network that moves along with a person. For example, a mobile network that is always with you and goes where you go. Suppose a person establishes a network connection and then creates a connection with another device to share information.
Offline Network: an offline network can easily be created within a house, which is why it is also called a home network. A home network is designed to integrate devices such as printers, computers, and televisions, but they are not connected to the internet.
Small Home Office: used to connect a variety of devices to the internet and to a corporate network over a VPN.
Metropolitan Area Network
• A Metropolitan Area Network covers a large geographic area by interconnecting different LANs to form a larger network.
• Government agencies mostly use a MAN to connect with citizens and private industries.
• In a MAN, many LANs are connected to each other through a telephone exchange line.
• The protocols most commonly used in a MAN are RS-232, Frame Relay, ATM, ISDN, OC-3, ADSL, etc.
• Its range is greater than that of a Local Area Network (LAN).
What are the uses of the Metropolitan Area Network?
A MAN is used for communication between banks within a city.
MANs are also used in airline reservation systems.
They are also used to connect colleges within a city.
They are also used for military communication.
Wide Area Network: Computer Network Information
Wide Area Network is a network that is spread over a very large geographical area such as a state or country.
As the name suggests, the Wide Area Network is much larger than the LAN.
A Wide Area Network is not limited to a single location; it spans a very large geographical area through telephone lines, fiber optic cables, and satellite links.
You can consider the internet as the largest WAN in the whole world.
Most of this Wide Area Network is used in the fields of business, government, and education.
Examples of a Wide Area Network:
Mobile Broadband: a 4G network covers a whole country.
Internet Connection: a telecom company provides internet services to customers whose homes are connected by fiber cable.
Private network: a bank runs a private network that connects all its 44 offices. This type of network is built over telephone leased lines provided by a telecom company.
What are the Advantages of the Wide Area Network?
Let us now know about the advantages of WAN:
Geographical area: a Wide Area Network covers a very large geographical area.
If your branch office is located in another city, it can be connected through a WAN using a leased line.
Centralized data: in a WAN, data is mostly centralized, so there is no need to buy separate email, file, or backup servers.
Get updated files: software companies work on live servers, so programmers get updated files within seconds.
Exchange messages: In a WAN network, messages are transmitted very quickly. Therefore web applications such as Facebook, Whatsapp, and Skype help you communicate with your friends.
Sharing of software and resources: in a WAN, we can easily share software and other resources such as hard drives and RAM.
For Global business: We can easily do business globally.
High bandwidth: WANs provide more bandwidth, and the higher the bandwidth, the more productivity for your company.
Disadvantages of Wide Area Network:
Let us now know about the disadvantages of Wide Area Network:
Security issue: More security issues are seen in WAN networks than in LAN and MAN networks.
It requires firewall and antivirus software: since data is transferred over the internet, there is always a risk of it being altered or hacked, so it is good practice to use a solid firewall and antivirus software.
High setup cost: the installation cost of a WAN is very high, since it requires more routers and switches.
Troubleshooting problems: since a WAN covers a very large area, fixing a problem when one arises is very difficult.
installing a minecraft server on a vps
Getting started/Setting up a VPS for MC (Linux Ubuntu)
Well, in this post I will try to walk you through the basics of a VPS: what it is about, what it is, how it works, and how we can install a Minecraft server on it. This guide is based on the Ubuntu distribution of Linux; I 100% recommend using this distribution, since it is one of the easiest to use and most secure.
Worth noting: a VPS is a good “machine” to start using once we need more stability for our server, but its stability is not 100% guaranteed and it will not support many users; if you can move up, a Dedicated Server would be better.
What is a VPS?
• It is a Virtual Private Server. It is hosted inside a main machine (a Dedicated Server), from which hardware partitions (CPU, RAM, disk) are allocated to the VPS; that is, a VPS is not a standalone machine, it is like a machine inside another.
How do we install a Minecraft server on the VPS?
• I will start by explaining. First: once you have purchased your VPS, your hosting provider will send you (normally by email) an IP, a password, and a username. Once you have noted them down, install the Bitvise SSH Client program, open it, and do the following. To see how to connect to the server, watch this video https://www.youtube.com/watch?v=M-VBiBxR51Q . To finish, click Login and two windows will open: one is the Terminal (system console) and the other is the file area (SFTP).
• Once logged into the system, install Java by running the following commands in the terminal (please follow this sequence):
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
On the last command you will in some cases be asked to accept Oracle's terms and conditions, and the system will ask whether you want to install the package; answer yes to everything.
• By default, every Ubuntu VPS ships with a program called "screen", whose job is to keep a console session open; in this case we will use it to keep the server console running.
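As a quick command reference (a sketch; the session name "survival" is simply the example used later in this guide), these are the screen commands we will rely on:

```shell
screen -S survival ./iniciar.sh   # start a named session running the startup script
# press CTRL+A, then D, to detach and leave the session running in the background
screen -ls                        # list the sessions currently running
screen -r survival                # reattach to the named session
```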
• Next we will create the folder for the server we are setting up (in this case, Survival) in the SFTP window mentioned above. Inside that window you will see your PC's files on the left and the VPS files on the right; by default you start in the /root/ folder. Change /root/ to /home/ and press Enter; this takes you to the home folder (the safest place to keep our files). Once there, create a folder called Survival (the server we are using as an example): right-click on the empty space and choose Create Folder. Enter that folder, and now we will upload the Spigot build we are going to use (or CraftBukkit, Cauldron, Bungee, whichever you prefer). You can download whichever Spigot version you want from https://getbukkit.org/download/spigot ; drag it into that folder and it will start uploading to the server.
• Once all of that is done, we need to create the script that will start our server. Inside that same folder, create a file called iniciar.sh: right-click inside the folder, choose Create File, and type iniciar.sh. Open that file with Notepad++ (I assume you have this text editor) and write the following lines of code in it:
#!/bin/sh
# Restart loop: relaunch the server automatically every time it stops.
while true
do
java -Xmx1G -Xms1G -jar Spigot.jar
echo "The server will restart automatically; press CTRL + C to cancel and shut down"
echo "Restarting:"
for i in 5 4 3 2 1
do
echo "$i..."
sleep 1
done
echo "Started!"
done
• Let me explain what you will need to edit in this file. You can see two numbers, one in -Xmx1G and one in -Xms1G; this is where I have set the maximum amount of RAM the server may consume. In this case, since it is 1G, it can only use 1 gigabyte; you can change it as you like. In fact, if you only want it to use megabytes, say 256 MB, change the G to an M. Where it says Spigot.jar, put the name of the core you downloaded, including the .jar extension.
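As a sketch of that edit (a config fragment, not meant to be run on its own; the jar name Spigot.jar is just the example from above — use whatever your core is actually called), the launch line inside iniciar.sh could become either of these:

```shell
java -Xmx512M -Xms512M -jar Spigot.jar   # limit the server to 512 MB of RAM (M instead of G)
java -Xmx2G -Xms2G -jar Spigot.jar       # limit the server to 2 GB of RAM
```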
• Once all of this is done, go back to the server console and run the following commands:
cd /home/Survival (this command takes us to the server folder)
chmod +x iniciar.sh (this command gives iniciar.sh permission to be executed)
screen -S survival ./iniciar.sh (this command starts the server console and the server itself; you can replace "survival" with any name you like)
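To see why the chmod step is needed, here is a minimal local sketch using a throwaway script (test_start.sh is a hypothetical name, not part of the guide): a freshly created file is not executable until chmod +x is applied to it.

```shell
# Create a tiny stand-in start script, mark it executable, and run it once.
cat > test_start.sh <<'EOF'
#!/bin/sh
echo "starting server..."
EOF
chmod +x test_start.sh   # without this, ./test_start.sh fails with "Permission denied"
./test_start.sh          # prints: starting server...
```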
• From that point on the server will start, and then we do the usual things we all know: accept the EULA, configure server.properties, drop your plugins into their folders, and so on.
• If you want to get into your server console, run this command:
screen -r survival (replace survival with the name you used in the command that started the console)
• To start the console again whenever you need it, repeat the commands above except for the chmod.
I hope this guide has been very helpful. Best regards.
costa/costal2
REGULATION
Control of antagonistic components of the Hedgehog signaling pathway by microRNAs in Drosophila
Hedgehog (Hh) signaling is critical for many developmental processes and for the genesis of diverse cancers. Hh signaling comprises a series of negative regulatory steps, from Hh reception to gene transcription output. Stability of antagonistic regulatory proteins, including the coreceptor Smoothened (Smo), the kinesin-like Costal-2 (Cos2), and the kinase Fused (Fu), is affected by Hh signaling activation. This study shows that the level of these three proteins is also regulated by a microRNA cluster. Indeed, the overexpression of this cluster and the resulting microRNA regulation of the 3'-UTRs of smo, cos2, and fu mRNA decreases the levels of the three proteins and activates the pathway. Further, the loss of the microRNA cluster or of Dicer function modifies the 3'-UTR regulation of smo and cos2 mRNA, confirming that the mRNAs encoding the different Hh components are physiological targets of microRNAs. Nevertheless, neither an absence of the microRNA cluster nor a loss of Dicer activity creates an hh-like phenotype, possibly due to dose compensation between the different antagonistic targets. This study reveals that a single signaling pathway can be targeted at multiple levels by the same microRNAs (Friggi-Grelin, 2009).
cos2, fu, and smo mRNA can be regulated by a cluster of microRNAs, including miR-12 and miR-283, in the Drosophila wing disc. The overexpression of this cluster decreases the levels of Smo, Cos2, and Fu proteins and activates the Hh pathway, as evidenced by the induction of dpp expression in the wing imaginal discs and by the adult wing outgrowth. The experiments presented in this study with the 3'-UTR sensors of smo, fu, or cos2 support direct binding. To firmly establish a direct effect, further experiments, such as direct biochemical binding assays or compensatory mutations between the 3'-UTRs and the miRNAs, will be necessary (Friggi-Grelin, 2009).
Programs created for genome-wide prediction of Drosophila miRNA targets provide lists of presumptive miR-12- and miR-283-regulated genes. In addition to the current in vivo validations, miR-12 binding sites are predicted on the 3'-UTR of ci, while no sites were found on the 3'-UTR of the Su(fu) gene. No decrease was observed in either of these two proteins in the microRNA cluster overexpressing clones. It is interesting to note that Su(fu) mRNA, encoding another negative regulator of Hedgehog signaling, has been shown to be targeted by miR-214 in zebrafish. Absence of miR-214 results in the reduction of muscle cell types, the specification of which is dependent on Hh pathway activity. Nevertheless, the current study shows that in Drosophila wing discs an absence of microRNA does not modify the Hh pathway, raising the question of what the role of microRNAs in Drosophila Hh pathway regulation is (Friggi-Grelin, 2009).
Could the microRNA overexpression phenotype that was identified be artifactual and simply the result of forced overexpression of the microRNA cluster in a tissue in which it should be silent? It is thought that the answer is no, because Northern blot analysis and the increase of the miR-sensor in the dcr-1 mutant clones showed that the microRNA cluster is indeed expressed in this tissue. This suggests that the cluster likely has a role in this tissue in which it is normally present. Is the microRNA cluster regulation of the cos2 and smo 3'-UTRs physiological? It is thought so, because an absence of either the microRNA cluster or of Dicer in the wing imaginal disc induces an increase in the Cos2- and Smo-sensor lines. This signifies that the microRNAs expressed from the cluster regulate the cos2 and smo 3'-UTRs and thus display some functionality in the disc during larval development. Altogether, these data clearly show that an artifactual situation in which the microRNA cluster is expressed in a tissue in which it should not be present has not been created. Overexpression of the miRs was also tested on embryonic patterning, but it did not lead to any phenotype, suggesting that the miR cluster regulation of the Hh pathway is specific to larval tissues (Friggi-Grelin, 2009).
As miR-12 and miR-283, and likely redundant miRs, are present in every cell of the wing disc, one possibility is that their normal roles are to dampen down the levels of Hh pathway components, particularly Cos2 and Smo, to prevent the accidental activation or downregulation of the pathway. Indeed, expressing both the microRNA cluster and its targets in the same tissue could provide a means of 'buffering stochastic fluctuations' in mRNA levels or in protein translation rates within the Hh signaling pathway, as has been proposed for other processes (Friggi-Grelin, 2009).
The data possibly indicate that miRNAs are able to regulate two antagonistic components of the pathway, Cos2 and Smo. It has been shown that the stability of these two proteins is 'interdependent': an increased level of Cos2 in the wing imaginal disc lowers the level of Smo, and, in the opposite direction, increased Smo decreases the level of Cos2. It is proposed that the interregulation of Cos2/Smo levels is independent of their relative activities because Cos2 effect on Smo levels is observed in posterior cells in which Cos2 activity is strongly inhibited by the constitutive activation of the pathway. Therefore, eliminating the miRNA-mediated inhibition of Cos2 and Smo in Delta3miR or dcr-1 mutant cells likely initially increased the levels of both proteins, but then the resulting higher levels of each protein presumably downregulated the other; the net variation of Cos2 and Smo levels would therefore be null. This hypothesis is favored because the independent Smo- and Cos2-sensor lines, which are unaffected by this Cos2/Smo interregulation, showed increased levels of GFP staining in Delta3miR and dcr-1 mutant animals. This suggests that the levels of both Cos2 and Smo are increased in the mutant animals but, because of the downregulation of each protein by the other, no ultimate alterations in the levels of the proteins are observed. If so, an Hh phenotype would not be expected to be seen in the miR mutant (Friggi-Grelin, 2009).
The screen created a situation in which the expression of the microRNA cluster is deregulated, ultimately destabilizing Cos2 protein levels and thereby activating Ci and Hh target gene expression. Importantly, a similar situation might be encountered during tumoral development. Aberrant Hh signaling activity is known to trigger the development of diverse cancers. While several of these tumors have been linked to mutations in Hh signaling components, not all of them have, leaving open the possibility that they are caused by other factors such as microRNA misexpression. Interestingly, more than half of the known human microRNA genes are located near chromosomal breakpoints associated with cancer, and in some documented cases the microRNAs are amplified, leading to overexpression. Some upregulated microRNAs are possibly able to bind mRNAs encoding negative regulators of Hh signaling, such as Su(fu) or Ptc, and could thus induce the misactivation of the Hh pathway, as is observed in some cancers. Therefore, a fine analysis of microRNA expression levels and the levels of known Hh components should be considered in studies of Hh pathway-related cancers (Friggi-Grelin, 2009).
What does this study add to the current knowledge about miRNA regulation? The study shows that a cluster of three microRNAs can target several antagonistic components of the same pathway in vivo. This is novel and unexpected. This raises the question of how to interpret the miRNA expression signatures observed in human tumors. Indeed, as stated above, it has been proposed that miRNAs are differentially expressed in human cancers and contribute to cancer development. The working hypothesis in the cancer/miRNAs field is that key cancer genes are regulated by aberrant expression of miRNAs. The identification of a specific miRNA:mRNA interactor pair is generally accepted as being of biological importance when the mRNA encodes a tumor suppressor or an oncogene whose expression is modified in the tumor. This study shows indirectly that this is an oversimplified view, because identifying an oncogene or tumor suppressor as a target of a miRNA may not provide a full explanation for tumor development if the same miRNA hits other antagonistic components of the same pathway that nullify the effect of the identified miRNA:mRNA interactor pair (Friggi-Grelin, 2009).
Hedgehog signaling regulates the ciliary transport of odorant receptors in Drosophila
Hedgehog (Hh) signaling is a key regulatory pathway during development and also has a functional role in mature neurons. This study shows that Hh signaling regulates the odor response in adult Drosophila olfactory sensory neurons (OSNs). This is achieved by regulating odorant receptor (OR) transport to and within the primary cilium in OSN neurons. Regulation relies on ciliary localization of the Hh signal transducer Smoothened (Smo). This study further demonstrates that the Hh- and Smo-dependent regulation of the kinesin-like protein Cos2 acts in parallel to the intraflagellar transport system (IFT) to localize ORs within the cilium compartment. These findings expand knowledge of Hh signaling to encompass chemosensory modulation and receptor trafficking (Sanchez, 2016).
This study demonstrates that the Hh pathway modulates the magnitude of the odorant response in adult Drosophila. The results show that the Hh pathway determines the level of the odorant response because it regulates the response in both the positive and negative directions. Loss of Ptc function increases the odorant response and the risk for long sustained responses, which shows that the Hh pathway limits the response potential of the OSNs and is crucial for maintaining the response at a physiological level. In addition, it was shown that the OSNs produce Hh protein, which regulates OR localization, which is interesting because autoregulation is one of the prerequisites for an adaptive mechanism. It was further shown that Hh signaling regulates the responses of OSNs that express different ORs, which demonstrates that the regulation is independent of OSN class and suggests that Hh signaling is a general regulator of the odorant response. It has been shown previously that Hh tunes nociceptive responses in both vertebrates and Drosophila (Babcock, 2011). It is not yet understood how Hh regulates the level of nociception. However, the regulation is upstream of the nociceptive receptors, which indicates that the Hh pathway is a general regulator of receptor transport and the level of sensory signaling (Sanchez, 2016).
The results show that OSN cilia have two separate OR transport systems, the Hh-regulated Cos2 and the intraflagellar transport complex B (IFT-B) together with the kinesin II system. The results show that Cos2 is required for OR transport to or within the distal cilium domain and suggest that the IFT system regulates the inflow to the cilium compartment. The two transport systems also are required for Smo cilium localization (Kuzhandaivel, 2014). This spatially divided transport of one cargo is similar to the manner in which Kif3a and Kif17 regulate distal and proximal transport in primary cilia in vertebrates. However, Cos2 is not required for the distal location of Orco or tubulin (Kuzhandaivel, 2014), indicating that, for some cargos, the IFT system functions in parallel to Cos2 (Sanchez, 2016).
Interestingly, the vertebrate Cos2 homolog Kif7 organizes the distal compartment of vertebrate primary cilia (He, 2014). Similar to the current results, Kif7 does so without affecting the IFT system, and its localization to the cilia is dependent on Hh signaling. However, the Kif7 kinesin motor function has been questioned (He, 2014). Therefore, it will be interesting to analyze whether Kif7-mediated transport of ORs and other transmembrane proteins occurs within the primary cilium compartment and whether the ciliary transport of ORs is also regulated by Hh and Smo signaling in vertebrates. To conclude, these results place the already well-studied Hh signaling pathway in the post-developmental adult nervous system and also provide an exciting putative role for Hh as a general regulator of receptor transport to and within cilia (Sanchez, 2016).
Protein Interactions
The Costa protein can be coimmunoprecipitated with antibodies to Fused. Both Fused and a hyperphosphorylated isoform of Fused designated FU-P are found in this complex. FU-P predominates over Fused when precipitates are prepared with Cos2 antisera. Antisera against either Fused or Cos2 precipitate Cubitus interruptus protein as well. Fractionated extracts of cultured cells have two complexes larger than Fused itself, a population of about 40 million Da, and a population of greater than 700,000 Da. The 700 kDa fraction is by far the most abundant. Fused, Cos2 and Ci are enriched in microtubules formed from repolymerized tubulin. Binding of Cos2 and Fused to microtubules is barely detectable in Hedgehog-treated cultured cells. These findings support the hypothesis that signaling from Hh releases the complexes from microtubules, which would in turn facilitate translocation of Ci to the nucleus (Robbins, 1997).
Labeling with radioactive phosphate reveals that Fused and Cos2 are phosphorylated both in cultured S2 cells and in Hedgehog-treated S2 cells. The phosphorylation of both Fused and Cos2 is on serine. Cos2 coimmunoprecipitates with kinase-dead Fused mutant proteins. Thus, functional Fused kinase is probably not necessary for Fused and Cos2 to associate. There is no evidence for binding of Cos2 to the products of truncated Fused proteins lacking the C-terminal domain of Fused (Robbins, 1997).
Cos2 is cytoplasmic and binds microtubules in a taxol-dependent, ATP-insensitive manner, whereas kinesin heavy chain binds microtubules in a taxol-dependent, ATP-sensitive manner. Ci associates with Cos2 in a large protein complex, suggesting that Cos2 directly controls the activity of Ci. This association does not involve microtubules. Elevated cytoplasmic Ci staining is seen in cos2 clones in the anterior compartment. The level of Ci staining is independent of the clone's distance from the A/P border. Nuclear Ci is not evident in the clones (Sisson, 1997).
Hedgehog (Hh) signal transduction requires a large cytoplasmic multi-protein complex that binds microtubules in an Hh-dependent manner. Three members of this complex, Costal2 (Cos2), Fused (Fu), and Cubitus interruptus (Ci), bind each other directly to form a trimeric complex. This trimeric signaling complex exists in Drosophila lacking Suppressor of Fused [Su(fu)], an extragenic suppressor of fu, indicating that Su(fu) is not required for the formation, or apparently function, of the Hh signaling complex. However, Su(fu), although not a requisite component of this complex, does form a tetrameric complex with Fu, Cos2, and Ci. This additional Su(fu)-containing Hh signaling complex does not appear to be enriched on microtubules. Additionally, it has been demonstrated that, in response to Hh, Ci accumulates in the nucleus without its various cytoplasmic binding partners, including Su(fu). A model is discussed in which Su(fu) and Cos2 each bind to Fu and Ci to exert some redundant effect on Ci such as cytoplasmic retention. This model is consistent with genetic data demonstrating that Su(fu) is not required for Hh signal transduction proper and with the elaborate genetic interactions observed among Su(fu), fu, cos2, and ci (Stegman, 2000).
Much of the understanding of the Hedgehog signaling pathway comes from Drosophila, where a gradient of Hh signaling regulates the function of the transcription factor Cubitus interruptus at three levels: protein stabilization, nuclear import, and activation. Regulation of Ci occurs in a cytoplasmic complex containing Ci, the kinesin-like protein Costal-2 (Cos2), the serine-threonine kinase Fused (Fu), and the Suppressor of Fused [Su(fu)] protein. The mechanisms by which this complex responds to different levels of Hh signaling and establishes distinct domains of gene expression are not fully understood. By sequentially mutating components from the Ci signaling complex, their roles in each aspect of Ci regulation can be analyzed. The Cos2-Ci core complex is able to mediate Hh-regulated activation of Ci but is insufficient to regulate nuclear import and cleavage. Addition of Su(fu) to the core complex blocks nuclear import while the addition of Fu restores Hh regulation of Ci nuclear import and proteolytic cleavage. Fu participates in two partially redundant pathways to regulate Ci nuclear import: the kinase function plays a positive role by inhibiting Su(fu), and the regulatory domain plays a negative role in conjunction with Cos2 (Lefers, 2001).
In fu94;Su(fu)LP mutants, it is unlikely that either Fu or Su(fu) is present in the complex: Fu protein from class II mutants fails to immunoprecipitate Cos2, and Su(fu) protein cannot be detected in Su(fu)LP mutants. In this mutant combination, the processing of Ci is not Hh regulated, and this results in uniform levels of Ci protein across the entire anterior compartment. Hh regulation of Ci nuclear import is also lost, and the Ci protein shuttles into and out of the nucleus throughout the anterior compartment. As a consequence, dpp is expressed at modest levels in all anterior compartment cells. Previous studies have shown that Cos2 is required for Ci sequestration in the cytoplasm and its proteolytic processing, but clearly Cos2 is not sufficient for all aspects of Ci regulation. In the absence of the Fu regulatory domain and Su(fu) from the complex, all anterior compartment cells behave as if they are receiving at least modest levels of Hh signaling (Lefers, 2001).
Addition of Su(fu) to the Ci-Cos2 complex dramatically reduces the rate of Ci release from the complex as Ci does not accumulate in the nucleus in fu94 mutant discs that have been treated with leptomycin B (LMB), which blocks Ci nuclear export. No regulation of Ci nuclear import is observed, and processing of Ci into Ci75 is still inhibited. The block in Ci nuclear import by Su(fu) appears to be dependent on the presence of Cos2 as clones double mutant for fumH63;cos21 release Ci independent of Hh signaling (Lefers, 2001).
Addition of Fu to the Ci-Cos2 complex essentially restores Hh regulation of Ci nuclear import and the processing of Ci into Ci75. Therefore, in the absence of Hh signaling, Fu is required for both the cleavage of Ci into Ci75 and its retention in the cytoplasm. The major consequence of removing Su(fu) from the complex is a significant decrease in the overall levels of both Ci and Ci75. This decrease does not appear to significantly compromise Hh regulation (Lefers, 2001).
Although Cos2 provides an important tethering force, it apparently cannot hold Ci in the cytoplasm on its own. Addition of either the Fu regulatory domain or Su(fu) is sufficient to restore effective tethering. The requirement for Fu in Ci tethering is a new finding, since it has been shown that Fu plays a positive role in Ci nuclear entry by inhibiting Su(fu) via its kinase domain. Further examination of different classes of fu alleles demonstrates that Fu participates in Ci tethering through its regulatory domain. When a Fu class I mutant protein (kinase domain mutations) is added to the Ci-Cos2 core complex [fu1,Su(fu)LP], regulation of Ci nuclear entry is almost wild type. In contrast, when a Fu class II mutant protein (regulatory domain mutations) is present [fu94;Su(fu)LP or fuRX15;Su(fu)LP], the complex fails to tether Ci in the absence of Hh signaling. It has been shown that Fu interacts with Cos2 through its regulatory domain, and the proteins made by fu class II alleles fail to immunoprecipitate with Cos2. These results suggest that the interaction between Cos2 and the Fu regulatory domain is important for Cos2 to tether Ci in the absence of Su(fu) activity. This Cos2-Fu interaction may also be important for targeting Fu kinase regulation of Su(fu). Both fuRX15 and fu94, which delete different extents of the regulatory domain, might be expected to retain kinase function, yet Hh regulation of Su(fu) appears to have been lost and Ci is not released from the cytoplasm in either of these mutants. The simplest explanation is that by preventing Fu interaction with Cos2, Fu cannot perform its structural role in the complex nor can it regulate Su(fu). Thus, Fu plays two opposite roles in the regulation of Ci nuclear entry. 
Without Hh signaling, the regulatory domain in conjunction with Cos2 tethers Ci in the cytoplasm; upon Hh signaling, the kinase domain inhibits Su(fu) which, along with a change in the Cos2/Fu regulatory domain interaction, leads to the release of Ci (Lefers, 2001).
While it has been possible to clearly establish a role for Su(fu) in Ci nuclear import, its role in Ci activation and cleavage is less clear. In cells double mutant for cos2;Su(fu), Ci appears to be at least partially activated since double mutant clones away from the compartment boundary ectopically express en. A reasonable interpretation of these data is that Ci activation is inhibited by Su(fu) and signaling through Cos2 relieves such inhibition (Lefers, 2001).
But this cannot be the whole story. In Su(fu)LP discs, the expression of ptc or en is still tightly regulated and does not expand into all the cells with efficient Ci nuclear import. This regulation of Ci activity is evidently not rendered by the Fu regulatory domain, since it persists in the fu94;Su(fu)LP double mutants. It seems likely that Su(fu) is partially redundant with other factors that regulate Ci activation and that these yet to be identified factors function with Cos2 in the fu;Su(fu) double mutants. Su(fu) may also play some role in Ci cleavage. In the fu94;Su(fu)LP double mutants, the level of Ci seems significantly reduced relative to fu94 single mutants. In addition, Ci protein levels are not elevated across the entire anterior compartment in fuRX15 single mutants but are in fuRX15;Su(fu)LP double mutants. The implication of Su(fu) in these other aspects of Hh regulation suggests that while it is possible to dissect the complex and assign primary roles to the various components, the complex does normally function as a whole (Lefers, 2001).
The Drosophila protein Shaggy (Sgg, also known as Zeste-white3, Zw3) and its vertebrate ortholog glycogen synthase kinase 3 (GSK3) are inhibitory components of the Wingless (Wg) and Wnt pathways. Sgg is also a negative regulator in the Hedgehog (Hh) pathway. In Drosophila, Hh acts both by blocking the proteolytic processing of full-length Cubitus interruptus, Ci (Ci155), to generate a truncated repressor form(Ci75), and by stimulating the activity of accumulated Ci155. Loss of sgg gene function results in a cell-autonomous accumulation of high levels of Ci155 and the ectopic expression of Hh-responsive genes including decapentaplegic and wg. Simultaneous removal of sgg and Suppressor of fused, Su(fu), results in wing duplications similar to those caused by ectopic Hh signaling. Ci is phosphorylated by GSK3 after a primed phosphorylation by protein kinase A (PKA), and mutating GSK3 phosphorylation sites in Ci blocks its processing and prevents the production of the repressor form. It is proposed that Sgg/GSK3 acts in conjunction with PKA to cause hyperphosphorylation of Ci, which targets it for proteolytic processing, and that Hh opposes Ci proteolysis by promoting its dephosphorylation (Jia, 2002).
The proteolytic processing of Ci requires the activities of several intracellular Hh signaling components, including PKA and the kinesin-related protein Costal2 (Cos2). Overexpressing either Cos2 or a constitutively active form of PKA (mC*) blocks the accumulation of Ci155 induced by Hh. In contrast, wing discs overexpressing mC* or Cos2 accumulate high levels of Ci155 after treatment with 50 mM LiCl, a specific inhibitor of GSK3 kinase activity. These observations suggest that Sgg acts downstream of, or in parallel with, PKA and Cos2 to regulate Ci processing (Jia, 2002).
GSK3 is involved in multiple signaling pathways, raising the question of how its activity is selectively regulated by individual pathways. An emerging theme is that GSK3 is present, together with its substrates, in distinct complexes that are regulated by different upstream stimuli. Future study will determine whether Sgg/GSK3 forms a complex with Cos2 or Ci and whether Hh regulates Sgg/ GSK3 within the complex. In vertebrates, three Gli proteins (Gli1, Gli2 and Gli3) are implicated in transducing Hh signals. Interestingly, all three Gli proteins contain multiple GSK3-phosphorylation consensus sites adjacent to PKA sites, raising the possibility that GSK3 may regulate Gli proteins in vertebrate Hh pathways. Hh and Wnt signaling pathways act in synergy in certain developmental contexts. The finding that GSK3 is involved in both Hh and Wnt pathways raises the possibility that these two pathways might converge at GSK3 in certain developmental processes (Jia, 2002).
Altered localization of Smoothened protein activates Hedgehog signal transduction
Hedgehog (Hh) signaling is critical for many developmental events and must be restrained to prevent cancer. A transmembrane protein, Smoothened (Smo), is necessary to transcriptionally activate Hh target genes. Smo activity is blocked by the Hh transmembrane receptor Patched (Ptc). The reception of a Hh signal overcomes Ptc inhibition of Smo, activating transcription of target genes. Using Drosophila salivary gland cells in vivo and in vitro as a new assay for Hh signal transduction, the regulation of Hh-triggered Smo stabilization and relocalization was investigated. Hh causes Smo (GFP-Smo) to move from internal membranes to the cell surface. Relocalization is protein synthesis-independent and occurs within 30 min of Hh treatment. Ptc and the kinesin-related protein Costal2 (Cos2) cause internalization of Smo, a process that is dependent on both actin and microtubules. Disruption of endocytosis by dominant negative dynamin or Rab5 prevents Smo internalization. Fly versions of Smo mutants associated with human tumors are constitutively present at the cell surface. Forced localization of Smo at the plasma membrane activates Hh target gene transcription. Conversely, trapping of activated Smo mutants in the ER prevents Hh target gene activation. Control of Smo localization appears to be a crucial step in Hh signaling in Drosophila (Zhu, 2003).
Movement of Smo to the surface requires actin and tubulin components of the cytoskeleton, though the relevant motors are unknown. Cos2 is an unusual member of the kinesin family, with sequence features at odds with conventional ATPase binding site structure. Cos2 could be either a motor or a tether. Cos2 could have a role in controlling movements of vesicles that contain Smo. Overproduction of Cos2 alters GFP-Smo localization and, furthermore, prevents Hh from bringing much GFP-Smo to the surface; the GFP-Smo that does reach the surface is located in discrete dots. Ptc also blocked Hh from bringing GFP-Smo to the surface, but no such dots were observed. Overexpression of a presumably unrelated motor protein, Nod, has no effect on localization of GFP-Smo. Cos2 production may therefore specifically cause the movement of Smo-containing organelles to discrete locations on the membrane, either tethering them to the cytoskeleton at specific locations or causing a coalescence effect at random locations. Cos2 has been envisioned as functioning as part of a cytoplasmic complex whose activity in processing the Ci transcription factor is controlled by Smo. The present data suggest a new function in which the complex (or, alternatively, Cos2 independently of the complex) feeds back to alter Smo activity. It is interesting that both GFP-Smo (when Cos2 and Hh were coexpressed) and PtcDN-YFP exhibit a similar punctate cell surface localization pattern. PtcDN may function through competing with endogenous Ptc, raising an intriguing alternative possibility that Cos2 may interact directly with Smo to control Smo subcellular localization (Zhu, 2003).
Smoothened regulates alternative configurations of a regulatory complex that includes Fused, Costal, Suppressor of Fused and Cubitus interruptus
In the Drosophila wing, Hedgehog is made by cells of the posterior compartment and acts as a morphogen to pattern cells of the anterior compartment. High Hedgehog levels instruct L3/4 intervein fate, whereas lower levels instruct L3 vein fate. Transcriptional responses to Hedgehog are mediated by the balance between repressor and activator forms of Cubitus interruptus, CiR and CiA. Hedgehog regulates this balance through its receptor, Patched, which acts through Smoothened and thence a regulatory complex that includes Fused, Costal, Suppressor of Fused and Cubitus interruptus. It is not known how the Hedgehog signal is relayed from Smoothened to the regulatory complex nor how responses to different levels of Hedgehog are implemented. Chimeric and deleted forms of Smoothened were used to explore the signaling functions of Smoothened. A Frizzled/Smoothened chimera containing the Smo cytoplasmic tail (FFS) can induce the full spectrum of Hedgehog responses but is regulated by Wingless rather than Hedgehog. Smoothened whose cytoplasmic tail is replaced with that of Frizzled (SSF) mimics fused mutants, interfering with high Hedgehog responses but with no effect on low Hedgehog responses. The cytoplasmic tail of Smoothened with no transmembrane or extracellular domains (SmoC) interferes with high Hedgehog responses and allows endogenous Smoothened to constitutively initiate low responses. SmoC mimics costal mutants. Genetic interactions suggest that SSF interferes with high signaling by titrating out Smoothened, whereas SmoC drives constitutive low signaling by titrating out Costal. These data suggest that low and high signaling (1) are qualitatively different, (2) are mediated by distinct configurations of the regulatory complex and (3) are initiated by distinct activities of Smoothened. A model is presented where low signaling is initiated when a Costal inhibitory site on the Smoothened cytoplasmic tail shifts the regulatory complex to its low state. 
High signaling is initiated when cooperating Smoothened cytoplasmic tails activate Costal and Fused, driving the regulatory complex to its high state. Thus, two activities of Smoothened translate different levels of Hedgehog into distinct intracellular responses (Hooper, 2003).
Analyses of the activities of truncated and chimeric forms of Smo in a variety of genetic backgrounds yielded four principal observations. (1) The FFS chimera activates the full spectrum of Hh responses, but is regulated by Wg rather than Hh. From this, it is concluded that the Smo cytoplasmic tail initiates all intracellular responses to Hh, while the remainder of Smo regulates activity of the tail. (2) The SSF chimera interferes with high signaling but has no effect on low signaling. SSF mimics Class II fu mutants and is suppressed by increasing smo+ but not fu+ or cos+. From this, it is concluded that high Hh instructs Smo to activate Fu by a mechanism that is likely to involve dimeric/oligomeric Smo. (3) The cytoplasmic tail of Smo (SmoC) derepresses endogenous Smo activity in the absence of Hh and represses endogenous Smo activity in the presence of high Hh. That is, SmoC drives cells to the low response regardless of Hh levels. This mimics cos mutants and is suppressed by a 50% increase in cos+. From this, it is concluded that low Hh instructs Smo to inactivate Cos, by a mechanism that may involve stoichiometric interaction between Cos and the Smo cytoplasmic tail. (4) Chimeras in which the extracellular CRD and TM domains are mismatched fail to exhibit any activity. From this, it is concluded that these two domains act as an integrated functional unit. This leads to a model for signaling where Fz or Smo can adopt three distinct states, regulating two distinct activities and translating different levels of ligand into distinct responses. Many physical models are consistent with these genetic analyses (Hooper, 2003).
Two mutant forms of Smo have been identified that regulate downstream signaling through different activities. These mutant forms of Smo mimic phenotypes of mutants in other components of the Hh pathway, as well as normal responses to different levels of Hh. These data suggest a model where Smo can adopt three distinct states that instruct three distinct states of the Ci regulatory complex. The model further suggests that Smo regulates Ci through direct interactions between Fu, Cos and the cytoplasmic tail of Smo. This is consistent with the failure of numerous genetic screens to identify additional signaling intermediates, and with the exquisite sensitivity of low signaling to Cos dosage (Hooper, 2003).
The model proposes that Smo can adopt three states, a decision normally dictated by Hh, via Ptc. The Ci regulatory complex, which includes full-length Ci, Cos and Fu, likewise can adopt three states. (1) In the absence of Hh, Smo is OFF. Its cytoplasmic aspect is unavailable for signaling. The Cos/Fu/Ci regulatory complex is anchored to microtubules and promotes efficient processing of Ci155 to CiR. (2) Low levels of Hh expose Cos inhibitory sites in the cytoplasmic tail of Smo. Cos interaction with these sites drives the Ci regulatory complex into the low state, which recruits Su(fu) and makes little CiR or CiA. (3) High levels of Hh drive a major change in Smo, possibly dimerization. This allows the cytoplasmic tails of Smo to cooperatively activate Fu and Cos. Fu* and Cos* (* indicates the activated state) then cooperate to inactivate Su(fu), to block CiR production, and to produce CiA at the expense of Ci155 (Hooper, 2003).
The OFF state is normally found deep in the anterior compartment where cells express no Hh target genes (except basal levels of Ptc). In this state, the Ci regulatory complex consists of Fu/Cos/Ci155. Cos and Fu contribute to efficient processing of Ci155 to the repressor form, CiR, presumably because the complex promotes access of PKA and the processing machinery to Ci155, correlating with microtubule binding of the complex. This state is universal in hh or smo mutants, indicating that intracellular responses to Hh cannot be activated without Smo. Therefore Smo can adopt an OFF state where it exerts no influence on downstream signaling components and the OFF state of the Ci regulatory complex is its default state (Hooper, 2003).
The low state is normally found approximately five to seven cells from the compartment border, where cells are exposed to lower levels of Hh. These cells express Iro, moderate levels of dpp, no Collier and basal levels of Ptc. They accumulate Ci155, indicating that little CiA or CiR is made. Ci155 can enter nuclei but is insufficient to activate high responses. The physical state of the Ci regulatory complex in the low state has not been investigated. Cells take on the low state regardless of Hh levels when Ci is absent or when SmoC is expressed, and are strongly biased towards that state in fu(classII); Su(fu) double mutants. This state normally requires input from Smo, which becomes constitutive in the presence of SmoC. Because SmoC drives only low responses and cannot activate high responses, this identifies a low state of Smo that is distinct from both OFF and high. It is proposed that the low state is normally achieved when Smo inactivates Cos, perhaps by direct binding of Cos to Smo and dissociation of Cos from Ci155. Neither CiR nor CiA is made efficiently, and target gene expression is similar to that of ci null mutants (Hooper, 2003).
The high state is normally found in the two or three cells immediately adjacent to the compartment border where there are high levels of Hh. These cells express En, Collier, high levels of Ptc and moderate levels of Dpp. They make CiA rather than CiR, and Ci155 can enter nuclei. In this state a cytoplasmic Ci regulatory complex consists of phosphorylated Cos, phosphorylated Fu, Ci155 and Su(fu). Dissociation of Ci from the complex may not precede nuclear entry, since Cos, Fu, and Sufu are all detected in nuclei along with Ci155. Sufu favors the low state, whereas Cos and Fu cooperate to allow the high state by repressing Sufu, and also by a process independent of Sufu. This high state is the universal state in ptc mutants and requires input from Smo. As this state is specifically lost in fu mutants, Fu may be a primary target through which Smo activates the high state. SSF specifically interferes with the high state by a mechanism that is most sensitive to dosage of Smo. This suggests SSF interferes with the high activity of Smo itself. It is suggested that dimeric/oligomeric Smo is necessary for the high state, and that Smo:SSF dimers are non-productive. Cooperation between Smo cytoplasmic tails activates Fu and thence Cos. The activities of the resulting Fu* and Cos* are entirely different from their activities in the OFF state, and mediate downstream effects on Sufu and Ci (Hooper, 2003).
The cytoplasmic tail of Smo is sufficient to activate all Hh responses, and its activity is regulated through the extracellular and TM domains. This is exemplified by the FFS chimera, which retains the full range of Smo activities, but is regulated by Wg rather than Hh. The extracellular and transmembrane domains act as an integrated unit to activate the cytoplasmic tail, since all chimeras interrupting this unit fail to activate any Hh responses, despite expression levels and subcellular localization similar to those of active SSF or FFS. As is true of other serpentine receptors, a global rearrangement of the TM helices is likely to expose 'active' (Cos regulatory?) sites on the cytoplasmic face of Smo. The extracellular domain of Smo must stabilize this conformation and Ptc must destabilize it. But how? Ptc may regulate Smo through export of a small molecule, which inhibits Smo when presented at its extracellular face. Hh binding to Ptc stimulates its endocytosis and degradation, leaving Smo behind at the cell surface. Thus, Hh would separate the source of the inhibitor (Ptc) from Smo, allowing Smo to adopt the low state. Transition from low to high might require Smo hyperphosphorylation. The high state, which is likely to involve Smo oligomers, might be favored by cell surface accumulation if aggregation begins at some threshold concentration of low Smo. Alternatively, these biochemical changes may all be unnecessary for either the low or high states of Smo (Hooper, 2003).
Smoothened transduces Hedgehog signal by physically interacting with Costal2/Fused complex through its C-terminal tail
The Hedgehog (Hh) family of secreted proteins controls many aspects of growth and patterning in animal development. The seven-transmembrane protein Smoothened (Smo) transduces the Hh signal in both vertebrates and invertebrates; however, the mechanism of its action remains unknown. Smo lacking its C-terminal tail (C-tail) is inactive, whereas membrane-tethered Smo C-tail has constitutive albeit low levels of Hh signaling activity. Smo is shown to physically interact with Costal2 (Cos2) and Fused (Fu) through its C-tail. Deletion of the Cos2/Fu-binding domain from Smo abolishes its signaling activity. Moreover, overexpressing Cos2 mutants that fail to bind Fu and Ci but retain Smo-binding activity blocks Hh signaling. Taken together, these results suggest that Smo transduces the Hh signal by physically interacting with the Cos2/Fu protein complex (Jia, 2003).
The most surprising finding of this study is that the Smo C-tail suffices to induce Hh pathway activation. Overexpressing the membrane-tethered Smo C-tail (Myr-SmoCT, Sev-SmoCT) blocks Ci processing, induces dpp-lacZ expression, and stimulates nuclear translocation of Ci155. Myr-SmoCT is refractory to Ptc inhibition and activates the Hh pathway independently of endogenous Smo. Membrane tethering appears to be crucial for the Smo C-tail to activate the Hh pathway; untethered SmoCT has no signaling activity. This is consistent with observations that cell surface accumulation of Smo correlates with its activity (Jia, 2003).
Although the Smo C-tail has constitutive Hh signaling activity, it does not possess all the activities associated with full-length Smo. For example, overexpressing Myr-SmoCT in A-compartment cells away from the A/P compartment boundary does not significantly activate ptc and en, which are normally induced by high levels of Hh. In addition, Myr-SmoCT cannot substitute for endogenous Smo at the A/P compartment boundary to transduce high levels of Hh signaling activity, since boundary smo mutant cells expressing Myr-SmoCT fail to express ptc in response to Hh (Jia, 2003).
The failure of the Smo C-tail to transduce high Hh signaling activity is due to its inability to antagonize Su(fu). Although Myr-SmoCT blocks Ci processing to generate Ci75, the activity of Ci155 accumulated in Myr-SmoCT-expressing cells is still blocked by Su(fu); removal of Su(fu) function from Myr-SmoCT-expressing cells allows Ci155 to activate ptc to high levels. Because Myr-SmoCT stimulates nuclear translocation of Ci155, the inhibition of Ci155 by Su(fu) in Myr-SmoCT-expressing cells must rely on a mechanism that is independent of impeding Ci nuclear translocation (Jia, 2003).
Several observations prompted a determination of whether Smo can transduce the Hh signal by physically interacting with the Cos2/Fu complex: (1) although Smo is related to G-protein-coupled receptors, no genetic or pharmacological evidence has been obtained to support the involvement of a G-protein in a physiological Hh signaling process; (2) Myr-SmoCT can interfere with the ability of endogenous Smo to transduce high levels of Hh signaling activity, which can be offset by increasing the amount of full-length Smo. This implies that Myr-SmoCT may compete with full-length Smo for binding to limiting amounts of downstream signaling components. (3) Extensive genetic screens failed to identify Hh signaling components that may link Smo to the Cos2/Fu complex (Jia, 2003).
Using a coimmunoprecipitation assay, it was demonstrated that Smo interacts with the Cos2/Fu complex both in S2 cells and in wing imaginal discs, and the Smo C-tail appears to be both necessary and sufficient to mediate this interaction. The Cos2/Fu-binding domain was narrowed down to the C-terminal half of the Smo C-tail (between amino acids 818 and 1035). Furthermore, both the microtubule-binding domain (amino acids 1-389) and the C-terminal tail (amino acids 990-1201) of Cos2 interact with Smo. Since none of these Cos2 domains binds Fu, this implies that the Cos2/Smo interaction is not mediated through Fu. Ci is also dispensable for Smo/Cos2/Fu interaction; Smo binds Cos2/Fu in S2 cells in which Ci is not expressed. However, the results did not rule out the possibility that Smo could interact with the Cos2/Fu complex through multiple contacts. For example, Smo could simultaneously contact Cos2 and Fu. Nor was it demonstrated that binding of Cos2 to Smo is direct. Indeed, no protein-protein interaction between Smo and Cos2 was detected in yeast. It is possible that a bridging molecule(s) is required to link Smo to the Cos2/Fu complex. Alternatively, Smo needs to be modified in vivo in order to bind Cos2. It has been shown that Hh stimulates phosphorylation of Smo; hence, it is possible that phosphorylation of Smo might be essential for recruiting the Cos2/Fu complex (Jia, 2003).
Several lines of evidence suggest that Smo/Cos2/Fu interaction is important for Hh signal transduction. (1) Deletion of the Cos2-binding domain from Smo, either in the context of full-length Smo or the Smo C-tail, abolishes Smo signaling activity. (2) Overexpressing Cos2 deletion mutants that no longer bind Fu and Ci but retain a Smo-binding domain intercepts Hh signal transduction. Genetic evidence has been provided that Cos2 has a positive role in transducing Hh signal in addition to its negative influence on the Hh pathway, since Ci155 is no longer stimulated into labile and hyperactive forms by high levels of Hh in cos2 mutant cells. In light of the finding that Smo interacts with Cos2/Fu, the simplest interpretation for a positive role of Cos2 is that it recruits Fu to Smo and allows Fu to be activated by Smo in response to Hh (Jia, 2003).
Of note, interaction between SmoCT and Cos2/Fu per se is not sufficient for triggering Hh pathway activation. For example, Myr-SmoCTDelta625-818, which binds Cos2/Fu to the same extent as Myr-SmoCT, does not possess Hh signaling activity. The fact that Myr-SmoDeltaCT625-730 and Myr-Smo730-1035 can activate the Hh pathway suggests that Smo sequence between amino acids 730 and 818 is essential. This domain may recruit factors other than Cos2/Fu to achieve Hh pathway activation. Alternatively, it might target SmoCT to an appropriate signaling environment (Jia, 2003).
An important property of Hh family members in development is that they can elicit distinct biological responses at different concentrations. How different thresholds of Hh signal are transduced by Smo to generate distinct transcriptional outputs is not understood. The results suggest that Smo can function as a molecular sensor that converts quantitatively different Hh signals into qualitatively distinct outputs. In the absence of Hh, the cell surface levels of Smo are low. In addition, the Smo C-tail may adopt a 'closed' conformation that prevents it from binding to Cos2/Fu. Low levels of Hh partially inhibit Ptc, leading to an increase of Smo on the cell surface. In addition, the Smo C-tail may adopt an 'open' conformation, which allows Smo to bind the Cos2/Fu complex and inhibit its Ci-processing activity. Low levels of Hh signaling activity can be mimicked by overexpression of either full-length Smo or membrane-tethered forms of the Smo C-tail. High levels of Hh completely inhibit Ptc, resulting in a further increase in Smo signaling activity. Hyperactive Smo stimulates the phosphorylation and activity of bound Fu, which in turn antagonizes Su(fu) to activate Ci155. Consistent with this, Fu bound to Myc-Smo was found to become phosphorylated in response to ectopic Hh (Jia, 2003).
The Smo sequence N-terminal to SmoCT (SmoN) appears to be essential for conferring high Smo activities. It is not clear how SmoN modulates the activity of SmoCT. SmoN might recruit additional effector(s) or target SmoCT to a microdomain with a more favorable signaling environment. Alternatively, SmoN might function as a dimerization domain that facilitates interaction between two SmoCTs, as in the case of receptor tyrosine kinases. It is also not clear how Smo/Cos2/Fu interaction inhibits Ci processing. One possibility is that Smo/Cos2 interaction may cause disassembly of the Cos2/Ci complex, which could prevent Ci from being hyperphosphorylated; Cos2/Ci complex formation might be essential for targeting Ci to its kinases. Consistent with this view, Ci is barely detectable in the Cos2/Fu complex bound to Smo (Jia, 2003).
Physical association of the receptor complex with a downstream signaling component has also been demonstrated for the canonical Wnt pathway, whereby the Wnt coreceptor LRP-5 interacts with Axin, a molecular scaffold in the Wnt pathway. Hence, Hh and Wnt/Wg pathways appear to use a similar mechanism to transmit signal downstream of their receptor complexes (Jia, 2003).
Identification of a functional interaction between Smoothened and Costal2
Hedgehog signal transduction is initiated when Hh binds to its receptor Patched (Ptc), activating the transmembrane protein Smoothened (Smo). Smo transmits its activation signal to a microtubule-associated Hedgehog signaling complex (HSC). At a minimum, the HSC consists of the Kinesin-related protein Costal2 (Cos2), the protein kinase Fused (Fu), and the transcription factor Cubitus interruptus (Ci). In response to HSC activation, the ratio between repressor and activator forms of Ci is altered, determining the expression levels of various Hh target genes. The steps between Smo activation and signaling to the HSC have not been described. A functional interaction is described between Smo and Cos2 that is necessary for Hh signaling. It is proposed that this interaction is direct and allows for activation of Ci in response to Hh. This work fills in the last major gap in the understanding of the Hh signal transduction pathway by suggesting that no intermediate signal is required to connect Smo to the HSC (Ogden, 2003).
To determine whether Cos2 and Smo could interact directly, a directed yeast two-hybrid assay was used. The cytoplasmic carboxyl-terminal domain of Smo (SmoC) was used in the two-hybrid assay, since the signaling capabilities of Smo appear to reside within this domain. The carboxyl-terminal domain of Smo interacts with Cos2, though this interaction appears less efficient than that of Cos2 with Fu. This interaction is specific and reproducible, since there is no growth when the open reading frame of Cos2 is inserted in the reverse orientation. These results demonstrate that the carboxyl-terminal domain of Smo is sufficient to associate with Cos2 and that this association appears to be direct. Combined with immunoprecipitation and immunofluorescence data, the yeast two-hybrid results provide strong evidence that Smo and Cos2 directly associate and that the association occurs within the intracellular signaling portion of Smo (Ogden, 2003).
To determine whether Hh signaling would affect the Cos2-Smo interaction, Smo was immunoprecipitated from S2 cell lysates prepared from cells transfected with Hh expression or control vectors. Cos2 and Fu coimmunoprecipitate with Smo at similar levels regardless of Hh activation status. Phosphorylation-induced mobility shifts of Cos2 occur in Hh-transfected cells, verifying that Hh signaling is intact. The modest increase observed in Cos2 immunoprecipitating with Smo in response to Hh stimulation may be accounted for by Smo protein stabilization in response to Hh. These results suggest that interactions between Smo, Cos2, and Fu are relatively stable and independent of Hh activation status (Ogden, 2003).
To verify that Hh activation does not modify Smo-Cos2 association in vivo, Smo immunoprecipitations were performed from embryos engineered to overexpress Ptc, Hh, or neither. Embryos overexpressing Ptc serve as a source of cells in which Hh signaling is inactive due to repression of Smo by Ptc, while embryos overexpressing Hh serve as a source of Hh-activated cells. Mobility shifts of Cos2, Fu, and Smo, which have previously been attributed to Hh-induced phosphorylation, confirm that Hh or Ptc have turned Hh signaling on or off in these embryos. An equal amount of Cos2 was observed coimmunoprecipitating with Smo from wild-type, Ptc, and Hh embryo lysates. In two separate experiments, it was estimated that 3% of Cos2, 4% of Fu, and 3%-8% of Smo were recovered in coimmunoprecipitates. By contrast, 50% of Fu was recovered by Cos2 immunoprecipitation, while negligible amounts of Fu, Cos2, or Smo were recovered in Fz immunoprecipitates. These results demonstrate that a small percentage of total Cos2 and Smo are engaged in a high-affinity association, and that the percentage associated does not change with Hh signaling (Ogden, 2003).
Expression of a chimera of SmoC fused to a myristate membrane-targeting sequence (Myr-SmoC) induces phenotypes in Drosophila similar to cos2 loss-of-function mutations; weak Hh responses are activated, while strong Hh responses are inhibited. Myr-SmoC drives all Hh responses to a weak activation state in Drosophila and requires endogenous Smo to do so. Although the mechanism by which Myr-SmoC acts is unknown, dosage dependence of the effect suggests that it interferes with signaling by competing with endogenous Smo for Cos2. A similar epitope-tagged construct was expressed in cultured cells to test the hypothesis that Myr-SmoC interferes with signaling by binding to Cos2. Using a Myc epitope tag to specifically immunoprecipitate Myr-SmoC, it was found that both Cos2 and Fu associate with Myr-SmoC. These data support the directed two-hybrid experiment, showing that the carboxyl-terminal domain of Smo is sufficient to interact with Cos2. Further, Myr-SmoC was found to be a potent inhibitor of Hh signaling, able to inhibit Hh-dependent transcription in a dose-dependent fashion. These results indicate that even in the absence of Hh, Ci activity is effectively reduced by Myr-SmoC. Thus, Myr-SmoC does not constitutively activate Ci in this reporter assay. It is proposed that Myr-SmoC can act as a dominant negative by binding endogenous Cos2. This argument is bolstered by genetic evidence showing that increasing Cos2 levels in vivo can suppress the overgrowth phenotype associated with expressing Myr-SmoC in flies. These results are consistent with the hypothesis that association between Smo and Cos2 is necessary for Hh signaling to be propagated to its ultimate effector, the transcription factor Ci (Ogden, 2003).
Two scenarios are proposed that may account for the observation that Smo and Cos2 association is not altered in response to Hh. The first possibility is that Smo and Cos2 may be held in an associated but inactive state in the absence of Hh stimulation, presumably through the function of Ptc. Hh stimulation would relieve Ptc-mediated repression of the Smo-Cos2 complex to allow Smo relocalization to the plasma membrane. The Kinesin-like properties of Cos2 and its direct interaction with Smo may facilitate this relocalization. A second possibility is that the dynamics of association are changed in response to Hh, such that Smo and Cos2 association turns over more rapidly in the process of creating the active form of Ci (Ogden, 2003).
Hedgehog signal transduction via Smoothened association with a cytoplasmic complex scaffolded by the atypical kinesin, Costal-2
The seven-transmembrane protein Smoothened (Smo) transduces extracellular activation of the Hedgehog (Hh) pathway by an unknown mechanism to increase transcriptional activity of the latent cytoplasmic transcription factor Ci (Cubitus interruptus). Evidence is presented that Smo associates directly with a Ci-containing complex that is scaffolded and stabilized by the atypical kinesin, Costal-2 (Cos2). This complex constitutively suppresses pathway activity, but Hh signaling reverses its regulatory effect to promote Ci-mediated transcription. In response to Hh activation of Smo, Cos2 mediates accumulation and phosphorylation of Smo at the membrane as well as phosphorylation of the cytoplasmic components Fu and Su(fu). Positive response of Cos2 to Hh stimulation requires a portion of the Smo cytoplasmic tail and the Cos2 cargo domain, which interacts directly with Smo (Lum, 2003).
Early studies of Cos2 suggested primarily a negative role for Cos2 in pathway regulation, as manifested by phenotypic analysis of cos2 mutations and by a requirement for Cos2 in cytoplasmic retention of Ci and in its proteolytic processing to produce CiR. More recent studies have suggested a potential positive role based on a requirement for Cos2 in transcriptional activation of gene targets associated with highest levels of pathway activity. This study extends the evidence for such a positive role by demonstrating: (1) a requirement for Cos2 in mediating a series of Hh-induced biochemical changes in pathway components; (2) an association between the Cos2/Fu/Ci complex and Smo; (3) a direct interaction between Smo and Cos2, and (4) a requirement for Cos2 in highest level Hh pathway response in cultured cell reporter assays and Hh-induced morphogenesis in the dorsal cuticle (Lum, 2003).
Based on the sequence relationship between Smo and GPCRs, previous speculation and experimental work has focused on the possibility that Smo may interact with heterotrimeric G proteins. G protein components have been systematically targeted using RNAi in a cultured cell signaling assay, and no significant role has been found for G proteins in transcriptional regulation via Ci. A potential role cannot be ruled out for G proteins or other mediators in cellular responses to Hh signaling that do not involve transcriptional regulation via Ci/Gli. For example, a recently described chemoattractant activity for Shh in axon guidance appears to be mediated by Smo, yet proceeds in a short time scale and with a local cell polarity that suggests a possible nonnuclear mechanism of response (Lum, 2003).
In searching for other mediators of information transfer from membrane to cytoplasm, it was surprising to find cytoplasmic components copurifying with Smo. Since these were the only Drosophila proteins identified in the bands excised, it is concluded that these complexes were highly pure, and that Smo associates stably with components of the cytoplasmic complex in vivo. It is further demonstrated that Smo interacts directly with Cos2, which scaffolds this complex. Consistent with these findings, articles presenting genetic evidence for a role of the Smo cytoplasmic tail in Hh signaling and evidence suggesting a physical interaction between Smo and Cos2 were published during final review of this work. Additional direct associations of Smo with other complex components have not been ruled out (Lum, 2003 and references therein).
The identification of a complex that includes both Ci and Smo immediately suggested that recruitment of the cytoplasmic complex to Smo upon Hh stimulation might be critical for pathway activity. Cos2 plays a central role in mediating this association, functioning both as a scaffold that brings together cytoplasmic components (the Cos2 complex) and as a sensor that monitors the state of pathway activation by interacting with Smo at the membrane (Lum, 2003).
In the unstimulated state, Smo levels are low, and most of the Cos2 complex therefore is not associated with or influenced by Smo; even the small fraction of the Cos2 complex associated with Smo may remain negative, since Smo may not be in an active state. The negative form of the Cos2 complex presumably mediates production of CiR and prevents nuclear accumulation of Ci, resulting in a net suppression of transcriptional targets. In the intermediate state, present after a few minutes of stimulation, Smo protein has become activated by Hh stimulation but has not yet accumulated. Therefore, despite a positive state for the specific Cos2 complexes affected by Smo, the low level of activated Smo protein is insufficient for interaction with most of the Cos2 complexes. The net outcome with regard to transcriptional targets thus remains negative (Lum, 2003).
In stimulated cells, activated Smo exerts a pervasive influence on the Cos2 complex, either through a stable association or through a transient association with enduring effects. This association presumably involves a direct binding interaction between a portion of the Smo cytotail and the Cos2 cargo and stalk domains. The evidence suggests that both activation and accumulation of Smo are critical, as evidenced by the observation that moderate Smo overexpression alone is unable to fully activate the pathway in the absence of Hh stimulation. Similarly, a cycloheximide block of Smo accumulation dramatically limits the biochemical changes normally induced by Hh stimulation, and cotransfection experiments demonstrate that transition of Cos2 from a pathway suppressor to activator requires adequate levels of activated Smo. The activation of transcriptional targets resulting from pathway stimulation presumably result from the positive action of Cos2 on Fu and from loss of the ability to produce CiR (Lum, 2003).
It is the dual action of Cos2 in promoting formation of CiR and in stabilization and possible activation of Fu that leads to the apparent dual negative and positive roles of Cos2 in pathway regulation. Pathway activation induced by loss of Cos2 thus results from loss of CiR, but this activation is only partial because Fu is also destabilized, and the pathway-suppressing action of Su(fu) is unchecked. Consistent with this interpretation, a combined loss of Cos2 and Su(fu) results in maximal pathway activation irrespective of the presence of Hh (Lum, 2003).
How does Cos2 function? Motor proteins recently have been found to play a role in regulation of transmembrane receptors, as in the case of rhodopsin and mannose-6-phosphate receptor. These motor proteins apparently regulate receptor localization but do not play a direct role in receptor function. A recent report suggests that Cos2 overexpression indeed may influence Smo localization. The evidence, however, suggests that Cos2 also functions as a primary sensor of the state of pathway activation by interacting with Smo at the membrane and by scaffolding and stabilizing downstream pathway components (Lum, 2003).
Kinesins likely share an evolutionary origin with G proteins and myosins, and all three types of proteins use a conserved mechanism to couple nucleotide hydrolysis to dramatic conformational changes in protein structure. Kinesins utilize this mechanism to generate force that allows movement on microtubules. Sequences essential for microtubule binding, for nucleotide binding, and for motor function in other kinesins, however, are not conserved among Cos2 sequences in the Diptera and are not required for Cos2 function in cultured cells. It is still possible, however, that Cos2 retains the capacity for a conformational shift that may be triggered by Smo activation. If so, then this conformational change may involve the Cos2 cargo domain, which binds to Smo at the cytoplasmic tail and is likely required for Smo responsiveness (Lum, 2003).
Whatever its mechanism of activation, evidence points to a role for the atypical kinesin Cos2 as a scaffold and sensor that functions as the pivotal component in transduction of pathway activation from the seven transmembrane receptor Smo to the latent cytoplasmic transcription factor Ci. In addition to stabilizing Fu and mediating forward signaling events that affect other cytoplasmic components and Ci, Cos2 is also required for the accumulation of activated Smo, a critical aspect of producing a full response to Hh signal. Cos2 thus functions not just as a passive sensor for the state of pathway activation at the membrane, but is also an active participant in the cellular dynamics of transition from the unstimulated to the stimulated state. These activities are all the more remarkable in view of the well-recognized role played by Cos2 in maintaining the unstimulated state of the Hh pathway, and together these findings suggest that Cos2 dynamics are a critical determinant of intracellular Hh pathway response and regulation (Lum, 2003).
Sxl is in a complex that contains all of the known Hh cytoplasmic components: Hh promotes the entry of Sxl into the nucleus in the wing disc
The sex determination master switch, Sex-lethal (Sxl), controls sexual development as a splicing and translational regulator. Hedgehog (Hh) is a secreted protein that specifies cell fate during development. Sxl is in a complex that contains all of the known Hh cytoplasmic components, including Cubitus interruptus (Ci), the only known target of Hh signaling. Hh promotes the entry of Sxl into the nucleus in the wing disc. In the anterior compartment, the Hh receptor Patched (Ptc) is required for this effect, revealing Ptc as a positive effector of Hh. Some of the downstream components of the Hh signaling pathway also alter the rate of Sxl nuclear entry. Mutations in Suppressor of Fused or Fused with altered ability to anchor Ci are also impaired in anchoring Sxl in the cytoplasm. The levels, and consequently the ability, of Sxl to translationally repress downstream targets in the sex determination pathway can also be adversely affected by mutations in Hh signaling genes. Conversely, overexpression of Sxl in the domain that Hh patterns negatively affects wing patterning. These data suggest that the Hh pathway impacts on the sex determination process and vice versa, and that the pathway may serve more functions than the regulation of Ci (Horabin, 2003).
Sxl co-immunoprecipitates with Cos2 and Fu in the female germline. Since Ci is not expressed in germ cells, it is probable that a different Hh cytoplasmic complex exists in germ cells. In somatic cells, Sxl is expressed in all female cells while Ci is expressed in only a subset. To test whether the Hh pathway differentiates between the two proteins in somatic cells, Sxl was immunoprecipitated from embryonic extracts and the immunoprecipitates probed for the various Hh cytoplasmic components. The immunoprecipitates showed that Cos2, Fu and Ci are complexed with Sxl. The specificity of this association of Sxl with the Hh pathway components was verified using antibodies to either Ci or Su(fu), and testing the immunoprecipitates for the presence of Sxl. Both co-immunoprecipitated with Sxl. The Ci immunoprecipitate was also tested for another Hh cytoplasmic component, Fu, which was present as expected. These interactions are maintained in a Su(fu)LP background (protein null allele). An IP of Ci from Su(fu)LP embryos brought down Sxl, as well as Fu and Cos2. Taken together, these data suggest that cells that express Ci and Sxl have both proteins in the same complex with the known cytoplasmic components of the Hh signaling pathway (Horabin, 2003).
Previous work on the germline has suggested that the Hh signaling pathway affects the intracellular trafficking of Sxl. The cross talk between these two developmental pathways has been analyzed in tissues where both Hh targets can be present in the same cell. While analysis of embryos only uncovered an effect of Cos2 on Sxl, analysis of wing discs allowed several specific effects to be uncovered. At least three new functional aspects of the Hh pathway are suggested:
1. More than one 'target' protein can exist in the Hh cytoplasmic complex.
Immunoprecipitation experiments using extracts from embryos indicate that Sex-lethal and the known Hh signaling target Ci are in the same complex. The two proteins can co-immunoprecipitate each other as well as other known members of the Hh cytoplasmic complex. Even when Su(fu), the cytoplasmic component that most strongly anchors Sxl in the cytoplasm, is removed, Sxl can still be co-immunoprecipitated with both Ci and Fu. As a whole, these results suggest that at least some proportion of the two Hh 'target' proteins are in a common complex within the cell. Additionally, the wing defects produced when Sxl is overexpressed in the Hh signaling region suggest that their relative concentrations are important for their normal functioning (Horabin, 2003).
2. The Hh targets can be affected differentially.
The presence of two 'targets' within the Hh cytoplasmic complex raises the question of how they can be differentially affected. The data show that the various members of the Hh pathway do not affect Sxl and Ci similarly. Smo appears to be dispensable for the transmission of the Hh signal in promoting Sxl nuclear entry, while Smo is critical for the activation of Ci. Conversely, while Ptc is essential for the effect of Hh on Sxl, it is dispensable for the activation of Ci. The Fu kinase (fumH63 background) also appears to have no role in Hh signaling with respect to Sxl, while it is critical for the activation of Ci. By contrast, both Su(fu) and the Fu regulatory domain act similarly on Sxl and Ci, serving to anchor them in the cytoplasm (Horabin, 2003).
Taken together, these data suggest that the presence of Hh can be relayed to the cytoplasmic components differentially and, while the data do not address the point, suggest how different outcomes might be achieved. Ptc has been proposed to be a transmembrane transporter protein that functions catalytically in the inhibition of Smo via a diffusible small molecule. The stimulation of Sxl nuclear entry by the binding of Hh to Ptc might also involve a change in the internal cell milieu, but in this case the Hh cytoplasmic complex may be affected independently, not requiring a change in the activity of Smo or the Fu kinase (Horabin, 2003).
3. Ptc can signal the presence of the Hh ligand in a positive manner.
Several experiments indicate that Hh bound to Ptc enhances the nuclear entry of Sxl. That Smo has no role in transmitting the Hh signal is most clearly demonstrated by expressing the PtcD584 protein in both the anterior and posterior compartments of the dorsal half of the wing disc. PtcD584 acts as a dominant negative and so activates Ci in the anterior compartment, but it fails to enhance the levels of nuclear Sxl in the anterior because it sequesters Hh in the posterior compartment. The double mutant condition of ptc clones in a hhMRT background clearly places Ptc downstream of Hh, while showing Ptc can act positively in transmitting the Hh signal (Horabin, 2003).
A positive role for Ptc, but in this case in conjunction with Smo, in promoting cell proliferation during head development has recently been reported. In this situation, however, Hh acts negatively on both Ptc and Smo in their activation of the Activin type I receptor, suggesting an even greater variance from the canonical Hh signaling process (Horabin, 2003).
While the effects on Sxl in the anterior compartment show a dependence on the known Hh signaling components, it is not clear what promotes the rapid nuclear entry of Sxl in the posterior compartment. Su(fu) is expressed uniformly across the disc so it does not appear to be responsible for the AP differences, and ptc clones have no effect (and Ptc RNA and protein are not detected in the posterior compartment). Removal of Hh, however, reduces the nuclear entry rate of Sxl in both compartments. In this regard, the parallel between Hh pathway activation and Sxl nuclear entry in the posterior compartment is worth noting. Fu is also activated in the posterior compartment in a Hh-dependent manner, even though Ptc is not present. It is not clear what mediates between Hh and Fu (Horabin, 2003).
The data also suggest that the Hh cytoplasmic complex may have slightly different compositions in different tissues and/or at developmental stages. In the female germline and in embryos, the absence of Cos2 leads to a severe reduction in Sxl levels. However, in wing discs when mutant clones are made using the same cos2 allele, there is no effect on Sxl. It is suggested that between the third instar larval stage and eclosion, the composition of the Hh cytoplasmic complex may change again to make Sxl more sensitive to Cos2. This would explain why removal of Cos2 can produce sex transformations of the foreleg even though mutant clones in wing discs (and also leg discs) show no alterations in Sxl levels (Horabin, 2003).
A similar argument might apply to the weak sex transformations of forelegs produced by PKA clones. Alternatively, PKA may have a very weak effect but the assay on wing discs is not sufficiently sensitive to allow detection of small effects; PKA was found to have a modest effect on Sxl nuclear entry in the germline. Sxl is sufficiently small (38-40 kDa) to freely diffuse into the nucleus, or the protein may enter the nucleus as a complex with splicing components. This may account for the limited sex transformations caused by removal of Hh pathway components (Horabin, 2003).
Removal of several of the Hh pathway components, such as smo, gives the same weak sex transformation phenotype, even though smo has no effect on Sxl nuclear entry. Additionally, there is no correlation between a positive and a negative Hh signaling component and whether there is a resulting phenotype. Changing the dynamics of the activation state of the Hh cytoplasmic complex may perturb the normal functioning of Sxl, since Sxl appears to be in the same complex as Ci. For example, if the Hh pathway is fully activated because of a mutant condition, the relative amounts of Sxl in the cytoplasm versus nucleus at any given time, may be different from the wild-type condition. Perturbing the usual cytoplasmic-nuclear balance could compromise the various processes that Sxl protein regulates. Sxl acts both positively and negatively on its own expression through splicing and translation control and, additionally, regulates the downstream sex differentiation targets. The latter could also be responsible for the weak sex transformations seen, in view of the recent demonstration that doublesex affects the AP organizer and sex-specific growth in the genital disc (Horabin, 2003).
With the exception of Cos2, which can produce relatively substantial effects on Sxl levels in embryos as well as sex transformations in the foreleg, the effects of removal of any of the other Hh pathway components are generally not large. The strong effects of Cos2 on Sxl could be because it affects the stability of Sxl. However, Sxl depends on an autoregulatory splicing feedback loop for its maintenance, making the protein susceptible to a variety of regulatory breakdowns. If Cos2 altered the nuclear entry of Sxl, for example, its removal could compromise the female-specific splicing of Sxl transcripts by reducing the amounts of nuclear Sxl. Splicing of Sxl transcripts would progressively fall into the male mode to eventually result in a loss of Sxl protein (Horabin, 2003).
Cos2 and Fu have been reported to shuttle into and out of the nucleus, and their rate of nuclear entry is not dependent on the Hh signal. That Ci and Sxl are complexed with the same Hh pathway cytoplasmic components, and share and yet have unique intracellular trafficking responses to mutations in the pathway, makes it tempting to speculate that the Hh cytoplasmic components may have had a functional origin related to intracellular trafficking that preceded the two proteins. Whether this reflects a more expanded role in regulated nuclear entry remains to be determined (Horabin, 2003).
Hedgehog-regulated Costal2-kinase complexes control phosphorylation and proteolytic processing of Cubitus interruptus
Hedgehog (Hh) proteins control animal development by regulating the Gli/Ci family of transcription factors. In Drosophila, Hh counteracts phosphorylation by PKA, GSK3, and CKI to prevent Cubitus interruptus (Ci) processing through unknown mechanisms. These kinases physically interact with the kinesin-like protein Costal2 (Cos2) to control Ci processing and Hh inhibits such interaction. Cos2 is required for Ci phosphorylation in vivo, and Cos2-immunocomplexes (Cos2IPs) phosphorylate Ci and contain PKA, GSK3, and CKI. By using a Kinesin-Cos2 chimeric protein that carries Cos2-interacting proteins to the microtubule plus end, it was demonstrated that these kinases bind Cos2 in intact cells. PKA, GSK3, and CKI directly bind the N- and C-terminal regions of Cos2, both of which are essential for Ci processing. Finally, it was shown that Hh signaling inhibits Cos2-kinase complex formation. It is proposed that Cos2 recruits multiple kinases to efficiently phosphorylate Ci and that Hh inhibits Ci phosphorylation by specifically interfering with kinase recruitment (Zhang, 2005).
To facilitate detection of protein-protein interaction between Cos2 and its binding partners in vivo, a Kinesin/Cos2 chimeric protein (Kinco) was generated in which the microtubule binding domain of Cos2 is replaced by the motor domain of Drosophila KHC. Kinco moves to the microtubule plus end and accumulates near the basal surface of imaginal disc epithelial cells. Strikingly, Kinco carries all the known Cos2 binding proteins to the same subcellular compartment, leading to colocalization. PKAc, GSK3, and two CKI isoforms, CKIα and CKIϵ, all colocalize with Kinco at the microtubule plus end, demonstrating that these kinases associate with Cos2 in intact cells. Hence, Kinco provides a powerful tool to determine if a protein interacts with Cos2 in vivo. In addition, Kinco colocalizes with Cos2-interacting proteins in cultured Drosophila cells such as S2 and cl8 cells. It is conceivable that one can use such a cell-based colocalization assay to identify additional proteins that form a complex with Cos2. Furthermore, it is also possible to extend this approach to other protein complexes by generating appropriate Kinesin chimeric 'bait' proteins (Zhang, 2005).
By using immunoprecipitation and GST pull-down assays, the kinase interaction domains were mapped to the microtubule binding (MB) and C-terminal (CT) domains of Cos2. GST fusion proteins containing either of these domains bind purified recombinant PKAc, GSK3, and CKI, suggesting that these kinases directly bind Cos2. However, the possibility cannot be ruled out that these kinases may have additional contacts with other components in the Cos2 complex. Indeed, it was found that CKI can bind Ci in yeast (Zhang, 2005).
Several lines of evidence suggest that Cos2/kinase interaction plays an important role in regulating Ci phosphorylation and processing: (1) Ci phosphorylation is compromised in cos2 mutants; (2) the kinase-interacting domains in Cos2 are essential for Ci processing; (3) overexpressing multiple kinases can bypass the requirement of Cos2 for Ci processing (Zhang, 2005).
PKAc, GSK3, and CKI appear to bind competitively to Cos2; however, since Cos2 can dimerize and each Cos2 protein contains two kinase binding domains, a Cos2 dimer could in principle bind all three kinases simultaneously. It is possible that these kinases might not form a tight complex with Cos2 in a stoichiometric manner, which could explain why purification of endogenous Cos2 complexes failed to identify any of these kinases. However, by using in vitro kinase assay and Western blot analysis, the association of PKAc, GSK3, and CKI with endogenous Cos2 was detected. It is likely that interactions between Cos2 and kinases are transient; however, such interactions could increase the local concentration of these kinases, thereby greatly facilitating Ci hyperphosphorylation (Zhang, 2005).
It has been demonstrated that Hh induces Ci dephosphorylation in cl8 cells; however, it is not clear whether Hh blocks Ci phosphorylation by all three kinases or a subset of them. By using an antibody that specifically recognizes a phosphorylated PKA site in Ci, it was found that Hh partially inhibits PKA phosphorylation of Ci in wing discs. Consistent with this, Hh only partially blocks Cos2/PKA interaction. In contrast, Hh appears to have a more profound influence on the interaction between Cos2 and CKI or GSK3. Furthermore, CKI and GSK3 kinase activities associated with endogenous Cos2 diminish upon Hh stimulation, and Cos2IPs phosphorylate Ci to a lesser extent after Hh treatment. These observations suggest that Ci phosphorylation by CKI and GSK3 is likely to be inhibited by Hh in vivo (Zhang, 2005).
Several mechanisms may contribute to the regulation of Cos2-Ci-kinase complex formation by Hh. (1) The finding that PKAc, GSK3, and CKI bind Cos2 domains that also interact with Smo raises the possibility that Smo/Cos2 interaction may exclude kinases from binding to Cos2. Indeed, a membrane-tethered form of SmoCT (Myr-SmoCT) interferes with Cos2-Ci-kinase complex formation. (2) Smo/Cos2 interaction at the cell surface may induce a conformational change in Cos2, which could mask its kinase interacting domains. (3) Cos2 is phosphorylated in response to Hh. Phosphorylation of Cos2 could regulate its interaction with one or more kinases. (4) There is evidence that Hh induces dissociation of Ci from Cos2, which may further decrease the accessibility of Ci to the kinases. This may explain why Hh induces more significant dissociation of PKAc from Ci than from Cos2. (5) Hh induces degradation of Cos2 in P compartment cells as well as in cells immediately adjacent to the A/P boundary; this may lead to a chronic destruction of Cos2-Ci-kinase complexes. However, it appears that only high levels of Hh induce Cos2 degradation in vivo. Low levels of Hh may prevent Ci phosphorylation through different mechanisms such as those described above (Zhang, 2005).
The following model is proposed for the regulation of Ci phosphorylation by Cos2 and Hh. In the absence of Hh, Cos2 scaffolds multiple kinases and Ci into proximity, thus increasing the accessibility of Ci to these kinases and facilitating extensive phosphorylation of Ci. Upon Hh stimulation, Cos2 complexes are recruited to the cell surface via Smo, leading to disassembly of Cos2-Ci-kinase complexes. As a consequence, Ci phosphorylation is compromised and Ci processing is blocked. This model has several interesting parallels to that proposed for the Wnt pathway. In quiescent cells, both pathways employ large protein complexes to bring kinases and their substrates in close proximity, resulting in phosphorylation and proteolysis of the transcription factor (Ci) or effector (β-catenin). Upon ligand stimulation, both pathways recruit the cytoplasmic signaling complex to the cell surface and cause dissociation of the complex, leading to dephosphorylation and stabilization of the transcription factor/effector. Interestingly, both pathways use common kinases, including GSK3 and CKI. However, these kinases together with their substrates form distinct signaling complexes assembled by pathway-specific scaffolding proteins (Cos2 and Axin in the Hh and Wnt pathways, respectively). Pathway activation is achieved by a specific interaction between the receptor system and the scaffolding protein (Smo/Cos2 interaction in the Hh pathway and LPR5/6/Axin interaction in the Wnt pathway). Thus, each pathway only controls the pool of kinases in the same complex with the pathway effector, leading to pathway-specific regulation of substrate phosphorylation. The combinatorial mechanism by which pathway-specific scaffolds bring common kinases into proximity with their substrates thus appears to be a general one and may apply to other signaling pathways that utilize a common set of kinases (Zhang, 2005).
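The scaffolding model above can be reduced to a small toy function: without Hh, Cos2 recruits the kinases and Ci is hyperphosphorylated and processed to Ci75; with Hh, kinase recruitment is disrupted and full-length Ci accumulates. The function name and return strings are illustrative assumptions of this sketch, not terms defined by the source:

```python
# Illustrative sketch (not from the source) of the Zhang (2005) model:
# Cos2 scaffolds PKA, GSK3, and CKI with Ci; Hh stimulation disrupts
# kinase recruitment, blocking Ci hyperphosphorylation and processing.

def ci_fate(hh_present: bool) -> str:
    # Without Hh the Cos2 scaffold assembles, Ci is hyperphosphorylated
    # and targeted (via Slimb) for processing into the Ci75 repressor.
    kinases_recruited = not hh_present
    if kinases_recruited:
        return "Ci75 repressor"       # hyperphosphorylated, processed
    return "full-length Ci155"        # phosphorylation blocked, stable

assert ci_fate(hh_present=False) == "Ci75 repressor"
assert ci_fate(hh_present=True) == "full-length Ci155"
```

The design point the sketch captures is that Hh does not act on Ci directly but on the assembly of the kinase-substrate complex, mirroring the Axin-based complex of the Wnt pathway described below.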
Divergence of hedgehog signal transduction mechanism between Drosophila and mammals
The Hedgehog (Hh) signaling pathway has conserved roles in development of species ranging from Drosophila to humans. Responses to Hh are mediated by the transcription factor Cubitus interruptus (Ci; GLIs 1-3 in mammals), and constitutive activation of Hh target gene expression has been linked to several types of human cancer. In Drosophila, the kinesin-like protein Costal2 (Cos2), which associates directly with the Hh receptor component Smoothened (Smo), is essential for suppression of the transcriptional activity of Ci in the absence of ligand. Another protein, Suppressor of Fused [Su(Fu)], exerts a weak negative influence on Ci activity. Based on analysis of functional and sequence conservation of Cos2 orthologs, Su(Fu), Smo, and Ci/GLI proteins, Drosophila and mammalian Hh signaling mechanisms have been found to diverge; in mouse cells, major Cos2-like activities are absent and the inhibition of the Hh pathway in the absence of ligand critically depends on Su(Fu) (Varjosalo, 2006).
The evidence indicates that a significant divergence in the mechanism of Shh signal transduction has occurred between vertebrates and invertebrates at the level of Smo, Cos2, and Su(Fu). The results indicate that major Cos2-like activities are absent in mouse cells based on four observations: (1) domains in Smo that are required in Drosophila to bind to Cos2 are not required for mSmo function; (2) mouse Shh signaling is insensitive to expression of Drosophila Cos2, but can be rendered Cos2 sensitive by replacing the mSmo C-terminal domain with the dSmo C-terminal domain; (3) expression of the Smo C-terminal domain which, in Drosophila, inactivates Cos2 has no effect in the mouse in vivo or in vitro; (4) overexpression or RNAi-mediated suppression of mouse Cos2 homologs has no effect on Hh signaling, even under sensitized conditions. These results are also consistent with divergence of the sequence of domains involved in Cos2 binding in Ci/GLI proteins and Smo between insects and mammals (Varjosalo, 2006).
Although the RNAi experiments targeting Cos2 orthologs Kif7 and Kif27 were performed under conditions in which negative regulators of GLI2 were limiting, they could be argued to be consistent with a model in which multiple kinesins with Cos2-like activity would act in a redundant fashion in mammals. By loss-of-function studies of individual kinesins in cell culture or in mice it would be difficult to obtain conclusive evidence against such a model due to the potential redundancy of multiple members of the kinesin family. However, several other in vitro and in vivo experiments that were presented directly contradict such a model. These include RNAi analyses targeting multiple Kif proteins, the analysis of loss of function of mSmo domains, and the lack of effect of overexpression of myristoylated-mSmoC and the Cos2 orthologs Kif7 and Kif27. In addition, no kinesin with Cos2-like activity could be found by extending the analyses to several other kinesins, which show homology to Cos2 but have different fly orthologs (Varjosalo, 2006).
In contrast to the case in Drosophila, Su(Fu) has a critical role in suppression of the mammalian Hh pathway in the absence of ligand, and loss of Su(Fu) function results in dramatic induction of GLI transcriptional activity. The results are also consistent with the studies that show that loss of Su(Fu) in mouse embryos results in complete activation of the Hh pathway, in a fashion similar to the loss of Ptc. These results are particularly surprising in light of the central role of Cos2 and a minor role of Su(Fu) in Drosophila. Together, these results also clearly show that mouse cells and embryos lack a Cos2-like activity that, in Drosophila, can completely suppress the Hh pathway in the absence of Su(Fu). However, the results should not be taken as evidence against novel proteins (including kinesins not orthologous to Cos2) acting in mammalian cells between Smo and GLI proteins with mechanisms that are distinct from those used by Drosophila Cos2. Several reports have, in fact, described such vertebrate-specific regulators of Hh signaling, including SIL, Iguana, Rab23, Kif3a, IFT88, IFT172, MIM/BEG4, and β-arrestin2 (Varjosalo, 2006).
The results also shed light on some known differences in the function of the Hh pathway in Drosophila and mammals. Mutations and small molecules affecting conformation of Smo transmembrane domains have a strong effect in mammals, but they have little effect in Drosophila. Interestingly, the Smo transmembrane domain is required for regulation of Su(Fu) activity, whereas the Smo C-terminal domain is critical for inhibition of Cos2 activity. Thus, based on the data, manipulations that affect the Smo transmembrane domain would be predicted to affect Su(Fu) and therefore to have a limited role in Drosophila and a major effect in mammals (Varjosalo, 2006).
Although there are differences in mouse and Drosophila Smo functional domains, and a lack of conservation of Smo phosphorylation sites, conservation of Smo function at a level not involving Cos2 is supported by the observation that mutation of a conserved isoleucine (I573A in mSmo) results in loss of both mouse and Drosophila Smo activity, yet does not result in a loss of Cos2 binding to dSmo. In addition, dSmo proteins that are activated by phosphomimetic mutations are constitutively stabilized; yet, they are partially responsive to Hh, suggesting that, in addition to stabilization and phosphorylation, other, potentially conserved mechanisms could be required to generate fully active Smo in Drosophila as well (Varjosalo, 2006).
In the mSmo C terminus, six residues between amino acids 570 and 580 were identified that resulted in significant loss of mSmo activity. The predicted secondary structure for this region is an α helix, in which these residues would reside on the same side, raising the possibility that, together with the third Smo intracellular loop, this region may form an interaction surface involved in inactivation of Su(Fu) or activation of Ci/GLI (Varjosalo, 2006).
Recent results have indicated that Su(Fu) acts as a tumor suppressor in medulloblastoma, and it has been suggested that medulloblastomas associated with loss of Su(Fu) result, in part, from activation of the Wnt pathway. However, consistent with the lack of a Wnt phenotype of Su(Fu) mutations in Drosophila, in the current experiments, a Wnt pathway-specific reporter is not activated by shRNAs targeting Su(Fu). Given observations that Su(Fu) is critically important in the suppression of the mammalian Hh pathway in the absence of ligand, and the fact that Hh pathway activation is required for growth of a form of medulloblastoma induced by mutations in Patched, it is likely that constitutive activation of the Hh pathway is also essential for growth of medulloblastomas associated with the loss of Su(Fu) (Varjosalo, 2006).
In a wider context, the results demonstrate that signal transduction mechanisms of even the major signaling pathways are not immutable, but that they can be subject to evolutionary change. The divergence may have occurred after the separation of the vertebrate and invertebrate lineages. However, some evidence also suggests that functional divergence may have occurred much later in evolution. Although mutants of Fused or Cos2 orthologs of zebrafish have not been identified, zebrafish homologs of Fused and Cos2 act in the Hh pathway based on morpholino antisense injections. In contrast to these findings, mice deficient in mouse ortholog of Drosophila Fused do not have a Hh-related phenotype, and mouse orthologs of Cos2 do not affect Hh signaling. Hh-related phenotypes can be observed in zebrafish by morpholino-mediated targeting of other genes as well, such as β-arrestin2, whose loss in mice does not result in a Hh-related phenotype. It is widely appreciated that multiple types of embryonic insults result in Hh-like phenotypes, such as holoprosencephaly. Thus, it is possible that the zebrafish phenotypes observed are caused by the morpholino injection process itself. Alternatively, there may also be significant differences between the mechanism of Hh signaling between vertebrate species (Varjosalo, 2006).
Because of the strong conservation of Su(Fu) in both invertebrate and vertebrate phyla, the presence of a Cos2 binding domain only in insect Smo, and the divergence of the Cos2 proteins from the kinesin family, the simplest explanation of the data is that Su(Fu) represents the primordial Ci/GLI repressor, and that the Cos2-like functionality has evolved specifically in the invertebrate lineage. The results, thus, also raise the possibility that multicomponent pathways evolve, in part, by insertion of novel proteins between existing pathway components. This mechanism potentially explains a challenging aspect of evolutionary biology regarding the emergence of signaling pathways with multiple specific components (Varjosalo, 2006).
Smoothened interacts with Cos2 to regulate activator and repressor functions of Hedgehog signaling via two distinct mechanisms
The secreted protein Hedgehog (Hh) plays an important role in metazoan development and as a survival factor for many human tumors. In both cases, Hh signaling proceeds through the activation of the seven-transmembrane protein Smoothened (Smo), which is thought to convert the Gli family of transcription factors from transcriptional repressors to transcriptional activators. This study provides evidence that Smo signals to the Hh signaling complex, which consists of the kinesin-related protein Costal2 (Cos2), the protein kinase Fused (Fu), and the Drosophila Gli homolog Cubitus interruptus (Ci), in two distinct manners. Many of the commonly observed molecular events following Hh signaling are not transmitted in a linear fashion but instead are activated through two signals that bifurcate at Smo to independently affect activator and repressor pools of Ci (Ogden, 2006).
This work demonstrates that targeting the association between Smo and the Cos2 cargo domain functionally separates the known molecular markers of the Hh pathway into two distinct categories: those events dependent on a direct association between the Cos2 cargo domain and Smo and those not dependent on this direct association. The Hh-induced readouts requiring direct Smo-Cos2 association include Smo phosphorylation, stabilization, and translocation to the plasma membrane, which facilitate intermediate to high level activation of Ci. Hh-induced Fu and Cos2 hyperphosphorylation, Hedgehog signaling complex relocalization from vesicular membranes to the cytoplasm, and Ci stabilization do not appear to require a direct Smo-Cos2 cargo domain association. Thus, although Smo is necessary for all aspects of Hh signaling, only the molecular events grouped with Ci activation appear to require direct association between Cos2 and Smo. In vivo, expression of the carboxyl-terminal Smo binding domain is also capable of attenuating Hh signaling. This observation is consistent with the in vitro observation that the carboxyl-terminal Smo binding domain inhibits critical requirement(s) for pathway activation (Ogden, 2006).
A model has been proposed suggesting the existence of two independently regulated pools of the Hedgehog signaling complex (HSC), one involved in pathway repression (HSC-R), and one involved in activation (HSC-A). HSC-R is dedicated to priming Ci for processing into the Ci75 transcriptional repressor, whereas HSC-A is dedicated to activation of stabilized Ci155 in response to Hh. This study provides evidence that the effects of these two HSCs can be functionally separated by specifically targeting the interaction between Smo and the Cos2 cargo domain. Moreover, distinct molecular markers were identified for each HSC. It is proposed that in HSC-R, the membrane vesicle-tethered Cos2 functions as a scaffold to recruit protein kinase A, glycogen synthase kinase 3β, and casein kinase I, which in turn phosphorylate Ci. Hyperphosphorylated Ci is then targeted to the proteasome by the F-box protein supernumerary limbs (Slimb), where it is converted into Ci75. In response to Hh, Fu and Cos2 are phosphorylated and dissociate from vesicular membranes and microtubules, which is suggested to result in the attenuation of HSC-R function. This allows for the subsequent accumulation of full-length Ci. The mechanism by which HSC-R function is inhibited by Hh-activated Smo is not clear but appears to require the carboxyl-terminal tail of Smo and, by this analysis, appears to occur independently of a direct Smo-Cos2 cargo domain association. However, the direct Cos2-Smo association is critical for regulation of HSC-A. In the absence of Hh, HSC-A is tethered to vesicular membranes, through Smo, where it is kept in an inactive state. In the presence of Hh, Cos2 bound directly to Smo acts as a scaffold for the phosphorylation of Smo by protein kinase A, glycogen synthase kinase 3β, and casein kinase I. Phosphorylation of Smo triggers its stabilization and relocalization to the plasma membrane with HSC-A, where Ci is proposed to be activated.
Thus, Cos2 plays a similar role in both HSC-R and HSC-A: in the former case, it couples protein kinase A, glycogen synthase kinase 3β, and casein kinase I with Ci, and in the latter case, it couples the same protein kinases with the carboxyl-terminal tail of Smo (Ogden, 2006).
An alternative interpretation of these data is that disruption of the Cos2 cargo domain-Smo association separates high and low level Hh signaling. It has been suggested that a second, low affinity Smo binding domain may reside within the coiled-coil domain of Cos2. Thus, high level signaling, where all aspects of the Hh pathway are activated, may require both Cos2 interaction domains to be directly bound to Smo. In either scenario, HSC-R function would be regulated independently of HSC-A function (Ogden, 2006).
It is concluded that targeted disruption of Cos2 cargo domain-Smo binding by CSBD is able to functionally separate the activities ascribed to the two HSC model. This two-switch system is amenable to the formation of a gradient of Hh signaling activity across a field of cells, in that the relative activity of HSC-R to HSC-A is directly proportional to the level of Hh stimulation a cell receives. The opposing functional effects of the two complexes can then establish unique ratios of Ci75 to activated Ci, resulting in distinct levels of pathway activation on a per cell basis (Ogden, 2006).
Phosphorylation of the atypical kinesin Costal2 by the kinase Fused induces the partial disassembly of the Smoothened-Fused-Costal2-Cubitus interruptus complex in Hedgehog signalling
The Hedgehog (Hh) family of secreted proteins is involved both in developmental and tumorigenic processes. Although many members of this important pathway are known, the mechanism of Hh signal transduction is still poorly understood. In this study, the regulation of the kinesin-like protein Costal2 (Cos2) by Hh was analyzed. A residue on Cos2, serine 572 (Ser572), is necessary for normal transduction of the Hh signal from the transmembrane protein Smoothened (Smo) to the transcriptional mediator Cubitus interruptus (Ci). This residue is located in the serine/threonine kinase Fused (Fu)-binding domain and is phosphorylated as a consequence of Fu activation. Although Ser572 does not overlap with known Smo- or Ci-binding domains, the expression of a Cos2 variant mimicking constitutive phosphorylation and the use of a specific antibody to phosphorylated Ser572 showed a reduction in the association of phosphorylated Cos2 with Smo and Ci, both in vitro and in vivo. Moreover, Cos2 proteins with an Ala or Asp substitution of Ser572 were impaired in their regulation of Ci activity. It is proposed that, after activation of Smo, the Fu kinase induces a conformational change in Cos2 that allows the disassembly of the Smo-Fu-Cos2-Ci complex and consequent activation of Hh target genes. This study provides new insight into the mechanistic regulation of the protein complex that mediates Hh signalling and a unique antibody tool for directly monitoring Hh receptor activity in all activated cells (Reul, 2007).
These data show that phosphorylation of Cos2 residue Ser572 is necessary for the full activation of Hh signalling, and that this phosphorylation is dependent on the kinase Fu. It is likely that Fu directly phosphorylates Cos2 on Ser572, but it was not possible to purify an activated Fu kinase to confirm this. The phosphorylation of this residue strongly decreased the association of Cos2 with both Ci and Smo, an important step in the regulation of the cytoplasmic anchoring of Ci. By contrast, Cos2-572A, a Cos2 mutant that cannot be phosphorylated at Ser572, remained associated with Smo and Ci but was much less sensitive to Hh regulation; this is because both its restraining activity on Ci and its association with Ci were only minimally sensitive to the presence of Fu and to the activation of Hh signalling. Phosphorylation of Ser572 of Cos2 induces the partial disassembly of the protein complex (Reul, 2007).
The data show that Cos2 phosphorylated on Ser572 does not bind Smo. However, previous studies have shown that Cos2 is phosphorylated and is pulled down by Smo in response to Hh stimulation. How can these data be reconciled? First, it is possible that not all Cos2 proteins that bind to Smo are phosphorylated. Indeed, only a limited fraction of Cos2 and Fu are sensitive to Hh activation. This is clearly observed with Fu (only 50% of the protein undergoes an electromobility shift upon Hh activation), but is more difficult to quantify with Cos2 because of its very small and diffuse electromobility shift. Nevertheless, if Cos2 behaves similarly to Fu, it would mean that 50% of the total Cos2 (corresponding to the non-modified protein in Hh-treated cells) should be able to bring enough Smo down to be detectable in immunoprecipitates. Second, it is possible that Smo still binds to Cos2 phosphorylated on Ser572, but with much less affinity. Third, phosphorylation on Ser572 is not responsible for all of the Cos2 mobility shift, because Cos2-572A still shifts upon OA treatment, suggesting that other phosphorylated sites are present. Therefore, some phosphorylated isoforms that are not phosphorylated on Ser572 might also be associated with Smo. It is thus possible that this study has revealed only one of a series of sequential phosphorylation events on Cos2 that ultimately lead to the complete dissociation of Cos2 from Smo. Finally, it is worth mentioning that more Smo is present in the Cos2 IP from Hh-treated cells than in non-Hh-treated cells. This is thought to simply reflect an increased level of Smo resulting from Hh signalling activation, and not Hh-dependent regulation of the efficiency of the interaction of Smo with Cos2 (Reul, 2007).
The role of the Cos2 protein in the complex is to serve as a platform that brings both positive and negative regulators into close proximity with Smo and Ci. Thus, the role of Cos2 in transmitting a response can be masked by the role of Cos2 in limiting pathway activity in the absence of Hh. At low concentrations, Cos2 is able to stimulate Hh reporter activity in vitro and engrailed expression in vivo. But in Cos2-572A-expressing cells, engrailed expression was lower than in wild-type discs, and the in vitro stimulation of Hh signalling could not be potentiated by Fu activity. Moreover, the restraining activity of Cos2-572A on Ci could not be counteracted by Hh or Fu in vitro. Therefore, it is proposed that the Ser572 to Ala substitution on Cos2 rendered Cos2 less sensitive to Hh and Fu regulation. Because Cos2-572A still binds to its partners, it could bring Fu into proximity with its other targets. Indeed, it is likely that Fu activation leads not only to the direct phosphorylation of Cos2 but also to direct changes in Ci and/or other partners, such as Sufu. This explains why Cos2-572A is still able to stimulate Hh signalling, albeit not to its highest level (Reul, 2007).
From the Cos2-572A results, one could wonder why Cos2-572D did not constitutively activate the pathway. Because the Cos2-572D form is in a 'frozen' state compared with the wild-type form, cycles of phosphorylation/dephosphorylation are blocked and thus Cos2-572D can no longer participate in Hh complex signalling. The data show that constitutively phosphorylated Cos2 and endogenous phospho-Cos2 are bound to Fu but are dissociated from Smo and Ci. Therefore, Fu bound to phosphorylated Cos2 would be absent from the complex, preventing the release of all the cytoplasmic anchors from Ci (Reul, 2007).
Because the Cos2 Ser572 residue is not part of the Ci- or Smo-binding domains, but phosphorylation of this site nevertheless leads to the dissociation of these two proteins from Cos2, it is proposed that the Fu-mediated modification of Cos2 induces the protein to undergo a conformational change that leads to the disassembly of the complex. The disassembly is partial because phosphorylated Cos2 and Fu are still associated. Interestingly, it has been proposed that the binding of Cos2, Sufu and Fu to Ci masks a nuclear localisation site on Ci (Ci-NLS). Such a conformational change supports this idea: disassembly of the complex would be necessary to expose the Ci-NLS and allow consequent nuclear translocation (Reul, 2007).
Costal 2 interactions with Cubitus interruptus underlying Hedgehog-regulated Ci processing
Extracellular Hedgehog (Hh) proteins alter cellular behaviours from flies to man by regulating the activities of Gli/Ci family transcription factors. A major component of this response in Drosophila is the inhibition of proteolytic processing of the latent transcriptional activator Ci-155 to a shorter Ci-75 repressor form. Processing is thought to rely on binding of the kinesin-family protein Cos2 directly to Ci-155 domains known as CDN and CORD, allowing Cos2-associated protein kinases to phosphorylate Ci-155 efficiently and create a binding site for an E3 ubiquitin ligase complex. This study shows that the last three zinc fingers of Ci-155 also bind Cos2 in vitro and that the zinc finger region, rather than the CDN domain, functions redundantly with the CORD domain to promote Hh-regulated Ci-155 proteolysis in wing discs. Evidence was also found for a unique function of Cos2 binding to CORD. Cos2 binding to CORD, but not to other regions of Ci, is potentiated by nucleotides and abrogated by the nucleotide binding variant Cos2 S182N. Removal of the CORD region alone enhances processing under a variety of conditions. Most strikingly, CORD region deletion allows Cos2 S182N to stimulate efficient Ci processing. It is deduced that the CORD region has a second function distinct from Cos2 binding that inhibits Ci processing, and that Cos2 binding to CORD relieves this inhibition. It is suggested that this regulatory activity of Cos2 depends on a specific nucleotide-bound conformation that may be regulated by Hh (Zhou, 2010).
Prior to this study it was thought that Cos2 regulates Ci by binding to specific protein kinases and directly to Ci-155 via two regions, CDN and CORD, to promote Ci-155 phosphorylation. Experiments with tissue culture cells suggested that the CDN and CORD regions of Ci-155 act largely redundantly to promote Ci-155 processing. The current investigations have modified these views in two significant ways. First, studies in the physiological setting of Drosophila wing discs confirm some functional redundancy of two Cos2-binding regions on Ci to promote Ci proteolysis, but the region acting together with the CORD binding site comprises the last three zinc fingers of Ci, not the CDN region. Second, it was found that the CORD region has an additional unique function of inhibiting Ci proteolysis unless it binds to Cos2. Furthermore, the potential was uncovered for Cos2-CORD association to be regulated by nucleotide binding to Cos2, and evidence was uncovered that Hh signalling may modulate Cos2 function in at least two ways to regulate its interaction with Ci (Zhou, 2010).
Three new observations lead to the deduction that Cos2 binding to CORD has an important non-redundant role in promoting Ci processing in the absence of Hh. First, it was found that removing the entire CORD region enhanced Ci proteolysis in a number of settings, revealing an inhibitory role for CORD. Similar criteria suggested an inhibitory role also for the CDN region of Ci. Second, it was found that Cos2 S182N fails to bind the CORD region of Ci but binds normally to a Ci region including the zinc fingers and CDN in vitro. Third, it was found that Cos2 S182N promotes efficient proteolysis of Ci only when the CORD region is absent. The restoration of proteolysis was specific to the S182N substitution and deletion of the CORD domain. Loss of CORD did not allow proteolysis by Cos2 S572D and loss of CDN did not allow proteolysis by Cos2 S182N. It is concluded that the strong defect of Cos2 S182N in supporting wild-type Ci processing results principally from an inability to bind to CORD and thereby relieve the inhibitory effect of CORD on Ci-155 processing. The importance of Cos2-CORD binding was not apparent by simple deletion of the CORD region because that deletion simultaneously eliminates Cos2 binding and the need for Cos2 binding, while sparing the zinc fingers of Ci as an alternative means to recruit Cos2. While Cos2 S182N mediated the proteolysis of Ci molecules lacking the CORD region remarkably efficiently, wild-type Cos2 was consistently better, implying that Cos2 S182N does have a deficit beyond CORD binding that is relevant to Ci proteolysis. That relatively minor deficit may stem from the failure of Cos2 S182N to move normally along microtubules (Farzan, 2008; Zhou, 2010).
What is the nature of the inhibitory influence of the CORD region on Ci proteolysis? A variety of segments of Ci, including the zinc fingers, CORD and phosphorylation regions, have been found to bind to each other in vitro. It is therefore speculated that the CORD region may interact, intra- or inter-molecularly, with other regions of Ci, to limit exposure of either key phosphorylation sites to protein kinases, or of the zinc finger and CORD regions to Cos2. The CDN region of Ci also appears to contribute to interactions that make Ci less accessible to one or more steps directing its proteolysis. Relief of CDN inhibition does not, however, appear to depend on Cos2 binding to CORD (because CiΔCORD is efficiently processed by Cos2 S182N) and is apparent in the presence or absence of either CORD or zinc finger Cos2-binding domains (Zhou, 2010).
In addition to a unique function of Cos2 binding to CORD, this association also has a function that can alternatively be executed by the zinc finger region. This assertion is deduced simply from the defective proteolysis of CiΔZnΔCORD compared to the efficient proteolysis of both CiΔZn and CiΔCORD (whether assayed in the presence or absence of CDN). Most likely this function is the recruitment of Cos2-associated protein kinases to Ci (Zhou, 2010).
What properties are conferred by the two partially overlapping functions of Cos2-Ci binding and the two Ci domains capable of recruiting Cos2? An obvious hypothesis is that this diversifies the means by which Hh can regulate Ci processing through Cos2, perhaps to extend the range of Hh sensitivity or to produce a more robust Hh response. Specific mechanisms are considered in the next section but the general hypothesis can be investigated by simply eliminating specific modes of Cos2-Ci interaction. It has not been possible to probe the consequences of eliminating Ci zinc fingers in detail because loss of DNA binding prevents execution of normal Ci functions. However, the regulation of CiΔCORD and CiΔCDNΔCORD appeared to be remarkably normal. High levels of Hh in posterior cells fully inhibited Ci processing and elevated full-length Ci protein levels extended over roughly the normal range at the AP border, suggesting that sensitivity to significant inhibition by low Hh levels is also retained. The sensitivity of proteolysis of Ci lacking zinc fingers to low Hh levels also appeared to be roughly normal. Therefore the idea is favoured that the multiple Cos2-Ci interactions are each subject to Hh regulation over a similar range of sensitivity, and that the mechanisms for regulating Cos2-Ci interactions by Hh, like the interactions themselves, are largely redundant, resulting in a very robust regulatory response that is resistant to single genetic perturbations (Zhou, 2010).
Evidence has previously been presented that Hh causes some degree of dissociation between Cos2 and the protein kinases, PKA, CK1 and GSK3, as well as reduced association between Cos2 and Ci. These mechanisms are, of course, not exclusive and their quantitative contributions remain unresolved because definitive physiological measurements of association are very difficult. More importantly, the upstream instigators of these proposed dissociations are not at all clear. The current studies suggest that a specific nucleotide-dependent conformation of Cos2 may be one important mediator of Hh signalling (Zhou, 2010).
It was found that nucleotides stimulated binding of Cos2 derived from cell extracts to GST-Ci CORD, presumably by increasing the proportion of Cos2 molecules that are nucleotide bound. Conventional kinesins are not readily isolated in a nucleotide-free state and their properties are generally altered by exchanging one bound nucleotide for another. It is therefore surprising that it was possible to alter Cos2 properties by adding excess nucleotide rather than by altering the nature of the excess nucleotide. Cos2 has been noted to differ from conventional kinesins in a number of conserved residues but retains residues S182 and G175 within the conserved P-loop that interacts with the β-phosphate of bound nucleotides. There are no reliable means to predict whether Cos2 S182N or G175A would be defective for binding specific nucleotides, all nucleotides or nucleotide hydrolysis, and those properties have not been measured directly for Cos2 or Cos2 variants. Nevertheless, the observation that both Cos2 S182N and G175A showed no evidence of binding CORD suggests two complementary assertions. First, Cos2 S182N and G175A are unable to adopt a nucleotide-bound conformation that is stringently required for binding CORD. Second, the binding of wild-type Cos2 in the absence of added nucleotide is most likely due to a minor proportion of Cos2 molecules bound to nucleotides rather than due to a lower affinity interaction of a nucleotide-free conformation of Cos2. Thus, it is hypothesised that distinct Cos2 conformations, influenced by nucleotide binding, constitute a clean on/off switch for binding the CORD region of Ci (Zhou, 2010).
Conformational changes couple nucleotide binding and microtubule association in kinesins. Hence, the previously observed Hh-induced dissociation of Cos2 from microtubules supports the hypothesis that Hh induces a conformational change in Cos2 that alters nucleotide binding, CORD association and microtubule binding. The speculated Hh-induced conformational change is most likely brought about by the known direct association of Cos2 with Smo. Smo is related to G-protein coupled receptors (GPCRs), suggesting that the actions of Smo on Cos2 could conceivably be analogous to the nucleotide exchange factor activity of GPCRs, which is normally directed to regulating G-protein conformation and activity (Zhou, 2010).
While modulation of Cos2-CORD interaction through an altered Cos2 conformation phenocopied by Cos2 S182N provides a potential mechanism for Hh to influence the efficiency of Ci proteolysis, it cannot be the sole mechanism because CiΔCORD (and CiΔCDNΔCORD) proteolysis is still extensively regulated by Hh. The Cos2 S572D variant was created previously to mimic Hh-stimulated phosphorylation of Cos2 by Fused and was shown to have reduced ability to co-localise with Ci in embryos and co-precipitate Ci from tissue culture cells. This study found no significant defect in binding of Cos2 S572D to CORD or zinc finger regions of Ci in vitro, suggesting that the biochemical deficits of Cos2 S572D and Cos2 S182N are distinct. This was confirmed by in vivo studies showing that, unlike Cos2 S182N, Cos2 S572D did not preferentially promote proteolysis of Ci lacking the CORD region. Thus, current evidence indicates that Hh-stimulated Cos2 phosphorylation by Fu may provide a second, potentially redundant, mechanism for Hh to regulate Cos2-Ci interactions and the consequent processing of Ci-155. Whether the Hh-regulated association of PKA, CK1 and GSK3 with Cos2 is mediated by either Cos2 phosphorylation or nucleotide-dependent Cos2 conformational changes, or by a third, distinct mechanism, remains to be investigated (Zhou, 2010).
Phosphorylation of Ci at specific sites in the CORD domain (PKA site S962) and the Slimb-binding region (especially GSK3 sites primed by PKA site S892) reduced binding of the CORD region to Cos2 in vitro. That investigation was prompted by prior knowledge that loss of PKA sites in the CORD region appeared to enhance Ci activity. However, that observation would more readily be explained by increased, rather than decreased, Cos2-Ci binding in response to Ci phosphorylation. An alternative hypothesis is that the observed dependence of Cos2-CORD binding on Ci phosphorylation might contribute to extending a graded Hh response. Where Hh levels are high, Ci-155 will be less phosphorylated and would bind Cos2 more readily, requiring a strong Hh signal to disrupt Cos2-CORD association. At the edge of Hh signalling territory Ci-155 will be more highly phosphorylated, would bind Cos2 less readily and hence allow only a very low level of Hh to disrupt Cos2-CORD association and inhibit Ci-155 processing. There is currently no in vivo evidence testing this hypothesis (Zhou, 2010).
While large regions of Ci (CDN and CORD) could be deleted without impairing proteolysis, implying that Ci is composed largely of independently folding domains, two other deletions (of residues 6-339 and 1286-1377) were identified with significant effects on proteolysis. Ci lacking C-terminal residues did not generate any detectable Ci-75 repressor. CiΔC strongly induced ptc-lacZ, implying that binding to CBP, which has been mapped to an adjacent region of Ci and is required for Ci-155 activity, was not affected. How the C-terminus of Ci contributes to proteolysis remains a mystery since there is no evidence from binding assays or co-localization studies in tissue culture cells showing association with Cos2 or Cos2-associated factors (Zhou, 2010).
A study using Kc tissue culture cells previously identified the extreme C-terminus of Ci as essential for Ci processing (Wang, 2008). That study also found that the zinc fingers of Ci were not essential for processing, provided they were substituted by a stably folded domain that contributes to the arrest of proteasome digestion. That result is consistent with the observation that CiΔZn is efficiently proteolyzed in wing discs. However, in contrast to the observation of very efficient processing of CiΔCDNΔCORD in wing discs, it was reported that one of these two domains must be present for efficient Ci processing in Kc cells. In fact, the key Ci substrate assayed also lacked residues 1-345 and is therefore virtually identical to the CiΔNΔCDNΔCORD variant (rather than CiΔCDNΔCORD), which is also processed with reduced efficiency in wing discs (Zhou, 2010).
Removing the N-terminal region (residues 6-339) from Ci strongly raised anterior full-length Ci levels but did not appear to block proteolysis completely because loss of PKA was found to increase the activity of CiΔN, presumably by completely eliminating proteolysis. Su(fu) is known to bind within the first 346 residues of Ci and has the potential to recruit Fu-Cos2 complexes to Ci indirectly. In Drosophila, loss of Su(fu) results in strongly reduced Ci-155 and Ci-75 levels but still permits Hh or loss of PKA to increase Ci-155 levels and Hh to inhibit Ci-75 repressor formation. Thus, Su(fu) is certainly not essential for Ci processing or its regulation. The substantial effects of Su(fu) on Ci-155 and Ci-75 levels are thought to involve a different proteolytic mechanism but it remains possible that Su(fu) might also modulate Ci processing efficiency. It is therefore similarly possible that the impaired proteolysis of CiΔN results from a failure of Su(fu) to facilitate recruitment of Cos2-Fu complexes to Ci (Zhou, 2010).
In summary, this study has found that Ci-155 has at least two domains (CORD and zinc fingers) functionally capable of recruiting Cos2 directly, that Cos2 binding to the CORD domain additionally prevents that region from inhibiting proteolysis, and that the Cos2-CORD interaction might be regulated physiologically via a specific nucleotide-bound conformation of Cos2. Evidence was also found indicating that Cos2 might additionally be recruited indirectly to Ci, that Hh regulates productive Cos2-Ci engagement through multiple, potentially redundant, mechanisms, and that two terminal Ci-155 domains contribute to processing through mechanisms that are not yet understood (Zhou, 2010).
Hedgehog activates fused through phosphorylation to elicit a full spectrum of pathway responses
In flies and mammals, extracellular Hedgehog (Hh) molecules alter cell fates and proliferation by regulating the levels and activities of Ci/Gli family transcription factors. How Hh-induced activation of transmembrane Smoothened (Smo) proteins reverses Ci/Gli inhibition by Suppressor of Fused (SuFu) and kinesin family protein (Cos2/Kif7) binding partners is a major unanswered question. This study shows that the Fused (Fu) protein kinase is activated by Smo and Cos2 via Fu- and CK1-dependent phosphorylation. Activated Fu can recapitulate a full Hh response, stabilizing full-length Ci via Cos2 phosphorylation and activating full-length Ci by antagonizing Su(fu) and by other mechanisms. It is proposed that Smo/Cos2 interactions stimulate Fu autoactivation by concentrating Fu at the membrane. Autoactivation primes Fu for additional CK1-dependent phosphorylation, which further enhances kinase activity. In this model, Smo acts like many transmembrane receptors associated with cytoplasmic kinases, such that pathway activation is mediated by kinase oligomerization and trans-phosphorylation (Zhou, 2011).
This study has shown that Fu is activated by phosphorylation in a Hh-initiated positive feedback loop and that Fu kinase activity alone can provoke the two key outcomes of Hh signaling in Drosophila, namely Ci-155 stabilization and Ci-155 activation. This previously unrecognized central thread of the Drosophila Hh pathway is strikingly similar to receptor tyrosine kinase (RTK) pathways or cytokine pathways, where the transmembrane receptor itself or an associated cytoplasmic tyrosine kinase initiates signal transduction via intermolecular phosphorylation. In Hh signaling, engagement of the Ptc receptor leads indirectly to changes in Smo conformation, and perhaps oligomerization that are relayed to Fu via a mutual binding partner, Cos2 (Zhou, 2011).
Three activation loop residues were identified as critical for normal Fu activity. Fu with acidic residues at T151 and T154 (Fu-EE) was not active at physiological levels in the absence of Hh but could initiate Fu activation in three different ways. First, increasing Fu-EE levels induces the full spectrum of Hh target genes and responses in wing discs and is accompanied by extensive phosphorylation, undoubtedly including S159, indicating that phosphorylation can fully activate Fu. Second, low levels of a Fu-EE derivative could synergize with an excess of wild-type Fu, provided the latter molecule had an intact activation loop and was kinase-competent, indicating that a feedback phosphorylation loop could initiate Fu activation even from a ground state containing no phosphorylated residues or their mimics. Third, Hh could activate Fu-EE or wild-type Fu, but this, unlike the above mechanisms, required Cos2 and the Cos2-binding region of Fu. Activation by Hh alters Smo conformation and increases the plasma membrane concentration of Smo-Cos2 complexes, suggesting that the role of activated Smo-Cos2 complexes may simply be to aggregate Fu molecules (Zhou, 2011).
In all of the above situations there is likely an important contribution of binding between the catalytic and regulatory regions of pairs of Fu molecules to allow cross-phosphorylation, as suggested by the impotence of the Fu-EE 1-305 kinase domain alone. The sites of inferred cross-phosphorylation, T151, S159, and S482 might most simply be direct Fu auto-phosphorylation sites but they may involve the participation of an intermediate kinase. Importantly, because Fu is the key activating stimulus and Fu is the key target for activation, there is no need to postulate additional upstream regulatory inputs into a hypothetical intermediary protein kinase. Phosphorylated residues in positions analogous to Fu S159 generally stabilize the active form of the protein kinase, whereas unphosphorylated residues at other positions, closer to the DFG motif may also, or exclusively, stabilize specific inactive conformations. By analogy, phosphorylated T151, T154, and S159 are likely to serve independent, additive functions, all of which are required to generate fully active Fu kinase. There are clearly additional phosphorylated residues on Fu, including the cluster at S482, S485, and T486. These residues are not essential for Hh or Fu-EE to generate fully active Fu when Fu is expressed at high levels. However, S485A/T486A substitutions did suppress activation of GAP-Fu in wing discs and in Kc cells, suggesting that stimulation of physiological levels of Fu, perhaps by lower levels of Hh uses S482, S485, and T486 phosphorylation to favor an active conformation of Fu or productive engagement of Fu molecules. Because the S482 region may be recognized directly as a substrate by the Fu catalytic site, this region may initially mask the catalytic site (in cis or in trans) and then reduce its affinity for the catalytic site once it is phosphorylated, permitting further phosphorylation of Fu in its activation loop (Zhou, 2011).
For a long time it was thought that Fu kinase acts only to prevent inhibition of Ci-155 by Su(fu), and Fu was postulated to accomplish this by phosphorylating Su(fu). This study mapped the sites responsible for the previously observed Hh- and Fu-stimulated phosphorylation of Su(fu) and showed that they were not important for regulating Hh pathway activity. It was found that CK1, like Fu, was required for Hh to oppose Su(fu) inhibition of Ci-155, and because each of the Fu-dependent phosphorylation sites in Fu and Su(fu) that were mapped in this study primes CK1 sites, it is suspected that the critical unidentified Fu and CK1 sites for antagonizing Su(fu) will be found in the same molecule, with Ci-155 itself being a prime candidate (Zhou, 2011).
This study found that Fu does considerably more than just antagonize Su(fu). It was unexpectedly found that Fu kinase can also stabilize Ci-155 via phosphorylation of Cos2 on S572, which likely leads to reduced association of Cos2 with Ci-155. Fu kinase also promoted Ci-155 activation independently of Su(fu), even when Ci-155 processing was blocked by other means (Zhou, 2011).
Some insight was gained into the key regulatory role that Fu plays in Hh signaling. The truncated, partially activated Fu derivative, Fu-EE 1-473, exhibited constitutive activity when expressed at high levels but, unlike full-length Fu-EE, it was not activated by Hh. Importantly, no level of Fu-EE 1-473 expression could be found at which Hh target genes were induced in fumH63 mutant wing discs at the AP border but not ectopically. Hence, Hh regulation of Fu activity appears to be essential for normal Hh signaling. This contrasts with the normal Hh signaling observed in animals lacking Su(fu) and emphasizes that Fu is a key regulatory component with essential actions beyond antagonizing Su(fu) (Zhou, 2011).
In mice, SUFU increases Gli protein levels and inhibits Gli activators in a manner that can be overcome by Hh, much as Su(fu) affects Ci levels and activity in flies. However, in mammalian Hh signaling there is no satisfactory mechanistic model connecting Smo activation and SUFU antagonism. This study found that mouse SUFU can substitute for all of the activities of Su(fu) in flies, including a dependence on both Fu and CK1 for Hh to antagonize silencing of Ci-155. These findings, and the observation that Drosophila Su(fu) can partially substitute for murine SUFU in mouse embryo fibroblasts, suggest that SUFU silencing of Gli proteins in mice is also likely to be sensitive to analogous changes in phosphorylation produced by at least one Hh-stimulated protein kinase. Even though the murine protein kinase most similar in sequence to Drosophila Fu is not required for Hh signaling, at least three other protein kinases (MAP3K10, Cdc2l1, and ULK3) have been found to contribute positively to Hh responses in cultured mammalian cells. It will be of great interest to see whether these or other protein kinases are activated by Hedgehog ligands, perhaps promoted by association with Smo-Kif7 complexes in a positive feedback loop, whether they can antagonize mSUFU to activate Gli proteins, and perhaps whether they can even stabilize Gli proteins via Kif7 phosphorylation (Zhou, 2011).
The Hedgehog-induced Smoothened conformational switch assembles a signaling complex that activates Fused by promoting its dimerization and phosphorylation
Hedgehog (Hh) transduces signal by regulating the subcellular localization and conformational state of the GPCR-like protein Smoothened (Smo) but how Smo relays the signal to cytoplasmic signaling components remains poorly understood. This study shows that Hh-induced Smo conformational change recruits Costal2 (Cos2)/Fused (Fu) and promotes Fu kinase domain dimerization. Induced dimerization through the Fu kinase domain activates Fu by inducing multi-site phosphorylation of its activation loop (AL) and phospho-mimetic mutations of AL activate the Hh pathway. Interestingly, it was observed that graded Hh signals progressively increase Fu kinase domain dimerization and AL phosphorylation, suggesting that Hh activates Fu in a dose-dependent manner. Moreover, it was found that activated Fu regulates Cubitus interruptus (Ci) by both promoting its transcriptional activator activity and inhibiting its proteolysis into a repressor form. Evidence is provided that activated Fu exerts these regulations by interfering with the formation of Ci-Sufu and Ci-Cos2-kinase complexes that normally inhibit Ci activity and promote its processing. Taken together, these results suggest that Hh-induced Smo conformational change facilitates the assembly of active Smo-Cos2-Fu signaling complexes that promote Fu kinase domain dimerization, phosphorylation and activation, and that Fu regulates both the activator and repressor forms of Ci (Shi, 2011).
How Hh signal is transduced from the GPCR-like receptor Smo to the transcription factor Ci/Gli is still poorly understood. A major unsolved issue is how a change in the Smo activation state is translated into a change in the activity of intracellular signaling complexes, which ultimately changes the balance between CiR/GliR and CiA/GliA. The current study suggests that Hh-induced conformational change of Smo exposes a Cos2 docking site(s) near the Smo C terminus that facilitates the assembly of an active Smo-Cos2-Fu complex, and that Smo activates Fu by promoting its kinase domain dimerization and phosphorylation. Evidence is provided that graded Hh signals progressively increase Fu kinase domain dimerization and phosphorylation, which may generate a Fu activity gradient, and that activated Fu regulates both CiR and CiA by controlling Ci-Sufu and Ci-Cos2-kinase complex formation (Shi, 2011).
Previous immunoprecipitation studies have revealed that Smo pulled down Cos2/Fu in both quiescent cells and Hh-stimulated cells, suggesting that Smo can form a complex with Cos2/Fu even in the absence of Hh. Furthermore, deletion analyses have indicated that both a membrane proximal domain and a C-terminal region of Smo C-tail can mediate the interaction between Smo and Cos2/Fu. Intriguingly, deleting the C-terminal region impaired, whereas deleting the membrane proximal domain potentiated, Smo activity in vivo. Further study suggested that the membrane proximal domain recruits Cos2/PP4 to inhibit Smo phosphorylation and cell-surface accumulation, which is released by Fu-mediated phosphorylation of Cos2 Ser572 in response to Hh. These observations suggest that Smo-Cos2-Fu interaction is likely to be dynamic and that distinct complexes may exist depending on the Hh signaling status. For example, Cos2 may associate with the membrane proximal region of Smo to inhibit Smo phosphorylation in quiescent cells. Upon Hh stimulation, Cos2/Fu may interact with the C-terminal region of Smo to transduce the Hh signal. In support of this model, it was found that Hh stimulated the recruitment of Cos2/Fu to the C-terminal region rather than the membrane proximal region of the Smo C tail. The increased binding depends on phosphorylation-induced conformational change of Smo C-tail that may expose the C-terminal Cos2 binding pocket(s) (Shi, 2011).
Hh signaling induces Fu kinase domain dimerization in a dose-dependent manner, most probably as a consequence of phosphorylation-induced conformational change and dimerization of Smo C tails. In addition, Hh-induced Fu dimerization depends on Cos2. Importantly, dimerization through the Fu kinase domain (CC-Fu) triggers Fu activation both in vitro and in vivo. Furthermore, CC-Fu can activate Ci in smo mutant clones and restore high levels of Hh signaling activity in cos2 mutant discs. Taken together, these results support a model in which Hh-induced Fu dimerization via Smo/Cos2 leads to Fu activation (Shi, 2011).
Both Fu dimerization and Hh stimulation induce phosphorylation of multiple Thr/Ser residues in the Fu activation loop that are important for Fu activation. Fu phosphorylation depends on its kinase activity and Fu can trans-phosphorylate itself, suggesting that Hh and dimerization may induce Fu autophosphorylation, although the results do not exclude the involvement of additional kinase(s). CC-induced dimerization does not fully activate Fu, suggesting that Smo may promote Fu activation through additional mechanisms. Activated Fu can promote phosphorylation of its C-terminal regulatory fragment, raising a possibility that Fu activation may also involve phosphorylation of its regulatory domain. Indeed, while this manuscript was under review, Zhou and Kalderon provided evidence that phosphorylation of several Ser/Thr residues in the Fu regulatory domain, likely by CK1, modulates the activity of an activated form of Fu (Zhou, 2011; Shi, 2011 and references therein).
The involvement of multiple phosphorylation events in Fu activation may provide a mechanism for fine-tuning Fu activity in response to different levels of Hh. Indeed, the efficiency of Fu dimerization and the level of activation loop phosphorylation correlate with the level of Hh signaling. Furthermore, the level of Fu activity correlates with the level of its activation loop phosphorylation. Thus, graded Hh signals may generate a Fu activity gradient by progressively increasing its dimerization and phosphorylation in response to a gradual increase in Smo phosphorylation and C-tail dimerization (Shi, 2011).
The conventional view is that Fu is required for high levels of Hh signaling by converting CiF into CiA. In support of this notion, fu mutations affect only the high-threshold, but not the low-threshold, Hh-responsive genes. However, Fu function could have been underestimated because none of the fu mutations examined so far represents a null mutation. In addition, the existence of parallel mechanisms, such as Gαi activation, could mask the contribution of Fu to low levels of Hh signaling. Nevertheless, a recent study using the phospho-specific antibody against Cos2 Ser572 revealed that Fu kinase activity can be induced by low levels of Hh, raising an interesting possibility that Fu may contribute to all levels of Hh signaling (Raisin, 2010). However, the lack of a fu-null mutation and the involvement of Fu in promoting Ci processing, probably through a structural role, make it difficult to directly demonstrate a role of Fu in blocking Ci processing. Using an in vivo assay for Ci processing, it was demonstrated that activated forms of Fu block Ci processing into CiR. In addition, this study found that activated Fu attenuates the association between Cos2 and Ci, as well as their association with PKA/CK1/GSK3, probably by phosphorylating Cos2, suggesting that activated Fu may block Ci processing by impeding the formation of the kinase complex required for efficient Ci phosphorylation (Shi, 2011).
Evidence is provided that activated Fu attenuates Ci/Sufu interaction. Because Sufu impedes Ci nuclear localization and may recruit a co-repressor(s) to further inhibit Ci activity in the nucleus, dissociation of Ci from Sufu may lead to the conversion of CiF to CiA. Interestingly, recent studies using mammalian cultured cells revealed that Shh signaling induces dissociation of full-length Gli proteins from Sufu (Humke, 2010; Tukachinsky, 2010), suggesting that inhibition of Sufu-Ci/Gli complex formation could be a conserved mechanism for Ci/Gli activation. Although activated forms of Fu promote Sufu phosphorylation, phospho-deficient and phospho-mimetic forms of Sufu behaved in a similar manner to wild-type Sufu in functional assays (Zhou, 2011), implying that phosphorylation of Sufu might not be a major mechanism through which Fu activates Ci. It has been shown that Shh also induces phosphorylation of full-length Gli3 that correlates with its nuclear localization (Humke, 2010). Furthermore, a Fu-related kinase Ulk3 can phosphorylate Gli proteins and promote their transcriptional activities (Maloverjan, 2010). Thus, Fu may activate Ci by promoting its phosphorylation, an interesting possibility that awaits further investigation (Shi, 2011).
Cilia-mediated Hedgehog signaling in Drosophila
Cilia mediate Hedgehog (Hh) signaling in vertebrates and Hh deregulation results in several clinical manifestations, such as obesity, cognitive disabilities, developmental malformations, and various cancers. Drosophila cells are nonciliated during development, which has led to the assumption that cilia-mediated Hh signaling is restricted to vertebrates. This study identified and characterized a cilia-mediated Hh pathway in Drosophila olfactory sensory neurons. Several fundamental key aspects of the vertebrate cilia pathway, such as ciliary localization of Smoothened and the requirement of the intraflagellar transport system, are present in Drosophila. Cos2 and Fused are required for the ciliary transport of Smoothened and cilia mediate the expression of the Hh pathway target genes. Taken together, these data demonstrate that Hh signaling in Drosophila can be mediated by two pathways and that the ciliary Hh pathway is conserved from Drosophila to vertebrates (Kuzhandaivel, 2014).
The existence of this second cilia-dependent Hh pathway in Drosophila shows that Hh signaling can be mediated via two pathways within a single organism. The results further demonstrate that the core components are shared between the two Hh pathways in Drosophila. The function of Cos2 as a putative kinesin in the ciliary compartment indicates that the ancestral Hh signaling pathway may have been cilia specific and that invertebrate cells did not maintain this specialization. Interestingly, not all vertebrate cells have primary cilia, and different types of tumors react differently to Shh depending on whether they are ciliated, indicating that there might be a second, overlooked nonciliary pathway in vertebrates (Kuzhandaivel, 2014).
Genetic in vivo analysis of Smo ciliary localization revealed that, as in vertebrates, the ciliary IFT system and a ciliary localization signal are required for localization of Smo to cilia in Drosophila. The results further show that the Hh receptor Ptc regulates Smo stability and that ciliary localization depends on the activation of the kinesin-like protein Cos2. In the Drosophila wing disc, Fu regulates Cos2 function and is required for most aspects of Hh signaling. The current data show that Fu is also required for Cos2 ciliary localization and Smo transport within the cilia. However, Fu is not essential for mammalian Hh signaling, and in zebrafish, loss of Fu results in weak Hh-related morphological phenotypes. These differences from the Drosophila pathway and vertebrate ciliary signaling could be explained by the existence of a second, as yet unidentified kinase with an analogous function. Cell culture and in vivo studies in vertebrates led to the identification of four kinases with phenotypes related to Fu: Ulk3, Kif11, Map3K10, and Dyrk2. Further investigation is required to determine whether these kinases control the ciliary transport of Smo and whether Cos2-mediated Smo transport is conserved in vertebrates. Yet the current results demonstrate that cilia-mediated Hh signaling does occur in Drosophila and that this pathway is conserved in vertebrates, which makes the Drosophila OSN a powerful in vivo model for studying Hh signaling and its ciliary transport regulation (Kuzhandaivel, 2014).
Home page: The Interactive Fly © 1997 Thomas B. Brody, Ph.D.
The Interactive Fly resides on the Society for Developmental Biology's Web server.
Community Answers
1. ESRCH's post in uncheck Settings/Status/System was marked as the answer
You should normally not do this, but I'm assuming that you set System on a page that you created, and that you want to remove this setting.
The way to do this with the API is to do as follows:
// Assuming that $page refers to the page for which you want to remove the status

// Enable overriding the system status flags
$page->status = $page->status | Page::statusSystemOverride;
$page->save();

// Clear the system status flags
$page->status = $page->status & ~Page::statusSystemID; // uncheck the first system checkbox
$page->status = $page->status & ~Page::statusSystem;   // uncheck the second system checkbox
$page->save();

// Disable overriding the system status flags
$page->status = $page->status & ~Page::statusSystemOverride;
$page->save();

If it's just to correct a mistake, the more straightforward way to make the change is by modifying the database directly:
In the database, find the right page in the pages table (you can find it easily by name or by id, which is indicated in the URL when you edit the page). If you want to remove the first system flag, subtract 8 from the value in the status column (so if it's 9, it should become 1). If you want to remove the second system flag, subtract 16 from the value in the status column (so if it's 17, it should become 1). If you want to remove both, simply combine the operations by subtracting 24. I hope this helps!
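The arithmetic behind those status values is plain bitmasking, and it can be sketched in a few lines (an illustration only; the flag values 8 and 16 are the ones the answer above subtracts for the two system checkboxes). Note that clearing a bit with AND NOT is safer than subtraction, because it is a no-op when the flag is not set:

```python
# Sketch of the bitmask arithmetic behind the status column described above.
# Flag values follow the forum answer: 8 for the first system flag,
# 16 for the second (assumed to match Page::statusSystemID / Page::statusSystem).
STATUS_SYSTEM_ID = 8
STATUS_SYSTEM = 16

def clear_flag(status: int, flag: int) -> int:
    """Clear `flag` from `status` with AND NOT.

    Unlike plain subtraction, this is safe to apply even when the
    flag is not currently set.
    """
    return status & ~flag

status = 25  # e.g. 1 (published) + 8 + 16: both system flags set
status = clear_flag(status, STATUS_SYSTEM_ID)  # 25 -> 17
status = clear_flag(status, STATUS_SYSTEM)     # 17 -> 1
print(status)  # -> 1
```

Subtracting 8 from a status that does not carry the flag would silently corrupt the value, which is why the API route with `& ~` is preferable when you are unsure of the current flags.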
2. ESRCH's post in Try to create a Custom Login site was marked as the answer
Hmmm... I wonder if it's the inline function. Try this code instead for each module:
<?php
class CustomLogout extends WireData implements Module {

    public static function getModuleInfo() {
        return array(
            'title'    => 'Custom Logout',
            'summary'  => 'Redirects to a custom login page after logout',
            'singular' => true,
            'autoload' => true
        );
    }

    public function init() {
        $this->addHookAfter('Session::logout', $this, 'hookAfterLogout');
    }

    public function hookAfterLogout($event) {
        $this->session->redirect($this->pages->get('/login/')->url);
    }
}

<?php
class SiteHider extends WireData implements Module {

    public static function getModuleInfo() {
        return array(
            'title'    => 'SiteHider',
            'summary'  => 'Hide Sites in CSS per User Roles',
            'singular' => true,
            'autoload' => true
        );
    }

    public function init() {
        $this->addHookBefore('ProcessPageList::execute', $this, 'hookBeforePageListExecute');
    }

    public function hookBeforePageListExecute($event) {
        if ($this->user->name !== 'selina')
            $this->config->styles->add($this->config->urls->templates . "css/sitehide.css");
    }
}

Also, be careful with where you put the semi-colons. In SiteHider, you had put one just after the if, which is a problem. Note as well that the condition should be written $this->user->name !== 'selina' rather than !$this->user->name === 'selina', and that it is $this->config->styles, not $this->config-styles.
3. ESRCH's post in if field empty echo other field was marked as the answer
Well the easiest way seems to simply use an else if:
if ($item->price1) {
    echo "From euros {$item->price1}";
} else if ($item->price2) {
    echo "From euros {$item->price2}";
} else {
    echo 'please ask';
}
4. ESRCH's post in translated string won't display was marked as the answer
Hi adrianmak,
This seems to be a bug; I submitted a pull request to correct it. The problem is that when logging out, the user is changed to guest, and the language is set back to default.
Interestingly, the $user variable still points to the logged out user (adrian in your case), while wire('user'), which is used by the __() translation function, points to guest.
While waiting for the correction to the core, you can solve this issue by adding the following line after $session->logout():
wire('user')->language = $user->language;

This will set the language back to what it was before logging out, and the correct translation will be shown.
News
Like human footwear, car tires need to be changed with the season. When warm weather arrives, many drivers are in no hurry to switch, or simply forget to switch, to summer tires. As a result, they collect fines: in our country, driving on out-of-season tires is prohibited by the traffic regulations.
The steady appearance of new organic, mineral, and complex fertilizers opens new opportunities for agriculture, floriculture, gardening, vegetable growing, and other fields involving plant cultivation.
The main characteristics that distinguish electric-welded pipes under GOST 10704-91 and GOST 20295-85
 ïîèñêå íà䏿íîñòè è ýôôåêòèâíîñòè, ñîâðåìåííûå èíæåíåðû è ñòðîèòåëè ïîñòîÿííî ñòðåìÿòñÿ ê èñïîëüçîâàíèþ ñàìûõ ïåðåäîâûõ ìàòåðèàëîâ è òåõíîëîãèé.  ýòîé ñâÿçè, îñîáîå âíèìàíèå óäåëÿåòñÿ âûáîðó òðóá, êîòîðûå ÿâëÿþòñÿ æèçíåííî âàæíûì êîìïîíåíòîì â øèðîêîì ñïåêòðå ïðîìûøëåííûõ è ñòðîèòåëüíûõ ïðîåêòîâ.
RNA Synthesis
RNA synthesis in many respects resembles DNA synthesis. The RNA chain is synthesized in the 5'→3' direction; during chain elongation, the phosphate group of the incoming nucleoside triphosphate that lies next to the sugar residue undergoes nucleophilic attack by the 3'-OH group at the end of the growing RNA chain (Fig. 5.12). RNA polymerase uses double-stranded DNA as a template, copying one of its strands; however, in contrast to DNA polymerase, RNA polymerase requires no primer to initiate polynucleotide synthesis.
In eukaryotes, different polymerases are used to produce the various types of RNA (Table 5.5), and transcription must begin at a specific initiation site on the template DNA strand, in a process that involves local unwinding of the DNA helix and a specific interaction between the polymerase and the initiation site. RNA polymerase then copies the DNA strand in the 3'→5' direction (Fig. 5.12), so that, as nucleoside monophosphates are added one after another, the RNA chain grows in the 5'→3' direction. The order in which nucleotides are added is determined by the base sequence of the template DNA strand. As the RNA chain lengthens, its chemical modification begins (i.e., the conversion of ordinary bases into minor bases), and this process continues after RNA synthesis is complete. Synthesis stops at a particular site on the DNA, and it may be assumed that the genome contains specific stop signals. The finished RNA molecule continues to undergo processing, or modification, and forms a complex with proteins before it moves from the nucleus into the cytoplasm. The base sequence of the final form of the RNA is complementary to the base sequence of the gene on the DNA template that the RNA polymerase copied (Fig. 5.12).
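The template-copying rule can be sketched in a few lines of code (a minimal illustration, not part of the original text): the polymerase reads the template strand in the 3'→5' direction and emits the complementary RNA 5'→3', with uracil pairing opposite adenine.

```python
# Minimal sketch of template-directed RNA synthesis: the template DNA
# strand is read 3'->5' and the complementary RNA chain is written
# 5'->3', with U pairing opposite A (and A, G, C opposite T, C, G).
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the RNA strand (written 5'->3') for a DNA template
    strand given in 3'->5' order."""
    return "".join(PAIRING[base] for base in template_3_to_5)

print(transcribe("TACGGA"))  # -> AUGCCU
```

Real transcription, as the passage notes, additionally involves initiation-site recognition, base modification, and processing, none of which this toy function models.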
© 2012-2016 All About Agrochemistry. All rights reserved.
When quoting or using any materials, a link to the site is required.
7 Push-Up Variations That Burn More Calories

Like squats, push-ups can be modified to increase or reduce intensity. They build muscle mass, and because it takes more energy to maintain muscle mass, they raise your overall metabolism — the rate at which you burn calories — so you find it easier to drop pounds. Do not expect results from volume alone, though: by one estimate, about 12,600 push-ups would be required to lose a single pound, and you are unlikely to fit 250 minutes of push-ups into a week; even if you could, that much repetition would lead to joint pain from overuse. (The UK's most violent prisoner, Charles Bronson, reportedly does up to 2,000 push-ups a day.) Not able to do a single push-up yet? You can build your way up to one over time, and lifting weights helps, since it makes your muscles stronger and builds endurance so that you can perform better. If you are new to weighted push-ups, get a vest that weighs somewhere between 4 and 10 pounds (1.8 to 4.5 kg); if you are more experienced and know what weight you can handle, vests go up to 150 pounds (68 kg). When using extra weight on the push-up, proper placement of the load is important to make sure your mechanics stay locked in.
The only way to lose weight is with a calorie deficit: you must burn more calories than you consume in order to see the scale move. Push-ups can be part of a weight-loss regimen because they utilize multiple muscle groups to help burn fat simultaneously, and they are quite effective for building muscle; doing them also develops your chest, shoulders, and arms, while the lean muscle tissue you gain helps elevate your resting metabolic rate, which in turn helps your body burn fat more efficiently. Supporting work matters too: if you want your push-ups to feel like a walk in the park, leg, back, and shoulder exercises are important, and lifting weights also reduces your chances of injury. Still, push-ups are clearly not the shortest or most effective route to weight loss; to burn 3,500 calories and lose one pound through push-ups alone, a person would have to do 900 push-ups in 45 minutes per day for two weeks. After a good warm-up, pick a leg or back exercise to pair with the push-up and alternate the exercises. As for how much weight is really lifted when performing push-ups, one study had twenty-eight subjects perform modified and full push-ups to measure exactly that.
After a HIIT routine of this kind, your chest and body should be fairly exhausted. Will doing push-ups every day help you lose weight? Yes, but it depends on how many you do: 50 to 100 push-ups a day will not make much difference, and you have to work up to 200 to 300 a day to see any decent changes. Follow a few rules: begin gradually rather than abruptly, and strengthen your triceps, which will also help you complete the push-up itself; diamond push-ups, or triceps push-ups, performed by forming a triangle on the floor with your thumbs and index fingers, target exactly that. Diet matters just as much. Consider trimming just 250 calories daily to help you lose half a pound per week; easy ways to trim calories are to choose leaner cuts of protein, such as white-meat poultry over dark meat, and to skip a fancy coffee drink or soda in the afternoon. Without that attention you will prevent weight loss and may even gain weight. Some people also try the push-up challenge in a neoprene sauna suit, which has been promoted as an aid to fat burning.
Exercises done at home using your own body weight, such as push-ups, planks, and pull-ups, have been found to be just as effective at promoting longevity as exercises done using machines at the gym, although push-ups may increase your strength more slowly than weights. Make sure you do a full rep: keep your elbows near your sides and your core tight, and graze the ground with every lowering motion. Remember that no single exercise reduces fat at a particular region of your body. To create a deficit, you may eat fewer calories, burn more calories, or do both, and you should stick to it for a while — at least two to three months, depending on how much you have to lose — before judging the results. Push-ups are a solid exercise that builds strength and stamina in many muscles, but they do not burn enough calories to lead to significant weight loss on their own; they are a strength-training exercise, not an aerobic one. What makes push-ups superior to the bench press is that they are more of a whole-body movement that also strengthens the core musculature, and they are especially effective at helping you lose belly fat when done as part of a HIIT workout. Pull-ups may likewise help you reduce belly fat in the long term by improving your body's capacity to burn fat at rest. Conversely, you can lose weight without doing a single push-up, and it is a myth that you can only train for push-ups by doing push-ups.
According to HealthStatus.com, one minute of push-ups burns between six and ten calories for a 150-pound person, but even the fittest of folks fail after a few minutes, so you could not keep up a full five minutes of push-ups without a break. In order to burn fat, turn instead to aerobic exercises such as running, swimming, or biking. Sit-ups help tone the muscles of your stomach, but completing hundreds of them will not actually melt fat, and pull-ups on their own may not have a significant impact on your belly fat either. Spot reduction is impossible: you cannot pick one place on your body to lose fat from, and because the belly region sits at the body's centre of mass and is its most efficient fat store, it might not be the first place you lose weight — where the fat comes off first depends on your genetic makeup, and everyone's body is different. Push-ups are certainly a way to reduce man boobs, and the chest press is a helpful related exercise for reshaping and reducing breast size. Push-up devices have also been explored, such as the Perfect Push-up TM. If regular push-ups strain your wrists — a few years back, after a wrist injury, I was not really able to do them at all — try a variation such as knuckle push-ups, being sure to do them on a padded mat, plush carpet, or, even better, a rolled-up towel. Repeat your chosen variation ten times, twice daily. Finally, remember that the push-up builds the muscles on the front of your body but will not help you lose fat there, or anywhere in particular, and there is not necessarily a set number of push-ups you should be doing a day to lose weight.
One should be a standard lift, the next can be any exercise you can do safely and with intensity over a period of time until failure. Although, push ups burn calories at a very low rate, they help increase your metabolism which is a must when you want to lose weight. Yes, obviously this will be based on repetition and sets. It takes 3,500 calories to melt off a pound of fat. Mixing in things like push-up jacks, decline push-ups, incline push-ups, and spiderman push-ups will challenge your muscles in a different way every time, allowing you to burn more calories over time. This exercise will force you to do push-ups the right way. We examined today’s top 5 weight reduction supplements that will give you the jumpstart you should have to finally attain your objective weight, and maintain it for life. It will help you lose weight, strengthen your muscles, and improve your metabolism. But T-push ups are even better. Try performing the push-up challenge while wearing the Kutting Weight sauna suit. It’s best to shelf … Doing as many pushups as you can in 30 to 60 seconds is a great way to lose weight. Do not start abruptly. Pullups in conjunction with other body-weight or strength-training exercises help you gain lean muscle tissue. HELP However, if you set on using body weight movements, you can always do push ups against a wall, or on your knees to reduce the resistance. How to do push ups to reduce arm fat. Augment them with lunges, squats, rows, curls and presses, for example. Reduce the Strain on Your Wrists. Unfortunately, while resistance training does boost your calorie burn and can promote some fat loss, it can't spot-reduce fat. Push-ups are probably the most famous exercise you can do with your own weight. Myth buster 2: Doing 30 push-ups would be very good idea to build strength in upper body, Arms and also abs. Note: Please be sure to do this type of push up on a padded mat, plush carpet or even better a rolled up towel. 
They primarily work your chest, shoulders, triceps, and core muscles. If you consistently do push-ups, your chest and core will get stronger, and you will build lean muscle. Does doing push-ups reduce stomach fat? The Journal of Strength and Conditioning Research (the NSCA's scientific journal, which publishes monthly research) ran a study in 2005 finding that a standard push-up done on a flat surface, with your hands lined up beneath your shoulders, lifts 66% of your body weight. The only downside to push-ups is you can't adjust the weight as easily as you can … Push-ups are a functional exercise that builds stamina and strength in many major muscle groups, especially if you want to get rid of man boobs. Your diet also plays a tremendous role in your ability to lose weight. Intentional weight loss is the loss of total body mass as a result of efforts to improve fitness and health, or to change appearance through slimming. To lose weight, you need to burn more calories than you eat each day. Even if you can do such a marathon push-up workout and perform it three times per day, you'll burn about 150 extra calories, provided you don't change your calorie intake in any way or add any additional movement. Plyo push-ups are another variation. These small steps add up over time and will lead to sustainable weight loss if you keep them up for the long term. The stronger you get, the more likely you can switch over to a standard push-up and its different variations. They are effective exercises to reduce breast size and also tone your breast and body. If you can't do a push-up, you could also do a wall push-up. This is because push-ups do not cause a significantly elevated heart rate. Remember to alternate with your favorite exercise, do full reps, and give it your all, every rep. Storing fat in the belly region is the most efficient arrangement for the human body.
In order to lose weight with push-ups, you need to keep doing them until you can't do any more. This exercise is one of the most important for getting rid of man boobs. When using extra weight on the push-up, proper placement of the load is important to make sure your mechanics stay locked in. But how much weight is really lifted when performing push-ups? Holding your push-ups will help you lose weight faster, because it requires more strength and burns more fat. Not sure how large your bust size is, as you did not indicate; however, at age 16, B and C cups are normal. Spot reducing is impossible, meaning you cannot pick one place on your body to lose fat from. The same thing applies to knuckle push-ups, too. You prepare to take the weight off, and also to keep it off! The key to losing weight with push-ups is to do them intensely, with good form, and with different variations of the exercise. A pound of fat is roughly equal to 3,500 calories. Push-ups count as strength training, but won't give you the kind of well-rounded workout that gives you real results. Are you not able to do a single push-up? Fulfillment of these principles will allow you to solve the problem of how many push-ups per day it takes to lose weight. When this deficit reaches 3,500 calories, you can lose 1 pound of weight. (5 Benefits of Push Ups: Tone, Trim & Get Back in Shape) Although other factors do come into play, weight loss essentially comes down to creating a caloric deficit. The push-up is rarely performed by the average person, however, and is often replaced with the bench press or weighted dips. Doing as many push-ups as you can in 30 to 60 seconds is a great way to lose weight. In this example, doing push-ups at a 3.8 MET value burns 3.8 kcal per kg of body weight per hour.
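The MET arithmetic mentioned above can be sketched in a short script. This assumes the commonly cited formula kcal = MET × body weight (kg) × duration (hours); the function name is ours, chosen for illustration:

```python
# Estimate energy expenditure from a MET value, a sketch of the formula
# kcal = MET x body weight (kg) x duration (hours) cited in the text.

def calories_burned(met: float, weight_kg: float, minutes: float) -> float:
    """Energy expenditure in kcal for an activity at the given MET value."""
    return met * weight_kg * (minutes / 60.0)

# Push-ups at 3.8 MET for a 70 kg person, 10 minutes:
kcal = calories_burned(met=3.8, weight_kg=70, minutes=10)
print(round(kcal, 1))  # → 44.3
```

At a full hour this gives 3.8 × 70 = 266 kcal, matching the "3.8 kcal/kg per hour" rate quoted above.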
Instead, as you exercise, you melt fat from your entire body. You would need to perform push-ups for 30 to 60 minutes a day to add a significant number of calories to your daily caloric deficit. They only require your body weight. Simply put, you need to burn more energy (calories) than you are consuming. If done regularly the right way, this exercise can help you reduce breast cup size, deepen your curves, reduce breast fat, and give them an attractive look. Push-up devices have also been explored, such as the Perfect Push-up TM, discussed previously. Push-ups alone also fail to stimulate the total-body muscle gains that raise your metabolism and make you a lean machine. The push-up activates your chest, triceps, shoulders, core and even glutes and scapula. The more challenged you are, the more calories you'll burn, too. To lose weight, the American College of Sports Medicine recommends at least 250 minutes of moderate-intensity cardio exercise weekly. Next, pick a different leg or back exercise and a push-up variation such as a pike push-up or a clapping push-up. Some people lose weight in the hips first and others in their legs. The more sharply you start moving towards your goal, the sooner you will fall back into your habitual way of life. To increase your push-ups, you should incorporate some sort of weight-lifting workout. Increasing the strength of your chest (pectoral) and triceps muscles will allow you to do more push-ups. They are just as good as regular push-ups, and you can make them harder. Are push-ups effective for weight loss? Push-ups are a strength-building move. First, position your body for a standard push-up. It's more about how hard you push yourself during the duration of the push-up set.
A 5-minute bout of push-ups performed at a moderate pace burns about 28 calories for a 150-pound person. In a study by Suprak et al., twenty-eight subjects performed modified push-ups and full push-ups. Three sets of 10 won't do much for weight loss, so add some energy, intensity, and push-up variations, and you will be well on your way to dropping weight and burning calories with your push-up routine. You want to partake in a variety of activities, such as walking, swimming, gardening, cycling, dancing and calisthenics (which can include push-ups) so that your mind and body don't burn out. A 70 kg individual doing push ups … But you may need a little aid in the process. Push-ups are a demanding exercise, and you should therefore start with knee push-ups before attempting regular push-ups. What are the mistakes you are making? From professional journals to blogs, nearly everyone agrees that (if done correctly) push-ups provide phenomenal health benefits. Measurements are rounded to the nearest whole number (1.00 to 1.49 will be considered 1, and 1.50 to 1.99 will be considered 2). Add resistance as you become proficient at these moves to burn more calories and build more muscle. This might lead you to lose 1 pound in 23 days, but it's highly unlikely. Rules: how many push-ups per day to lose weight. To lose fat, you've got to consistently perform activities that cause you to burn a high number of calories. How do you do T-push-ups? Read more: Good Workouts to Lose Weight Fast. The push-up is a great exercise to build chest strength. Pause, push your chest upward and remain in that position for about 2-3 seconds. How do you get rid of moobs without weights?
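The "1 pound in 23 days" figure above follows directly from the 3,500-calories-per-pound rule of thumb quoted earlier. A quick sketch under that assumption (the helper name is ours):

```python
# Rough time to lose one pound of fat at a steady daily calorie deficit,
# using the 3,500 kcal-per-pound rule of thumb from the text.
KCAL_PER_POUND = 3500

def days_to_lose_one_pound(daily_deficit_kcal: float) -> float:
    """Days of a constant deficit needed to burn one pound of fat."""
    return KCAL_PER_POUND / daily_deficit_kcal

# At the ~150 extra kcal/day cited above:
print(round(days_to_lose_one_pound(150)))  # → 23
```

The same function shows why a larger deficit matters: at 500 kcal/day the figure drops to a week per pound.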
Pull-ups may not be the first exercise that comes to mind when considering a workout program to reduce belly fat. If you do your push-ups as part of a HIIT workout, then you will burn fat in the chest and you might see a reduction in breast size. By multiplying the body weight in kg by the MET value and the duration of activity, you can estimate the energy expenditure in kcal specific to a person's body weight. A comprehensive exercise plan and a healthy diet support weight loss. If you eat more calories than you burn daily, it won't matter how many push-ups and other exercises you do. The more sharply you start moving towards your goal, the sooner you will fall back into your habitual way of life. Pick that pace up to a vigorous, heart-pumping level and burn 48 calories. Variations enable you to target different muscle groups on your body. Push-ups are a great way to lose weight if you do them as part of your HIIT workout. Andrea Boldt has been in the fitness industry for more than 20 years. A push-up bra just provides more support on the outer side of the cup to make busts look more centralized and to create cleavage, appearing only slightly larger. The issue with the exercise is that it can be a bit tricky to set up. Are there risks to doing push-ups daily? Add exercises like dips, triceps extensions, pike push-ups, and skull crushers to build your triceps. However, it will take longer to increase strength with push-ups simply because you have to learn new positions and skills. Great exercises to use as your HIIT exercise to lose belly fat include push-ups, sit-ups, cycling, rowing, swimming, jumping jacks, jumping rope, mountain climbers, butt kickers, high knees, air squats, and plank jacks, among many others.
|
Failed
omsimulator.DualMassOscillator_me.mos (from (result.xml))
Failing for the past 162 builds (Since #2765)
Stacktrace
Output mismatch (see stdout for details)
Standard Output
+ DualMassOscillator_me.mos ... equation mismatch [time: 50]
==== Log /tmp/omc-rtest-hudson/omsimulator/DualMassOscillator_me.mos_temp7695/log-DualMassOscillator_me.mos
true
"Notification: Automatically loaded package Modelica 3.2.2 due to uses annotation.
Notification: Automatically loaded package Complex 3.2.2 due to uses annotation.
Notification: Automatically loaded package ModelicaServices 3.2.2 due to uses annotation.
"
"DualMassOscillator.System1.fmu"
""
"DualMassOscillator.System2.fmu"
""
true
""
/bin/sh: 1: /var/lib/hudson/slave/workspace/OpenModelica_BUILD_GCC_4.8/OpenModelica/build/bin/OMSimulator: not found
127
record SimulationResult
resultFile = "DualMassOscillator.CoupledSystem_res.mat",
simulationOptions = "startTime = 0.0, stopTime = 0.1, numberOfIntervals = 500, tolerance = 1e-06, method = 'dassl', fileNamePrefix = 'DualMassOscillator.CoupledSystem', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = '-override=system2.s2_start=2.5'",
messages = "LOG_SUCCESS | info | The initialization finished successfully without homotopy method.
LOG_SUCCESS | info | The simulation finished successfully.
"
end SimulationResult;
""
{1.0,0.9112773579994231}
{2.5,1.955636409834865}
Equation mismatch: diff says:
--- /tmp/omc-rtest-hudson/omsimulator/DualMassOscillator_me.mos_temp7695/equations-expected	2019-04-10 03:28:52.240793624 +0200
+++ /tmp/omc-rtest-hudson/omsimulator/DualMassOscillator_me.mos_temp7695/equations-got	2019-04-10 03:29:42.256278950 +0200
@@ -7,22 +7,12 @@
""
"DualMassOscillator.System2.fmu"
""
true
""
-info: maximum step size for 'DualMassOscillator.root': 0.100000
-info: No result file will be created
-info: Initialization
-info: system1.s1: 1.0
-info: system2.s2: 2.5
-info: Simulation
-info: system1.s1: 0.9112797974079
-info: system2.s2: 1.9556338147396
-info: Final Statistics for 'DualMassOscillator.root':
-NumSteps = 1301 NumRhsEvals = 1545 NumLinSolvSetups = 138
-NumNonlinSolvIters = 1544 NumNonlinSolvConvFails = 0 NumErrTestFails = 49
-0
+/bin/sh: 1: /var/lib/hudson/slave/workspace/OpenModelica_BUILD_GCC_4.8/OpenModelica/build/bin/OMSimulator: not found
+127
record SimulationResult
resultFile = "DualMassOscillator.CoupledSystem_res.mat",
simulationOptions = "startTime = 0.0, stopTime = 0.1, numberOfIntervals = 500, tolerance = 1e-06, method = 'dassl', fileNamePrefix = 'DualMassOscillator.CoupledSystem', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = '-override=system2.s2_start=2.5'",
messages = "LOG_SUCCESS | info | The initialization finished successfully without homotopy method.
LOG_SUCCESS | info | The simulation finished successfully.
Equation mismatch: omc-diff says:
Failed 'i' '/'
Line 12: Text differs:
expected: info: maximum step size for 'DualMassOscillator.root':
got: /bin/sh:
== 1 out of 1 tests failed [omsimulator/DualMassOscillator_me.mos_temp7695, time: 50]
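For context on the failure above: exit status 127 is the POSIX shell's "command not found" code, which is exactly what `/bin/sh` returns when the OMSimulator binary is missing from the build tree. A minimal reproduction sketch via Python's subprocess (the path below is deliberately nonexistent):

```python
# When /bin/sh cannot find the command it is asked to run, it exits
# with status 127 -- the "127" seen in the log above.
import subprocess

result = subprocess.run("/nonexistent/path/OMSimulator",
                        shell=True, capture_output=True)
print(result.returncode)  # → 127
```

The test harness then records that shell output in place of the expected simulator log, producing the equation mismatch shown in the diff.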
Powered by Hudson Open Source Continuous Integration Server from the Eclipse Foundation |
How to lose weight with Turmeric supplements effectively
Having an ideal body weight is a great feeling. Here you can learn how to lose weight with turmeric supplements effectively.
Obesity, which refers to having too much body fat, is a growing social and medical problem all over the world. Fat burning is critical to weight loss, and the liver is the organ essential for fat burning. Studies have found that when the liver is damaged, the detoxification process slows.
Turmeric can help detoxify the liver and protect cells from damage caused by environmental pollutants, free-radical attack, and so on.
Why Turmeric is good for weight loss
It contains curcumin, a potent anti-inflammatory that helps all of your organs, muscles, and joints to stay in peak health, thus allowing you to exercise more frequently. Turmeric anti-inflammatory properties help prevent insulin resistance, which is caused when organs such as the liver, thyroid, pancreas and pituitary gland become worn-out and inflamed.
Turmeric lowers your overall cholesterol levels, thus helping your liver and cardiovascular system function optimally; weight loss is often the result. Turmeric also contains antioxidants that detoxify the liver of environmental pollutants and fatty deposits so that it functions more efficiently.
Turmeric cleans out your blood vessels, lowers lipids in your bloodstream and lowers the amount of fat that ends up in your tissues. Eating this spice also helps reduce the damaging effects of a fatty meal immediately after you have consumed it, thus helping you to prevent weight gain even if you do lapse from your diet.
Turmeric encourages your metabolism to burn fat by boosting thermogenesis, which in turn shrinks and eliminates fat that is trapped in your tissues so that you lose weight and look slimmer.
Does Turmeric really help you lose fat?
The Tufts University researchers also tested curcumin directly on adipose tissue and found that it significantly lowered serum cholesterol and proteins that play a role in fat production. Curcumin, they said, may speed up fat metabolism and have an overall lowering effect on body fat and total weight. The researchers concluded that dietary curcumin, as found in turmeric, may benefit people trying to lose or maintain weight.
How to use Turmeric for weight loss
When it comes to turmeric and weight loss, few studies have examined its effect on humans. However, studies on mice indicate that curcumin (the active compound in turmeric) has several beneficial effects on weight management and on weight-related issues.
Turmeric can be taken in three different forms: as powder capsules, as a tincture, and as a fluid extract. We recommend capsules because they are the easiest and most practical way to take turmeric. If you are eager to order right away, feel free to scroll down to see what we think are the three best capsule products on the market.
Buy Turmeric supplements for weight loss
The best way to start is by finding the right turmeric supplement. We suggest TurmericPlus, a recommended, FDA-approved turmeric supplement. You can get the guaranteed quality of the original product from the official website, so click the links to buy TurmericPlus there.
Turmeric Plus
How to lose weight with Turmeric supplements effectively |